From checker at panix.com Sun Jan 1 00:30:12 2006
From: checker at panix.com (Premise Checker)
Date: Sat, 31 Dec 2005 19:30:12 -0500 (EST)
Subject: [Paleopsych] Meme 055: The Origins of the Nudity Taboo
Message-ID:

Meme 055: The Origins of the Nudity Taboo
sent 5.12.31

It is a familiar story:

21 And the Lord God caused a deep sleep to fall upon Adam, and he slept: and he took one of his ribs, and closed up the flesh instead thereof;
22 And the rib, which the Lord God had taken from man, made he a woman, and brought her unto the man.
23 And Adam said, This is now bone of my bones, and flesh of my flesh: she shall be called Woman, because she was taken out of Man.
24 Therefore shall a man leave his father and his mother, and shall cleave unto his wife: and they shall be one flesh.
25 And they were both naked, the man and his wife, and were not ashamed.

CHAPTER 3

1 Now the serpent was more subtil than any beast of the field which the Lord God had made. And he said unto the woman, Yea, hath God said, Ye shall not eat of every tree of the garden?
2 And the woman said unto the serpent, We may eat of the fruit of the trees of the garden:
3 But of the fruit of the tree which is in the midst of the garden, God hath said, Ye shall not eat of it, neither shall ye touch it, lest ye die.
4 And the serpent said unto the woman, Ye shall not surely die:
5 For God doth know that in the day ye eat thereof, then your eyes shall be opened, and ye shall be as gods, knowing good and evil.
6 And when the woman saw that the tree was good for food, and that it was pleasant to the eyes, and a tree to be desired to make one wise, she took of the fruit thereof, and did eat, and gave also unto her husband with her; and he did eat.
7 And the eyes of them both were opened, and they knew that they were naked; and they sewed fig leaves together, and made themselves aprons.
8 And they heard the voice of the Lord God walking in the garden in the cool of the day: and Adam and his wife hid themselves from the presence of the Lord God amongst the trees of the garden.
9 And the Lord God called unto Adam, and said unto him, Where art thou?
10 And he said, I heard thy voice in the garden, and I was afraid, because I was naked; and I hid myself.
11 And he said, Who told thee that thou wast naked? Hast thou eaten of the tree, whereof I commanded thee that thou shouldest not eat?
12 And the man said, The woman whom thou gavest to be with me, she gave me of the tree, and I did eat.
13 And the Lord God said unto the woman, What is this that thou hast done? And the woman said, The serpent beguiled me, and I did eat.
14 And the Lord God said unto the serpent, Because thou hast done this, thou art cursed above all cattle, and above every beast of the field; upon thy belly shalt thou go, and dust shalt thou eat all the days of thy life:
15 And I will put enmity between thee and the woman, and between thy seed and her seed; it shall bruise thy head, and thou shalt bruise his heel.
16 Unto the woman he said, I will greatly multiply thy sorrow and thy conception; in sorrow thou shalt bring forth children; and thy desire shall be to thy husband, and he shall rule over thee.
17 And unto Adam he said, Because thou hast hearkened unto the voice of thy wife, and hast eaten of the tree, of which I commanded thee, saying, Thou shalt not eat of it: cursed is the ground for thy sake; in sorrow shalt thou eat of it all the days of thy life;
18 Thorns also and thistles shall it bring forth to thee; and thou shalt eat the herb of the field;
19 In the sweat of thy face shalt thou eat bread, till thou return unto the ground; for out of it wast thou taken: for dust thou art, and unto dust shalt thou return.
20 And Adam called his wife's name Eve; because she was the mother of all living.
21 Unto Adam also and to his wife did the Lord God make coats of skins, and clothed them.
22 And the Lord God said, Behold, the man is become as one of us, to know good and evil: and now, lest he put forth his hand, and take also of the tree of life, and eat, and live for ever:
23 Therefore the Lord God sent him forth from the garden of Eden, to till the ground from whence he was taken.
24 So he drove out the man; and he placed at the east of the garden of Eden Cherubims, and a flaming sword which turned every way, to keep the way of the tree of life.

So amongst the knowledge of good and evil that came from eating the forbidden fruit is that nudity is shameful. (How much knowledge came is not stated in the text, but the Lord God felt it necessary to reveal other knowledge later, the most famous being the Ten Commandments, indeed up to 613 Mitzvot by the end of the Old Testament and a few more in the New, repealing several of the ones from the Old Testament as no longer being pertinent under the New Covenant. Gary North takes the unusual position, which I call "eastern," that what was not repealed remains in effect, while most Christians take the "western" position that everything not explicitly continued in the New Testament was repealed.)

What has puzzled me for a good while is that I can think of no sociobiological explanation for the nudity taboo. Humans did not wear clothes in the EEA (the environment of evolutionary adaptedness)! Nevertheless, my children started objecting to seeing their parents naked around the time they hit puberty. It was not part of their upbringing, and our nudity was entirely casual. When Hillary was touting health care reform, I suggested (without anyone contradicting me) that a better way to improve health would be to require that we go naked when it is hot. We'd start paying far more attention to our appearance. If no explanation is forthcoming, perhaps it was divine intervention after all that instilled this particular bit of knowledge of good and evil. The Bayesian "priors" of nonbelievers would have to be drastically readjusted. And the Lord God would have instilled this shame of being naked into all humans, not just those who lived in the Near East.

In the article below, you'll find a not at all untypical anti-Christian rant. I am in no position to counter what it says about Japanese families jumping into a hot tub together, and it is definitely the case that Greek athletes (male, not female, if there were any) performed in the raw. But the general argument is bogus. In all societies (are there any full exceptions?), completely casual nudity does not exist. Otherwise, I'd go naked at the office when it got too hot, as would some of my co-workers. Outside, I'd wear running shoes and a jock strap. But only in a society in which casual nudity is the norm. I have no intention of making a big issue of this or joining a nudist colony. It is also the case that nowhere is homosexuality allowed free rein.
It takes the efforts of scholars to decipher the "code words" for homosexual activities in the writings of the Greeks, which did allow for certain homosexual behavior under tightly controlled circumstances. But homosexual activity was never just casual. That's why "code words" got used.

One notion in theology is that God reveals rules of conduct to men who cannot figure these rules out for themselves, given the state of knowledge at the time. A theist holds that men in fact announced these rules and attributed them to a god or gods, in every religion except his own. An atheist, one who believes in one less god than a monotheist, thinks all the commands were man-made. The world's religions have a huge overlap in their rules, as has been observed many times, but also rules that are unique to each religion.

St. Paul's letters spend a great deal of time discussing Jewish-Gentile relations and what Jewish and Gentile converts must do differently. The latter did not have to obey all the Jewish laws, but Paul became exercised over whether newly converted pagans could go ahead and eat meat that had been ritually sacrificed by those who were still pagan. He offered arguments against it, though not quoting the arguably Jewish-specific Second Commandment ("Thou shalt have no other gods before me") itself, but he was not adamant. Paul was adamant, though, about homosexuality, and explicitly carried over the Old Testament injunctions. Not surprisingly, theological liberals interpret this out of existence. (I got Jacques Berlinerblau, _The Secular Bible: Why Nonbelievers Must Take Religion Seriously_ (Cambridge UP, 2005) for Christmas, but I have not read it yet.)

Well, here goes the rant against Christianity. What remains is the puzzle of why Adam and Eve were ashamed.

BOOK VIII. RESTORING "FAMILY VALUES" (THIRD SUB-PART, ASPECTS OF CHANGE)
http://www.agnostic.org/BIBLEI.htm
[from a group called The Agnostic Church]

But Just Whose Family Values? (Continued)

A. Side Effects Of Change

The key values identified for change, if actually altered by mankind, will naturally result in a number of changes to other values which Western Civilization generally holds. This Section is intended to discuss some of those values which will also change.

1. No Nudity Taboo

Christianity is so obsessed with the thought that enjoyment of sex is bad that many of the so-called "Family Values" of the Christian "Right" are actually designed to suppress any thoughts of sex as an enjoyable activity.1 One of those values which should naturally be discarded is the nudity taboo. It is almost comical to watch modern parents raise their children within the bounds of the current system of taboos. It is OK for a very small child to go to the bathroom with a parent of the opposite sex, but once the child is old enough to be potty trained, the child must go to the proper bathroom by itself. It is OK for prepubescent children of the opposite sex to sleep in the same bed, but if puberty approaches for either one, it is taboo. This taboo even manifests itself in our building codes, which specify a maximum age by which opposite-sex children can sleep in the same bedroom. The essential thought of the Christian "Right" is that if we keep our children from viewing any nudity and sex, or finding out about such subjects in any way, we will quite naturally prevent our children from having sex before marriage. This creates an essential tension between knowledge and freedom.
The freedoms of all people, including adults, must be restricted in order to prevent any young person from coming into contact with any depictions of nudity or sexual activity. Presently, our laws draw certain artificial lines at various ages, currently 13, 17, and/or 18,2 allowing young people increased freedom to view movies depicting nudity and/or sexual activity once they achieve those ages. Of course this is silly, because social statistics show that roughly 85% of eighteen-year-olds are not virgins.

Other cultures have not created such a widespread nudity taboo. For example, it is considered normal in Japan for an entire family to jump into a hot tub together, in the nude, and there is a similar lack of nudity taboos in such things as public baths. To me, things like this clearly show that nudity taboos are totally artificial. If the values of our culture allow, and even strongly encourage, children as young as age seven to pair up as couples and get married to one another, long before puberty is even an issue, then there is no longer any reason to maintain a nudity taboo. Accordingly, the nudity taboo should, quite naturally, be assigned to the trash as part of the altered system of values proposed in this book.

This does not mean that I am advocating the so-called "nudist life style." There are good and valid reasons why we should not expose our skin to the sun any more than is absolutely necessary. The nudists essentially accept the trade-off of increasingly bad skin as they get older in return for their pleasure in thumbing their noses at the system by going around in the nude. The only reason I will mention the nudists is that it is usually accepted that they are a valid alternative life style, and thus they are proof that there is not a direct correlation between nudity and sexual activity.

2. Unisex Bathrooms

Our present system of bathroom facilities is strongly based on the nudity taboo. If the nudity taboo is discarded, then there is no longer any real reason to continue to build separate bathroom facilities for each sex. This concept is already partially implemented in our society. For instance, it is less common to see portable toilets labeled for one sex or the other. Many gas stations have converted to unisex rest room facilities, simply putting the international logos for both sexes on the same door. I have not observed any really strong reaction against these trends, so I believe it is time to simply state that it will be better for all of us to recognize this concept and agree to share and share alike.3 In the long run, construction costs will be lower for buildings, convenience will be improved, and some level of aggravation will be removed from our lives if we simply agree that all public rest rooms are unisex, no matter what label happens to be on the door. This change is simply a natural consequence of abolishing the nudity taboo, and as I have pointed out, it is already widely accepted in our society.

3. No Pleasure Taboos

An important side effect of all of this will be to remove most, if not all, of the many proscriptions of pleasurable activity which now exist in our system of laws. There will no longer be any need to prohibit sexual activity according to age, because our society will ensure that most people are part of a stable marriage long before they are physically mature enough to have sex.
Similarly, we should remove the age limits on the drinking of alcoholic beverages so that our young people can learn their own personal limits for the consumption of alcohol long before they would be in any significant physical danger from overindulgence. Part of the culture of Western Civilization is that alcoholic beverages are heavily taxed because drinking is a "sin" and, to the extent to which our society has elected to tolerate such "sin," taxing it heavily acts as a disincentive to "sin." Thus, the easiest way for any politician to propose raising revenue for the government is to propose a "sin tax" of some sort, and alcohol is usually right at the top of the list. These things all derive from the basic Christian concept that pleasure is sinful, and thus indulgence in pleasure must be discouraged for the good of your eternal soul.

One of the side effects of incorporating a Dionysian component into our culture will be that pleasure is not only acceptable, it is encouraged as a natural part of being human. In other words, it is no longer possible for you to be considered fully human unless you regularly experience pleasure. Abolishing all of the taboos against pleasure, and discarding all of the guilt which Christianity has traditionally associated with pleasure, will constitute a truly fundamental change in our moral philosophy. But there is no doubt in my mind that this change is long overdue.

_____________________________________________________

1 Christianity has been attempting to prevent sexual contact from the earliest days of the church. See, for example, some of the letters of Paul in the New Testament, which even advise against marriage under the false assumption that Jesus would return any day.

2 These ages are derived from the current movie rating system of G, PG, PG-13, R, and NC-17.

3 The disparate nature of public rest room facilities at sports venues has been a recurring topic in the media, particularly with respect to concerts, where the sexual division of the audience is at least much more equal, if not skewed in favor of a female majority.

[I am sending forth these memes, not because I agree wholeheartedly with all of them, but to impregnate females of both sexes. Ponder them and spread them.]

From checker at panix.com Sun Jan 1 02:43:07 2006
From: checker at panix.com (Premise Checker)
Date: Sat, 31 Dec 2005 21:43:07 -0500 (EST)
Subject: [Paleopsych] Getting (Too) Dirty in Bed
Message-ID:

Getting (Too) Dirty in Bed
By Emily Gertz, Grist Magazine
Posted on December 9, 2005, Printed on December 9, 2005
http://www.alternet.org/story/29218/

So you're an Enlightened Green Consumer. You buy organic food and carry it home from the local market in string bags. Your coffee is shade-grown and fair-trade, your water's solar-heated, and your car is a hybrid. But what about the playthings you're using for grown-up fun between those organic cotton sheets -- how healthy and environmentally sensitive are they? Few eco-conscious shoppers consider the chemicals used to create their intimate devices. Yes, those things -- from vibrators resembling long-eared bunny rabbits to sleeves and rings in shapes ranging from faux female to flower power. If these seem like unmentionables, that's part of the problem: while some are made with unsafe materials, it's tough to talk about that like, well, adults. But it's necessary.
Unlike other plastic items that humans put to biologically intimate use -- like medical devices or chew-friendly children's toys -- sex toys go largely unregulated and untested. And some in the industry say it's time for that to change.

Love Stinks

Many popular erotic toys are made of polyvinyl chlorides (PVC) -- plastics long decried by eco-activists for the toxins released during their manufacture and disposal -- and softened with phthalates, a controversial family of chemicals. These include invitingly soft "jelly" or "cyberskin" items, which have grown popular in the last decade or so, says Carol Queen, Ph.D., "staff sexologist" for the San Francisco-based adult toy boutique Good Vibrations. "It's actually difficult for a store today to carry plenty of items and yet avoid PVC," Queen says. "Its use has gotten pretty ubiquitous among the large purveyors, because it's cheap and easy to work with."

In recent years, testing has revealed the potentially serious health impacts of phthalates. Studies on rats and mice suggest that exposure could cause cancer and damage the reproductive system. Minute levels of some phthalates have been linked to sperm damage in men, and this year, two published studies linked phthalate exposure in the womb and through breast milk to male reproductive issues. A study in 2000 by German chemist Hans Ulrich Krieg found that 10 dangerous chemicals off-gassed from some sex toys available in Europe, including diethylhexyl phthalates. Some had phthalate concentrations as high as 243,000 parts per million -- nearly a quarter of the material by weight, and a number characterized as "off the charts" by Davis Baltz of the health advocacy group Commonweal. "We were really shocked," Krieg told the Canadian Broadcasting Corporation's Marketplace in a 2001 report on the sex-toy industry. "I have been doing this analysis of consumer goods for more than 10 years, and I've never seen such high results."

The danger, says Baltz, is that heat, agitation, and extended shelf life can accelerate the leaching of phthalates. "In addition, [phthalates are] lipophilic, meaning they are drawn to fat," he says. "If they come into contact with solutions or substances that have lipid content, the fat could actually help draw the phthalates out of the plastic." Janice Cripe, a former buyer for Blowfish -- a Bay Area-based online company whose motto is "Good Products for Great Sex" -- confirms the instability of jelly toys: "They would leak," she says. "They'd leach this sort of oily stuff. They would turn milky" and had a "kind of plasticky, rubbery odor." She stopped ordering many jelly toys during her time at Blowfish, even though their lower prices made them popular.

So what's being done to protect consumers? Well, nothing. While the U.S., Japan, Canada, and the European Union have undertaken various restrictions regarding phthalates in children's toys, no such rules exist for adult toys. In order to be regulated in the U.S. under current law, sex toys would have to present what the federal government's Consumer Product Safety Commission calls a "substantial product hazard" -- essentially, a danger from materials or design that, in the course of using the product as it's made to be used, could cause major injury or death. But if you look at the packaging of your average mock penis or ersatz vagina, it's probably been labeled as a "novelty," a gag gift not intended for actual use.
That's an important semantic dodge that allows less scrupulous manufacturers to elude responsibility for potentially harmful materials, and to evade government regulation. If you stick it somewhere it wasn't meant to go, well -- caveat emptor, baby!

It's a striking lack of oversight for a major globalized industry. The Guardian recently estimated that 70 percent of the world's sex toys are manufactured in China, and the CBC's 2001 report suggested the North American market might be worth $400 million to $500 million. More detailed figures can be hard to come by. "In the U.S., all of the companies that manufacture adult novelties, whether they're mom-and-pop or large corporations, are privately held," explains Philip Pearl, publisher and editor in chief of AVN Adult Novelty Business, a trade magazine. "None are required to publish financial information, and none do."

Queen thinks the lack of agreed-upon standards is a major problem. She and the staff at Good Vibrations have often had to fall back on marginally relevant regulations. "I remember trying in the early '90s to track down information on an oil used on beautiful hand-carved wooden dildos -- was it safe to put into the body?" she says. "The closest comparison we could find was the regulation governing wooden salad utensils!"

Taking Things Into Their Own Hands

Metis Black, president of U.S.-based erotic-toy manufacturer Tantus Silicone, has written on the health risks of materials for Adult Novelty Business. "Self-regulation -- eventually we've got to do it," says Black, who adds that creating safe toys is what got her into the business about seven years ago. "Just like children's teething toys, we're going to have to start doing the dialogue" within the industry, Black says, to "discuss what's in toys and how it affects customers." Otherwise, she feels, government regulators will step in.

While the industry wrestles with such issues, some manufacturers and suppliers aren't waiting for regulations. Tony Levine, founder of Big Teaze Toys, says he's made his products -- including the cutely discreet, soft-plastic vibrator I Rub My Duckie -- phthalate-free from the start. "While working at Mattel as a toy designer, I was made very aware of the concerns of using only safe materials for children's products," he says. "This training has stuck with me ... We take great pride in using only the materials which meet strict toxicity safety standards for both the U.S. and the E.U."

Meanwhile, if customers select jelly playthings at Babeland, a retailer with stores in Los Angeles, New York City, and Seattle, the staff gives them a tip sheet on phthalates, and recommends using a condom with the toy. "Our goal is to help people make an educated choice, and give out as much information as we can find -- without alarming people," says Abby Weintraub, an associate manager at the company's Soho store. Babeland staff also steer willing customers toward phthalate-free alternatives, such as hard plastic, or the silicone substitute VixSkin. Some manufacturers are also using thermoplastic elastomers instead of PVC. Vibratex recently reformulated the popular Rabbit Habit dual-action vibrator -- made famous on Sex and the City -- with this material. Vibratex co-owner Daniel Martin says the company has always used "superior grade," stable PVC formulations, and still considers the products safe, but acknowledges that customers are eager for phthalate-free tools.
While alternative materials can be more expensive, Weintraub says when people have the option of choosing them, many do. The owners of the Smitten Kitten, a Minneapolis-based retailer, opted not to carry jellies, cyberskins, or other potentially toxic toys at all when they opened about two years ago. "They're dangerous to human health, to the environment," says co-owner Jennifer Pritchett. "It's part of our philosophy to put good things in the world, and it's counter to that to sell things that are toxic."

No Sex Please, We're Skittish

So what are the other alternatives for eco-conscious pleasure-seekers? The most ecologically correct choices may be metal or hardened glass dildos -- which, with their elegant, streamlined shapes (and sometimes hefty price tags) can double as modernist sculptures if you grow weary of their sensual charms. "The glass is going to be more lasting, possibly safer, and less toxic than something that's plastic," confirms Babeland marketing manager Rebecca Suzanne.

And the eco-choices don't stop there. If you want to do your part for conservation while getting a buzz, go for the Solar Vibe, a bullet vibrator that comes wired to a small solar panel. Some vibrators come with rechargeable power packs, says Suzanne, "which is a little bit better alternative to the typical battery-run toy, where you just toss the batteries ... into the landfill."

What about accessories? The Smitten Kitten takes pride in its "animal-friendly" inventory of bondage and fetish gear. "We have some floggers that are made of nylon rope ... natural rope, and rubber," says Pritchett. "The same with the paddles, collars, cuffs, and whatnot. Totally leather-free, animal-product-free."

A few manufacturers are bringing green values directly to the adult-toy market via products that might not be out of place in the cosmetics aisle of a natural-foods mega-retailer. Offerings include Body Wax's candles made from soy and essential oils, and Sensua Organic's fruit-flavored or unflavored lubes -- one of a few lubricant lines touting either organic or all-natural formulations. "People enjoy having the option," says Weintraub. "It's like, 'I use organic face wash. Maybe I want to use organic lube, too.'"

Pritchett feels health and eco-conscious retailers are a shopper's best ally for staying safe and healthy. "So many of us are used to shopping for organic food, or ecologically safe building products, or cosmetics," she says. When people realize it's possible to shop for sex toys the same way, "you can see a light bulb go off -- they realize it's a consumer relationship and they can and should demand better products."

Choosing the most eco-correct erotic toy can seem fraught with compromises -- more akin to picking the most fuel-efficient automobile than buying a bunch of organic kale. With no government assessment or regulation on the immediate horizon, it's up to you, the consumer, to shop carefully and select a tool that's health-safe, fits your budget, and gets your rocks off. Meanwhile, pack up that old mystery-material toy and send it back to the manufacturer with a note that they can stick it where the sun don't shine.

Emily Gertz has written on environmental politics, business, and culture for Grist, BushGreenwatch, and other independent publications. She is a regular contributor to WorldChanging.
From checker at panix.com Sun Jan 1 02:43:22 2006
From: checker at panix.com (Premise Checker)
Date: Sat, 31 Dec 2005 21:43:22 -0500 (EST)
Subject: [Paleopsych] UPI: New Map Of Asia Lacks US
Message-ID:

New Map Of Asia Lacks US
http://www.spacewar.com/news/superpowers-05zd.html

[AFP photo caption: Former Malaysian premier Mahathir Mohamad points at his anti-war badge after a press conference at his office in Putrajaya, 07 December 2005. Australia's hard-won entry into the inaugural East Asia summit was soured 07 December after former Malaysian premier Mahathir Mohamad said Canberra would likely be bossy and dilute the grouping's clout.]

By Martin Walker
Washington (UPI) Dec 08, 2005

The United States will not take part in next week's East Asia summit, but, to paraphrase a former secretary of state's phrase about the Balkan wars, the Americans most certainly have a dog in this fight. There is a fight under way at the summit, albeit a polite and diplomatic tussle. The Japanese, with discreet but potent American backing, have already ensured that the original plan of the former Malaysian premier for a purely Asian summit was blocked. Australia and New Zealand will now be taking part in the forum, to the fury of the still-influential Mahathir Mohamad.

"We are not going to have an East Asian summit. We are going to have an East Asia-Australasia summit," Mahathir told a specially convened news conference last week to complain that the presence of Australia and New Zealand subverted his dream of a genuinely Asian forum. "Now Australia is basically European and it has made clear to the rest of the world it is the deputy sheriff to America and therefore, Australia's view would represent not the East but the views reflecting the stand of America," Mahathir added.

There was also some reluctance, discreetly fostered by China, to admit India to what was intended to be an East Asian club, but India (like Russia, but not the United States) was prepared to sign the Association of South-East Asian Nations' Treaty of Amity and Cooperation in Southeast Asia, which ASEAN nations call "the admission ticket" to the summit. A report in China's People's Daily noted this week that Russia's inclusion in the club was "simply a matter of time," and Russia will hold a separate bilateral meeting with ASEAN immediately before the summit. But it remains significant that the United States, as the region's security guarantor for decades and as its biggest market, is not welcome.

The summit is clearly emerging as an important building block in the new economic, security and political structure of Asia that is evolving, and for obvious reasons this structure is heavily influenced by China's explosive economic growth, the new reality to which the whole of Asia is learning to adapt. As China's People's Daily noted this week, to explain the problems of drafting a joint communique, the Kuala Lumpur Declaration, from the summit: "According to insiders, some countries including Thailand sided with China over the claim that 'this entity must take ASEAN + 3 (Japan, China, Republic of Korea) as its core' and demanded no mention of community in the draft. While others led by Japan hope to write into the draft 'to build a future East Asia Community' and include the names of the 16 countries. By doing so, ASEAN diplomats believe, Japan is trying to drag countries outside this region such as Australia and India into the community to serve as a counterbalance to China.
"To grab the upper hand at the meeting, analysts say, Japan would most probably dish out the 'human rights' issue and draw in the United States, New Zealand and Australia to build up U.S., Japan-centered Western dominance," the People's Daily added. "At the same time, it will particularly highlight the differences in political and economic systems between developed countries such as Australia, New Zealand and the ROK and developing ones including China and Vietnam, in an attempt to crumble away cooperative forces and weaken Chinese influence in East Asia." The summit, to be held in Kula Lumpur, Malaysia, on Dec. 14, will include Australia, New Zealand, China, India, Japan, the Republic of Korea and the 10 members of ASEAN -- Singapore, Malaysia, Thailand, Myanmar, Philippines, Indonesia, Cambodia, Laos, Vietnam and Brunei. The eventual goals of the summit are huge. Japan's Foreign Minister Taro Aso said this week in a speech in Tokyo that: "Japan believes we should bring into being the East Asia Free Trade Area and the East Asia Investment Area in order to move us even one step closer to regional economic integration." Eventually, he has in mind (as do many of the ASEAN countries) something similar to the process of integration through trade that created over the past 50 years the present European Union. It will take a long time, and endless negotiations, and Aso's speech also laid out the immediate agenda for economic integration. "In Asia, the fact is that there are multiple factors inhibiting investment, including the existence of direct restrictions on investment, insufficient domestic legal frameworks, difficulties in the implementation of laws, inadequacy of the credit system, and others, particularly the complete inadequacy of protections for intellectual property rights," Aso said. India, with backing from Australia, sees the summit paving the way for an eventual Asian free-trade zone, though it remains cool to any grander designs for security or political integration along EU lines. China, which has said little about the kind of community it wants to see, mainly wants to ensure that no Asian gathering takes place without its increasingly overwhelming presence. So what is emerging, in America's absence, looks to be three distinct camps of a potentially uncomfortable assembly. The Australians and Indians and Japanese, and some of the more Western-minded ASEAN members, want to focus on economic cooperation and trade, but within the overall framework of the World Trade Organization, plus useful collaboration in areas like common action against avian flu. This group also wants to retain the current role of the United States as the region's key security guarantor. Then there is China, which evidently assumes that its economic prowess will eventually ensure that the East Asian summit, the region's economy and its security system are all dominated by Beijing, and not necessarily in an aggressive way. Still, Beijing wants this process to develop on China's own terms, for example this week ruling out the usual trilateral meeting with Japan and South Korea because of its complaints that Japan is not sufficiently remorseful for its actions in World War II. And finally there are the original ASEAN members, uncomfortably aware that they are now part of something far bigger than all of them. 
They understandably dread the prospect of great power rivalry between China and India, or between China and the United States, and hope that trade links and diplomatic structures like the summit process will ensure that such rivalries do not get out of hand. Some local analysts think that because of these fundamental differences the East Asian summit process is unlikely to endure. One Malaysian scholar has called it "an empty shell unable to yield any substantial results," and Indonesia's Jakarta Post published a decidedly gloomy editorial this week.

"What we will actually see is not what East Asian leaders have long dreamed of, that is an integrated regional framework of cooperation, but a community marked rather by suspicion, distrust, individualism and perhaps unwillingness to sacrifice a minimum of national autonomy for the sake of pursuing collective and collaborative action," the paper commented.

If that gloomy forecast holds good, that would not displease the United States, instinctively suspicious of any international body designed to exclude it. But if this East Asia summit process, filled with reliable American friends, fails to prosper, something much less welcome to Washington, and perhaps more to the taste of America's critics like Malaysia's Mahathir, will almost certainly emerge to fill the vacuum.

From checker at panix.com Sun Jan 1 02:43:32 2006
From: checker at panix.com (Premise Checker)
Date: Sat, 31 Dec 2005 21:43:32 -0500 (EST)
Subject: [Paleopsych] Live Science: Happiness in Old Age Depends on Attitude
Message-ID:

Happiness in Old Age Depends on Attitude
http://www.livescience.com/humanbiology/051212_aging_happy.html
By Robert Roy Britt
LiveScience Managing Editor
posted: 12 December 2005 01:16 pm ET

Happiness in old age may have more to do with attitude than actual health, a new study suggests. Researchers examined 500 Americans aged 60 to 98 who live independently and had dealt with cancer, heart disease, diabetes, mental health conditions or a range of other problems. The participants rated their own degree of successful aging on a scale of 1-10, with 10 being best. Despite their ills, the average rating was 8.4.

"What is most interesting about this study is that people who think they are aging well are not necessarily the (healthiest) individuals," said lead researcher Dilip Jeste of the University of California at San Diego. "In fact, optimism and effective coping styles were found to be more important to successfully aging than traditional measures of health and wellness," Jeste said. "These findings suggest that physical health is not the best indicator of successful aging -- attitude is."

The finding may prove important for the medical community, which by traditional measures would have considered only 10 percent of the study members to be aging successfully. "The commonly used criteria suggest that a person is aging well if they have a low level of disease and disability," Jeste said. "However, this study shows that self-perception about aging can be more important than the traditional success markers."

Health and happiness may indeed be largely in the mind. A study released last year found that people who described themselves as highly optimistic a decade ago had lower rates of death from cardiovascular disease and lower overall death rates than strong pessimists. Research earlier this year revealed that the sick and disabled are often as happy as anyone else.
The new study also showed that people who spent time each day socializing, reading or participating in other hobbies rated their aging satisfaction higher. "For most people, worries about their future aging involve fear of physical infirmity, disease or disability," Jeste said. "However, this study is encouraging because it shows that the best predictors of successful aging are well within an individual's control."

The results, announced today, were reported at a meeting of the American College of Neuropsychopharmacology.

From checker at panix.com Sun Jan 1 02:43:45 2006
From: checker at panix.com (Premise Checker)
Date: Sat, 31 Dec 2005 21:43:45 -0500 (EST)
Subject: [Paleopsych] Live Science: Human Gene Changes Color of Fish
Message-ID:

Human Gene Changes Color of Fish
http://www.livescience.com/animalworld/051215_fish_color.html

[No bawling about the dangers of racism here, either.]

By Bjorn Carey
LiveScience Staff Writer
posted: 15 December 2005 02:05 pm ET

Scientists have changed mutated, golden-colored zebrafish to a standard dark-striped, yellowish-white variety by inserting the genetic information for normal pigmentation into young fish. In an interesting twist, they also found that inserting a similar human version of the pigment gene resulted in the same color change.

As with humans, zebrafish skin color is determined by pigment cells, which contain pigment granules called melanosomes. The number, size and darkness of melanosomes per pigment cell influence the color of skin. For example, people of European descent have fewer, smaller, and lighter melanosomes than people of West African ancestry, and Asians fall somewhere in between. The golden zebrafish variant had fewer, smaller, and less heavily pigmented melanosomes than normal fish.

The mutation

Keith Cheng of Penn State College of Medicine and his colleagues determined that a dysfunctional, mutated gene was not producing the protein needed to make melanosomes. "They have a mutation in the gene which causes the protein machinery to say 'stop,'" Cheng told LiveScience. Cheng's team found that when they inserted the normal version of the gene into two-day-old embryos of the golden fish, the fish were able to produce melanosomes, which darkened their skin to the normal color within a few days.

Next, the researchers searched HapMap, an online database of human genetic variation, and found a similar gene for melanosome production in humans. So they inserted the human gene into golden zebrafish embryos and again changed their skin color to the darker version. "We presume that they got darker because of similar function of the inserted gene which normally produces the more abundant, larger, and darker melanosomes," Cheng said.

Human mutation?

It appears that like the golden zebrafish, light-skinned Europeans also have a mutation in the gene for melanosome production, resulting in less pigmented skin. Scientists suspect variations of this gene may also cause blue eyes and light hair color in some humans. However, Cheng said, it's important to point out that the mutation in the human and zebrafish genes is different--while the zebrafish version fails completely to produce the protein to make melanosomes, the mutated human version still works, just not quite as well.
The discovery could lead to advancements in targeting a treatment for malignant melanoma--the most deadly form of skin cancer--as well as research on ways to modify skin color without damaging it by tanning or the use of harsh chemical lighteners. This research is detailed in the Dec. 16 issue of the journal Science.

[Image caption: The normal zebrafish above has darker stripes than the golden zebrafish below. The insets show that the golden zebrafish has fewer, smaller and less dense pigment-filled compartments called melanosomes than the normal zebrafish.]

From checker at panix.com Sun Jan 1 02:43:54 2006
From: checker at panix.com (Premise Checker)
Date: Sat, 31 Dec 2005 21:43:54 -0500 (EST)
Subject: [Paleopsych] Boston Consulting Group: Brain Size, Group Size, and Language
Message-ID:

Brain Size, Group Size, and Language
http://www.bcg.com/strategy_institute_gallery/gorilla2.jsp

Summary

Evidence in primates suggests that the size of social groups is constrained by cognitive capacity as measured by brain size. After a point, the number and nature of group relationships become too complex, and groups tend to grow unstable and fission. Based on these projections, human beings should reach a "natural" cognitive limit when group size reaches about 150. There is extensive empirical evidence of social groupings of about this size in the anthropological literature. It is suggested that language arose as a means of enabling social interactions in large groups as a more efficient substitute for one-on-one social grooming in primates.

-------------

Brain Size, Group Size, and Language

Why do people have such big brains? After all, it is very expensive to maintain this organ - while it only accounts for about 2% of adult body weight, the human brain consumes about 20% of total energy output.
Professor Robin Dunbar has advanced a theory relating brain size in primates to the size of social groups and to the evolution of language in humans. This work supports some of the suppositions of the "Machiavellian Intelligence Hypothesis," which states that intelligence first evolved for social purposes (see Byrne, Richard & Whiten, Andrew (1988) Machiavellian Intelligence. Oxford: Clarendon Press.) This rebuts competing arguments linking brain size to more sophisticated food gathering and extended home range size.

Brain size as a determinant of group size in primates

Dunbar plots data on brain size (the measure he uses is the ratio of the neocortex - the "thinking" part of the brain - to total brain size) versus observed social group sizes for 36 genera of primates. He obtains a very good fit for the data (r-squared = 0.764, P < 0.001):

Log10(N) = 0.093 + 3.389 * Log10(CR)

where N is the mean group size, and CR is the neocortex ratio. The results are plotted on a log-log scale.

The fact that brain size in primates is closely related to group size implies that animals have to be able to keep track of an increased information load represented by these larger social groups. Note that the relationship of brain size to group size in the model is not linear, which it would be if the cohesion of the group depended only on each individual's relationship to all the other members of the group. The fact that the relationship is logarithmic implies that the task of information processing is more complex: each animal has to keep track not just of its own relationships to every member of the group but also the third party relationships among other group members. At some point, the complexity of these relationships exceeds the animals' mental ability to deal with them. Several primate societies, such as chimpanzees, are known to become unstable and to fission when the group size exceeds a certain level. It may be that there is an upper limit on group size set by cognitive constraints.

Implications for human group size

While the "natural" group size for humans is not known, the size of our brains is (neocortex ratio = 4.1). Plotting this as the independent variable on Dunbar's regression line yields a group size of 147.8 for homo sapiens.
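[A quick numerical check of that figure, as a minimal Python sketch: it simply evaluates the regression quoted above. The coefficients (0.093 and 3.389) and the human neocortex ratio of 4.1 are taken from the article; the function name is illustrative, not Dunbar's.]

import math

# Dunbar's regression, as quoted above:
#   Log10(N) = 0.093 + 3.389 * Log10(CR)
# where N is the mean social group size and CR is the neocortex ratio
# (the article describes CR as neocortex size relative to total brain size).

def predicted_group_size(neocortex_ratio):
    """Predict mean social group size N from a neocortex ratio CR."""
    log_n = 0.093 + 3.389 * math.log10(neocortex_ratio)
    return 10 ** log_n

# Homo sapiens, with a neocortex ratio of 4.1:
print(round(predicted_group_size(4.1), 1))  # prints 147.8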
Is there any empirical evidence for natural human group sizes of about 150? Based on his scan of the anthropological literature, Dunbar concludes that there is. One source of evidence is from modern day hunter-gatherer societies (whose way of life best approximates that of our late Pleistocene ancestors of 250,000 years ago - the period when our current brain size is thought to have evolved). Based on this data for groups in Australia, the South Pacific, Africa, North and South America, there appear to be three distinct size classes: overnight camps of 30-50 people, tribes of 500-2,500 individuals, and an intermediate group - either a more permanent village or a defined clan group - of 100-200 people. The mean size of this intermediate group in Dunbar's (admittedly small) sample is 148.4, which matches remarkably well with the prediction of the neocortex size model. This grouping also had the lowest coefficient of variation, which we would expect if this group size is truly subject to an internal constraint (i.e., cognitive capacity), whereas smaller and larger groupings are more unstable. While these intermediate size groups may be dispersed over a wide area much of the time, they gather regularly for group rituals and develop bonds based on direct personal contact. These groups come together for mutual support in times of threat.

Other examples of communal groups of this size abound. The Hutterites, a fundamentalist group that lives and farms communally in South Dakota and Manitoba, regards 150 as the upper limit for the size of a farming community. When the group reaches this size, it is split into two daughter communities. Professional armies, dating from Roman times to the modern day, maintain basic units - the "company" - that typically consist of 100-200 soldiers. Modern psychological studies also demonstrate the size of typical "friendship networks" in this same range. These examples provide further evidence of natural group size constraints. Once the number of individuals rises much beyond the limit of 150, social cohesion can no longer be maintained exclusively through a peer network. In order for stability to be maintained in larger communities, they invariably require some sort of hierarchical structure.

Groups and the evolution of language

Dunbar points out that primate groups are held together by social grooming, which is necessarily a one-on-one activity and can absorb a good deal of the animals' time. Maintaining these bonds in groups of 200 individuals would require us to devote about 57% of the day to social grooming. Dunbar proposes that the maintenance of these social bonds in humans was made possible through the evolution of language, which emerged as a more efficient means for "grooming" - since one can talk to several others at once. Dunbar's model predicts a conversation group size for humans (as a substitute for grooming) of 3.8. He then cites evidence that this is indeed about the size actually observed in human conversation groups. Conversations tend to partition into new conversational cliques of about four individuals. Furthermore, studies have shown that a high percentage of ordinary conversations (over 60%) is devoted to discussing personal relationships and social experience - i.e., gossip.

Based on Robin Dunbar 1992, 1993
Contributed by David Gray, 2000

Dunbar, R.I.M. (1992), 'Neocortex Size as a Constraint on Group Size in Primates', Journal of Human Evolution 20, 469-493.
Dunbar, R.I.M. (1993), 'Coevolution of neocortical size, group size and language in humans', Behavioral and Brain Sciences 16, 681-735.

* Group size affects the dynamics of social networks - a community ethos is more likely to arise in human groups smaller than 150
* Network formation depends on social interaction - effective networks arise from regular personal contact that creates a shared sense of community
* Networks can be costly to maintain - time and resources are required to maintain the social ties that support a network
* Hierarchy becomes important as group size grows - more complex societies require authoritarian structures to clarify and enforce social relationships

Keywords: Social networks, primates, intelligence, group size, gossip, grooming, hunter-gatherer societies, Hutterites, army company, fission, bonds, friendship, hierarchy, peers, authority, evolution, language

From checker at panix.com Sun Jan 1 02:44:08 2006
From: checker at panix.com (Premise Checker)
Date: Sat, 31 Dec 2005 21:44:08 -0500 (EST)
Subject: [Paleopsych] Public Interest: Charles Murray: Measuring abortion
Message-ID:

Charles Murray: Measuring abortion
http://www.findarticles.com/p/articles/mi_m0377/is_158/ai_n8680977/print

[It seems that every issue of The Public Interest is available at http://www.findarticles.com/p/articles/mi_m0377.]
[1]FindArticles > [2]Public Interest > [3]Wntr, 2005 > [4]Article Measuring abortion Charles Murray SEX and Consequences: Abortion, Public Policy and the Economics of Fertility is a model of contemporary social science discourse, revealing in one book both how the enterprise should be conducted and its vulnerability to tunnel vision on the big issues. Phillip B. Levine, a professor of economics at Wellesley College, sets out in Sex and Consequences to explore the thesis that the role of abortion is akin to the role of insurance. Legal abortion provides protection from a risk (having an unwanted child), just as auto insurance provides financial protection against the risk of an accident. Legalizing abortion has a main effect of reducing unwanted births, just as auto insurance has a main effect of reducing individuals' losses from auto accidents. But abortion faces the same problems of moral hazard as other kinds of insurance. Just as a driver with complete insurance may be more likely to have an accident, a woman who has completely free access to abortion may be more likely to have an accidental pregnancy. Levine hypothesizes that legislated restrictions on abortion might serve the same purpose as deductibles do on auto insurance--they alter behavior without having much effect on net outcomes. Thus a state with some restrictions on abortion may have no more unwanted births than a state without restrictions, even though the number of abortions is smaller in the restrictive state. The restrictions raise the costs of abortion, and women moderate their behavior to reduce the odds of an unwanted pregnancy. Levine develops his model carefully and with nuance, and eventually wends his way back to conclusions about its empirical validity (it is broadly consistent with the evidence). But the chapters between the presentation of the model and the conclusions about it are not limited to the insurance thesis. They constitute a comprehensive survey of the quantitative work that has been done on the behavioral effects of abortion, incorporating analysis of the abortion experience worldwide as well as in the United States. THE book's virtues are formidable. Levine writes clearly, avoids jargon (or explains what the jargon means when he can't avoid it), and is unfailingly civil in characterizing the positions in the abortion debate. He is judicious, giving the reader confidence that he is not playing favorites when the data are inconclusive or contradictory. The breadth and detail of the literature review are exemplary. The book is filled with convenient summaries of material that could take a researcher weeks to assemble--a table showing the differences in abortion policy across European countries plus Canada and Japan, for example. Levine also gets high marks for one of the most challenging problems for any social scientist who is modeling complex human behavior: making the model simple enough to be testable while not losing sight of the ways in which it oversimplifies the underlying messiness of human behavior. The book's inadequacies reflect not so much Levine's failings as the nature of contemporary social science. Abortion policy is one of the great moral conundrums of our time. Anyone who is not the purest of the pure on one side or the other has had to wrestle with the moral difference (or whether there even is one) between destroying an embryo when it is a small collection of cells and when it is unmistakably a human fetus. None of the tools in Levine's toolkit can speak to this problem. 
Levine is aware of this, and makes the sensible point that more argumentation on the philosophical issues is not going to get us anywhere. He has picked a corner of the topic where his tools are useful, he says, and that's a step in the right direction. Still, as I read his dispassionate review of the effects of abortion policy on the pregnancy rate, I could not help muttering to myself occasionally, "Aside from that, Mrs. Lincoln, how was the play?" *

EVEN granting the legitimacy of looking where the light is good, Sex and Consequences may be faulted for sheering away from acknowledging how much scholars could do to inform the larger issues if they were so inclined. Here is Levine discussing the non-monetary costs of abortion:

The procedure may be physically unpleasant for the patient. She may need to take time off from work and spend time traveling to an abortion provider that may not be local. When she gets to the provider's location, there may be protesters outside the clinic, making her feel intimidated or even scared. If her family and/or friends find out about it, she may feel some stigma. Finally, it should not be overlooked that the procedure may be very difficult psychologically for a woman in a multitude of ways that cannot be easily expressed.

"Cannot be easily expressed"? The woman is destroying what would, if left alone, have become her baby. That's easy enough to express. That Levine could not bring himself to spit out this simple reason why "the procedure may be very difficult psychologically" is emblematic of the tunnel vision that besets contemporary social science. A policy is established that has implications for the most profound questions of what it means to be human, to be a woman, to be a member of a community. What is the most obvious topic for research after such a policy is instituted? To me, a leading candidate is the psychological effects on the adult human beings who are caught up in this problematic behavior. There are ways to study these effects. Quantifiable measures of psychological distress are available--rates of therapy or specific psychological symptoms, for example--not to mention well-established techniques for collecting systematic qualitative data. And yet it appears from Levine's review that the only things social scientists can think of to study are outcomes such as pregnancy rates, abortion rates, birth rates, age of first intercourse, and welfare recipiency. I don't know if there are good studies on psychological effects that Levine thought were outside his topic, or whether the available studies aren't numerous enough or good enough to warrant treatment. I suspect that good studies just aren't available--Levine gives the impression of covering all the outcomes that the literature has addressed.

IS the tunnel vision a result of political correctness or of the inherent limitations of quantitative social science? One should not underestimate the role of technical problems. Counting pregnancy rates is relatively easy; assessing long-term psychological outcomes for women who have abortions is much tougher and more expensive. Studying topics such as the coarsening effect that abortion might have on a society would be tougher yet. But it remains a fact that the overwhelming majority of academics who collect data on the effects of abortion policy are ardently pro-choice. The overwhelming majority of their colleagues and friends are ardently pro-choice.
To set out on a research project that might in the end show serious psychological harm to women who have abortions or serious social harm to communities where abortion rates are high would take more courage and devotion to truth than I have commonly encountered among today's academics. Actually, Levine represents a significant profile in courage. By concluding that restrictions on abortion do not necessarily have "bad" effects (from a pro-choice perspective), Levine is stating a conclusion that most of his fellow academics do not want to hear. What makes the tunnel vision most frustrating is the extent to which it produces uninteresting results. Out of all the tables that Levine presents and all the generalizations he draws from the extant literature, hardly any of the findings fall in the category of "I would never have expected that." Economics does indeed explain many things under the rubric of "make it more expensive and you get less of it, subsidize it and you get more of it." But we knew that already. And when it comes to the less obvious findings, one is seldom looking at large, transforming effects, but at effects that are statistically significant but small in magnitude. Levine is caught in the same bind as all of us who commit quantitative social science: The more precisely we can measure something, the less likely we are to learn anything important. But the less precisely we measure something important, the more vulnerable we become to technical attack. And so it has come to pass that on the great issues that quantitative social scientists might study, we are so often irrelevant. Princeton University Press. 215 pp. $35.00. * There should be a rule requiring anyone reviewing a book on a controversial policy to disclose his own biases. With regard to the morality of abortion, I set the bar high--abortion for any but compelling reasons is in my view morally wrong, and my definition of "compelling" is strict. But I think that governments do a bad job of characterizing where the bar should be, and that, except in extreme cases such as partial-birth abortion, the onus for discouraging abortion should rest with family and community, not laws. My legal position is thus pro-choice. From checker at panix.com Sun Jan 1 03:02:40 2006 From: checker at panix.com (Premise Checker) Date: Sat, 31 Dec 2005 22:02:40 -0500 (EST) Subject: [Paleopsych] Dissent: Ellen Willis: Ghosts, Fantasies, and Hope (fwd) Message-ID: ---------- Forwarded message ---------- Date: Fri, 30 Dec 2005 16:46:16 -0500 (EST) From: Premise Checker To: Premise Checker: ; Subject: Dissent: Ellen Willis: Ghosts, Fantasies, and Hope Ellen Willis: Ghosts, Fantasies, and Hope Dissent Magazine - Fall 2005 http://www.dissentmagazine.org/menutest/articles/fa05/willis.htm [Clearing off the deck: Joel Garreau's new book, _Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies--and What It Means to be Human_ (NY: Doubleday, 2005) has just arrived. I am signed up to review it for _The Journal of Evolution and Technology_ and commenced reading it at once. Accordingly, I have stopped grabbing articles to forward until I have written my review *and* have caught up on my reading, this last going on for however many weeks it takes. I have a backlog of articles to send and will exhaust them by the end of the year.
After that, I have a big batch of journal articles I downloaded on my annual visit to the University of Virginia and will dole out conversions from PDF to TXT at the rate of one a day. I'll also participate in discussions and do up an occasional meme. But you'll be on your own in analyzing the news. I hope I have given you some of the tools to do so. As I go through my backlog of the TLS, New Scientist, and Foreign Policy, I'll send choice articles your way, Foreign Policy first, since that hits the two themes I am striving (vainly!) to concentrate on, "deep culture change" and "persistence of difference."] Picture Imperfect: Utopian Thought for an Anti-Utopian Age by Russell Jacoby Columbia University Press, 2005 211 pp $24.95 For most of my politically conscious life, the idea of social transformation has been the great taboo of American politics. From the smug 1950s to the post-Reagan era, in which a bloodied and cowed left has come to regard a kinder, gentler capitalism as its highest aspiration, this anti-utopian trend has been interrupted only by the brief but intense flare-up of visionary politics known as "the sixties." Yet that short-lived, anomalous upheaval has had a more profound effect on my thinking about the possibilities of politics than the following three decades of reaction. The reason is not (to summarize the conversation-stopping accusations routinely aimed at anyone who suggests that sixties political and cultural radicalism might offer other than negative lessons for the left) that I am stuck in a time warp, nursing a romantic attachment to my youth, and so determined to idealize a period that admittedly had its politically dicey moments. Rather, as I see it, the enduring interest of this piece of history lies precisely in its spectacular departure from the norm. It couldn't happen, according to the reigning intellectual currents of the fifties, but it did. Nor--in the sense of ceasing to cast a shadow over the present--can it really be said to be over, even in this age of "9/11 Changed Everything." That the culture war instigated by the 1960s revolt shows no signs of abating thirty-some years later is usually cited by its left and liberal opponents to condemn it as a disastrous provocation that put the right in power. Yet the same set of facts can as plausibly be regarded as evidence of the potent and lasting appeal of its demand that society embrace freedom and pleasure as fundamental values. For the fury of the religious right is clearly a case of protesting too much, its preoccupation with sexual sin a testament to the magnitude of the temptation (as the many evangelical sex scandals suggest). Meanwhile, during the dot-com boom, enthusiastic young free marketeers fomented a mini-revival of sixties liberationism, reencoded as the quest for global entrepreneurial triumph, new technological toys, and limitless information. Was this just one more example of the amazing power of capitalism to turn every human impulse to its own purposes--or, given the right circumstances, might the force of desire overflow that narrow channel? If freedom's just another word for nothing left to lose, as Janis Joplin-cum-Kris Kristofferson famously opined, this could be a propitious moment to reopen a discussion of the utopian dimension of politics and its possible uses for our time. After all, the left has tried everything else, from postmodern rejection of "master narratives" and universal values to Anybody But Bush.
Russell Jacoby, one of the few radicals to consistently reject the accommodationist pull, has been trying to nudge us toward such a conversation for some time. Picture Imperfect is really part two of a meditation that Jacoby began in 1999 with The End of Utopia, a ferocious polemic against anti-utopian thought. Both books trace the assumptions of today's anti-utopian consensus to the thirties and forties, when liberal intellectuals--most notably Karl Popper, Hannah Arendt, and Isaiah Berlin--linked Nazism and communism under the rubric of totalitarianism, whose essential characteristic, they proposed, was the rejection of liberal pluralism for a monolithic ideology. In the cold war context, Nazism faded into the background; the critique of totalitarianism became a critique of communism and was generalized to all utopian thinking--that is, to any political aspiration that went beyond piecemeal reform. As the logic of this argument would have it, attempts to understand and change a social system as a whole are by definition ideological, which is to say dogmatic; they violate the pluralistic nature of social life and so can only be enforced through terror; ergo, utopianism leads to mass murder. Never mind that passionate radicals such as Emma Goldman condemned the Soviet regime in the name of their own utopian vision or that most of the past century's horrors have been perpetrated by such decidedly non-utopian forces as religious fanaticism, nationalism, fascism, and other forms of racial and ethnic bigotry. (Jacoby notes with indignation that some proponents of the anti-utopian syllogism have tried to get around this latter fact by labeling movements like Nazism and radical Islamism "utopian"--as I write, David Brooks has just made use of this ploy in the New York Times--as if there is no distinction worth making between a universalist tradition devoted to "notions of happiness, fraternity, and plenty" and social "ideals" that explicitly mandate the mass murder of so-called inferior races or the persecution of infidels.) In the post-communist world, Jacoby laments, the equation of utopia with death has become conventional wisdom across the political board. The End of Utopia is primarily concerned with the impact of this brand of thinking on the left; it attacks the array of "progressive" spokespeople who insist that we must accept the liberal welfare state as the best we can hope for, as well as the multiculturalists who have reinvented liberal pluralism, celebrating "diversity" and "inclusiveness" within a socioeconomic system whose fundamental premises are taken for granted. With Picture Imperfect, Jacoby takes on larger and more philosophical questions about the nature of utopia and of the human imagination--too large, actually, to be adequately addressed in this quite short book, which has a somewhat diffuse and episodic quality as a result. Still, the questions are central to any serious discussion of the subject, and it helps that they are framed by a more concrete project: to rescue utopian thought from its murderous reputation as well as from the more mundane charge that it is puritanical and repressive in its penchant for planning out the future to the last detail. 
To this end, Jacoby distinguishes between two categories of utopianism: the dominant "blueprint" tradition, exemplified by Thomas More's eponymous no place or Edward Bellamy's Looking Backward, and the dissident strain he calls "iconoclastic" utopianism, whose concern is challenging the limits of the existing social order and expanding the boundaries of imagination rather than planning the perfect society. While he does not simply write off the blueprinters--fussy as their details may be, he regards them as contributors to the utopian spirit and credits them with inspiring social reforms--his heroes are the iconoclasts, beginning with Ernst Bloch and his 1918 The Spirit of Utopia, and including a gallery of anarchists, refusers, and mystics ranging from Walter Benjamin, Theodor Adorno, and Herbert Marcuse to Gustav Landauer and Martin Buber. The iconoclastic tradition is mainly Jewish, and Jacoby, in an interesting bit of discursus, links it to the biblical prohibition of idolatry. Just as the Jews may neither depict God's image nor pronounce God's name, so the iconoclasts avoid explicit images or descriptions of the utopian future. Further, Jacoby argues, in the Kabbala and in Jewish tradition generally, the Torah achieves full meaning only through the oral law: "The ear trumps the eye. Alone, the written word may mislead: it is too graphic." Similarly, the future of the iconoclasts is "heard and longed for" rather than seen. Here, Jacoby's analysis intersects with a fear he has long shared with his Frankfurt School mentors--that a mass culture obsessed with images flattens the imagination and perhaps destroys it altogether. From this perspective, the iconoclasts' elision of the image is itself radically countercultural. Is it also impossibly abstract? "The problem today," Jacoby recognizes in his epilogue, "is how to connect utopian thinking with everyday politics." Even as utopianism is condemned as deadly, it is at the same time, and often by the same people, dismissed as irrelevant to the real world. Jacoby will have none of this; he rightly insists, "Utopian thinking does not undermine or discount real reforms. Indeed, it is almost the opposite: practical reforms depend on utopian dreaming." Again, the sixties offers many examples--particularly its most successful social movement, second wave feminism, which achieved mass proportions in response to the radical proposition that men and women should be equals not only under the law or on the job but in every social sphere from the kitchen to the nursery to the bedroom to the street. (As one of the movement's prominent utopians, Shulamith Firestone, put it, the initial response of most women to that idea was, "You must be out of your mind--you can't change that!") Yet it seems likely that the relationship of the utopian imagination and the urge to concrete political activity is not precisely one of cause and effect; rather, both impulses appear to have a common root in the perception that something other than what is is possible--and necessary. We might think of iconoclastic utopians as the inverse of canaries in the mine: if they are hearing the sounds of an ineffable redemption, others may already be at work on annoyingly literal blueprints, and still others getting together for as yet obscure political meetings. So the formulation of the problem may need to be fine-tuned: what is it that fosters, or blocks, that sense of possibility/necessity? 
Why does it seem so utterly absent today (you're out of your mind!), and how can we change that? These questions are an obvious project for a third book, though it's one Jacoby is unlikely to write: he is temperamentally a refusenik, like the iconoclasts he lauds, more attuned to distant hoofbeats than to spoor on the ground that might reward analysis. It is perhaps this bias that has kept him from seeing one reason why the anti-utopian argument has become so entrenched: although there is perversity in it, and bad faith, there is also some truth. Jacoby is no fan of authoritarian communism, but he is wrong in thinking he can simply bracket that disaster or that there is nothing to be learned from it that might apply to utopian movements in general. The striking characteristic of communism was the radical disconnection between the social ideals it professed and the actual societies it produced. Because the contradiction could never be admitted, whole populations were forced to speak and act as if the lies of the regime were true. It is not surprising that victims or witnesses of this spectacle would distrust utopians. Who could tell what even the most steadfast anti-Stalinists might do if they actually gained some power? Who could give credence to phrases like "workers' control" or "women's emancipation" when they had come to mean anything but? Jacoby persuasively analyzes 1984 to show that it was not meant as an anti-socialist tract, yet he never mentions the attacks on the misuse of language that made Orwell's name into an adjective. Communism was corrupted by a scientific (or more accurately, scientistic) theory of history that cast opponents as expendable, a theory of class that dismissed bourgeois democratic liberties as merely a mask for capitalist exploitation, and a revolutionary practice that allowed a minority to impose dictatorship. Similar tropes made their way into the sixties' movements, in, for instance, the argument that oppressors should not have free speech or that the American people were the problem, not the solution, and the proper function of American radicals was to support third world anti-imperialism by any means necessary, including violence. A milder form of authoritarianism, which owed less to Marxism than to a peculiarly American quasi-religious moralism, disfigured the counterculture and the women's movement. If the original point of these movements was to promote the pursuit of happiness, too often the emphasis shifted to proclaiming one's own superior enlightenment and contempt for those who refused to be liberated; indeed, liberation had a tendency to become prescriptive, so that freedom to reject the trappings of middle-class consumerism, or not to marry, or to be a lesbian was repackaged as a moral obligation and a litmus test of one's radicalism or feminism. Just as communism discredited utopianism for several generations of Europeans, the antics of countercultural moralists fed America's conservative reaction. But it's not only corruption that distorts the utopian impulse when it begins to take some specific social shape. The prospect of more freedom stirs anxiety. We want it, but we fear it; it goes against our most deeply ingrained Judeo-Christian definitions of morality and order. At bottom, utopia equals death is a statement about the wages of sin. 
Left authoritarianism is itself a defense against anxiety--a way to assimilate frightening anarchy into familiar patterns of hierarchy and moral demand--as is the fundamentalist backlash taking place not only in the United States but around the world. Jacoby links the decline of utopian thought to the collapse of communism in 1989, and that is surely part of the story, but in truth the American backlash against utopianism was well underway by the mid-seventies. The sixties scared us, and not only because of Weatherman and Charles Manson. We scared ourselves. How did the sixties happen in the first place? I'd argue that a confluence of events stimulated desire while temporarily muting anxiety. There was widespread prosperity that made young people feel secure, able to challenge authority and experiment with their lives. There was a vibrant mass mediated culture that, far from damping down the imagination, transmitted the summons to freedom and pleasure far more broadly than a mere political movement could do. (Jacoby is on to something, though, about the importance of the ear: the key mass cultural form, from the standpoint of inciting utopianism, was rock and roll.) There was a critical mass of educated women who could not abide the contradiction between the expanding opportunities they enjoyed as middle-class Americans and the arbitrary restrictions on their sex. There was the advent of psychedelics, which allowed millions of people to sample utopia as a state of mind. Those were different times. Today, anxiety is a first principle of social life, and the right knows how to exploit it. Capital foments the insecurity that impels people to submit to its demands. And yet there are more Americans than ever before who have tasted certain kinds of social freedoms and, whether they admit it or not, don't want to give them up or deny them to others. From Bill Clinton's impeachment to the Terri Schiavo case, the public has resisted the right wing's efforts to close the deal on the culture. Not coincidentally, the cultural debates, however attenuated, still conjure the ghosts of utopia by raising issues of personal autonomy, power, and the right to enjoy rather than slog through life. In telling contrast, the contemporary left has not posed class questions in these terms; on the contrary, it has ceded the language of freedom and pleasure, "opportunity" and "ownership," to the libertarian right. Our culture of images notwithstanding, it cannot fairly be said that Americans' capacity for fantasy is impaired, even if it takes sectarian and apocalyptic rather than utopian forms. If anxiety is the flip side of desire, perhaps what we need to do is start asking ourselves and our fellow citizens what we want. The answers might surprise us. Ellen Willis writes on cultural politics and political culture and directs the Cultural Reporting and Criticism program in the Department of Journalism at New York University. She is currently at work on a book about the mass psychology of contemporary politics. From anonymous_animus at yahoo.com Sun Jan 1 19:51:28 2006 From: anonymous_animus at yahoo.com (Michael Christopher) Date: Sun, 1 Jan 2006 11:51:28 -0800 (PST) Subject: [Paleopsych] shame in the Bible In-Reply-To: <200601011900.k01J0ce29299@tick.javien.com> Message-ID: <20060101195128.5946.qmail@web36808.mail.mud.yahoo.com> >>So amongst the knowledge of good and evil that came from eating the forbidden fruit is that nudity is shameful.<< --Or maybe rather, that knowledge of one's nudity (i.e. shame) is shameful. 
My favorite theory about the Fall is that it was an allegory for the tendency of human beings to judge one another. To have knowledge of good and evil is to be a judge, and to carry out that judgment against another person is to play God. The message is: don't play God. In that context, awareness of nudity is a symbol for awareness of one's own hubris and arrogance in the presence of a greater judge. A bit like how many politicians would feel if their acts were exposed to the public, with no protection by secrecy or status. To be naked before one's enemies was to be judged without the protection of status symbols (uniform, title, etc). Michael From checker at panix.com Sun Jan 1 23:11:08 2006 From: checker at panix.com (Premise Checker) Date: Sun, 1 Jan 2006 18:11:08 -0500 (EST) Subject: [Paleopsych] New Left Review: Eric Hobsbawm: Identity Politics and the Left Message-ID: Eric Hobsbawm: Identity Politics and the Left New Left Review 217, May/June 1996 [This is a significant article by an old-line 20th century British leftist. It deplores the replacement of "equality and social justice" as the essential aim of the Left with identity politics and mourns the disappearance of universalism on the Left. [The article, nearly a decade old, should be read carefully. He states, "Since the 1970s there has been a tendency-an increasing tendency-to see the Left essentially as a coalition of minority groups and interests: of race, gender, sexual or other cultural preferences and lifestyles, even of economic minorities such as the old getting-your-hands-dirty, industrial working class have now become." [Since then, the trends he deplores have been exacerbated, with universalism further in retreat. It is now getting to the point where Whites are starting their own identity politics. [Hobsbawm calls "equality and social justice" the essential defining characteristic of the Left, and this was true--or rather equality formed the *principal* Left-Right divide--but only for a while after the nearly universally recognized failure of central planning. The failure of egalitarian politics is becoming nearly as manifest as the failure of central planning. What is replacing equality, I have been arguing repeatedly, as the new major Left-Right divide in politics is universalism (on the Right, now taking the form of spreading "democratic capitalism" to the world or else the universal truths of one religion or another) and particularism (on the Left, now not very coherent, except to resist Rightist universalism). [Hobsbawm is quite correct to say that the Left in Britain degenerated into rent-seeking for higher wages for those who happen to be unionized. (Unionization simply cannot, and never could, raise wages overall in a competitive economy, but that's another story.) And the Central Planner in him remains in his Unchecked Premise that, while it is true that identities are multiple and fluid--but only to a degree, only to a degree--a Central Planner can make them what he will. [I could argue that capitalism is defective, in that it rewards inventors, entrepreneurs, capitalists, and businessmen too small a share of what they contribute to society (far less than their marginal product), while the workers collect nearly their full marginal product and that "social justice" demands regressive taxes.
But all this would serve only to continue 20th-century Rightist arguments, coming down on the side of inequality rather than equality. [The politics of the 21st century will move away from the increasingly dead issue of equality. Hobsbawm writes that "the emergence of identity politics is a consequence of the extraordinarily rapid and profound upheavals and transformations of human society in the third quarter of this century," and quotes Daniel Bell as noting that "the breakup of the traditional authority structures and the previous affective social units-historically nation and class...make the ethnic attachment more salient." [But identity is not just a matter of politics and rent-seeking coalitions. Identity is becoming ever more salient, for it provides islands of stability in a world where everything else changes. This will only increase as change itself increases. This is deep culture change indeed, and the inevitable emergence of political entrepreneurs to form rent-seeking coalitions is a small aspect of this. [So read the article, not for the politics or for Hobsbawm's nostalgia for 20th century Leftist politics (but 1996, the date of the article, was still in the last century!). Try to think about the sociology of identity, how individuals will remake their identities to create new islands of stability, and how those with a particular identity, or mixture of them (as Hobsbawm quite correctly emphasizes--he is at some level a Public Choice man himself), will react to those with other identities. [Think, in other words, how those with particular enhancements will deal socially with those of different enhancements or with no enhancements?] -------------- My lecture is about a surprisingly new subject. [*] We have become so used to terms like 'collective identity', 'identity groups', 'identity politics', or, for that matter 'ethnicity', that it is hard to remember how recently they have surfaced as part of the current vocabulary, or jargon, of political discourse. For instance, if you look at the International Encyclopedia of the Social Sciences, which was published in 1968-that is to say written in the middle 1960s-you will find no entry under identity except one about psychosocial identity, by Erik Erikson, who was concerned chiefly with such things as the so-called 'identity crisis' of adolescents who are trying to discover what they are, and a general piece on voters' identification. And as for ethnicity, in the Oxford English Dictionary of the early 1970s it still occurs only as a rare word indicating 'heathendom and heathen superstition' and documented by quotations from the eighteenth century. In short, we are dealing with terms and concepts which really come into use only in the 1960s. Their emergence is most easily followed in the USA, partly because it has always been a society unusually interested in monitoring its social and psychological temperature, blood-pressure and other symptoms, and mainly because the most obvious form of identity politics-but not the only one-namely ethnicity, has always been central to American politics since it became a country of mass immigration from all parts of Europe. Roughly, the new ethnicity makes its first public appearance with Glazer and Moynihan's Beyond the Melting Pot in 1963 and becomes a militant programme with Michael Novak's The Rise of the Unmeltable Ethnics in 1972.
The first, I don't have to tell you, was the work of a Jewish professor and an Irishman, now the senior Democratic senator for New York; the second came from a Catholic of Slovak origin. For the moment we need not bother too much about why all this happened in the 1960s, but let me remind you that-in the style-setting USA at least-this decade also saw the emergence of two other variants of identity politics: the modern (that is, post suffragist) women's movement and the gay movement. I am not saying that before the 1960s nobody asked themselves questions about their public identity. In situations of uncertainty they sometimes did; for instance in the industrial belt of Lorraine in France, whose official language and nationality changed five times in a century, and whose rural life changed to an industrial, semi-urban one, while their frontiers were redrawn seven times in the past century and a half. No wonder people said: 'Berliners know they're Berliners, Parisians know they are Parisians, but who are we?' Or, to quote another interview, 'I come from Lorraine, my culture is German, my nationality is French, and I think in our provincial dialect'. [1] Actually, these things only led to genuine identity problems when people were prevented from having the multiple, combined, identities which are natural to most of us. Or, even more so, when they are detached 'from the past and all common cultural practices'. [2] However, until the 1960s these problems of uncertain identity were confined to special border zones of politics. They were not yet central. They appear to have become much more central since the 1960s. Why? There are no doubt particular reasons in the politics and institutions of this or that country-for instance, in the peculiar procedures imposed on the USA by its Constitution-for example, the civil rights judgments of the 1950s, which were first applied to blacks and then extended to women, providing a model for other identity groups. It may follow, especially in countries where parties compete for votes, that constituting oneself into such an identity group may provide concrete political advantages: for instance, positive discrimination in favour of the members of such groups, quotas in jobs and so forth. This is also the case in the USA, but not only there. For instance, in India, where the government is committed to creating social equality, it may actually pay to classify yourself as low caste or belonging to an aboriginal tribal group, in order to enjoy the extra access to jobs guaranteed to such groups. The Denial of Multiple Identity But in my view the emergence of identity politics is a consequence of the extraordinarily rapid and profound upheavals and transformations of human society in the third quarter of this century, which I have tried to describe and to understand in the second part of my history of the 'Short Twentieth Century', The Age of Extremes. This is not my view alone. The American sociologist Daniel Bell, for instance, argued in 1975 that 'The breakup of the traditional authority structures and the previous affective social units-historically nation and class...make the ethnic attachment more salient'. [3] In fact, we know that both the nation-state and the old class-based political parties and movements have been weakened as a result of these transformations. 
More than this, we have been living-we are living-through a gigantic 'cultural revolution', an 'extraordinary dissolution of traditional social norms, textures and values, which left so many inhabitants of the developed world orphaned and bereft.' If I may go on quoting myself, 'Never was the word "community" used more indiscriminately and emptily than in the decades when communities in the sociological sense become hard to find in real life'. [4] Men and women look for groups to which they can belong, certainly and forever, in a world in which all else is moving and shifting, in which nothing else is certain. And they find it in an identity group. Hence the strange paradox, which the brilliant, and incidentally, Caribbean Harvard sociologist Orlando Patterson has identified: people choose to belong to an identity group, but 'it is a choice predicated on the strongly held, intensely conceived belief that the individual has absolutely no choice but to belong to that specific group.' [5] That it is a choice can sometimes be demonstrated. The number of Americans reporting themselves as 'American Indian' or 'Native American' almost quadrupled between 1960 and 1990, from about half a million to about two millions, which is far more than could be explained by normal demography; and incidentally, since 70 per cent of 'Native Americans' marry outside their race, exactly who is a 'Native American' ethnically, is far from clear. [6] So what do we understand by this collective 'identity', this sentiment of belonging to a primary group, which is its basis? I draw your attention to four points. First, collective identities are defined negatively; that is to say against others. 'We' recognize ourselves as 'us' because we are different from 'Them'. If there were no 'They' from whom we are different, we wouldn't have to ask ourselves who 'We' were. Without Outsiders there are no Insiders. In other words, collective identities are based not on what their members have in common-they may have very little in common except not being the 'Others'. Unionists and Nationalists in Belfast, or Serb, Croat and Muslim Bosnians, who would otherwise be indistinguishable-they speak the same language, have the same life styles, look and behave the same-insist on the one thing that divides them, which happens to be religion. Conversely, what gives unity as Palestinians to a mixed population of Muslims of various kinds, Roman and Greek Catholics, Greek Orthodox and others who might well-like their neighbours in Lebanon-fight each other under different circumstances? Simply that they are not the Israelis, as Israeli policy continually reminds them. Of course, there are collectivities which are based on objective characteristics which their members have in common, including biological gender or such politically sensitive physical characteristics as skin-colour and so forth. However most collective identities are like shirts rather than skin, namely they are, in theory at least, optional, not inescapable. In spite of the current fashion for manipulating our bodies, it is still easier to put on another shirt than another arm. Most identity groups are not based on objective physical similarities or differences, although all of them would like to claim that they are 'natural' rather than socially constructed. Certainly all ethnic groups do. Second, it follows that in real life identities, like garments, are interchangeable or wearable in combination rather than unique and, as it were, stuck to the body. 
For, of course, as every opinion pollster knows, no one has one and only one identity. Human beings cannot be described, even for bureaucratic purposes, except by a combination of many characteristics. But identity politics assumes that one among the many identities we all have is the one that determines, or at least dominates our politics: being a woman, if you are a feminist, being a Protestant if you are an Antrim Unionist, being a Catalan, if you are a Catalan nationalist, being homosexual if you are in the gay movement. And, of course, that you have to get rid of the others, because they are incompatible with the 'real' you. So David Selbourne, an all-purpose ideologue and general denouncer, firmly calls on 'The Jew in England' to 'cease to pretend to be English' and to recognize that his 'real' identity is as a Jew. This is both dangerous and absurd. There is no practical incompatibility unless an outside authority tells you that you cannot be both, or unless it is physically impossible to be both. If I wanted to be simultaneously and ecumenically a devout Catholic, a devout Jew, and a devout Buddhist why shouldn't I? The only reason which stops me physically is that the respective religious authorities might tell me I cannot combine them, or that it might be impossible to carry out all their rituals because some got in the way of others. Usually people have no problem about combining identities, and this, of course, is the basis of general politics as distinct from sectional identity politics. Often people don't even bother to make the choice between identities, either because nobody asks them, or because it's too complicated. When inhabitants of the USA are asked to declare their ethnic origins, 54 per cent refuse or are unable to give an answer. In short, exclusive identity politics do not come naturally to people. It is more likely to be forced upon them from outside-in the way in which Serb, Croat and Muslim inhabitants of Bosnia who lived together, socialized and intermarried, have been forced to separate, or in less brutal ways. The third thing to say is that identities, or their expression, are not fixed, even supposing you have opted for one of your many potential selves, the way Michael Portillo has opted for being British instead of Spanish. They shift around and can change, if need be more than once. For instance non-ethnic groups, all or most of whose members happen to be black or Jewish, may turn into consciously ethnic groups. This happened to the Southern Christian Baptist Church under Martin Luther King. The opposite is also possible, as when the Official IRA turned itself from a Fenian nationalist into a class organization, which is now the Workers' Party and part of the Irish Republic's government coalition. The fourth and last thing to say about identity is that it depends on the context, which may change. We can all think of paid-up, card-carrying members of the gay community in the Oxbridge of the 1920s who, after the slump of 1929 and the rise of Hitler, shifted, as they liked to say, from Homintern to Comintern. Burgess and Blunt, as it were, transferred their gayness from the public to the private sphere. Or, consider the case of the Protestant German classical scholar, Pater, a professor of Classics in London, who suddenly discovered, after Hitler, that he had to emigrate, because, by Nazi standards, he was actually Jewish-a fact of which until that moment, he was unaware. However he had defined himself previously, he now had to find a different identity. 
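[A quick arithmetic check of Patterson's census point above, before moving on. Quadrupling in thirty years implies an annual growth rate of almost 5 per cent; natural increase of around 1 per cent a year (my assumption, purely for illustration) would leave the 1990 count far short of two million, so most of the growth must indeed come from people choosing to re-identify. In code:]

# Back-of-the-envelope check: "Native American" self-reports roughly quadrupling
# from ~0.5M (1960) to ~2M (1990). Round numbers from the text; the ~1% natural
# growth baseline is my own assumption for illustration.
pop_1960, pop_1990, years = 0.5e6, 2.0e6, 30

implied = (pop_1990 / pop_1960) ** (1 / years) - 1
print(f"implied annual growth: {implied:.1%}")        # about 4.7% per year

natural = 0.01
print(f"1990 population at {natural:.0%} natural growth: "
      f"{pop_1960 * (1 + natural) ** years / 1e6:.2f}M")  # about 0.67M
# The missing ~1.3M reflects re-identification -- choice, just as Patterson says.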
The Universalism of the Left What has all this to do with the Left? Identity groups were certainly not central to the Left. Basically, the mass social and political movements of the Left, that is, those inspired by the American and French revolutions and socialism, were indeed coalitions or group alliances, but held together not by aims that were specific to the group, but by great, universal causes through which each group believed its particular aims could be realized: democracy, the Republic, socialism, communism or whatever. Our own Labour Party in its great days was both the party of a class and, among other things, of the minority nations and immigrant communities of mainland Britain. It was all this, because it was a party of equality and social justice. Let us not misunderstand its claim to be essentially class-based. The political labour and socialist movements were not, ever, anywhere, movements essentially confined to the proletariat in the strict Marxist sense. Except perhaps in Britain, they could not have become such vast movements as they did, because in the 1880s and 1890s, when mass labour and socialist parties suddenly appeared on the scene, like fields of bluebells in spring, the industrial working class in most countries was a fairly small minority, and in any case a lot of it remained outside socialist labour organization. Remember that by the time of World War I the social-democrats polled between 30 and 47 per cent of the electorate in countries like Denmark, Sweden and Finland, which were hardly industrialized, as well as in Germany. (The highest percentage of votes ever achieved by the Labour Party in this country, in 1951, was 48 per cent.) Furthermore, the socialist case for the centrality of the workers in their movement was not a sectional case. Trade unions pursued the sectional interests of wage-earners, but one of the reasons why the relations between labour and socialist parties and the unions associated with them, were never without problems, was precisely that the aims of the movement were wider than those of the unions. The socialist argument was not just that most people were 'workers by hand or brain' but that the workers were the necessary historic agency for changing society. So, whoever you were, if you wanted the future, you would have to go with the workers' movement. Conversely, when the labour movement became narrowed down to nothing but a pressure-group or a sectional movement of industrial workers, as in 1970s Britain, it lost both the capacity to be the potential centre of a general people's mobilization and the general hope of the future. Militant 'economist' trade unionism antagonized the people not directly involved in it to such an extent that it gave Thatcherite Toryism its most convincing argument-and the justification for turning the traditional 'one-nation' Tory Party into a force for waging militant class-war. What is more, this proletarian identity politics not only isolated the working class, but also split it by setting groups of workers against each other. So what does identity politics have to do with the Left? Let me state firmly what should not need restating. The political project of the Left is universalist: it is for all human beings. However we interpret the words, it isn't liberty for shareholders or blacks, but for everybody. It isn't equality for all members of the Garrick Club or the handicapped, but for everybody. It is not fraternity only for old Etonians or gays, but for everybody.
And identity politics is essentially not for everybody but for the members of a specific group only. This is perfectly evident in the case of ethnic or nationalist movements. Zionist Jewish nationalism, whether we sympathize with it or not, is exclusively about Jews, and hang-or rather bomb-the rest. All nationalisms are. The nationalist claim that they are for everyone's right to self-determination is bogus. That is why the Left cannot base itself on identity politics. It has a wider agenda. For the Left, Ireland was, historically, one, but only one, out of the many exploited, oppressed and victimized sets of human beings for which it fought. For the IRA kind of nationalism, the Left was, and is, only one possible ally in the fight for its objectives in certain situations. In others it was ready to bid for the support of Hitler as some of its leaders did during World War II. And this applies to every group which makes identity politics its foundation, ethnic or otherwise. Now the wider agenda of the Left does, of course, mean it supports many identity groups, at least some of the time, and they, in turn look to the Left. Indeed, some of these alliances are so old and so close that the Left is surprised when they come to an end, as people are surprised when marriages break up after a lifetime. In the USA it almost seems against nature that the 'ethnics'-that is, the groups of poor mass immigrants and their descendants-no longer vote almost automatically for the Democratic Party. It seems almost incredible that a black American could even consider standing for the Presidency of the USA as a Republican (I am thinking of Colin Powell). And yet, the common interest of Irish, Italian, Jewish and black Americans in the Democratic Party did not derive from their particular ethnicities, even though realistic politicians paid their respects to these. What united them was the hunger for equality and social justice, and a programme believed capable of advancing both. The Common Interest But this is just what so many on the Left have forgotten, as they dive head first into the deep waters of identity politics. Since the 1970s there has been a tendency-an increasing tendency-to see the Left essentially as a coalition of minority groups and interests: of race, gender, sexual or other cultural preferences and lifestyles, even of economic minorities such as the old getting-your-hands-dirty, industrial working class have now become. This is understandable enough, but it is dangerous, not least because winning majorities is not the same as adding up minorities. First, let me repeat: identity groups are about themselves, for themselves, and nobody else. A coalition of such groups that is not held together by a single common set of aims or values, has only an ad hoc unity, rather like states temporarily allied in war against a common enemy. They break up when they are no longer so held together. In any case, as identity groups, they are not committed to the Left as such, but only to get support for their aims wherever they can. We think of women's emancipation as a cause closely associated with the Left, as it has certainly been since the beginnings of socialism, even before Marx and Engels. And yet, historically, the British suffragist movement before 1914 was a movement of all three parties, and the first woman MP, as we know, was actually a Tory.
[7] Secondly, whatever their rhetoric, the actual movements and organizations of identity politics mobilize only minorities, at any rate before they acquire the power of coercion and law. National feeling may be universal, but, to the best of my knowledge, no secessionist nationalist party in democratic states has so far ever got the votes of the majority of its constituency (though the Québecois last autumn came close-but then their nationalists were careful not actually to demand complete secession in so many words). I do not say it cannot or will not happen-only that the safest way to get national independence by secession so far has been not to ask populations to vote for it until you already have it first by other means. That, by the way, makes two pragmatic reasons to be against identity politics. Without such outside compulsion or pressure, under normal circumstances it hardly ever mobilizes more than a minority-even of the target group. Hence, attempts to form separate political women's parties have not been very effective ways of mobilizing the women's vote. The other reason is that forcing people to take on one, and only one, identity divides them from each other. It therefore isolates these minorities. Consequently to commit a general movement to the specific demands of minority pressure groups, which are not necessarily even those of their constituencies, is to ask for trouble. This is much more obvious in the USA, where the backlash against positive discrimination in favour of particular minorities, and the excesses of multiculturalism, is now very powerful; but the problem exists here also. Today both the Right and the Left are saddled with identity politics. Unfortunately, the danger of disintegrating into a pure alliance of minorities is unusually great on the Left because the decline of the great universalist slogans of the Enlightenment, which were essentially slogans of the Left, leaves it without any obvious way of formulating a common interest across sectional boundaries. The only one of the so-called 'new social movements' which crosses all such boundaries is that of the ecologists. But, alas, its political appeal is limited and likely to remain so. However, there is one form of identity politics which is actually comprehensive, inasmuch as it is based on a common appeal, at least within the confines of a single state: citizen nationalism. Seen in the global perspective this may be the opposite of a universal appeal, but seen in the perspective of the national state, which is where most of us still live, and are likely to go on living, it provides a common identity, or in Benedict Anderson's phrase, 'an imagined community' not the less real for being imagined. The Right, especially the Right in government, has always claimed to monopolize this and can usually still manipulate it. Even Thatcherism, the grave-digger of 'one-nation Toryism', did it. Even its ghostly and dying successor, Major's government, hopes to avoid electoral defeat by damning its opponents as unpatriotic. Why then has it been so difficult for the Left, certainly for the Left in English-speaking countries, to see itself as the representative of the entire nation? (I am, of course, speaking of the nation as the community of all people in a country, not as an ethnic entity.) Why have they found it so difficult even to try?
After all, the European Left began when a class, or a class alliance, the Third Estate in the French Estates General of 1789, decided to declare itself 'the nation' as against the minority of the ruling class, thus creating the very concept of the political 'nation'. After all, even Marx envisaged such a transformation in The Communist Manifesto. [8] Indeed, one might go further. Todd Gitlin, one of the best observers of the American Left, has put it dramatically in his new book, The Twilight of Common Dreams: 'What is a Left if it is not, plausibly at least, the voice of the whole people?...If there is no people, but only peoples, there is no Left.' [9] The Muffled Voice of New Labour And there have been times when the Left has not only wanted to be the nation, but has been accepted as representing the national interest, even by those who had no special sympathy for its aspirations: in the USA, when the Rooseveltian Democratic Party was politically hegemonic, in Scandinavia since the early 1930s. More generally, at the end of World War II the Left, almost everywhere in Europe, represented the nation in the most literal sense, because it represented resistance to, and victory over, Hitler and his allies. Hence the remarkable marriage of patriotism and social transformation, which dominated European politics immediately after 1945. Not least in Britain, where 1945 was a plebiscite in favour of the Labour Party as the party best representing the nation against one-nation Toryism led by the most charismatic and victorious war-leader on the scene. This set the course for the next thirty-five years of the country's history. Much more recently, François Mitterrand, a politician without a natural commitment to the Left, chose leadership of the Socialist Party as the best platform for exercising the leadership of all French people. One would have thought that today was another moment when the British Left could claim to speak for Britain-that is to say all the people-against a discredited, decrepit and demoralized regime. And yet, how rarely are the words 'the country', 'Great Britain', 'the nation', 'patriotism', even 'the people' heard in the pre-election rhetoric of those who hope to become the next government of the United Kingdom! It has been suggested that this is because, unlike 1945 and 1964, 'neither the politician nor his public has anything but a modest belief in the capacity of government to do very much'. [10] If that is why Labour speaks to and about the nation in so muffled a voice, it is trebly absurd. First, because if citizens really think that government can't do very much, why should they bother to vote for one lot rather than the other, or for that matter for any lot? Second, because government, that is to say the management of the state in the public interest, is indispensable and will remain so. Even the ideologues of the mad Right, who dream of replacing it by the universal sovereign market, need it to establish their utopia, or rather dystopia. And insofar as they succeed, as in much of the ex-socialist world, the backlash against the market brings back into politics those who want the state to return to social responsibility. In 1995, five years after abandoning their old state with joy and enthusiasm, two thirds of East Germans think that life and conditions in the old GDR were better than the 'negative descriptions and reports' in today's German media, and 70 per cent think 'the idea of socialism was good, but we had incompetent politicians'.
And, most unanswerably, because in the past seventeen years we have lived under governments which believed that government has enormous power, which have used that power actually to change our country decisively for the worse, and which, in their dying days are still trying to do so, and to con us into the belief that what one government has done is irreversible by another. The state will not go away. It is the business of government to use it. Government is not just about getting elected and then re-elected. This is a process which, in democratic politics, implies enormous quantities of lying in all its forms. Elections become contests in fiscal perjury. Unfortunately, politicians, who have as short a time-horizon as journalists, find it hard to see politics as other than a permanent campaigning season. Yet there is something beyond. There lies what government does and must do. There is the future of the country. There are the hopes and fears of the people as a whole-not just 'the community', which is an ideological cop-out, or the sum-total of earners and spenders (the 'taxpayers' of political jargon), but the British people, the sort of collective which would be ready to cheer the victory of any British team in the World Cup, if it hadn't lost the hope that there might still be such a thing. For not the least symptom of the decline of Britain, with the decline of science, is the decline of British team sports. It was Mrs Thatcher's strength, that she recognized this dimension of politics. She saw herself leading a people 'who thought we could no longer do the great things we once did'-I quote her words-'those who believed our decline was irreversible, that we could never again be what we were'. [11] She was not like other politicians, inasmuch as she recognized the need to offer hope and action to a puzzled and demoralized people. A false hope, perhaps, and certainly the wrong kind of action, but enough to let her sweep aside opposition within her party as well as outside, and change the country and destroy so much of it. The failure of her project is now manifest. Our decline as a nation has not been halted. As a people we are more troubled, more demoralized than in 1979, and we know it. Only those who alone can form the post-Tory government are themselves too demoralized and frightened by failure and defeat, to offer anything except the promise not to raise taxes. We may win the next general election that way and I hope we will, though the Tories will not fight the election campaign primarily on taxes, but on British Unionism, English nationalism, xenophobia and the Union Jack, and in doing so will catch us off balance. Will those who have elected us really believe we shall make much difference? And what will we do if they merely elect us, shrugging their shoulders as they do so? We will have created the New Labour Party. Will we make the same effort to restore and transform Britain? There is still time to answer these questions. [*] This is the text of the Barry Amiel and Norman Melburn Trust Lecture given at the Institute of Education, London on 2 May 1996. [1] M.L. Pradelles de Latou, 'Identity as a Complex Network', in C. Fried, ed., Minorities, Community and Identity, Berlin 1983, p. 79. [2] Ibid. p. 91. [3] Daniel Bell, 'Ethnicity and Social Change', in Nathan Glazer and Daniel P. Moynihan, eds., Ethnicity: Theory and Experience, Cambridge, Mass. 1975, p. 171. [4] E.J. Hobsbawm, The Age of Extremes. The Short Twentieth Century, 1914-1991, London 1994, p. 428. [5] O.
Patterson, 'Implications of Ethnic Identification', in Fried, ed., Minorities: Community and Identity, pp. 28-29. [6] O. Patterson, 'Implications of Ethnic Identification', in Fried, ed., Minorities: Community and Identity, pp. 28-29. [7] Jihang Park, 'The British Suffrage Activists of 1913', Past & Present, no. 120, August 1988, pp. 156-7. [8] 'Since the proletariat must first of all acquire political supremacy, must raise itself to be the national class, must constitute itself the nation, it is itself still national, though not in the bourgeois sense.' Karl Marx and Frederick Engels, The Communist Manifesto, 1848, part II. The original (German) edition has 'the national class'; the English translation of 1888 gives this as 'the leading class of the nation'. [9] Gitlin, The Twilight of Common Dreams, New York 1995, p. 165. [10] Hugo Young, 'No Waves in the Clear Blue Water', The Guardian, 23 April 1996, p. 13. [11] Cited in Eric Hobsbawm, Politics for a Rational Left, Verso, London 1989, p. 54. From checker at panix.com Sun Jan 1 23:11:46 2006 From: checker at panix.com (Premise Checker) Date: Sun, 1 Jan 2006 18:11:46 -0500 (EST) Subject: [Paleopsych] World Science: Bees can recognize human faces, study finds Message-ID: Bees can recognize human faces, study finds http://www.world-science.net/exclusives/051209_beesfrm.htm March 30, 2005 Honeybees may look pretty much all alike to us. But it seems we may not look all alike to them. A study has found that the bees can learn to recognize human faces in photos, and remember them for at least two days. The findings toss new uncertainty into a long-studied issue that some scientists considered largely settled, the researchers say: the question of how humans themselves recognize each other's faces. The results also may help lead to better face-recognition software, developed through study of the insect brain, the authors of the new research said. Many researchers traditionally believed the task required a large brain and a specialized area of that brain dedicated to processing face information. The bee finding casts doubt on that, said Adrian G. Dyer, the lead researcher in the study. He recalls that the discovery startled him so much that he called out to a colleague, telling her to come quickly because "no one's going to believe it--and bring a camera!" Dyer said that to his knowledge, the finding is the first time an invertebrate has shown the ability to recognize faces of other species. But not all bees were up to the task: some flunked it, he said, although this seemed due more to a failure to grasp how the test worked than to poor facial recognition specifically. In any case, some humans also can't recognize faces, Dyer noted; the condition is called prosopagnosia. In the bee study, reported in the Dec. 15 issue of the Journal of Experimental Biology, Dyer and two colleagues presented honeybees with photos of human faces drawn from a standard human psychology test. The photos had similar lighting, background colors and sizes and included only the face and neck to avoid having the insects make judgments based on the clothing. In some cases, the people in the picture themselves looked similar. The researchers tried to train the bees to realize that one photo had a drop of a sugary liquid next to it. Different photos came with a drop of bitter liquid instead.
Many bees apparently failed to realize that they should pay attention to the photos at all. But five bees learned to fly toward the photo horizontally in such a way that they could get a good look at it, Dyer reported. In fact, these bees tended to hover a few centimeters in front of the image for a while before deciding where to land. The bees learned to distinguish the correct face from the wrong one with better than 80 percent accuracy, even when the faces were similar, and regardless of where the photos were placed, the researchers found. Also, just like humans, the bees performed more poorly when the faces were flipped upside-down. "This is evidence that face recognition requires neither a specialised neuronal [brain] circuitry nor a fundamentally advanced nervous system," the researchers wrote, noting that the test they used was one with which even humans have some difficulty. Also, "Two bees tested 2 days after the initial training retained the information in long-term memory," they wrote. One scored about 94 percent on the first day and 79 percent two days later; the second bee's score dropped from about 87 to 76 percent during the same time frame. The researchers also checked whether bees performed better for faces that humans judged as being more different. This seemed to be so, they found, but the result didn't reach statistical significance. The bees probably don't understand what a human face is, Dyer said in an email. "To the bees the faces were spatial patterns (or strange looking flowers)," he added. Bees are famous for their pattern-recognition abilities, which scientists believe evolved in order to discriminate among flowers. As social insects, bees are also well known to be able to tell their hivemates apart. But the new study shows that they can recognize human faces better than some humans can--with one-thousandth of the brain cells. This raises the question of how bees recognize faces, and whether they do it differently from the way we do it, Dyer and colleagues wrote. Studies suggest small children recognize faces by picking out specific features that are easy to recognize, whereas adults see the interrelationships among facial features. Bees seem to show aspects of both strategies depending on the study, the researchers added. The findings cast doubt on the belief among some researchers that the human brain has a specialized area for face recognition, Dyer and colleagues said. Neuroscientists point to an area called the fusiform gyrus, which tends to show increased activity during face-viewing, as serving this purpose. But the bee finding "supports the view that the human brain may not need to have a visual area specific for the recognition of faces," Dyer and colleagues wrote. That may be helpful to researchers who develop face-recognition technologies to be used for security at airports and other locations, Dyer noted. The United States is investing heavily in such systems, but they remain primitive. Already, the way that bees navigate is "being used to design self-autonomous aircraft that can fly in remote areas without the need for radio contact or satellite navigation," he wrote in the email. "We show that the miniature brain can definitely recognize faces, and if in the future we can work out the mechanisms by which this is achieved then perhaps there are insights to how to try novel recognition solutions."
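[A note on the accuracy figures above: with even a modest run of two-choice trials, scores like these are easy to distinguish from the 50 percent expected of random guessing. The fragment below is a minimal illustration of that arithmetic, not the researchers' own analysis; the trial count of 50 is an assumption, since the article does not give one.]

from math import comb

def p_at_least(successes, trials, chance=0.5):
    """One-sided probability of scoring this well or better by pure guessing."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(successes, trials + 1))

# 40 correct choices out of an assumed 50 trials (80 percent accuracy):
print(p_at_least(40, 50))  # about 1e-5, far beyond chance

[The same calculation applies to any of the percentages quoted, once a trial count is known.]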
On the other hand, Dyer said, the findings probably don't back up an adage popular in some parts of the world--that you shouldn't kill a bee because its nestmates will remember and come after you. Bees may launch revenge attacks, but they might simply do so because they smell the dead bee, he remarked, adding that that's his speculation only. In any case, "bees don't normally go around looking at faces." From checker at panix.com Sun Jan 1 23:11:58 2006 From: checker at panix.com (Premise Checker) Date: Sun, 1 Jan 2006 18:11:58 -0500 (EST) Subject: [Paleopsych] Hartford Courant: The Mind of the Psychopath: Contours of Evil Message-ID: The Mind of the Psychopath: Contours of Evil http://www.courant.com/news/health/hc-psychopathbrain1218.artdec18,0,209514.story?coll=hc-big-headlines-breaking Researchers Study How The Mind Works When There's No Remorse By WILLIAM HATHAWAY Hartford Courant Staff Writer December 18 2005 Dr. Kent A. Kiehl has interviewed dozens of psychopaths over the years, and as they describe their heinous acts he remains as astonished as he is repulsed. "I think, 'I can't believe this guy is telling me he bashed in his mother's head with a propane tank,'" Kiehl says. Kiehl and a team of researchers at Hartford Hospital's Institute of Living are using brain scans in an attempt to explain the inexplicable: What makes some people absolutely devoid of empathy and remorse? Society needs answers because of the sheer havoc psychopaths create, the researchers say. Superficially charming, psychopaths lie, steal, rape, rob, embezzle, assault and abuse with no compunction, no conscience. And psychopaths are notoriously impervious to rehabilitation. Psychopaths account for a quarter of all prisoners in the United States - and for as much as 50 percent of all violent crime, the researchers estimate. There are also hundreds of thousands of psychopaths in the United States who manage to stay out of prison, but nonetheless dole out immeasurable amounts of pain in homes, schools, even corporate boardrooms. Within the pattern of bright blue and yellow blotches on the brain scans he has taken, Kiehl believes he has found the dark contours of the psychopathic mind. When psychopaths see or hear emotional words or pictures of misery, areas of their brains that should light up like a Christmas tree are dark and devoid of activity. Instead, their brains process information such as a picture of a bereaved mother holding her dead child in the same way they would process a picture of a chair or shovel. Psychopaths seem to know the words, but they can't hear the music, researchers often say. In probing the abyss of the psychopathic mind, Kiehl and others are raising questions about our criminal justice system and our assumptions about human morality. Beyond Bundy Kiehl's own quest began with stories his father, a newspaper editor, told about serial killer Ted Bundy, who grew up in the same Tacoma, Wash., neighborhood as the Kiehls. Bundy was the archetypal psychopath - handsome, disarmingly charming and utterly ruthless. His outwardly clean-cut appearance and his cunning - he volunteered for a suicide hot line and the Republican Party - made Bundy a virtuoso killer. He was known to use crutches as props and feign car trouble to induce young victims to give him a ride. He eventually confessed to more than two dozen murders, but he is thought to have killed dozens more during a spree in the mid- to late 1970s. "The question has always been, 'What makes people do something like that?'" Kiehl said.
Years later, while doing postgraduate studies in neurobiology at the University of California at Davis, Kiehl decided he would try to answer the question. He launched a campaign to get hired in the lab of the guru of psychopathy: Robert D. Hare, now professor emeritus of psychology at the University of British Columbia. Hare told him he "didn't hire Americans." But after a concerted sales pitch, which included a gift of baseball tickets to a Toronto Blue Jays game, Kiehl says Hare relented and hired the young researcher in 1994. It was an auspicious time in psychopathy research. Hare's research had given the nascent field some terminology to use. And new imaging technology was just beginning to open a window onto the dark world of the psychopathic brain. The personality type had been known for centuries. In the 18th century, Frenchman Philippe Pinel coined the term "insanity without delirium" to describe aberrant behavior accompanied by a complete lack of remorse. The study of psychopathy in the United States dates from 1941, when Hervey Cleckley published a book called "The Mask of Sanity" that described psychopaths as unusually intelligent people, characterized by a "poverty of emotions." But it wasn't until Hare devised his psychopathy checklist in 1980 - which he revised in 1991 - that an easily identified set of personality characteristics defined the condition and opened up a field of research. "There was a gut feeling that there was something different" about psychopaths, Hare said. There was. Grading Psychopaths Psychopaths aren't crazy, at least in a traditional medical sense, but they are unfettered by any sense of shame or guilt. Symptoms can show up early in life. Psychopathic children have total disregard for rules and engage in unusually vicious assaults or torture animals. Kiehl has received a federal grant to see whether children diagnosed with "callous conduct disorder" might actually be budding psychopaths. Researchers have come to the conclusion that while a hostile environment can contribute to the development of psychopathy, many psychopaths are born, not made. Studies of twins suggest that psychopathic tendencies can develop even in loving homes. Some studies suggest that male psychopaths outnumber females by about 3 to 1. The general lack of social causes for the disorder is one reason why most experts no longer use the term "sociopath" to describe a psychopath. Researchers say as many as 1 out of every 100 people in the United States may meet the classification of a psychopath; serial killers make up a tiny minority of them. The revised psychopathy checklist, known as the PCL-R, lists 20 traits and behaviors common to the disorder. Experts who are trained in administering the test score subjects with a 0, 1 or 2 on each item on the checklist. Hare said most people might score a 4 on his PCL-R checklist. A person is not designated a psychopath unless he or she scores 30 or more on the scale of 40. The higher the score, the more devastation a psychopath is likely to cause. Somebody who scores a 27 probably wouldn't be a great dinner guest. Psychopaths are pathological liars who crave stimulation, are sexually promiscuous and unable to control their behavior. They typically lack realistic long-term goals. They may be master manipulators, but psychopaths have a hard time concealing their nature from people trained to use the checklist, Kiehl said. Inevitably, they lie, boast or reveal their callousness. "They can't help themselves," he said. 
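[The scoring arithmetic of the PCL-R, as described above, is simple to sketch. The fragment below is only an illustration of the 0-1-2 rating scheme and the cutoff of 30 out of 40 reported in the article; the three item names shown are commonly cited PCL-R traits, but the real instrument has 20 items and is scored only by trained examiners after a long interview and record review.]

# Each of the 20 items is rated 0 (absent), 1 (partially present) or 2 (present).
# Only three illustrative items are shown; the rest are elided.
ratings = {
    "glibness/superficial charm": 2,
    "pathological lying": 2,
    "lack of remorse": 1,
    # ...17 more items on the full checklist
}

def pcl_r_total(ratings):
    assert all(r in (0, 1, 2) for r in ratings.values())
    return sum(ratings.values())

total = pcl_r_total(ratings)  # maximum possible score: 40
print(total, "meets the cutoff" if total >= 30 else "below the cutoff of 30")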
People who deal with psychopaths have observed another shared quality, one not on the checklist or easily measured. There is something different about their eyes. The gaze of the psychopath is disquieting, even frightening, and has been described as cold or penetrating, empty, reptilian, not quite human. They lack any depth to their emotions and the ability to connect emotion to cognition. "They don't quite get it," Hare said. "There is something missing." 'I Never Hurt People' In the mid-1990s, in Canadian prisons, Kiehl began to perfect the art of using the checklist to score prisoners, who were told their interviews would not be shared with law enforcement authorities. In training tapes he recorded, Kiehl, a burly former football player, maintains a steady voice as he peppers subjects with short questions. In one tape, a 30-something man with long sideburns and thinning hair, dressed in a green windbreaker, answers Kiehl's questions with an easy smile, a collegial, confiding, "just between us boys" air. "Sideburns" confesses to bootlegging cigarettes and petty thefts. "Do you have a temper?" asks Kiehl, who is off camera. "Oh yes, explosive," Sideburns answers. "Do you assault people?" "Oh, I never physically hurt people." "What happens when you lose your temper?" "Oh, I can just lose it. Like the time I killed my girlfriend." The blurted truth comes quick as a cobra strike. There is nothing in the man's face or voice to suggest he even recognizes he has told a lie about hurting people. When he relates how he held his girlfriend's head under water in a bathtub, there is no hesitation or pause in his voice, no change in tenor or inflection that hints he is aware the interview has shifted to a different moral ground. "Police said she was already unconscious," he says, as if the statement absolves him of wrongdoing. He changes the subject to all the stolen electronic equipment he gave the woman. For the first time, Sideburns seems a bit worked up. When you steal electronics, he asks, "Do you know how hard it is to find remote controls?" Spotting The Predators Warning the public about the dangers of such psychopaths is a passion for Hare, author of the book "Without Conscience." Hare said zebras and other animals congregating around an African waterhole know to scatter when they see a lion. "There you can identify a predator, but psychopaths don't wear bells around their necks," Hare said. Psychopaths tend to thrive "where the rules are obscure, where there is chaotic upheaval," Hare said. "Countries such as Yugoslavia and the Soviet Union after their breakups were a warm niche for psychopaths, who simply moved in to take advantage of the chaos." A corporation that is disorganized and growing quickly offers the same type of fertile environment, Hare said. In a book tentatively titled "Snakes in Suits" to be published next spring, Hare blames scandals such as the destruction of Enron at least partly on a category of psychopaths who typically know how to stay out of jail. Hare's checklist has provided a generally accepted definition of the psychopath, the "who" and the "what" that allows researchers from different disciplines to study the phenomenon. Hare says his checklist has been both abused and underused. He railed against a judge in Texas, for instance, who has sentenced defendants to death because he has deemed them psychopaths, even though they were never examined by people trained to use his checklist.
But Hare also says many parole boards underutilize psychopathic evaluations when considering whether to release a prisoner. How a prisoner scores on the 20 characteristics of Hare's checklist "is the best predictor of recidivism that we have," said Diana Fishbein, a researcher at RTI International of Research Triangle Park in North Carolina. About half of prisoners released from jail wind up back there within three years, Kiehl said. The number skyrockets to at least 4 in 5 when the prisoner is a psychopath. And psychopaths seem to be immune to any sort of therapy that might better those odds. One study explored whether group therapy might lower the recidivism rate of psychopaths. Sixty percent of untreated psychopaths in the study were back in jail after a year. But 80 percent of psychopaths who participated in group therapy were convicted of another offense in the same period. "They used the sessions to learn how to exploit the emotions of others," Kiehl said. Some people debate the value of using the checklist to determine sentences for individual killers. The recidivism rate is not 100 percent for psychopaths, noted Dr. Michael Norko, director of the Whiting Forensic Division of Connecticut Valley Hospital. And using the checklist is akin to doing a DNA analysis for an incurable disease. What are you going to do if you find it? "It would be different if we had a pill for psychopathy," Norko said. Devoid Of Emotion Kiehl says he believes that brain imaging studies can pinpoint the biological cause of psychopathic behavior and possibly lead to a remedy, perhaps even a psychopathy pill. "If we could develop a treatment for psychopaths, it would alleviate an enormous burden on society," said Kiehl, who is director of the clinical cognitive neuroscience laboratory at the institute's Olin Neuropsychiatry Research Center. He has a theory of where in the brain to look. His previous work showed a peculiar pattern of brain activity in psychopaths when they were presented with different words or images. Using both an electroencephalograph (EEG), which measures electrical activity in the brain, and functional magnetic resonance imaging scans, which measure blood-oxygen levels, Kiehl found striking differences between psychopaths and non-psychopaths in the activity of several regions of the brain. He is particularly intrigued by abnormalities in psychopaths' brains, in what he calls the paralimbic system, a loose organization of brain structures involved in processing emotion. In most people, that picture of a distraught woman holding a dead child will trigger heightened activity in these brain areas, including a region called the amygdala. In contrast, the brains of criminal psychopaths respond much as they would to any inanimate object. Kiehl and other scientists have also found heightened brain activity in the frontal cortex of psychopaths when they are presented with emotionally charged words or images. The frontal cortex helps govern reason and planning. Some scientists have interpreted that as evidence that the root of psychopathic behavior lies in the frontal cortex. But Kiehl and others see it differently. People with injuries to the frontal cortex do not exhibit the goal-directed aggression or callousness often associated with psychopaths, Kiehl says. Kiehl believes psychopaths enlist areas of the frontal cortex to process information that the brain usually processes in its emotional centers.
On his desk at the Institute of Living, Kiehl keeps a replica of a railroad spike, a memento of an 1848 accident that befell a Vermont construction foreman named Phineas Gage. An explosion drove a 3-foot-7-inch tamping iron through Gage's brain. The sheer improbability of his survival - the tamping iron entered under his cheekbone, exited the top of his skull and landed 25 feet away - assured Gage a place in the history of medical oddities. But the changes in his behavior made him famous. Gage, who had been a reliable worker and a sober, churchgoing, devoted family man, became an irresponsible cad, ignoring his wife, children and job. In short, he acted like a psychopath. Kiehl notes that the tamping iron damaged the paralimbic system in Gage's brain, the same areas that seem abnormal in the brains of psychopaths. Kiehl's theory explains, for instance, why psychopaths seldom seem to experience anxiety or fear in the same way normal people do and why they do not fully comprehend the meaning of emotions such as love or compassion. "For a psychopath, it is all cognition," Kiehl said. His lab has received federal grants totaling $6 million for the study of psychopathy. In one study, he is investigating whether one reason that drug abuse treatment programs have a high failure rate in prisoners is that so many psychopaths are enrolled. Psychopaths do not respond to traditional treatment, and Kiehl suspects that, while psychopaths are heavy drug and alcohol abusers, they do not develop the same sort of dependency on drugs as non-psychopaths. To test his hypothesis, he hopes to persuade Connecticut correctional officials to allow his team to study teen and adult inmates. If Kiehl's ideas are borne out by research, they may suggest ways to change psychopathic behavior. For now, Hare believes that any therapeutic approach must appeal to the psychopath's own self-interest because treatments based on an appreciation of somebody else's feelings are bound to fail. Understanding the underlying physiology of the disorder could lead to a drug that might actually restore emotional responses and cure psychopaths, said Dr. James Blair, an expert in psychopathy and a researcher at the National Institute of Mental Health, part of the National Institutes of Health. Blair points out that the symptoms of psychopathy are almost exactly the opposite of symptoms of people who suffer from post-traumatic stress and anxiety disorders - conditions for which treatments now exist. Roots Of Morality Kiehl hopes that by explaining how psychopaths' minds work, he can help arm society with the tools to deal with them. One of his research associates, Jana Schaich-Borg, also wants to answer a more fundamental question: Why are most humans moral in the first place? If Kiehl is correct that a failure of the emotional processing centers of the brain is at the root of psychopathy, then it follows that moral behavior might arise in those same areas. If a pill could create emotional responses in a psychopath, could such a drug also give him a moral core? Schaich-Borg plans to investigate whether psychopaths feel disgust - the deeply ingrained reaction that people in most cultures have about, say, handling feces or having sex with a sibling. She speculates that the areas of the brain that govern disgust in a normal person may also play a role in the formation of more sophisticated moral beliefs, which are absent in psychopaths.
For years, the link between instincts and moral decision-making has been inferred from fictional ethical scenarios. Schaich-Borg offers one example: Five people are tied up on a railroad track and a locomotive barrels toward them. You can save them, but only by pulling a lever and switching the locomotive to a different track, where two other people are tied. Do you pull the lever? People answer instinctively, and study after study shows that they are split right down the middle and argue their positions passionately. "Some people say they won't play God under any circumstance," said Schaich-Borg, who said she personally would pull the lever. But what if you could save the people on both sides of the railway spur by shoving a single man in front of the train? "Nearly everybody says no," she said. But, she said, a psychopath wouldn't care a whit whether the lever was pulled or not. She wants to compare what happens in people's brains when the question is asked. In the pattern of neural activity, she believes she may see the outline of human morality. And those imaging scans may illustrate why predators such as Ted Bundy are a rarity rather than the rule in society. Kiehl says most people probably make moral choices using both rational and emotional parts of their brain. But he and Hare both say much more research needs to be done to shed light into the abyss of the psychopathic mind. "Unless we understand what makes these people tick," Hare says, "we are all going to suffer." A discussion of this story with Courant Staff Writer William Hathaway is scheduled to be shown on New England Cable News each hour Monday between 9 a.m. and noon. From checker at panix.com Sun Jan 1 23:12:09 2006 From: checker at panix.com (Premise Checker) Date: Sun, 1 Jan 2006 18:12:09 -0500 (EST) Subject: [Paleopsych] NYTBR: 'The Man Everybody Knew: Bruce Barton and the Making of Modern America,' by Richard M. Fried Message-ID: 'The Man Everybody Knew: Bruce Barton and the Making of Modern America,' by Richard M. Fried http://www.nytimes.com/2005/12/18/books/review/18kazin.html [I read this bestselling book about how Jesus was a master salesman about thirty years ago and remember it fondly as a lesson that each generation makes Christ over into its own image. I am glad the book is being remembered. I should not be surprised if Mr. Mencken had the same reaction as I did, but I haven't found any trace of his commentary.] Review by MICHAEL KAZIN THE MAN EVERYBODY KNEW Bruce Barton and the Making of Modern America. By Richard M. Fried. Illustrated. 286 pp. Ivan R. Dee. $27.50. IF consumerism is our secular religion, then copywriters are its evangelists. No one in the golden days of the American advertising industry preached the faith more fervently or effectively than Bruce Barton. The affable son of a liberal Protestant minister, he created much of the copy that propelled Batten, Barton, Durstine & Osborn, the agency he helped found, to the top of its industry during the 1920's. Barton always believed the best ads were ones that depicted corporations as the fount of services that transcended the particular product on offer. For General Motors, he composed the inspiring tale of a doctor whose reliable auto sped him to the bedside of a failing young girl. One historian has labeled such ads essential to "creating the corporate soul," and Barton pursued that mission with singular passion. But it was his selling of Jesus that transformed the ad man into a celebrity.
In 1925, Barton published "The Man Nobody Knows," which quickly became an enormous best seller - and one of the most easily ridiculed examples of pop theology ever written. He urged readers to banish the image of the long-haired, "sissified" figure who gazed woefully from Victorian lithographs. Barton's Jesus was a muscular "outdoor man" and a "sociable" fellow in demand at Jerusalem's best banquet tables. More to the point, he was a masterly entrepreneur. Hadn't this humble carpenter "picked up 12 men from the bottom ranks of business and forged them into an organization that conquered the world"? From his father, Barton had learned that a "preacher is really a salesman." The son simply reversed the nouns. On the wings of his prosperity and fame, Barton rose to the inner circle of the Republican Party. He helped to write major speeches for President Calvin Coolidge and to devise the campaign of his successor, Herbert Hoover. Barton refused to become depressed in the months after the stock market crashed. "Anyone who looks gloomily at the business prospects of this country in 1930 is going broke," he predicted. In the late 30's, Barton proved that he could also sell himself. He was twice elected to the House, by huge margins, from the East Side of Manhattan. Down at the Capitol, Barton warned, in a tone of atypical grimness, that a third term for Franklin Roosevelt would mean "the end of freedom." In return, Roosevelt helped sink his 1940 campaign for the Senate. Barton retreated to his agency. Until his death a quarter-century later, he surfaced mostly as an elder statesman for anodyne causes like fighting heart disease and urging brotherhood between Christians and Jews. It is surprising to learn that this is the first biography of Barton, whose name was indeed once familiar to any American who read a daily paper. Richard M. Fried, a professor of history at the University of Illinois at Chicago, provides a suitably brisk, anecdote-filled account, which focuses on how the master publicist's clever optimism suffused his words - whether they were designed to promote Christ, a corporation or the Republican Party. Fried concludes that Barton was a more ambivalent figure than he seemed to his contemporaries. He extolled consumerism, yet fretted about the loss of the old "values of work and self-restraint." He wrote homilies to big business, yet increasingly viewed ads as superfluous and banal. Unfortunately, Fried doesn't attempt to make sense of these contradictions or to justify the cliché of the subtitle. The question is not whether Barton helped "make modern America" but to what purpose. Perhaps the absence of a previous biography reflects the fact that those who succeed at advertising and public relations merely hold up gilded mirrors to society rather than helping to improve it. Bruce Barton contributed his drops of wisdom to an onrushing tide. The man whom everybody once knew may also have been someone neither business nor politics nor religion really needed. Michael Kazin, who teaches history at Georgetown University, is the author of the forthcoming book "A Godly Hero: The Life of William Jennings Bryan."
From checker at panix.com Sun Jan 1 23:12:22 2006 From: checker at panix.com (Premise Checker) Date: Sun, 1 Jan 2006 18:12:22 -0500 (EST) Subject: [Paleopsych] NYT: Remote and Poked, Anthropology's Dream Tribe Message-ID: Remote and Poked, Anthropology's Dream Tribe http://www.nytimes.com/2005/12/18/international/africa/18tribe.html [This recalls the Pilgrims of Massachusetts in 1620 being greeted by an Indian who spoke English, evidently learned from the earlier settlers at Jamestown.] By MARC LACEY LEWOGOSO LUKUMAI, Kenya - The rugged souls living in this remote desert enclave have been poked, pinched and plucked, all in the name of science. It is not always easy, they say, to be the subject of a human experiment. "I thought I was being bewitched," Koitaton Garawale, a weathered cattleman, said of the time a researcher plucked a few hairs from atop his head. "I was afraid. I'd never seen such a thing before." Another member of the tiny and reclusive Ariaal tribe, Leketon Lenarendile, scanned a handful of pictures laid before him by a researcher whose unstated goal was to gauge whether his body image had been influenced by outside media. "The girls like the ones like this," he said, repeating the exercise later and pointing to a rather slender man much like himself. "I don't know why they were asking me that," he said. Anthropologists and other researchers have long searched the globe for people isolated from the modern world. The Ariaal, a nomadic community of about 10,000 people in northern Kenya, have been seized on by researchers since the 1970's, after one - an anthropologist, Elliot Fratkin - stumbled upon them and began publishing his accounts of their lives in academic journals. Other researchers have done studies on everything from their cultural practices to their testosterone levels. National Geographic focused on the Ariaal in 1999, in an article on vanishing cultures. But over the years, more and more Ariaal - like the Masai and the Turkana in Kenya and the Tuaregs and Bedouins elsewhere in Africa - are settling down. Many have migrated closer to Marsabit, the nearest town, which has cellphone reception and even sporadic Internet access. The scientists continue to arrive in Ariaal country, with their notebooks, tents and bizarre queries, but now they document a semi-isolated people straddling modern life and more traditional ways. "The era of finding isolated tribal groups is probably over," said Dr. Fratkin, a professor at Smith College who has lived with the Ariaal for long stretches and is regarded by some of them as a member of the tribe. For Benjamin C. Campbell, a biological anthropologist at Boston University who was introduced to the Ariaal by Dr. Fratkin, their way of life, diet and cultural practices make them worthy of study. Other academics agree. Local residents say they have been asked over the years how many livestock they own (many), how many times they have had diarrhea in the last month (often) and what they ate the day before yesterday (usually meat, milk or blood). Ariaal women have been asked about the work they do, which seems to exceed that of the men, and about local marriage customs, which compel their prospective husbands to hand over livestock to their parents before the ceremony can take place. The wedding day is one of pain as well as joy since Ariaal women - girls, really - have their genitals cut just before they marry and delay sex until they recuperate. They consider their breasts important body parts, but nothing to be covered up. 
The researchers may not know this, but the Ariaal have been studying them all these years as well. The Ariaal note that foreigners slather white liquid on their very white skin to protect them from the sun, and that many favor short pants that show off their legs and the clunky boots on their feet. Foreigners often partake of the local food but drink water out of bottles and munch on strange food in wrappers between meals, the Ariaal observe. The scientists leave tracks as well as memories behind. For instance, it is not uncommon to see nomads in T-shirts bearing university logos, gifts from departing academics. In Lewogoso Lukumai, a circle of makeshift huts near the Ndoto Mountains, nomads rushed up to a visitor and asked excitedly in the Samburu language, "Where's Elliot?" They meant Dr. Fratkin, who describes in his book "Ariaal Pastoralists of Kenya" how in 1974 he stumbled upon the Ariaal, who had been little known until then. With money from the University of London and the Smithsonian Institution, he was traveling north from Nairobi in search of isolated agro-pastoralist groups in Ethiopia. But a coup toppled Haile Selassie, then the emperor, and the border between the countries was closed. So as he sat in a bar in Marsabit, a boy approached and, mistaking him for a tourist, asked if he wanted to see the elephants in a nearby forest. When the aspiring anthropologist declined, the boy asked if he wanted to see a traditional ceremony at a local village instead. That was Dr. Fratkin's introduction to the Ariaal, who share cultural traits with the Samburu and Rendille tribes of Kenya. Soon after, he was living with the Ariaal, learning their language and customs while fighting off mosquitoes and fleas in his hut of sticks covered with grass. The Ariaal wear sandals made from old tires and many still rely on their cows, camels and goats to survive. Drought is a regular feature of their world, coming at regular intervals and testing their durability. "I was young when Elliot first arrived," recalled an Ariaal elder known as Lenampere in Lewogoso Lukumai, a settlement that moves from time to time to a new patch of sand. "He came here and lived with us. He drank milk and blood with us. After him, so many others came." Over the years, the Ariaal have had hairs pulled not just from their heads but also from their chins and chests. They have spat into vials to provide saliva samples. They have been quizzed about how often they urinate. Sometimes the questioning has become even more intimate. Mr. Garawale recalls a visiting anthropologist measuring his arms, back and stomach with an odd contraption and then asking him how often he got erections and whether his sex life was satisfactory. "It was so embarrassing," recalled the father of three, breaking out in giggles even years later. Not all African tribes are as welcoming to researchers, even those with the necessary permits from government bureaucrats. But the Ariaal have a reputation for cooperating - in exchange, that is, for pocket money. "They think I'm stupid for asking dumb questions," said Daniel Lemoille, headmaster of the school in Songa, a village outside of Marsabit for Ariaal nomads who have settled down, and a frequent research assistant for visiting professors. "You have to try to explain that these same questions are asked to people all over the world and that their answers will help advance science." The researchers arriving in Africa in droves, probing every imaginable issue, every now and then leave controversy in their wake.
In 2004, for instance, a Kenyan virologist sued researchers from Britain for taking blood samples out of the country that he said had been obtained from a Nairobi orphanage for H.I.V.-positive children without government permission. The Ariaal have no major gripes about the studies, although the local chief in Songa, Stephen Lesseren, who wore a Boston University T-shirt the other day, said he wished their work would lead to more tangible benefits for his people. "We don't mind helping people get their Ph.D.'s," he said. "But once they get their Ph.D.'s, many of them go away. They don't send us their reports. What have we achieved from the plucking of our hair? We want feedback. We want development." Even when conflicts break out in the area, as happened this year as members of rival tribes slaughtered each other, victimizing the Ariaal, the research does not cease. With tensions still high, John G. Galaty, an anthropologist at McGill University in Montreal who studies ethnic conflicts, arrived in northern Kenya to question them. In a study in The International Journal of Impotence Research, Dr. Campbell also found that Ariaal men with many wives showed less erectile dysfunction than did men of the same age with fewer spouses. Dr. Campbell's body image study, published in The Journal of Cross-Cultural Psychology this year, also found that Ariaal men are much more consistent than men in other parts of the world in their views of the average man's body and what they think women want. Dr. Campbell came across no billboards or international magazines in Ariaal country and only one television in a local restaurant that played CNN, leading him to contend that Ariaal men's views of their bodies were less affected by media images of burly male models with six-pack stomachs and rippling chests. To test his theories, a nonresearcher without a Ph.D. showed a group of Ariaal men a copy of Men's Health magazine full of pictures of impossibly well-sculpted men and women. The men looked on with rapt attention and admired the chiseled forms. "That one, I like," said one nomad who was up in his years, pointing at a photo of a curvy woman who was clearly a regular at the gym. Another old-timer gazed at the bulging pectoral muscles of a male bodybuilder in the magazine and posed a question that got everybody talking. Was it a man, he asked, or a very, very strong woman? From checker at panix.com Sun Jan 1 23:14:41 2006 From: checker at panix.com (Premise Checker) Date: Sun, 1 Jan 2006 18:14:41 -0500 (EST) Subject: [Paleopsych] NYT: Exposing the Economics Behind Everyday Behavior Message-ID: Exposing the Economics Behind Everyday Behavior http://www.nytimes.com/2005/12/18/business/yourmoney/18shelf.html [These are articles stacked up, as I was complying with Howard's request to keep my posting down to seven a day. I'm not breaking my Gregorian New Calendar New Year's resolutions. Sorry about the bad formatting of some of these articles. I was using different software and can get the lines right only with a lot of extra work.] Off the Shelf By ROGER LOWENSTEIN A FUNNY thing seems to be happening to economics writing: it's getting better. In recent books like "Freakonomics" and "The Travels of a T-Shirt in the Global Economy," economists have taken it upon themselves to explain something of how the world works. They even tell little stories. What interests Tim Harford, the author of "The Undercover Economist," are the stories behind the myriad little transactions that take place every day.
Do you drive to work or ride a subway? Do you buy coffee en route? Is it a high-priced, frothy variety, or something plainer? And if it's the first kind, why is it so darn expensive, when the incremental cost of steaming a little milk amounts to only pennies? One question that interests Mr. Harford is: What will persuade you to fork over $26 for a copy of "The Undercover Economist" (Oxford University Press)? He seems to believe that witty, bracing prose will do the trick. "I would like to thank you for buying this book, but if you're anything like me you haven't bought it at all," he begins. "Instead, you've carried it into the bookstore cafe and even now are sipping a cappuccino in comfort while you decide whether it's worth your money." While we're on the subject of that cappuccino, Mr. Harford explains that Starbucks would like to charge each of us exactly what we are willing to pay, but that it would simply not do for it to advertise "Cappuccino for the Lavish, $3," and "Cappuccino for the Thrifty, $1." It has to be clever about it. Something like, "Hot Chocolate, $2.20; Caffe Mocha, $2.75; 20 oz. Cappuccino, $3.40." To the customer, the choice of drinks is what matters. To Starbucks, it is the choice of prices. Similarly, when Disney World in Florida offers discounts to people who live in the Orlando area, Mr. Harford observes, "They're not making a statement about the grinding poverty of the Sunshine State." They are making an educated guess that out-of-towners, who visit only once in a while, are willing to pay more than people from nearby. The author, a Briton who lives in Washington and who writes the cheeky Dear Economist column for The Financial Times, says that "there is a story to tell" in nearly every such interaction. For instance, Whole Foods lures you to spend more by offering distinct and - relative to what the competition offers - more expensive foods. It sells organic broccoli in addition to the customary industrial-strength variety, and it is careful never to display them side by side, because you would then notice the difference in price. Whole Foods wants you to be thinking only about the incremental good health that organic broccoli presumably confers. "The economist's job," Mr. Harford says, "is to shine a spotlight on the underlying process." Sounds reasonable, but that is not what most economists actually do. Most professional economists are paid to predict the future. This is why so much of economics writing is dull - and pretty silly. No one can predict the future, least of all an economist. Mr. Harford fancies himself to be more like a detective - an "undercover" economist. Perhaps he is less policeman than psychologist. Psychologists are not much good at predictions, either, but they do help us understand behavior and recognize what sort of social settings induce people to behave better or worse. Just so, Mr. Harford's undercover op is a creature of incentives. Recalling that, in his university days, student clubs allowed unlimited drinking in return for an upfront fee, he notes that these encouraged bingeing because the cost of each additional drink was zero. What matters in terms of limiting intake is the marginal cost of each new drink. So, too, with reducing automobile traffic: it's not the average cost per trip that matters, but the cost of getting into your car each additional time.
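[The drinking-club example turns entirely on marginal price, and a toy calculation makes the point concrete. The sketch below is only an illustration; the dollar values of successive drinks are invented.]

def drinks_consumed(marginal_price, enjoyment):
    """Keep drinking while the next drink is worth more than it costs."""
    n = 0
    for value in enjoyment:  # declining value of each successive drink
        if value > marginal_price:
            n += 1
        else:
            break
    return n

enjoyment = [3.0, 2.0, 1.2, 0.6, 0.2]   # dollar value of the 1st, 2nd, ... drink

print(drinks_consumed(0.0, enjoyment))  # flat upfront fee, zero marginal price: 5 drinks
print(drinks_consumed(1.0, enjoyment))  # $1 per drink: stops after 3

[The upfront fee never enters the drinker's stopping decision; only the marginal cost does, which is exactly Mr. Harford's point.]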
To an economist, the truly interesting decisions are those that occur at the margin - the point at which one more employee is hired, one more dollar is invested, one more cappuccino (on top of all those you have already imbibed) is drunk. Mr. Harford explains this central concept by returning to the source - namely, the classical economist David Ricardo's explication of how the yield from a marginal piece of land determined rents in pre-industrial England. The author is good at showing how such basic concepts apply across a complex modern economy. After observing that rents in London today are higher thanks to the surrounding Green Belt, which cuts off development, he notes that as an undercover economist, "you start to see 'green belts' of one kind or another all over the place." For instance, professional associations that restrict entry into, say, medicine serve as green belts that shield doctors from competition. NONE of this is the least bit unconventional. Though the author enjoys being politically incorrect ("sweatshops are good news," he offers tartly), he is not economically incorrect. In fact, lively presentation aside, he has written a pretty standard primer, one that defends free markets to a fault and attacks government as the source of just about everything bad. Predictably, he says that the best way to limit pollution is through free-market incentives; he then goes overboard by suggesting that environmental debates are mere "moral posturing." Yet without some discussion first, it is unlikely that we would have developed any incentives. And some of his arguments are far too brief to carry their intended weight. The author cannot really expect to explain "Why Poor Countries Are Poor" in a single chapter, the highlight of which is an interview with a cabdriver in Cameroon. A final criticism is that too many of Mr. Harford's interesting details lack a source or a footnote. He gets Amazon's stock-price history wrong. (The author says that during the dot-com bust, it fell below its initial offering price; adjusting for splits, it never did.) Many other details lack the specificity or the attribution to enable one to check. But these are quibbles. For those of you, even now, still stuck in the bookstore cafe, this is a book to savor. From checker at panix.com Sun Jan 1 23:16:41 2006 From: checker at panix.com (Premise Checker) Date: Sun, 1 Jan 2006 18:16:41 -0500 (EST) Subject: [Paleopsych] NYT: Robert Luce, 83, Former Editor And Publisher of New Republic, Is Dead Message-ID: Robert Luce, 83, Former Editor And Publisher of New Republic, Is Dead http://www.nytimes.com/2005/12/18/arts/18luce.html [This is more personal than anything else, but I thought you might like to know a little more about your Checker of Premises and his wonderful wife.] Sarah worked for Luce from 1973-75. At the time, it was the largest independent book publisher in Washington, D.C., putting out about ten books a year. How things have changed since then! Their books were distributed by David McKay (which has the honor of publishing Walt Whitman's _Leaves of Grass_), and back then, before the Internet, getting distributed was usually the only path to success. Luce was bought by Robert van Roijen, and Sarah worked directly under the late Joseph J. Binns. Three notable books are: Edward J. Gilfillan's _Migration to the Stars_ (1975). Though I thought it a better book than Gerard K. O'Neill's _The High Frontier_, it was O'Neill's book that got thoroughly promoted, went to paperback, and became widely read. Reginald R.
Gerig's _Famous Pianists and Their Technique_ (also 1975), a thorough compendium of pianists, is held by 939 libraries in WorldCat, which mostly covers the United States. It is still in print, $28 for the paperback, and rates four out of five stars at Amazon. Robert W. Whitaker's _A Plague on Both Your Houses_ (1976) was the first popular book to apply the Public Choice perspective to American politics throughout its history and did not hesitate to describe the liberal Human Betterment Industry as self-interested. After van Roijen retired, the aforementioned Joseph J. Binns established his own company under that name. I found a copy of Lawrence R. Brown, _The Might of the West_ (NY: Ivan Obolensky, 1963), my very favorite book, a panorama of world history from a macrohistorical perspective (Spengler without the mysticism), and have read it a dozen times. Though Joe disagreed with its politics, he found the book provocative and reprinted it in 1979. Used copies of this underground classic on Bookfinder go for $32-$100. ---------- By MONICA POTTS Robert B. Luce, a former editor and publisher of The New Republic who founded his own book publishing house, died on Nov. 29 in Boca Raton, Fla. He was 83. He died in a nursing home, his family said. Mr. Luce, known as Bob, began his career working for magazines, taking over at The New Republic in 1963. He edited a book compilation for the magazine's 50th anniversary, which was published in 1964. In the early 1960's, he also founded his own general-interest book publishing house, Robert B. Luce Inc., the first of its kind in Washington. Mr. Luce left The New Republic in 1966, sold his publishing house and returned to New York City, eventually working for Time-Life Books as director of editorial planning; his family said he was only distantly related to Henry R. Luce, the founder of Time-Life. He also worked for several other organizations, including Metromedia Inc. and Harcourt Brace. He moved from New York and began teaching journalism in 1997 at Florida Atlantic University, which he left in 2001. Robert Bonner Luce was born in 1922 in Grosse Pointe, Mich. He served in the Army Air Forces in World War II and graduated from Antioch College in Ohio with a bachelor's degree in economics in 1946. Mr. Luce is survived by his wife, Iris, of Boca Raton, Fla.; three daughters, Jennifer Luce-Reynolds of Boulder, Colo.; Ann Luce Auerbach of New York City; and Jan Luce Nance of Burbank, Calif.; two sisters, Gwen Briggs of Rowayton, Conn., and Jean Davis of Ann Arbor, Mich.; one brother, Chris, of Florida; and four grandsons. Another daughter, Kathryn, died in 1992. From shovland at mindspring.com Mon Jan 2 16:14:04 2006 From: shovland at mindspring.com (Steve Hovland) Date: Mon, 2 Jan 2006 08:14:04 -0800 Subject: [Paleopsych] Great Pictures for all you animal lovers ! ! ! Message-ID: -----Original Message----- From: LELNAC1947 at aol.com [mailto:LELNAC1947 at aol.com] Sent: Wednesday, December 28, 2005 8:20 PM To: LELNAC1947 at aol.com Subject: Re: Great Pictures for all you animal lovers ! ! ! These are great.....most new, a few reruns.....guaranteed to make you smile and go ahhhh!! Live every day with enjoyment; we don't know what tomorrow will give us. Forwarded By: Lee Utley -------------- next part -------------- [An HTML attachment and 32 image attachments were scrubbed.]
From shovland at mindspring.com Mon Jan 2 17:02:36 2006 From: shovland at mindspring.com (Steve Hovland) Date: Mon, 2 Jan 2006 09:02:36 -0800 Subject: [Paleopsych] CDC warning Message-ID: The Center for Disease Control has issued a warning about a new virulent strain of Sexually Transmitted Disease. The disease is contracted through dangerous and high-risk behavior. The disease is called Gonorrhea Lectim and pronounced "gonna re-elect him." Many victims contracted it in 2004, after having been screwed for the past four years. Cognitive characteristics of individuals infected include: anti-social personality disorders, delusions of grandeur with messianic overtones, extreme cognitive dissonance, inability to incorporate new information, pronounced xenophobia and paranoia, inability to accept responsibility for own actions, cowardice masked by misplaced bravado, uncontrolled facial smirking, ignorance of geography and history, tendencies toward evangelical theocracy, categorical all-or-nothing behavior. Naturalists and epidemiologists are amazed at how this destructive disease originated only a few years ago from a bush found in Texas. From checker at panix.com Mon Jan 2 20:55:20 2006 From: checker at panix.com (Premise Checker) Date: Mon, 2 Jan 2006 15:55:20 -0500 (EST) Subject: [Paleopsych] NZ Herald: Homeopathy Is Bunk, says Professor Who Put It To Test Message-ID: Homeopathy Is Bunk, says Professor Who Put It To Test The New Zealand Herald December 19, 2005 Monday London - Millions of people use it to deal with illnesses ranging from asthma to migraines. Prince Charles believes it is the answer to many of the evils of modern life. But now Britain's first professor of complementary medicine, Edzard Ernst of Exeter University in south-west England, has denounced homeopathy as ineffective. "Homeopathic remedies don't work," he told the Observer. "Study after study has shown it is simply the purest form of placebo.
You may as well take a glass of water than a homeopathic medicine." Chiropractic, which involves spine manipulation to treat illnesses, and the laying on of hands to cure patients are equally invalid, he says. His views and his studies have provoked furious reactions. Chiropractors and homeopaths have written in droves to denounce him. But now the scourge of alternative medicine says he is going to have to quit because Exeter will no longer support him or his department. The university denied the charge. "Professor Ernst's department has enough money to go on for a couple of more years," said a spokesman. "We are still trying to raise cash." In 1993, Professor Ernst, then a professor of rehabilitation medicine in Vienna, took the job to bring scientific rigour to the study of alternative medicines, an approach that has made him a highly controversial figure in the field. An example is provided by his study of arnica, a standard homeopathic treatment for bruising. "We arranged for patients after surgery to be given arnica or a placebo," he said. "They didn't know which they were getting. It made no difference. They got better at the same rate." Professor Ernst also found no evidence homeopathy helped with asthma, which is said to be particularly responsive to such treatments. Britain has five homeopathic hospitals, which are funded by the country's health service (NHS). "The treatments do no good," said Professor Ernst. "But the long interview - about an hour-and-a-half - carried out by an empathetic practitioner during diagnosis may explain why people report improvements in their health." The incredibly dilute solutions used by homeopaths also make no sense, he added. "If it were true, we would have to tear up all our physics and chemistry textbooks." Professor Ernst insists he is a supporter of complementary medicines. "No other centre in the world has produced more positive results than we have to support complementary medicine," he said. "Herbal medicine, for instance, can do good. If I was mildly depressed, I think St John's wort would be a good treatment. It has fewer side-effects than Prozac. "Acupuncture seems to work for some conditions and there are relaxing techniques, including hypnotherapy, that can be effective. "These should not be used on their own, but as complements to standard medicines." Professor Ernst has been attacked by chiropractors and homeopaths. The latter point to studies they say show that most patients they treat are satisfied and cite an analysis in the Lancet of 89 trials in which their medicines were found to be effective. The Smallwood report, commissioned by Prince Charles, calls for more complementary medicines to be given on the NHS.

Like with like
* Homeopathy is a controversial system of alternative medicine about 200 years old.
* It calls for treating "like with like", a doctrine referred to as the "Law of Similars". The practitioner considers all a patient's symptoms, then chooses as a remedy a substance that produces a similar set of symptoms in healthy subjects. The remedy is usually given in tiny concentrations.
* Many of its claims are at odds with modern medicine and the scientific method.

From checker at panix.com Mon Jan 2 20:55:31 2006 From: checker at panix.com (Premise Checker) Date: Mon, 2 Jan 2006 15:55:31 -0500 (EST) Subject: [Paleopsych] Light Planet: Book of Mormon Literature Message-ID: Book of Mormon Literature http://www.lightplanet.com/mormons/basic/bom/literature_eom.htm by Richard Dilworth Rust and Donald W.
Although understated as literature in its clear and plain language, the Book of Mormon exhibits a wide variety of literary forms, including intricate Hebraic poetry, memorable narratives, rhetorically effective sermons, diverse letters, allegory, figurative language, imagery, symbolic types, and wisdom literature. In recent years these aspects of Joseph Smith's 1829 English translation have been increasingly appreciated, especially when compared with biblical and other ancient forms of literature.

There are many reasons to study the Book of Mormon as literature. Rather than being "formless," as claimed by one critic (Bernard DeVoto, American Mercury 19 [1930]:5), the Book of Mormon is both coherent and polished (although not obtrusively so). It tells "a densely compact and rapidly moving story that interweaves dozens of plots with an inexhaustible fertility of invention and an uncanny consistency that is never caught in a slip or contradiction" (CWHN 7:138). Despite its small working vocabulary of about 2,225 root words in English, the book distills much human experience and contact with the divine. It presents its themes artfully through simple yet profound imagery, direct yet complex discourses, and straightforward yet intricate structures. To read the Book of Mormon as literature is to discover how such literary devices are used to convey the messages of its content. Attention to form, diction, figurative language, and rhetorical techniques increases sensitivity to the structure of the text and appreciation of the work of the various authors.

The stated purpose of the Book of Mormon is to show the Lamanites, a remnant of the House of Israel, the covenants made with their fathers, and to convince Jew and Gentile that Jesus is the Christ (see Book of Mormon Title Page). Mormon selected materials and literarily shaped the book to present these messages in a stirring and memorable way. While the discipline of identifying and evaluating literary features in the Book of Mormon is very young and does not supplant a spiritual reading of the text, those analyzing the book from this perspective find it a work of immediacy that shows as well as tells, as great literature usually does. It no longer fits Mark Twain's definition of a classic as essentially a book everyone talks about but no one reads; rather, it is a work that "wears you out before you wear it out" (J. Welch, "Study, Faith, and the Book of Mormon," BYU 1987-88 Devotional and Fireside Speeches, p. 148 [Provo, Utah, 1988]). It is increasingly seen as a unique work that beautifully and compellingly reveals and speaks to the essential human condition.

POETRY. Found embedded in the narrative of the Book of Mormon, poetry provides the best examples of the essential connection between form and content in the Book of Mormon. When many inspired words of the Lord, angels, and prophets are analyzed according to ancient verse forms, their meaning can be more readily perceived. These forms include line forms, symmetry, parallelism, and chiastic patterns, as defined by Adele Berlin (The Dynamics of Biblical Parallelism [Bloomington, Ind., 1985]) and Wilfred Watson (Classical Hebrew Poetry [Sheffield, 1984]).
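[To make these mirror patterns concrete, here is a minimal illustrative sketch -- not part of the encyclopedia article -- that tests whether a sequence of element labels is chiastic; the function name and label strings are hypothetical, chosen only to echo the examples below.

    # Minimal sketch: a label sequence is chiastic (a-b-b-a, a-b-c-c-b-a)
    # when it reads the same outward from its center, i.e., when it is
    # its own mirror image.
    def is_chiastic(labels):
        """Return True if the label sequence mirrors itself."""
        return len(labels) > 1 and labels == labels[::-1]

    print(is_chiastic(["soul", "heart", "heart", "soul"]))  # True  (a-b-b-a)
    print(is_chiastic(["a", "b", "c", "c", "b", "a"]))      # True  (extended)
    print(is_chiastic(["swift", "slow", "swift", "slow"]))  # False (alternation)

A simple alternating parallelism (a-b-a-b) fails the mirror test, which is exactly the distinction between parallelism and chiasmus that the article draws below.]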
Book of Mormon texts shift smoothly from narrative to poetry, as in this intensifying passage:

But behold, the Spirit hath said this much unto me, saying: Cry unto this people, saying--Repent ye, and prepare the way of the Lord, and walk in his paths, which are straight; for behold, the kingdom of heaven is at hand, and the Son of God cometh upon the face of the earth [Alma 7:9].

The style of the Book of Mormon has been criticized by some as being verbose and redundant, but in most cases these repetitions are orderly and effective. For example, parallelisms, which abound in the Book of Mormon, serve many functions. They add emphasis to twice-repeated concepts and give definition to sharply drawn contrasts. A typical synonymous parallelism is in 2 Nephi 9:52:

Pray unto him continually by day,
and give thanks unto his holy name by night.

Nephi's discourse aimed at his obstinate brothers includes a sharply antithetical parallelism:

Ye are swift to do iniquity
But slow to remember the Lord your God. [1 Ne. 17:45.]

Several fine examples of chiasmus (an a-b-b-a pattern) are also found in the Book of Mormon. In the Psalm of Nephi (2 Ne. 4:15-35), the initial appeals to the soul and heart are accompanied by negations, while in the subsequent mirror the uses of the heart and soul are conjoined with strong affirmations, making the contrasts literarily effective and climactic:

Awake, my soul! No longer droop in sin.
Rejoice, O my heart, and give place no more for the enemy of my soul.
Do not anger again because of mine enemies.
Do not slacken my strength because of mine afflictions.
Rejoice, O my heart, and cry unto the Lord, and say:
O Lord, I will praise thee forever;
yea, my soul will rejoice in thee, my God, and the rock of my salvation. [2 Ne. 4:28-30.]

Other precise examples of extended chiasmus (a-b-c--c-b-a) are readily discernible in Mosiah 5:10-12 and Alma 36:1-30 and 41:13-15. This literary form in Alma 36 effectively focuses attention on the central passage of the chapter (Alma 36:17-18); in Alma 41, it fittingly conveys the very notion of restorative justice expressed in the passage (cf. Lev. 24:13-23, which likewise uses chiasmus to convey a similar notion of justice). Another figure, known as a fortiori, is used to communicate an exaggerated sense of multitude, as in Alma 60:22, where a "number parallelism" is chiastically enclosed by a twice-repeated phrase:

Yea, will ye sit in idleness while ye are surrounded with thousands of those,
yea, and tens of thousands,
who do also sit in idleness?

Scores of Book of Mormon passages can be analyzed as poetry. They range from Lehi's brief desert poems (1 Ne. 2:9-10, a form Hugh Nibley identifies as an Arabic qasida) [CWHN 6:270-75] to extensive sermons of Jacob, Abinadi, and the risen Jesus (2 Ne. 6-10; Mosiah 12-16; and 3 Ne. 27).

NARRATIVE TEXTS. In the Book of Mormon, narrative texts are often given vitality by vigorous conflict and impassioned dialogue or personal narration.
Nephi relates his heroic actions in obtaining the brass plates from Laban; Jacob resists the false accusations of Sherem, upon whom the judgment of the Lord falls; Ammon fights off plunderers at the waters of Sebus and wins the confidence of king Lamoni; Amulek is confronted by the smooth-tongued lawyer Zeezrom; Alma2 and Amulek are preserved while their accusers are crushed by collapsing prison walls; Captain Moroni1 engages in a showdown with the Lamanite chieftain Zerahemnah; Amalickiah rises to power through treachery and malevolence; a later prophet named Nephi2 reveals to an unbelieving crowd the murder of their chief judge by the judge's own brother; and the last two Jaredite kings fight to the mutual destruction of their people. Seen as a whole, the Book of Mormon is an epic account of the history of the Nephite nation. Extensive in scope with an eponymic hero, it presents action involving long and arduous journeys and heroic deeds, with supernatural beings taking an active part. Encapsulated within this one-thousand-year account of the establishment, development, and destruction of the Nephites is the concentrated epic of the rise and fall of the Jaredites, who preceded them in type and time. (For its epic milieu, see CWHN 5:285-394.) The climax of the book is the dramatic account of the visit of the resurrected Jesus to an assemblage of righteous Nephites.

SERMONS AND SPEECHES. Prophetic discourse is a dominant literary form in the Book of Mormon. Speeches such as King Benjamin's address (Mosiah 1-6), Alma2's challenge to the people of Zarahemla (Alma 5), and Mormon's teachings on faith, hope, and charity (Moro. 7) are crafted artistically and have great rhetorical effectiveness in conveying their religious purposes. The public oration of Samuel the Lamanite (Hel. 13-15) is a classic prophetic judgment speech. Taking rhetorical criticism as a guide, one can see how Benjamin's ritual address first aims to persuade the audience to reaffirm a present point of view and then turns to deliberative rhetoric--"which aims at effecting a decision about future action, often the very immediate future" (Kennedy, New Testament Interpretation Through Rhetorical Criticism [1984], p. 36). King Benjamin's speech is also chiastic as a whole and in several of its parts (Welch, pp. 202-205).

LETTERS. The eight epistles in the Book of Mormon are conversational in tone, revealing the diverse personalities of their writers. These letters are from Captain Moroni1 (Alma 54:5-14; 60:1-36), Ammoron (Alma 54:16-24), Helaman1 (Alma 56:2-58:41), Pahoran (Alma 61:2-21), Giddianhi (3 Ne. 3:2-10), and Mormon (Moro. 8:2-30; 9:1-26).

ALLEGORY, METAPHOR, IMAGERY, AND TYPOLOGY. These forms are also prevalent in the Book of Mormon. Zenos's allegory of the olive tree (Jacob 5) vividly incorporates dozens of horticultural details as it depicts the history of God's dealings with Israel. A striking simile curse, with Near Eastern parallels, appears in Abinadi's prophetic denunciation: the life of king Noah shall be "as a garment in a furnace of fire,...as a stalk, even as a dry stalk of the field, which is run over by the beasts and trodden under foot" (Mosiah 12:10-11). An effective extended metaphor is Alma's comparison of the word of God to a seed planted in one's heart and then growing into a fruitful tree of life (Alma 32:28-43). In developing this metaphor, Alma uses a striking example of synesthesia: as the word enlightens their minds, his listeners can know it is real--"Ye have tasted this light" (Alma 32:35).
Iteration of archetypes such as tree, river, darkness, and fire graphically confirms Lehi's understanding "that there is an opposition in all things" (2 Ne. 2:11) and that opposition will be beneficial to the righteous. A figural interpretation of God-given words and God-directed persons or events is insisted on, although not always developed, in the Book of Mormon. "All things which have been given of God from the beginning of the world, unto man, are the typifying of [Christ]" (2 Ne. 11:4); all performances and ordinances of the Law of Moses "were types of things to come" (Mosiah 13:31); and the Liahona, or compass, was seen as a type: "For just as surely as this director did bring our fathers, by following its course, to the Promised Land, shall the words of Christ, if we follow their course, carry us beyond this vale of sorrow into a far better land of promise" (Alma 37:45). In its largest typological structure, the Book of Mormon fits well the seven phases of revelation posited by Northrop Frye: creation, revolution or exodus, law, wisdom, prophecy, gospel, and apocalypse (The Great Code: The Bible and Literature [New York, 1982]).

WISDOM LITERATURE. Transmitted sayings of the wise are scattered throughout the Book of Mormon, especially in counsel given by fathers to their sons. Alma counsels, "O remember, my son, and learn wisdom in thy youth; yea, learn in thy youth, to keep the commandments of God" (Alma 37:35; see also 38:9-15). Benjamin says, "I tell you these things that ye may learn wisdom; that ye may learn that when ye are in the service of your fellow beings ye are only in the service of your God" (Mosiah 2:17). A memorable aphorism is given by Lehi: "Adam fell that men might be; and men are, that they might have joy" (2 Ne. 2:25). Pithy sayings such as "fools mock, but they shall mourn" (Ether 12:26) and "wickedness never was happiness" (Alma 41:10) are often repeated by Latter-day Saints.

APOCALYPTIC LITERATURE. The vision in 1 Nephi 11-15 (sixth century B.C.) is comparable in form with early apocalyptic literature. It contains a vision, is delivered in dialogue form, has an otherworldly mediator or escort, includes a commandment to write, treats the disposition of the recipient, prophesies persecution, foretells the cosmic transformations, and has an otherworldly place as its spatial axis. Later Jewish developments of complex angelology, mystic numerology, and symbolism are absent.

STYLE AND TONE. Book of Mormon writers show an intense concern for style and tone. Alma desires to be able to "speak with the trump of God, with a voice to shake the earth," yet realizes that "I am a man, and do sin in my wish; for I ought to be content with the things which the Lord hath allotted unto me" (Alma 29:1-3). Moroni2 expresses a feeling of inadequacy in writing: "Lord, the Gentiles will mock at these things, because of our weakness in writing.... Thou hast also made our words powerful and great, even that we cannot write them; wherefore, when we write we behold our weakness, and stumble because of the placing of our words" (Ether 12:23-25; cf. 2 Ne. 33:1). Moroni's written words, however, are not weak. In cadences of ascending strength he boldly declares: O ye pollutions, ye hypocrites, ye teachers, who sell yourselves for that which will canker, why have ye polluted the holy church of God?
Why are ye ashamed to take upon you the name of Christ?...Who will despise the children of Christ? Behold, all ye who are despisers of the works of the Lord, for ye shall wonder and perish [Morm. 8:38, 9:26].

The styles employed by the different writers in the Book of Mormon vary from the unadorned to the sublime. The tones range from Moroni's strident condemnations to Jesus' humblest pleading: "Behold, mine arm of mercy is extended towards you, and whosoever will come, him will I receive" (3 Ne. 9:14). A model for communication is Jesus, who, Moroni reports, "told me in plain humility, even as a man telleth another in mine own language, concerning these things; and only a few have I written, because of my weakness in writing" (Ether 12:39-40). Two concepts in this report are repeated throughout the Book of Mormon--plain speech and inability to write about some things. "I have spoken plainly unto you," Nephi says, "that ye cannot misunderstand" (2 Ne. 25:28). "My soul delighteth in plainness," he continues, "for after this manner doth the Lord God work among the children of men" (2 Ne. 31:3). Yet Nephi also delights in the words of Isaiah, which "are not plain unto you" although "they are plain unto all those that are filled with the spirit of prophecy" (2 Ne. 25:4). Containing both plain and veiled language, the Book of Mormon is a spiritually and literarily powerful book that is direct yet complex, simple yet profound.

(See [1]Basic Beliefs home page; [2]Scriptural Writings home page; [3]The Book of Mormon home page)

Bibliography

England, Eugene. "A Second Witness for the Logos: The Book of Mormon and Contemporary Literary Criticism." In By Study and Also by Faith, 2 vols., ed. J. Lundquist and S. Ricks, Vol. 2, pp. 91-125. Salt Lake City, 1990.
Jorgensen, Bruce W.; Richard Dilworth Rust; and George S. Tate. Essays on typology in Literature of Belief, ed. Neal E. Lambert. Provo, Utah, 1981.
Nichols, Robert E., Jr. "Beowulf and Nephi: A Literary View of the Book of Mormon." Dialogue 4 (Autumn 1969):40-47.
Parry, Donald W. "Hebrew Literary Patterns in the Book of Mormon." Ensign 19 (Oct. 1989):58-61.
Rust, Richard Dilworth. "Book of Mormon Poetry." New Era (Mar. 1983):46-50.
Welch, John W. "Chiasmus in the Book of Mormon." In Chiasmus in Antiquity, ed. J. Welch, pp. 198-210. Hildesheim, 1981.

Encyclopedia of Mormonism, Vol. 1, Book of Mormon Literature

References
1. http://www.lightplanet.com/mormons/basic/index.htm
2. http://www.lightplanet.com/mormons/basic/doctrines/scripture/index.htm
3. http://www.lightplanet.com/mormons/basic/bom/index.htm

From checker at panix.com Mon Jan 2 20:55:43 2006
From: checker at panix.com (Premise Checker)
Date: Mon, 2 Jan 2006 15:55:43 -0500 (EST)
Subject: [Paleopsych] New Oxford Review: Bio-Luddites & the Secularist Rapture
Message-ID:

Bio-Luddites & the Secularist Rapture
http://www.newoxfordreview.org/note.jsp?did=1103-notes-rapture
New Oxford Review, November 2003

Coming soon to a theater near you: cyborgs. Not on the screen, but sitting next to you in the audience. This is "the coming reality" in our technological world, or so say a group calling themselves "transhumanists." According to transhumanists, man has, since time immemorial, depended on technology for his survival in this hostile world: from the first primitive tool to walking sticks to eyeglasses to emergency alert bracelets to artificial intelligence -- man's dependence on machines increases with each new development.
Soon one may be indistinguishable from the other. No surprise, say the transhumanists, because we have long been on our way to becoming cyborgs. Some would contend that we are already cyborgs. That cyborg at the movie theater? That cyborg could be you.

A perusal of recent news clippings could easily lead one to believe that the prototypical elements of the transhumanists' "coming reality" are more science nonfiction than science fiction. To wit: a robot governed by neurons from a rat's brain (a "hybrot" -- a machine with living cells) is now reportedly drawing pictures; a lab monkey, via a chip implanted in its brain, is now able to move a cursor on a computer screen by thought alone; a rat was made to climb over fences and up trees, and walk through pipes and across rubble, by signals sent from a remote computer to a chip implanted in its brain. Even more to the point, a British cybernetics professor became the first human to have a chip implanted into his central nervous system. This chip records and transmits his sensations (such as movement and pleasure) to a remote computer, which later plays back those sensations, causing the professor to experience them again. Since then, about 20 people across the U.S. have been "chipped" by Applied Digital Solutions' VeriChip Corporation, which for $200 up front and $10 a month will chip and track anyone from its traveling ChipMobile.

Giddy from the possibilities stories like these present, the World Transhumanist Association (WTA) held a conference at Yale University this past June, as reported in The Village Voice (Jul. 30-Aug. 5), "to lay the groundwork for a society that would admit as citizens and companions intelligent robots, cyborgs made from a free mixing of human and machine parts, and fully organic, genetically engineered people who aren't necessarily human at all." The first order of business is to expand the definition of what we now call human rights to include "post-humans" -- robots, hybrots, cyborgs, and other such "people" who may, or may not, be human. According to Natasha Vita-More, founder of the transhumanist movement, we must begin the process of redefining today. Why? "To relinquish the rights of a future being merely because he, she, or it has a higher percentage of machine parts than biological cell structure would be racist toward all humans who have prosthetic parts." Racist? Really? We weren't aware that amputees constitute a "race" of humans.

The conference's opening debate was titled "Should Humans Welcome or Resist Becoming Posthuman?" with the overwhelming sentiment favoring the former. Echoing the majority opinion, Kevin Fitzgerald, a Jesuit priest and "bioethicist" at Georgetown Medical Center, is quoted in The Voice: "To err on the side of inclusion is the loving thing to do." Oh, right. We certainly must be inclusive. And of course we must be loving. We must love our robots. Still, Fr. Fitzgerald may be onto something here. The Jesuits have experienced a steep decline in ordinations -- might a fleet of robo-priests be the answer to the Jesuit priest shortage? At least then we could be assured that the rubrics of the Mass would be adhered to, albeit in a mechanical, robotic fashion. Domo arigato, Fr. Roboto.

The Voice reports that transhumanists "look for inspiration to civil rights battles, most recently to the transgender and gay push for self-determination."
(The WTA has even modified a popular homosexualist slogan, decrying "technophobia.") James Hughes, Secretary of the WTA, says this: "The whole thrust of the liberal democratic movement of the last 400 years has been to allow people to use reason and science to control their own lives, free from the authority of church and state." Dr. J, as Hughes is affectionately known, has expanded on this theme in a series of columns on the Better Humans website. He applauds the "enormous progress" we have made in "overcoming" the "barriers to active, guilt-free sexuality," and in "transcending...biological gender."

Despite the transhumanists' gushing over the "gay" and transsexual movements, the admiration is apparently not mutual. Homosexuals and transsexuals, reasons The Voice, "might not particularly like being associated with imagined cyborgs and human-animal hybrids." Still, the transhumanists are preparing for what they see as an impending battle against those who would resist the proliferation of the technology that is supposed to lead to the inevitable intermingling of man and machine. Hughes, quoted in The Voice, throws down the gauntlet: "If...the technology of human advancement is forbidden by bio-Luddites...that becomes a fundamental civil rights struggle." But not one of those nonviolent civil rights struggles. No, "there might come a time," predicts Hughes, "for the legitimate use of violence in self-defense," for "liberation acts" to unyoke "fully realized forms of artificial intelligence" from possible enslavement by humans.

Suddenly the phrase "technological revolution" takes on an ominous tone. One pictures hordes of Arnold Schwarzenegger clones plodding about, shooting things, blowing up buildings, and setting entire cities aflame. Transhumanism's "coming reality" may be closer to this scenario than we might like to think. Transhumanists would like nothing more than to transform themselves into a technologically enhanced race of Uebermenschen. Their "vision" isn't limited to protecting amputees. According to The Voice, a good many transhumanists are "feverishly anticipating" an event they call "the Singularity" -- the moment when "technologies meld and an exponentially advancing intelligence is unleashed." This limitless technology is messianic in nature: transhumanists "aspire to immortality and omniscience through uploading human consciousness into ever evolving machines." There is even a "Singularity Institute" for the furtherance of their agenda. The Institute's website heralds the Singularity as the moment when man will be "capable of breaking the upper limit on intelligence that has held since the rise of humanity."

The Singularity is akin to a transhumanist version of the Rapture, an end-time event invented by Protestant millennialists who believe that Jesus will at any moment whisk His true believers away before the onset of the 1,000-year reign of the Antichrist. Only for transhumanists, man's Ascension (through a deified technology) to the throne of omniscience begins right here, right now. And we sorry bio-Luddites who aren't plugged in, online, and geeked out will be "left behind" to suffer the Tribulation.
From checker at panix.com Mon Jan 2 20:55:55 2006
From: checker at panix.com (Premise Checker)
Date: Mon, 2 Jan 2006 15:55:55 -0500 (EST)
Subject: [Paleopsych] Independent: 'Chronic happiness' the key to success
Message-ID:

http://news.independent.co.uk/world/science_technology/article333972.ece
By Lyndsay Moss
Published: 19 December 2005

The key to success may be "chronic happiness" rather than simply hard work and the right contacts, psychologists have found. Many assume a successful career and personal life leads to happiness. But psychologists in the US say happiness can bring success. Researchers from the universities of California, Missouri and Illinois examined connections between desirable characteristics, life success and well-being in more than 275,000 people. They found that happy individuals were predisposed to seek out new goals in life, leading to success, which also reinforced their already positive emotions. The psychologists addressed questions such as whether happy people were more successful than unhappy people, and whether happiness came before or after a perceived success. Writing in Psychological Bulletin, published by the American Psychological Association, they concluded that "chronically happy people" were generally more successful in many areas of life than less happy people.

From checker at panix.com Mon Jan 2 20:56:10 2006
From: checker at panix.com (Premise Checker)
Date: Mon, 2 Jan 2006 15:56:10 -0500 (EST)
Subject: [Paleopsych] Steve Sailer: Boys Will Be Boys
Message-ID:

Steve Sailer: Boys Will Be Boys
http://www.claremont.org/writings/crb/fall2005/sailer.html
A review of Why Gender Matters: What Parents and Teachers Need to Know about the Emerging Science of Sex Differences, by Leonard Sax
By Steve Sailer
Posted November 30, 2005

Until last winter, I had assumed that fundamentalist feminism had peaked in the early 1990s with the Anita Hill brouhaha, and that Bill Clinton's political survival in 1998, which hinged on his near-unanimous support from hypocritical feminists, ended the era in which anyone took feminism seriously. The Larry Summers fiasco, however, showed that while feminism may have entered its Brezhnev Era intellectually, it still commands the institutional equivalent of Brezhnev's thousands of tanks and nuclear missiles. After just a few days, Harvard President Lawrence Summers caved in to critics of his off-hand comment that nature, not invidious discriminations alone, might be to blame for the lower percentage of women who study math and science.
In short order, he propitiated the feminists by promising, in effect, to spend $50 million taking teaching and research opportunities at Harvard away from male jobseekers and giving them to less talented women.

Perhaps in a saner society, then, we would have less need for Leonard Sax's engaging combination of popular science exposition and advice guidebook, Why Gender Matters: What Parents and Teachers Need to Know about the Emerging Science of Sex Differences. But parents as well as professors could benefit from it now. Sax speaks of "gender" when he means "sex"--male or female. I fear, though, that this usage battle is lost because the English language really does need two different words to distinguish between the fact, and the act, of sex. Supreme Court Justice Ruth Bader Ginsburg claims her secretary Millicent invented the use of "gender" to mean "sex" in the early 1970s while typing the crusading feminist's briefs against sex discrimination. Millicent pointed out to her boss that judges, like all men, have dirty minds when it comes to the word "sex," so she should use the boring term "gender" to keep those animals thinking only about the law. Unfortunately, "gender" now comes with a vast superstructure of 99% fact-free feminist theorizing about how sex differences are all just socially constructed. According to this orthodoxy, it's insensitive to doubt a burly transvestite truck driver demanding a government-subsidized sex change when he says he feels like a little girl inside. Yet it's also insensitive to assume that the average little girl feels like a little girl inside.

Fortunately, Sax, a family physician and child psychologist, subscribes to none of the usual cant. Indeed, I thought I was a connoisseur of sex differences until I read Why Gender Matters, where I learned in the first chapter, for instance, that girls on average hear better than boys, especially higher-pitched sounds, such as the typical schoolteacher's voice, which is one little-known reason girls on average pay more attention in class. Males and females also tend to have different kinds of eyeballs, with boys better at tracking movement and girls better at distinguishing subtle shades of colors. Presumably, these separate skills evolved when men were hunters trying to spear fleeing game and women were gatherers searching out the ripest fruit. So, today, boys want to catch fly balls and girls want to discuss whether to buy the azure or periwinkle skirt. Cognitive differences are profound and pervasive. Don't force boys to explain their feelings in great detail, Sax advises. Their brains aren't wired to make that as enjoyable a pastime as it is for girls.

* * *

As founder of the National Association for Single-Sex Public Education, Sax's favorite and perhaps most valuable theory is that co-educational schooling is frequently a mistake. He makes a strong case, especially concerning the years immediately following puberty. He cites the experience of two psychologists studying self-esteem in girls. They went to Belfast, where children can be assigned fairly randomly to coed or single-sex schools:

They found that at coed schools, you don't need to ask a dozen questions to predict the girl's self-esteem. You have to ask only one question: "Do you think you're pretty?"

Similarly, the Coleman Report found, four decades ago, that boys put more emphasis on sports and social success in coed schools, and less on intellectual development. Sax argues:

Here's the paradox: coed schools tend to reinforce gender stereotypes.
There is now very strong evidence that girls are more likely to take courses such as computer science and physics in girls-only schools. Boys in single-sex schools are more than twice as likely to study art, music, foreign languages, and literature as boys of equal ability attending comparable coed schools.

Noting that the Department of Education projects that by 2011 there will be 140 women college graduates for every 100 men, he asks, "I'm all in favor of women's colleges, but why are nominally coed schools looking more and more like all-women's colleges?" So far, the decline of male academic achievement in the U.S. is mostly among blacks and Hispanics, but the catastrophic downturn into "laddism" of young white males in England in recent years, and their consequent decline in test scores, shows that no race is permanently immune to the prejudice that school is for girls. Of course, American schools have long been taught largely by women, and boys and schoolmarms have not always seen eye-to-eye. But the rise of feminism has encouraged female teachers to view their male students as overprivileged potential oppressors. Further, feminism justifies teachers' self-absorption with female feelings. Thus, a remarkable fraction of the novels my older son has been assigned to read in high school are about girls getting raped. I hope it hasn't permanently soured him on fiction. We've now achieved the worst of both worlds: the educational authorities are committed to anti-male social constructionist ideology, but the pop culture market delivers the crudest, most sexualized imagery. The irony is that when the adult world imposes gender egalitarianism on young people in the name of progressive ideologies, it just makes the young people even more cognizant of their primordial differences.

* * *

Sax's book often resembles a nonfiction version of Tom Wolfe's impressive novel I Am Charlotte Simmons. What's most striking about Wolfe's merely semi-satirical portrait of Duke University is how, after 35 years of institutionalized feminism, student sexuality hasn't evolved into an egalitarian utopia. Instead, it has regressed to something that a caveman would understand--a sexual marketplace where muscles are the measure of the man. Not all of Sax's arguments are so dependable. For instance, he is far more confident that homosexuality is substantially genetic in origin than is the leading researcher he cites in support of his assertion, J. Michael Bailey of Northwestern University. Bailey has publicly noted how challenging he has found it to assemble a reliably representative sample of identical and fraternal twins for his homosexuality studies. Further, Bailey is troubled by the fundamental objection that natural selection would, presumably, cause genes for homosexuality to die out. Sax, though, races past these prudent concerns.

Still, this is a better than average advice book for mothers and fathers. Most parenting books are unrealistic because they overemphasize how much parents can mold their children's personalities. Raising a second child, with his normally quite different personality, typically undermines parents' belief in their omnipotence, but most child-rearing books hush this up because their market is gullible first-timers. Fortunately, by emphasizing how much you need to fine-tune your treatment to fit your child's sex, Why Gender Matters injects some needed realism into the genre. But Sax's bulletproof confidence in his own advice gives me pause.
Sixteen years of fatherhood have left me less confident that I know what I'm doing than when I started, but he doesn't suffer from any such self-skepticism.

Steve Sailer is the film critic for The American Conservative and a columnist for VDARE.com.

From checker at panix.com Mon Jan 2 20:57:45 2006
From: checker at panix.com (Premise Checker)
Date: Mon, 2 Jan 2006 15:57:45 -0500 (EST)
Subject: [Paleopsych] World Science: Bees can recognize human faces, study finds
Message-ID:

Bees can recognize human faces, study finds
http://www.world-science.net/exclusives/051209_beesfrm.htm

Honeybees may look pretty much all alike to us. But it seems we may not look all alike to them. A study has found that they can learn to recognize human faces in photos, and remember them for at least two days. The findings toss new uncertainty into a long-studied question that some scientists considered largely settled, the researchers say: how humans themselves recognize faces. The results also may help lead to better face-recognition software, developed through study of the insect brain, the scientists added.

Many researchers traditionally believed facial recognition required a large brain, and possibly a specialized area of that organ dedicated to processing face information. The bee finding casts doubt on that, said Adrian G. Dyer, the lead researcher in the study. He recalls that when he made the discovery, it startled him so much that he called out to a colleague, telling her to come quickly because "no one's going to believe it -- and bring a camera!" Dyer said that to his knowledge, the finding is the first time an invertebrate has shown ability to recognize faces of other species. But not all bees were up to the task: some flunked it, he said, although this seemed due more to a failure to grasp how the experiment worked than to poor facial recognition specifically. In any case, some humans also can't recognize faces, Dyer noted; the condition is called prosopagnosia.

In the bee study, reported in the Dec. 15 issue of the Journal of Experimental Biology, Dyer and two colleagues presented honeybees with photos of human faces taken from a standard human psychology test. The photos had similar lighting, background colors and sizes, and included only the face and neck to avoid having the insects make judgments based on the clothing. In some cases, the people in the pictures themselves looked similar. The researchers, with Johannes Gutenberg University in Mainz, Germany, tried to train the bees to realize that a photo of one man had a drop of a sugary liquid next to it. Different photos came with a drop of bitter liquid instead. A few bees apparently failed to realize that they should pay attention to the photos at all. But five bees learned to fly toward the photo horizontally in such a way that they could get a good look at it, Dyer reported. In fact, these bees tended to hover a few centimeters in front of the image for a while before deciding where to land. The bees learned to distinguish the correct face from the wrong one with better than 80 percent accuracy, even when the faces were similar, and regardless of where the photos were placed, the researchers found. Also, just like humans, the bees performed worse when the faces were flipped upside-down. "This is evidence that face recognition requires neither a specialised neuronal [brain] circuitry nor a fundamentally advanced nervous system," the researchers wrote, noting that the test they used was one for which even humans have some difficulty.
Moreover, "Two bees tested two days after the initial training retained the information in long-term memory," they wrote. One scored about 94 percent on the first day and 79 percent two days later; the second bee's score dropped from about 87 to 76 percent during the same time frame. The researchers also checked whether bees performed better for faces that humans judged as being more different. This seemed to be the case, they found, but the result didn't reach statistical significance. The bees probably don't understand what a human face is, Dyer said in an email. "To the bees the faces were spatial patterns (or strange looking flowers)," he added.

Bees are famous for their pattern-recognition abilities, which scientists believe evolved in order to discriminate among flowers. As social insects, they can also tell apart their hivemates. But the new study shows that they can recognize human faces better than some humans can -- with one ten-thousandth of the brain cells. This raises the question of how bees recognize faces, and whether they do it differently from the way we do. Studies suggest small children recognize faces by picking out specific features that are easy to recognize, whereas adults see the interrelationships among facial features. Bees seem to show aspects of both strategies depending on the study, the researchers added.

The findings cast doubt on the belief among some researchers that the human brain has a specialized area for face recognition, Dyer and colleagues said. Neuroscientists point to an area called the fusiform gyrus, which tends to show increased activity during face-viewing, as serving this purpose. But the bee finding suggests "the human brain may not need to have a visual area specific for the recognition of faces," Dyer and colleagues wrote. That may be helpful to researchers who develop face-recognition technologies to be used for security at airports and other locations, Dyer noted. The United States is investing heavily in such systems, but they still make many mistakes. Already, the way that bees navigate is being used to design "autonomous aircraft that can fly in remote areas without the need for radio contact or satellite navigation," Dyer wrote in the email. "We show that the miniature brain can definitely recognize faces, and if in the future we can work out the mechanisms by which this is achieved," this might suggest ideas for improved face-recognition technologies.

Dyer said that if bees can learn to recognize humans in photos, then they reasonably might also be able to recognize real-life faces. On the other hand, he remarked, this probably isn't the explanation for an adage popular in some parts of the world -- that you shouldn't kill a bee because its nestmates will remember and come after you. Francis Ratnieks of Sheffield University in Sheffield, U.K., says that apparent bee revenge attacks of this sort actually occur because a torn-off stinger releases chemicals that signal alarm to nearby hivemates. Says Dyer, "bees don't normally go around looking at faces."

From shovland at mindspring.com Sun Jan 1 17:23:18 2006
From: shovland at mindspring.com (Steve Hovland)
Date: Sun, 1 Jan 2006 09:23:18 -0800
Subject: [Paleopsych] Medicare for All
Message-ID:

The health care industry is acting in a way that is detrimental to society as a whole. It is taking more and more money from us while providing less care for the money.
Most countries in Europe provide health care for all of their citizens for half the money we spend in the US, where 45 million remain uninsured. Harry Truman first proposed a national health system for the US in 1950. It's time to make the change.

From checker at panix.com Tue Jan 3 01:37:32 2006
From: checker at panix.com (Premise Checker)
Date: Mon, 2 Jan 2006 20:37:32 -0500 (EST)
Subject: [Paleopsych] Roger D. Congleton: The political economy of Gordon Tullock
Message-ID:

Roger D. Congleton: The political economy of Gordon Tullock*
Public Choice 121:213-238, 2004

[This is a superb appreciation of one of the Founding Fathers of Public Choice theory, and it is no mean introduction to the field, since Gordon's interests were so broad. I loved this in particular: "It bears noting that Tullock invented or at least helped to invent the rent-seeking model of conflict (1967/1974). A social scientist who was more interested in maximizing fame than in understanding the world would never have raised a question that reduces the importance of one of his own major contributions, even were such doubts to arise. Fame and fortune tend to go to those whose ideas are 'bigger' than initially thought, not 'smaller.' However, a proper scientist is a truth seeker (The Organization of Inquiry, 1966: 49), and Tullock is in this sense, if not in the conventional sense, a very proper social scientist."

[Gordon, I think, loved more than anything else, to make unsettling if not outrageous assertions in conversation. Back in graduate school, when I first met him, I was arguing foreign policy with him. He was dubious and asked me a question. I changed my view, and he asked me another question. This went on until he told me, "You have now gone full circle."

[Of all the people I have ever met, only Steve Sniegoski comes close in challenging my opinions. Both of them have first opinions of their own, which in the case of Gordon, as the article shows, are not always so apparent. I'm even further removed myself, since it is leaving no Premise Unchecked (that is, the Premise of my conversant) that is my forte, not persuasion itself.

[In my opinion, Gordon did not share the Nobel Prize with Jim Buchanan because he tweaked the nose of the Swedes, with their welfare state, who award the prize, too many times by demanding to know what they thought a "just" distribution of income looks like, as their underlying mood is not to achieve some fixed distribution but to have ever more re-distribution.

[The world badly needs far more challengers like Gordon.]

[I call myself one of the Founding Sons of Public Choice theory, having studied under Gordon and Jim Buchanan in the early years at U.Va. I'm sending this also to a number of U.Va. people, so they can see what came out of it.

[Sorry about the ligatures, like "fi" showing up as periods. Adobe's PDF to TXT converter (starting with version 7) changes all these ligatures to a period, so I can't do a global search and replace. I do convert (at least I try to get them all) the various Microsoft smart characters to ASCII ", ', --, etc., as appropriate. Lynx, my text-only web browser, does this now, except that in some cases nothing, not even a space, remains. But it should be obvious what's what. Tables generally do not convert. Sometimes I have to remove page headers, sometimes not. And, too often, spaces are omitted. It can be quite time-consuming, so please forgive me if I didn't replace each ligature by hand. I sometimes also keep paragraphs together when interrupted with footnotes.
I can generally send the PDFs to anyone who e-mails me asking for them.]

*Center for Study of Public Choice, George Mason University, Fairfax, VA 22030, U.S.A.; e-mail: congleto at gmu.edu
Accepted 25 August 2003

The perspective on Tullock's work presented here is based partly on his prolific writings and partly on numerous conversations with him over the course of several decades. He was kind enough to read through a previous draft, and the version presented here reflects his comments and suggestions. Comments and suggestions received at the 2002 meeting of the Public Choice Society, and from James Buchanan, Charles Rowley, Robert Tollison, and an anonymous referee were also very helpful.

"Leaving aside the problem of the correctness of my answers, the fact remains that I have been unable to find any indications that scientists have asked the questions to which I address myself. The unwary might take this as proof that the problems are unimportant, but scientists, fully conscious of the importance of asking new questions, will not make this mistake." (Gordon Tullock, The Organization of Inquiry, 1966: 3.)

1. Introduction

It is fair to say that few public choice scholars have contributed to so many areas of public choice research as frequently or with as much insight as Gordon Tullock. Professor Tullock's work considers not only political and contractual relationships within a well-established legal order, but also extraordinary political behavior within rent-seeking societies, within firms, at court, within communities at war, among those considering revolution, and among those emerging from or falling into anarchy. The result is an unusually complete political economy that includes theories of the origin of the state; theories of decision making within bureaucracy, dictatorship, democracy, and the courts; and within science itself.

It is also fair to say that Professor Tullock uses relatively simple tools to analyze these far-reaching topics. Indeed, it is the use of relatively simple tools that makes the broad scope of his work possible. All the principal actors in Tullock's analysis maximize expected net benefits in circumstances where benefits, costs, and probabilities are assumed to be known by the relevant decision makers with some accuracy. This is the core hypothesis of the rational choice approach to social science, and it is the rationale for the title of Brady and Tollison's (1994) very interesting collection of Tullock papers.

For the social scientist who uses the rational choice methodology, the research problem at hand is not to understand the complex chain of events that gave rise to unique personalities and historical moments, but rather to more fully appreciate the general features of the choice problems facing more or less similar actors at times when more or less routine decisions are made. By focusing on the general rather than the particular, a good deal of human behavior can be predicted within broad limits, without requiring intimate knowledge of the individuals or institutional settings of interest. Such an approach is commonplace within economics, where it has been very successfully applied to understand general features of decisions made by firms and consumers, and is becoming more common within other social sciences where the rational choice methodology remains somewhat controversial.
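[For readers who want that "core hypothesis" in symbols, here is a minimal formal rendering -- an illustration added here, not Congleton's or Tullock's own notation. An actor choosing among actions a from a feasible set A, with known outcome probabilities, benefits, and costs, picks

    \[
      a^{*} \;=\; \arg\max_{a \in A} \;\; \sum_{s} \pi_{s}(a)\, B_{s} \;-\; C(a),
    \]

where \pi_{s}(a) is the (known) probability that action a produces outcome s, B_{s} is the benefit of outcome s, and C(a) is the cost of the action. The breadth Congleton describes comes from re-posing this one maximization in different institutional settings -- voting, bureaucracy, rent seeking, revolution, even science.]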
Tullock's work is largely written for economists and the subset of political scientists who routinely use rational-choice models, and his analysis naturally uses that mode of reasoning and argument. What distinguishes Tullock's work from that of most other social scientists who use the rational choice approach is that, in spite of his use of reductionist tools, Tullock's work tends to be anti-reductionist rather than reductionist in nature.1 A good deal of Tullock's work uses simple models to demonstrate that the world is more complex than may have previously been appreciated. It is partly the critical nature of his work that makes Tullock's world view difficult to summarize, as might also be said of much of Frank Knight's work. Tullock's more conventional work suggests that some arguments are more general than they appear and others less general than might be appreciated. To make these points, Tullock, like Knight, tends to focus sharply on neglected implications and discomforting facts. Unlike Knight, his arguments are usually very direct, and often simple appearing. Indeed, critics sometimes suggest that Tullock's direct and informal prose implies superficiality rather than a clear vision. However, a more sympathetic reading of Tullock's work as a whole discovers irreducible complexity, rather than simplicity. This complexity arises partly because his approach to political economy bears a closer relationship to work in law, history, or biology than it does to physics or astronomy, and much work within economics. Tullock was trained as a lawyer and reads widely in history. Both lawyers and historians are inclined to regard every case as somewhat unique and every argument as somewhat flawed. Both these propensities are evident in his work.

It is also true that Professor Tullock enjoys pointing "the way," and "the way" seems to be a bit different in every paper and book. His published work, especially his books, often leaps from one innovative idea to the next without providing readers with a clear sense of the general lay of the intellectual landscape. Although many of Tullock's pieces can be accurately summarized in a few sentences (as tends to be true of much that is written by academic scholars), the world revealed by Professor Tullock's work as a whole is not nearly so easily condensed.

Complexity also arises because the aim of Tullock's work is often to stimulate new research on issues and evidence largely neglected by the scholarly literature, rather than to complete or finalize existing lines of research through careful integration and testing. To the extent that he succeeds with his enterprise - and he often has - his efforts to blaze new trails stimulate further exploration by other scholars. For example, his work with James Buchanan on constitutional design (1962) has generated a substantial field of rational choice-based research on the positive and normative properties of alternative constitutional designs. His path-breaking paper on rent seeking (1967) was so original that it passed largely unnoticed for a decade, although it and subsequent work have since become widely praised for opening important new areas of research. His work on dictatorship (1974, 1987), which was almost a forbidden topic at the time that he first began working on it, has helped to launch important new research on non-democratic governance (Olson, 1993; Wintrobe, 1994).
His editorial essays, "Efficient Rent-Seeking" and "Back to the Bog," have also encouraged a large body of new work on the equilibrium size of the rent-seeking industry and helped establish the new field of contest theory. The institutionally induced equilibrium literature pioneered by Weingast and Shepsle (1981) was developed partly in response to Tullock's "Why So Much Stability?" essay. His early work on vote trading (1959), the courts (1971, 1980), and bureaucracy (1965) also helped to establish new literatures.

The breadth of Tullock's political economy and the simplicity of its component arguments also reflect his working style and interests. Professor Tullock is very quick, reads widely, and works rapidly. He dictates the majority of his papers. And although his papers are revised before sending them off, he lacks the patience to polish them to the high gloss evident in the work of most prominent scholars. In Tullock's mind, it is the originality of the ideas and analysis that determines the value of a particular piece of research, rather than the elegance of the prose or the mathematical models used to communicate its ideas. (To paraphrase McLuhan, "the message is the message," rather than the "medium.") The result is a very large body of very creative and stimulating work, but also a body of work that could benefit from just a bit more care at its various margins.2

If a major fault exists in that substantial body of research, it is that Tullock has not provided fellow travelers with a road map to his intellectual enterprise, as for example James Buchanan, Mancur Olson, and William Riker have. None of Tullock's hundreds of papers explains his overarching world view in detail, nor is there a single piece that attempts to integrate his many contributions into a coherent framework. The purpose of this essay is to provide such an intellectual road map. It directs attention to the easily neglected general themes, conclusions, and connections between Professor Tullock's many contributions to public choice. The aim of the essay is, thus, in a sense "non-Tullockian" insofar as it attempts to explain Tullock's complex and multifaceted world view with a few fundamental principles, rather than to probe for weaknesses or suggest new problems or interpretations of existing work. The present road map is organized as follows. Section 2 focuses on the methodological foundations of Tullock's work, Section 3 surveys his broad research on political economy, and Section 4 summarizes the main argument and briefly discusses some of Tullock's major contributions. Numerous quotes from Tullock's work are included in endnotes.

2. Tullock's world view

A. Methodology: Positivism without statistics

"We must be skeptical about each theory, but this does not mean that we must be skeptical about the existence of truth. In fact our skepticism is an illustration of our belief in truth. We doubt that our present theories are in fact true, and look for other theories which approach that goal more closely. Only if one believes in an objective truth will experimental evidence contrary to the predictions 'disprove' the theory." (The Organization of Inquiry: 48)

Tullock's perspective on science and methodology, although implicit in much of his work, is most clearly developed in The Organization of Inquiry [1966].
The Organization of Inquiry applies the tools of rational choice-based social science to science itself, in order to better understand how the scientific community operates and why scientific discourse has been an engine of progress for the past two centuries. Such questions cannot be addressed without characterizing the aims and methods of science and scientists, and, thus, Tullock could not analyze the organization of inquiry without revealing his own vision of science, scientific progress, and proper methodology. The preface of The Organization of Inquiry acknowledges the influence of Karl Popper, Michael Polanyi, and Thomas Kuhn, and these influences are clearly evident in his work.3

Although Tullock's work is largely theoretical, he remains very interested in empirical evidence. A logical explanation that fails to explain key facts can be overturned by those facts even if the line of reasoning is completely self-consistent. That is to say, both the assumptions and predictions of a model should account for facts that are widely recognized by intelligent persons who read reputable newspapers and are familiar with world history. The world can "say" something about a theory, and a proper scientist should be prepared to hear what is said. He or she does this by remaining a bit skeptical about the merits of existing theories, no matter how well-stated or long-standing.4

His simultaneous skepticism and belief in the possibility of truth is clearly evident in his wide range of articles and comments critiquing the theories and mistaken conclusions of other social scientists. For example, in a series of essays on "the bog" Tullock asks and re-asks those working in the rent-seeking literature to explain why the rent-seeking industry is so small. And, moreover, why is the rate of return on rent seeking evidently so much greater than the rate of return from other investments? It bears noting that Tullock invented or at least helped to invent the rent-seeking model of conflict (1967/1974). A social scientist who was more interested in maximizing fame than in understanding the world would never have raised a question that reduces the importance of one of his own major contributions, even were such doubts to arise. Fame and fortune tend to go to those whose ideas are "bigger" than initially thought, not "smaller." However, a proper scientist is a truth seeker (The Organization of Inquiry, 1966: 49), and Tullock is in this sense, if not in the conventional sense, a very proper social scientist.

In contrast to most academic scholars, Tullock argues that the scientific enterprise is not elitist. Science is accessible to non-experts. The facts do not respect titles, pedigrees, or even a history of scientific achievement. He suggests that essentially any area of science can be understood by intelligent outsiders who take the time to investigate them. Thus, every theory is open to examination by newcomers with a fresh eye as well as those with established reputations in particular fields of research.5

Together, Tullock's truth-oriented skepticism and nonelitism shed considerable light on the broad domain in which Professor Tullock has read and written. A strong sense that "the truth" can be known by anyone who invests time and attention induces Tullock to read more widely and more critically than those inclined to defer to well-credentialed "experts." His positivism induces him to focus on modern scientific theories and historical facts rather than philosophical controversies.
Because his extensive reading covers areas that are unfamiliar to his less widely read or more philosophical colleagues, he is able to use a wide range of historical facts and scientific theories to criticize existing theories, and also as a source of puzzles and dilemmas to be addressed in new research. Together with his non-elitist view of science, his broad interest in the world induces him to think and write without regard to the disciplinary boundaries that constrain the thoughts of his more convention-bound colleagues.

B. Social science: How narrow and how rational is human nature?

"Every man is an individual with his own private ends and ambitions. He will only carry out assigned tasks if this proves the best way of attaining his own ends, and will make every effort to change the tasks so as to make them more in keeping with these objectives. A machine will carry out instructions given to it. A man is not so confined." (The Politics of Bureaucracy, 1966: 32)

Economists tend to view man as "a rational animal," by which various economists mean various things not uniformly agreed to, but nonetheless clearly distinct from the customary usage of the word "rational" by non-economists. For example, microeconomics texts normally introduce the notion of "rationality" at the same time that they discuss preference orderings. Rational decision makers have transitive preference orderings. Game theorists and macroeconomists who model individual decision making through time consider a decision maker to have "rational expectations." A rational decision maker anticipates the consequences of his or her actions, and does so in a manner free of systematic mistakes or bias. (In this amended concept of rationality, economists are returning to the use of the term "rational" in ordinary language.) The preference and informational meanings of the term "rational" are often commingled by modern economists, so that rational individuals become characterized as persons having consistent and durable preferences and unbiased expectations. This very demanding definition of rationality is occasionally found in Tullock's work.6

However, in most cases, Tullock is unwilling to adopt the full rationality hypothesis. He argues, for example, that information problems exist that lead to systematic errors, especially within politics (1967, chs. 6-9). The existence of such information problems is grounded in his personal experience. If human beliefs were always unbiased, it would be impossible to find instances in which large groups of people, especially professionals, have systematically mistaken views about anything. For those who have more than occasionally been persuaded by Professor Tullock to change their own views, or seen him launch a well-reasoned barrage on the views of thoughtful but confused colleagues, it sometimes appears that the only economist whose expectations are untainted by wishful thinking is Gordon Tullock himself.7 Tullock's value as a critic and curmudgeon is, itself, largely incompatible with the "rational expectations" usage of the term "rational."

Yet, it is partly because economists have failed to broadly apply the rational choice paradigm that Tullock has achieved some notoriety among economists by reminding the profession of the limits of other motivational theories; however, this is not because he believes that humans have one-dimensional objective functions.8 Tullock's view of man also incorporates a richer model of self-interest than is included in most economic models.
Although man is self-interested, his interests are often complex and context dependent.9 Consequently, Tullock rarely uses the simplest characterization of homo economicus as a narrowly self-interested "wealth maximizer." For example, Tullock allows the possibility that a person's self-interest may be partly dependent on the welfare of others. Modest altruism and envy are at least weakly supported by evolution and therefore are likely to be present in human behavior.10 The evidence, however, leads Tullock to conclude that such "broader" interests are less important than many believe. In the end, it is narrow self-interest-based analyses that provide the surest model of human behavior and, therefore, the surest basis for institutional reform.11

If Buchanan's views may be said to be similar to those of James Madison, it might be said that Tullock's view of man parallels that of George Washington.12 Washington once said that to expect "ordinary people to be influenced by any other principle but those of interest is to look for what never did and I fear never will happen" (Johnson, 1997: 186), and also that "few men have virtue to withstand the highest bidder." The paradox in both cases is that neither man was himself entirely motivated by narrow self-interest.

C. Conflict and prosperity: On the cost and generality of rent seeking

"Conflict is to be expected in all situations in which transfers or redistribution occur, and in all situations in which problems of distribution arise. In general, it is rational for individuals to invest resources to either increase the transfers that they will receive or prevent redistributions away from them. Thus, any transactions involving distribution will lead to directly opposing resource investments and so to conflict by our definition." (The Social Dilemma, 1974: 6)

Take a rational individual and place him in a setting that includes other individuals in possession of scarce resources, and most economists will predict the emergence of trade. Economists are all familiar with the Edgeworth box, which provides a convincing illustration of mutual gains from exchange. Tullock would be inclined to predict conflict. Scarcity implies that individuals cannot achieve all of their objectives and that essentially all individuals would be better off with additional resources; however, it does not imply that voluntary exchange is the only method of acquiring them. Unfortunately, the economist's prediction that unrealized gains will be realized through voluntary exchange follows only in settings where changes in the distribution of resources can be accomplished only through voluntary means. In the absence of well-enforced rights, the strong may simply take the "initial endowments" of the weak.13

Few modern political economists would disagree with such claims about conflict in a setting of anarchy, once reminded of the importance of well-enforced property rights. However, Tullock also argues that wasteful conflict tends to emerge in settings where rights are initially well understood and enforced. For example, lawful means are routinely used - within legislatures and court proceedings - to change existing property rights assignments and the extent to which they are enforced. In ordinary markets, there is conflict over the division of the gains from trade, and also in the efforts of firms to increase market share through advertising and product innovation.
In settled polities, conflict is evident in the efforts of opposing special interest groups to persuade legislatures to enact particular rules and regulations, and in the efforts of opposing candidates to win elective office. In less lawful or settled settings, political and economic conflict may imply theft and fraud, or bombs exploding and battles fought. Tullock often reminds us that conflict is endemic to human existence.

Conflict implies that resources are devoted to activities that reduce rather than increase the output of final goods and services. These "rent-seeking" losses cannot be entirely avoided, although the cost of conflict can be reduced by intelligent institutional design. For example, the cost of conflict is reduced by institutional arrangements that encourage the accumulation of productive capital rather than investments in redistribution.14 It bears noting that Tullock's conclusion regarding the feasibility of institutional solutions is empirical rather than analytical. Modern game theory suggests that perfect institutions cannot be ruled out a priori - indeed, for essentially any well-defined game of conflict, it can be shown analytically that a suitable bond or punishment scheme can completely eliminate the losses from conflict. As far as Tullock knows, however, there are no real-world institutional arrangements that completely solve the problem of conflict. What changes with institutions is the magnitude and type of conflict that takes place. That is to say, conflict appears to be the normal state of human affairs, whether bound by institutions or not. Theoretical solutions evidently underrepresent the strategy sets available to persons in real historical settings.

3. Tullock's political economy

A. From the Hobbesian jungle to authoritarian government

"Let us make the simplest assumption of transition conditions from the jungle to one where there is an enforcement apparatus. Assume, then, a jungle in which there are some bands - like prides of lions - and that one of these bands succeeds in destroying or enslaving all of the others, and establishes firm control. This control would, firstly, lead to a considerable change in the income distribution in the jungle in that the members of the winning band would have much larger incomes and the losers would have lower incomes. It would be rational for the stronger members of the winning band to permit sizable improvements in the incomes of the weaker members at the expense of nonmembers of the band, simply in order to retain the support of these weak members. The cohesion of the new government would depend on suitable reward for all members." (Gordon Tullock, "The Edge of the Jungle," in Explorations in the Theory of Anarchy, 1972: 70)

Tullock argues that government, itself, often emerges from conflict. For example, Tullock suggests that autocracy is the most likely form of governance to emerge in real political settings. In this one might suppose that Tullock agrees with Hobbes rather than with Buchanan, but neither turns out to be the case. Tullock's theory of the origin of government is based on conquest and domination rather than social contract. The theoretical and empirical importance of authoritarian regimes has led Tullock to devote substantial time and energy to analyzing the properties of this very common political institution. His analysis of autocracy implies that the rule of particular dictators tends to be short-lived, although autocratic institutions themselves tend to be very durable.
Autocratic regimes have an inherent "stability problem" analogous to that associated with coalition politics in democracies. Escape from anarchy does not imply the end of conflict, as indirectly suggested by Hobbes.15 This is not to say that every dictatorship is overthrown. Tullock discusses a variety of methods by which dictators can decrease the probability of coup d'état by in-house rivals, most of which, by increasing the costs of conspiracy, also reduce the probability of a coup attempt being organized. For example, laws against treason should be aggressively enforced, rewards for providing the ruler(s) with credible evidence of conspiracies should be high, commissions rather than individuals should be given responsibility for as much as possible, and potential rivals should be exiled in a manner that reduces opportunities for acquiring support among elites (Autocracy, 1987: Ch. 1 and The Social Dilemma, 1974: Ch. 7). Nonetheless, the large personal advantage that successful conspirators expect to realize makes conspiracies difficult to eliminate completely; consequently, coups do occur on a fairly regular basis. The dictator's coalition problem implies that a particular autocrat's "term of office" is likely to be ended by an internal overthrow, or coup d'état (Autocracy, 1987: 9), and this is widely observed (Biennen and van de Walle, 1989).

However, the coalition problem does not apply to the institution of autocratic governance itself. Centralized political power will not be given up easily, because political elites often share an interest in retaining autocratic forms of governance, even when they disagree about who should rule. Moreover, a well-informed autocrat can more easily subvert a popular revolt than a coup d'état. The same methods used to discourage palace coups also discourage popular revolts. Tullock argues that popular uprisings are far more difficult to organize than are palace coups, because the public-good problems that must be overcome are much larger. The individual advantages of participating in a popular uprising are very small relative to those obtained by members of a palace coup, although the aggregate benefits may be much larger. Being larger enterprises, revolutionary movements are also much easier to discover (Autocracy, 1987: Ch. 3 and The Politics of Bureaucracy, 1966: 54). Together these imply that autocratic governmental institutions are more easily protected than is the tenure of a particular dictator.16

Tullock's analysis implies that democracy is a very unlikely form of government, although not an impossible one. For example, Tullock notes that an internal overthrow engineered by elites may lead to democracy, as when an elected parliament or state assembly deposes a king or appointed governor, and it may well be the case that such transformations are broadly supported in the population as a whole (Autocracy, 1987: 53-68). The evidence supports Tullock's authoritarian prediction, insofar as autocracies have been far more common than democracies throughout recorded history.

B. Constitutional design

Given the historical rarity of democracy and Tullock's assessment of the likelihood of democratic reform, it is somewhat surprising that Professor Tullock has devoted so much of his intellectual life to understanding how modern democracy operates and how it can be improved. The most likely explanation is that knowledge of one's local political circumstances tends to be valuable for scholars and non-scholars alike.
Tullock, like most other public choice scholars, resides in a democratic polity. And this, in combination with the wider freedom available within democracies to engage in political research, has led him and most other public choice scholars to focus largely on the properties of democratic governance.17

When government policies are to be selected by a group, rather than imposed by a dictator, the first collective choice that must be made is the method of collective choice itself. How should such constitutional decisions be made? Buchanan and Tullock point out in the Calculus of Consent (1962) that the design and selection of collective decision rules is a complex problem, but one that is amenable to analysis using rational choice models.18 For example, Buchanan and Tullock note that a wide variety of voting rules can be employed by a group to make collective decisions and, moreover, that decision rules other than majority rule can be in the interest of essentially all citizens. The best decision rule depends on the problems being addressed collectively and also on the diversity of group interests. Buchanan and Tullock also point out that, even in cases where majority rule is explicitly used and median voter outcomes emerge in the relevant elections, other institutional arrangements, such as bicameralism or single-member districts, may imply that "majoritarian" legislative outcomes require substantially more or less than majority support from the electorate (Calculus of Consent: Chs. 15 and 16). In general, the menu of political constitutions includes a wide range of choices, and even majoritarian decisions are affected by the institutional setting in which voting takes place. In subsequent work, Tullock argues that a far better method of choice, the "Demand Revealing Procedure" (Tideman and Tullock, 1976), would not rely on counting votes at all.19

C. Interest groups, vote trading, and coalition politics

On those occasions when collective decisions are made by majority rule, most economists assume that median voter interests tend to be advanced, partly because the median voter model is so tractable.20 However, as Tullock has long argued, most voting models assume that voters make independent decisions about how to cast their votes. Tullock (1959, 1970) points out that if vote trading (log rolling) is possible, mutual gains from trade can sometimes be realized by coordinating votes - mutual gains that would otherwise be infeasible. For example, suppose there are three equal-sized groups of voters who care intensely about three separate large-scale projects that can only be financed by the central government - for example, building a dam, dredging a river, or constructing a bridge. Tullock demonstrates that it may be Pareto efficient to undertake all three projects, but the concentration of benefits within minorities can cause ordinary majority rule to reject all three projects. Vote trading in such instances potentially allows some or all of the unrealized gains from government service to be realized.21 In such cases, rather than appealing to the median voter, Tullock notes that candidates may take positions that appeal to several distinct "special interest" minorities that together add up to a majority. Direct vote trades are most feasible in relatively small-number settings, as in legislatures, where continuous dealings allow informal exchanges of "favors" to be enforced.
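To make the three-project example concrete, here is a minimal sketch in Python. It is an illustration, not Tullock's own numbers: the payoffs are invented so that each project's benefits are concentrated in one group while its tax costs are spread over all three.

    # Hypothetical numbers illustrating Tullock's log-rolling argument:
    # three equal-sized voter groups, each sponsoring one project. A
    # project yields its sponsor a benefit of 9 and costs 6 in taxes,
    # shared equally (2 per group). Benefits exceed costs, so passing
    # all three projects is efficient.
    GROUPS = ["A", "B", "C"]
    BENEFIT, COST = 9.0, 6.0

    def payoff(group, projects_passed):
        """Net payoff to a group for a given set of passed projects."""
        total = 0.0
        for sponsor in projects_passed:
            total += (BENEFIT if sponsor == group else 0.0)
            total -= COST / len(GROUPS)
        return total

    # Independent majority votes: each project gains only its sponsor's
    # vote (+7 for the sponsor, -2 for everyone else), so all three fail.
    for sponsor in GROUPS:
        ayes = sum(1 for g in GROUPS if payoff(g, [sponsor]) > 0)
        print(f"Project {sponsor}: {ayes} of 3 groups in favor -> fails")

    # Log rolling: the three groups trade votes and pass everything.
    # Each group then nets 9 - 3*2 = +3, so all are better off.
    for g in GROUPS:
        print(f"Group {g}, all projects passed: {payoff(g, GROUPS):+.0f}")

The sketch also shows why unadorned majority rule gets the wrong answer here: each project's diffuse costs outvote its concentrated benefits, even though every project is worth more than it costs.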
In large-scale elections, explicit vote trading is not likely to be a major factor influencing electoral outcomes, although what Tullock refers to as implicit log rolling may be. Figure 1 illustrates the case in which extremist groups A and B join forces to obtain policy X over the wishes of moderate voters who prefer policy B.

[Figure 1. Implicit log rolling]

Such implicit vote trading, unfortunately, tends to be associated with majoritarian decision cycles. That is to say, if implicit vote trading can make a difference, there tends not to be a median voter. For example, in Figure 1, note that pairwise votes among policies X, B, and Y would be as follows: X > B, but Y > X and B > Y.

D. Bureaucracy

Once legislative decisions are reached, they are normally implemented by large government organizations referred to as bureaucracies. In some cases, implementation is simply a matter of executing directives from elected representatives. Activity A is to be officially opposed or encouraged, and the bureaucracy implements the policy by imposing penalty P or subsidy S on persons engaging in activity A. In other cases, the bureaucracy has discretion to develop the policies themselves or the methods by which services will be produced, as when police and fire departments organize the production of crime- and fire-controlling services. In still others, the agency may be able to develop the law itself - as within regulatory agencies. In all such cases, it is clear that the final disposition of public policy depends in part on the incentives of the individuals who work in government agencies as well as those of elected representatives.

In Tullock's view, the incentives within large public and private organizations are broadly similar, although they differ somewhat at the margin (Politics of Bureaucracy, 1966). Both public and private bureaucracies have their own internal incentive structures that encourage various kinds of productive and unproductive activities by the individuals who work within them. These incentives influence both the performance of individuals within organizations and the array of outputs produced by their organizations.

Tullock argues that the importance of a particular organization's internal incentives relative to the external incentives of labor markets is determined by the ability of individual bureaucrats to move between organizations. If every individual within a bureaucracy could costlessly change jobs, intra-organizational reward structures would be relatively unimportant for career advancement, and reputation in the wider community would largely determine salaries. Alternatively, when it is difficult for persons to move between organizations, the structure of internal rewards and punishments becomes an important determinant of individual salaries and perquisites, and, therefore, behavior (Politics of Bureaucracy, 1966: 10). In such cases, large organizations will have some monopsony power with respect to their employees, and internal incentives will largely determine employee performance on the job.

Economics predicts that monopsony power will affect salaries and other economic aspects of job contracts. However, the intrafirm relationships of interest in Tullock's analysis are political, rather than economic. The politicization of an organization's hierarchy creates a nonprice mechanism by which hierarchical organizations can solve their coordination and principal-agent problems.
He argues that political aspects of relationships within large organizations can be readily observed and, to some extent, measured by "deference." The "deference" observed is predicted to vary with the extent of monopsony power that a given organization possesses.22 For example, insofar as mobility decreases with seniority, Tullock's analysis predicts that deference would increase as individuals approach the top of an organization's hierarchy. The specific behavior that successfully curries favor or signals loyalty clearly varies according to the "wishes" induced in a given agent's boss by the boss's boss, and so on.

In principle, both public and private organizations can be organized in an efficient manner, in the sense that organizational goals are advanced at least cost.23 However, incentives to assure efficiency within the public bureaucracy tend to be smaller than within large firms. Wage differentials tend to be larger at the top levels of private-sector organizations than in comparable public-sector organizations; consequently, Tullock predicts that more deference occurs in private than in comparable governmental organizations.24 Moreover, a public bureau's efficiency is generally more difficult to assess, and there is substantially less motivation for improving the performance of public bureaus than of comparable private bureaus within large firms.25 For these reasons, Tullock concludes that the public bureaucracy tends to be less efficient than comparable organizations in the private sector. What this means as a practical matter is that organizational interests, as understood by senior bureaucrats and the legislature, are advanced less in public bureaus than within comparable organizations in the private sector.

Tullock's analysis implies that the efficiency of the public bureaucracy can be improved if incentives to monitor public-sector performance are increased, or if external competitive pressures on bureaus are intensified. For example, Tullock argues that federalism can address both problems by reducing the complexity (size and scope) of the government agencies to be monitored (as local agencies replace national agencies) and by increasing competition between public agencies - both directly, through the efforts of localities to attract new residents, and indirectly, by comparison of the outputs of neighboring bureaus - as with local school districts and highway service departments.

E. Enforcing the law: The courts, crime, and criminals

"My readers are no doubt convinced by now that this book is different from other books on legal procedure. They may be convinced that it is superior, but, then again, they may not. I am proposing a radically different way of looking at procedural problems, and anyone making radical proposals must recognize the possibility that he could be wrong. But, although I concede the possibility that I could be wrong, I do not think that I am." (Trials on Trial, 1980: 233)

Of course, the executive bureaucracy is not the only governmental institution that affects legislative outcomes. Even within well-functioning democracies, many policy-relevant decisions are made by "independent" agencies. One crucial agency that is much neglected in the public choice literature is the courts.
Economics implies that essentially all the incentive effects of public policy are generated by enforcement - that is to say, by the probabilities of punishment and the penalties associated with various kinds of private and public behavior.26 It is, thus, surprising that public choice scholars have invested so little effort in analyzing the law enforcement system. Efficient and equitable enforcement of the law cannot be taken for granted.

Professor Tullock was a pioneer in the rational choice-based analysis of the legal system, his Logic of the Law (1971) being published a year before Posner's Economic Analysis of the Law (1972). Tullock's research on the legal system reflects his interest in political economy. His work focuses largely on the problem of law enforcement, although the Logic of the Law also analyzes both civil and criminal law. On the former subject, largely neglected by Posner's treatise, Tullock reminds us that errors will always be made in the enforcement of law.27 Not all criminals are caught, not all who are caught are criminals, and not all of the guilty parties caught are punished, nor are all innocent parties released. Mistakes can be made at every stage of the judicial process.28

With such errors in mind, Tullock explores the accuracy of institutions that determine fault or guilt, and attempts to assess the overall performance of the existing U.S. system of justice relative to alternative procedures for identifying criminals and persons at fault.29 Tullock argues that the available evidence implies that the U.S. courts make errors (wrongly determine guilt or innocence) in between 10% and 50% of the cases that they decide (Trials on Trial, 1980: 33). Of course, a perfectly accurate justice system is impossible. The institutional or constitutional question is not whether mistakes are made, but whether too many (or too few) mistakes are being made. Improving the accuracy of court proceedings can reduce the social cost of illegal activities by better targeting sanctions at transgressors, which tends to reduce crime, and by encouraging greater efforts to settle out of court, which tends to reduce court costs (Trials on Trial, 1980: 73-74).

Tullock argues that the system of justice presently used in the United States can be improved at relatively low cost. He argues, for example, that the continental judicial system widely employed in Europe produces more accurate verdicts at a lower cost (Trials on Trial, 1980: ch. 6). In the continental system, panels of judges assess guilt or innocence and mete out penalties in trials that are organized directly by the judges, rather than produced by conflict between legal teams competing for the votes of jury members. Accuracy could be further increased if the training of judges included a "good background in statistics, economics, ideas of administrative efficiency, etc." (Trials on Trial, 1980: 204)

4. Conclusion and overview: Political economy in the van

Tullock's work demonstrates that the rational choice paradigm sheds light on a wide variety of political choice settings, but the world revealed is fundamentally complex, varied, and irreducible. Each political setting has its own unique constellation of incentives and constraints. Political decisions at the constitutional level include voting rules, legislative structure, and the institutional structures of the bureaucracy and the courts. The public policies adopted within a given constitutional setting must address issues of redistribution and revolution as well as ordinary externality and coordination problems.
Decisions reached within all these settings can be understood as consequences of rational choice, but each choice setting differs from the others, and the differences have to be taken into account if human behavior and policy outcomes are to be understood. Individuals are rational and largely self-interested, but on many issues they will be rationally ignorant and, consequently, make systematic mistakes.

This is not to say that there is nothing that can be said in general. Both individual choices and political outcomes are the result of the same fundamental considerations: self-interest, scarcity, and conflict. And if the particulars always differ, and are more than occasionally breathtaking, the basic "lay of the Tullock landscape" is always vaguely familiar.30 What is universal in Tullock's political economy is human nature. Tullock believes that (fairly) narrow self-interest can account for a wide range of human behavior, once individual interests are identified for the institutional settings of interest. It is his characterization of human nature that provides Tullock's research in political economy with its unified and coherent core. What is unique about Tullock's approach to political economy is his willingness to identify costs and benefits in essentially all choice settings, including many where more orthodox economists and political scientists fear to tread. Tullock's work suggests that a proper understanding of institutional settings allows relatively straightforward net-benefit maximizing models to account for a rich and complex range of policy outcomes. A good deal of human behavior, perhaps most, can be understood using the rational choice model of behavior, once the particular costs and benefits of actions in a given institutional setting are recognized.

A. Normative research

Although Tullock's work is motivated, in large part, by his efforts to make sense of a broad range of historic and contemporary puzzles that have come to his attention over the course of a lifetime of rapid and extensive reading, his research has never aimed exclusively at understanding the world. His books and many of his papers address normative as well as positive issues.31 His normative approach is utilitarian and comparative, and, for the most part, his normative conclusions follow closely from his positive analyses. If he can show that the average person is better off under institution X than under institution Y, he concludes that X is a better institution than Y. In such cases, X is approximately Pareto superior to Y. Thus, a society with stable criminal and civil law is better off than one lacking them (Logic of the Law, 1971: ch. 2). A society with a more accurate judiciary is better off than one with a less accurate judicial process (Trials on Trial, 1980: Ch. 6). A society with an efficient collective decision rule is better off than one that fails to minimize decision costs (Calculus of Consent, 1962: Ch. 6). A society that uses the demand-revealing process to make collective decisions would be better off than one relying on majority rule (Tideman and Tullock, 1976). A society that reduces rent-seeking losses is better off than one that fails to address this problem (Efficient Rent Seeking, 2000: Ch. 1). Intelligent institutional design can improve the efficiency of the judicial system, reduce the losses from conflict, and produce better public policies, although it cannot eliminate all losses or mistakes. Although many normative arguments are found throughout Tullock's work, his analysis is never utopian.
He never claims that institutional arrangement Y is the best possible arrangement, only that existing arrangements can be improved. Indeed, he argues that utopian approaches may impede useful reforms (Social Dilemma, 1974, p. 140).

B. Breadth of Tullock's research

Most economists study the behavior of rational self-interested individuals interacting within a stable pattern of laws and regulations governing ownership and exchange. Most political scientists study individual and group behavior within a stable pattern of constitutional laws and rules governing political procedures and constraints. The public choice literature as a whole analyzes how economic and political interests give rise to public policies. The public policies studied by public choice scholars include both the routine legislative outcomes of ordinary day-to-day politics and administrative decisions, and also changes in the fundamental laws that determine the procedures and constraints under which future political and economic decision making will be made. The political and economic processes studied by public choice scholars can thus be said to generate the "settings" and many of the "facts" studied by the more established fields of economics and political science. In this respect, public choice can be regarded as broader in scope than either of its parent disciplines, and, consequently, a scholar who contributes to all the research programs within public choice necessarily has a very broad program of research. Gordon Tullock is one of a handful of scholars who have contributed to all the various subfields in that area of research known as public choice.

Of course, the public choice research program includes many men and women of insight who have addressed deep and broad issues along the same intellectual frontiers. Professor Tullock's intellectual enterprise has long been shared by his colleagues at the Thomas Jefferson Center and the Center for Study of Public Choice - especially James Buchanan and Robert Tollison - and by many in the extensive intellectual network in which those centers participated. However, Tullock's work is nearly unique among the well-known pioneers of public choice for its originality, breadth, comparative approach, and historical foundations.

C. Tullock's intellectual impact

In constructing a "road map" for the intellectual landscape traversed by Professor Tullock's political economy, the focus of this paper has been the underlying themes in his work, and, in some cases, it has attempted to bridge gaps in his work that are essentially implied by the totality of his political economy research. Other gaps have been ignored, and some of his work outside public choice has been neglected. For example, his work on dictatorship does not examine why some autocrats have better track records than others. The relative performance of American and European judicial systems is developed without addressing the empirical question of whether crime rates or lawsuits are systematically different as a consequence of different judicial procedures. Hints are provided in Autocracy and Logic of the Law, but there is no systematic analysis. Moreover, some of his work has been neglected here because it is not an essential part of his political economy research program. There is, for example, his work on biology and sociobiology, The Economics of Nonhuman Societies (1994), and his work on monetary economics (1954, 1979). The survey undertaken has not devoted significant space to assessing the quality and impact of Tullock's work.
That most readers of this piece are already familiar with many of his scholarly articles is itself evidence of that impact. A "tour guide" of Tullock's work would have tried to assess the magnitude of his major contributions with the benefit of hindsight or from the perspective of the times at which his ideas were developed. It is clear, for example, that The Calculus of Consent (1962), written with James Buchanan, was not only very original, but influential from the moment it was published. The Calculus has been cited in scientific articles well over a thousand times since its publication. Moreover, it continues to be highly regarded and continues to spur new research; the Calculus has already been cited more than 100 times since January 2000.

Not all of Professor Tullock's contributions have been immediately recognized. Several of his ideas awaited reinvention by other scholars before coming to prominence. His original work on rent seeking (1967, 1974) was well-regarded, but not widely appreciated until 10 or 20 years after its publication.32 The term "rent seeking" was actually coined by Anne Krueger in 1974. His contributions to principal-agent, efficiency wage, and organization theory, worked out in the Politics of Bureaucracy (1966), have been largely neglected by the new literatures on those subjects. His work on the law, especially with respect to judicial proceedings, errors, and criminal sanctions, is noted, but not as widely as appears justified. His theory of autocrats as service-providing income maximizers was worked out in the first anarchy volume (1972) and further developed in The Social Dilemma (1974), but awaited rediscovery by Mancur Olson (1993) and Ronald Wintrobe (1990) nearly two decades later. The invention of what is now called a contest-success function in The Logic of the Law (1971), subsequently applied in his work on efficient rent seeking (1980), also seems underrecognized, although it is noted by Jack Hirshleifer (2001). His work on the enterprise of science, The Organization of Inquiry (1966), is a gold mine awaiting rediscovery. Sometimes Tullock blazes a trail that is too far ahead of the mainstream to be fully appreciated. And one can be too far in front of "the parade" to be readily associated with it.

That Tullock's observations have contributed much to our understanding of the political landscape is, nonetheless, well recognized. His research continues to be among the most highly cited in the social sciences. His willingness to chart new ground and point out the "dead ends," "ruts," "potholes," and "slippery slopes" of other scholars - largely to our benefit, if often at his pleasure - continues to make his work provocative and entertaining. His books and papers address new issues and associated problems at the same time that general principles are being worked out. His long editorship of Public Choice helped to define and establish the field.

The huge range of original explanations and conclusions that Tullock develops in his books and papers can easily lead a casual reader or listener to conclude that there is little systematic in his research, or perhaps in public choice generally. His brisk discussions of issues risk losing the reader in a forest of special cases and ingenious insights, rather than illuminating the main pathways followed. Clearly, a mere list of possible explanations is not social science.
Social science does not simply provide an unconnected logic of specific instances of collective action, but attempts to determine what is general about the behavior that we observe. The present essay attempts to remedy this potential misapprehension by providing a more concise and integrated vision of the territory charted by Tullock's unusually extensive political economy than a casual reader may have obtained from a small sample of Professor Tullock's published work. The aim of Tullock's social science is not just to explain the main details of social life, but to explain as much of it as can possibly be understood. His social science attempts to systematically explain and predict all of human behavior. His work demonstrates that self-interest, conflict, and institutions account for a good deal of human behavior in both ordinary and extraordinary political circumstances - and, in Tullock's view, far more than is generally acknowledged.

Notes

1. The work of many social scientists attempts to show that complex real-world phenomena can be understood with a few fundamental principles that others have failed to recognize. This reductionist approach attempts to demonstrate that the world is essentially simpler than it appears. The reductionist research agenda is clearly of great esthetic interest for academics who appreciate the intellectual craftsmanship required to devise lean, penetrating, encompassing theories. It is also an important practical enterprise insofar as reductionist theories allow knowledge accumulated over many lifetimes to be passed on from one generation to the next with relatively modest investments of time and effort by teachers and students.

2. As many who have argued with Professor Tullock over the years will attest, the rough edges of his work somehow make his analyses all the more interesting. His provocative theoretical and historical assertions challenge his interlocutors to think more carefully about issues that they would not otherwise have imagined or had mistakenly taken for granted. The fact that Tullock is occasionally incorrect somehow helps stimulate his fans and foes to greater effort.

3. "A scientific theory consists of a logical structure proceeding from certain assumptions to certain conclusions. We hope that both the assumptions and the conclusions may be checked by comparing them with the real world; the more highly testable the theory, the better. Normally, however, certain parts of the theory are difficult to test. We are not unduly concerned by this, since if parts of it survive tests, we may assume that the untestable remainder is also true." (Gordon Tullock, Logic of the Law, 1971: 10.)

4. "The theory of the lever may, of course, be disproved tomorrow, but the fact that it has withstood two thousand years of critical examination, much of it using tools which the Greeks could not even dream of, does raise some presumption that here we have a bit of theory which is absolutely true. It seems likely that somewhere in our present vast collection of theories there are others which are, in fact, true, that is, which will not be disproved at any time in the future. It is, of course, impossible to say which they are." (Gordon Tullock, Organization of Inquiry, 1966: 48.)

5. "An intelligent outsider who has the time and interest in a problem should investigate, himself, since only in this way can he reach the level of certainty of the experts themselves. Personal knowledge is always superior to hearsay, ..." (Gordon Tullock, The Organization of Inquiry, 1966: 53.)

6.
"I prefer to use the world ?rational' for those acts that might well achieve the goals to which the actor aims, regardless of whether they are humanitarian, violent, etc." (Gordon Tullock, TheSocialDilemma, 1974: 4.) 7. Tullock often acknowledges his own fallibility although he does not tout it. This is evident in the lead quote and several others included in the text. Another appears in the .rst chapter of TowardsaMathematicsofPolitics.There he relates a story about failing to purchase glasses made out of a new material when it was .rst suggested to him by his optometrist. Gordon, evidently misunderstood what was said regarding an innovation in lens design, and fully appreciated it only a week or so later, at which point he purchased the glasses with the recommended lenses. 8. "My main point is simply that we stop fooling ourselves about redistribution. We have a minor desire to help the poor. This leads to certain government policies. We also have some desire for income insurance. And we also, to some extent, envy the rich. ...[However,] the largest single source of income redistribution is simply the desire of the recipients to receive the money." (Gordon Tullock, "The Rhetoric and Reality of Redistribution," Southern Economic Journal, 1981: 906.) 9. "Man is a complicated animal and his motives are many and varied". (Gordon Tullock, The Organization of Inquiry, 1966: 39.) 10. "We argue below that it (altruism) is a relatively minor motive and the major motives tend to lead to inef.ciency and distortion. This motive (altruism), insofar as it is implemented, actually improves the ef.ciency of the economy." (Gordon Tullock, "The Rhetoric and Reality of Redistribution," SouthernEconomicJournal, 1981: 896.) Of course, if envy is strong enough, then taking a dollar away from me might give other people a total satisfaction which was larger than the loss of the dollar to me. Thus plunder?ing the Rockefeller family might be socially desirable if we had some way of measuring innate utilities." (Gordon Tullock, "The Rhetoric and Reality of Redistribution," SouthernEconomicJournal, 1981: 902.) 11. "The primacy of private interest is not inconsistent with the observation that most people, in addition to pursuing their private interests have some charitable instincts, some tend?ency to help others and to engage in various morally correct activities. However the evidence seems fairly strong that these motives other than the pursuit of private interests are not the ones on which we can depend for the achievement of long-continued ef.cient performance." (Gordon Tullock, GovernmentWhoseObedientServant?, 2000: 11.) 12. A collection of Washington quotes are available on the internet at http://www.dropbears. com/b/broughsbooks/qwashington.htm. 13. "Economics has traditionally studied the bene.ts of cooperation. Political science is be?ginning to move in that direction. Although I would not quarrel with the desirability of such studies, the fact remains that con.ict is also important. In general con.ict uses re?sources, hence it is socially inef.cient, but entering into the con.ict may be individually rational for one or both parties. ...The social dilemma, then, is that we would always be better off collectively if we could avoid playing this kind of negative sum game, but individuals may make gains by forcingsuchagameon the rest of us." (Gordon Tullock, The Social Dilemma, 1974: 2.) 14. "Obviously, as a good social policy, we should try to avoid having games that are likely to lead to this kind of waste. 
Again, we should try to arrange that the payoff to further investment in resources is comparatively low, or, in other words, that the cost curve [of rent seeking] points sharply upward." (Gordon Tullock, Efficient Rent Seeking, 2000: 13.) "There are institutions that will reduce the likelihood of being forced into such a game, but these institutions cost resources, too. ...[However,] the problem is unavoidable - at least in the present state of knowledge. Pretending that it does not exist is likely to make us worse off than conceding its existence and taking rational precautions." (Gordon Tullock, The Social Dilemma, 1974: 2.)

15. "The problem of maintaining power in a dictatorship is really similar to that of maintaining a majority for redistributive purposes in a voting body. It is easily demonstrated, of course, that it is always possible to build a majority against any particular program of redistribution by offering something to the "outs" on the original program and fairly high payments to a few of the "ins." The situation in a dictatorship is similar. It is always possible, at least in theory, to collect together a group of people which is more powerful than the group supporting the status quo. This group will be composed of important officials of the regime who could benefit from its overthrow and their concomitant promotion." (Gordon Tullock, Autocracy, 1987: 19.)

16. "Preventing overthrow by the common people is, in general, quite easy if the ruler is only willing to repress vigorously and to offer large rewards for information about conspiracies against him." (Gordon Tullock, Autocracy, 1987: 68.)

17. Tullock may disagree with this location-based explanation. "Most of my work in Public Choice has dealt with democratic governments. This is not because I thought that democratic governments were the dominant form of government, either currently or historically. That more people are ruled by autocracy than democracies today, and that the same can be said of earlier periods, is obvious. I did think that democratic governments were better than the various alternatives which have been tried from time to time, but the basic reason that most things that I have published have dealt with democracies is simply that I've found dictatorship to be a very, very difficult problem." (Gordon Tullock, Autocracy, 1987: x.)

18. "For a given activity, the fully rational individual at the time of constitutional choice will try to choose that decision-making rule which will minimize the present value of the expected costs that he must suffer. He will do so by minimizing the sum of the expected external costs and the expected decision-making costs ... [In this manner,] the individual will choose the rule which requires that K/N of the group agree when collective decisions are made." (Gordon Tullock and James M. Buchanan, Calculus of Consent, 1962: 70.) "This broad ... classification does not, of course, suggest that all collective action should rationally be placed under one of two decision-making rules. The number of categories, and the number of decision-making rules chosen, will depend on the situation which the individual expects to prevail and the 'returns to scale' expected to result from using the same rule over many activities." (Gordon Tullock and James M. Buchanan, Calculus of Consent, 1962: 76.)

19. In their words, the demand-revealing process "is a new process for making social choices, one that is superior to other processes that have been suggested.
The method is immune to strategic maneuvering on the part of individual voters. It avoids the conditions of the Arrow theorem by using more information than the rank orders of preferences and selects a unique point on or 'almost on' the Pareto-optimal frontier, one that maximizes or 'almost maximizes' the consumer surplus of society. Subject to any given distributions of wealth, the process may be used to approximate the Lindahl equilibrium for all public goods." (Tideman and Tullock, Journal of Political Economy, 84: 1145.)

20. An interesting property of the median voter hypothesis is that decisions tend to be largely independent of the particulars of the interests of voters away from the median (Black, 1948). All that matters is that which is necessary to identify the median voter. How much more or less than the median voter's interest is demanded by other voters, and how intensively those demands are held, is irrelevant. A wide range of voter distributions can have the same median. However, not every distribution of voter preferences has a median. In the absence of a median, McKelvey (1979) demonstrates that literally "anything" can happen under a sequence of majority decisions. The properties of democratic governance are by no means obvious, and the more detailed the institutional structures and preferences that are taken account of, the more complex political decision making becomes.

21. Vote trading can also lead to the funding of regional boondoggles, as in the pork barrel dilemma (Tullock, 1959). Again the world is more complex than one might have hoped.

22. "Insofar as the alternatives for employment are limited, and the shifting of either jobs or employees involves costs, the secondary, or 'political,' relationship enters even here. ...The most obvious empirical verification of this difference is the degree of deference shown to superiors." (Gordon Tullock, The Politics of Bureaucracy, 1966: 11.)

23. "In the ideally efficient organization, then, the man dominated by ambition would find himself taking the same courses of action as an idealist simply because such procedure would be the most effective for him in achieving the personal goals that he seeks. At the other extreme, an organization may be so badly designed that an idealist may find it necessary to take an almost completely opportunistic position because only in this manner can his ideals be served." (Gordon Tullock, The Politics of Bureaucracy, 1965, p. 21.)

24. "In the United States civil service, the individual career employee is generally not expected to put up with quite as much 'pushing around' as he might endure in the higher ranks of some large corporations. To balance this, he will be receiving less salary and will probably find that the orders which he is expected to implement are less rational than those he could expect to receive in private industry." (Gordon Tullock, The Politics of Bureaucracy, 1966: 12.)

25. "Improving the efficiency of a large corporation by, let us say, 2 percent may well mean that some individual's wealth goes up by $50 million and a very large number of individuals will have increases in wealth on the order of a hundred to a million dollars. Maximizing the public interest, however, would always be a public good, and improvement by 2 percent in the functioning efficiency of some bureau would characteristically increase the well-being of average citizens, or, indeed, any citizen, by amounts which would be almost invisible." (Gordon Tullock, Government: Whose Obedient Servant?, 2000: 58.)

26.
It bears noting that many of the demands for public policy within a given society are independent of the type of political regime in place. For example, criminal and civil laws would be adopted by nearly unanimous agreement of all free men and women at a constitutional convention (Calculus of Consent, 1962: Ch. 5, and Logic of the Law, 1971: Ch. 2). Alternatively, an autocrat may establish criminal and civil law as a means of maximizing the resources potentially available to the state (Explorations in the Theory of Anarchy, 1972: 72, and The Social Dilemma, 1974: 19). Murder and theft will ordinarily be punished, and most contracts will be enforced, under both democratic and autocratic regimes. Some other rules may vary somewhat according to regime type, as with rules concerning payments to government officials, freedom of assembly, and the publication of news critical of the government, but regime type will not always directly affect public policy outcomes or economic performance.

27. Of course, procedural questions are more important for a political economist than for a scholar whose work focuses on a single society. This probably explains why procedural aspects of law enforcement are given relatively little attention in the law and economics literature; see, for example, Becker (1968) or Posner (1972).

28. "Most crimes are not simply the preliminary to punishment for the criminals, most people who are in prison have not had anything that we would recognize as a trial, and administrative decisions keep people in prison and (in effect) extend their sentence." (Gordon Tullock, Logic of the Law, 1971: 169.)

29. "The problem of determining what actually happened is one of the court's duties and the only one we are discussing now. A historic reconstruction, which is what we are now talking about, is a difficult task for a variety of reasons. One is that witnesses lie, and in lawsuits there usually are at least some witnesses who have a strong motive to lie. They may also simply be mistaken. Another reason is that many things which happen that are of interest to the court leave no physical traces and, indeed, may leave no traces on the minds of the parties. ...Different cases have different amounts of evidence of varying quality available, and ...this evidence leads us to varying probabilities of reaching the correct decision." (Gordon Tullock, Trials on Trial, 1980: 25-26.)

30. This is especially true for those working in the tradition of the public choice approach to politics. However, the latter is partly a consequence of Tullock's many contributions to public choice, but perhaps even more so a consequence of his two decades as editor of the journal Public Choice. Those years largely defined the discipline as we know it now, and Tullock's editorial decisions helped determine its boundaries - such as they are - while his responses to contributors made his world view both familiar and important to aspiring public choice scholars of that period.

31. "We undertake investigations because we are curious, or because we hope to use the information obtained for some practical purpose." (Gordon Tullock, Organization of Inquiry, 1966: 12.)

32. In fact, his first paper on the costly nature of efforts to secure rents (1967) was roundly rejected by the major economics journals (Brady and Tollison, 1994: 9-10).

References

Selected references: Gordon Tullock

Lockard, A.A. and Tullock, G. (2001). Efficient rent seeking: Chronicle of an intellectual quagmire. Boston: Kluwer Academic Publishers.
Tullock, G., Seldon, A. and Brady, G.L. (2000). Government: Whose obedient servant? A primer in public choice. London: Institute of Economic Affairs.
Tullock, G. (1997). The case against the common law. Durham: North Carolina Academic Press.
Tullock, G. (1997). Economics of income redistribution. Boston: Kluwer Academic Publishers.
Brady, G.L. and Tollison, R.D. (Eds.). (1994). On the trail of homo economicus: Essays by Gordon Tullock. Fairfax, VA: George Mason University Press.
Tullock, G. (1994). The economics of nonhuman societies. Tucson: Pallas Press.
Grier, K.B. and Tullock, G. (1989). An empirical analysis of cross-national economic growth, 1951-80. Journal of Monetary Economics 24: 259-276.
Tullock, G. (1989). The economics of special privilege and rent seeking. Hingham, MA / Lancaster / Dordrecht: Kluwer Academic Publishers.
Tullock, G. (1987). Autocracy. Hingham, MA / Lancaster / Dordrecht: Kluwer Academic Publishers.
Tullock, G. (1986). The economics of wealth and poverty. New York: New York University Press (distributed by Columbia University Press).
Tullock, G. (1985). Adam Smith and the prisoners' dilemma. Quarterly Journal of Economics 100: 1073-1081.
McKenzie, R.B. and Tullock, G. (1985). The new world of economics: Explorations into the human experience. Homewood, IL: Irwin.
Brennan, G. and Tullock, G. (1982). An economic theory of military tactics: Methodological individualism at war. Journal of Economic Behavior and Organization 3: 225-242.
Tullock, G. (1981). The rhetoric and reality of redistribution. Southern Economic Journal 47: 895-907.
Tullock, G. (1981). Why so much stability? Public Choice 37: 189-202.
Tullock, G. (1980). Trials on trial: The pure theory of legal procedure. New York: Columbia University Press.
Tullock, G. (1980). Efficient rent seeking. In J.M. Buchanan, R.D. Tollison, and G. Tullock (Eds.), Toward a theory of the rent-seeking society, 97-112. College Station: Texas A&M University Press.
Tullock, G. (1979). When is inflation not inflation: A note. Journal of Money, Credit, and Banking 11: 219-221.
Tullock, G. (1977). Economics and sociobiology: A comment. Journal of Economic Literature 15: 502-506.
Tideman, T.N. and Tullock, G. (1976). A new and superior process for making social choices. Journal of Political Economy 84: 1145-1159.
Tullock, G. (1975). The transitional gains trap. Bell Journal of Economics 6: 671-678.
Buchanan, J.M. and Tullock, G. (1975). Polluters' profits and political response: Direct controls versus taxes. American Economic Review 65: 139-147.
Tullock, G. (1974). The social dilemma: The economics of war and revolution. Blacksburg: University Publications.
Tullock, G. (1972). Explorations in the theory of anarchy. Blacksburg: Center for the Study of Public Choice.
Buchanan, J.M. and Tullock, G. (1971/1962). The calculus of consent: Logical foundations of constitutional democracy. Ann Arbor: University of Michigan Press.
Tullock, G. (1971). The charity of the uncharitable. Western Economic Journal 9: 379-392.
Tullock, G. (1971). Inheritance justified. Journal of Law and Economics 14: 465-474.
Tullock, G. (1971). The paradox of revolution. Public Choice 11: 88-99.
Tullock, G. (1971). Public decisions as public goods. Journal of Political Economy 79: 913-918.
Tullock, G. (1971/1988). The logic of the law. Fairfax: George Mason University Press.
Tullock, G. (1967). The general irrelevance of the general impossibility theorem. Quarterly Journal of Economics 81: 256-270.
Tullock, G. (1967). The welfare costs of monopolies, tariffs and theft. Western Economic Journal 5: 224-232.
Tullock, G. (1967). Toward a mathematics of politics. Ann Arbor: University of Michigan Press.
Tullock, G. (Ed.). (1966/7). Papers on non-market decision making. Charlottesville: Thomas Jefferson Center for Political Economy, University of Virginia.
Tullock, G. (1966). The organization of inquiry. Durham: Duke University Press.
Tullock, G. (1966). Gains-from-trade in votes (with J.M. Buchanan). Ethics 76: 305-306.
Tullock, G. (1965). The politics of bureaucracy. Washington, DC: Public Affairs Press.
Tullock, G. (1965). Entry barriers in politics. American Economic Review 55: 458-466.
Tullock, G. (1962). Entrepreneurial politics. Charlottesville: Thomas Jefferson Center for Studies in Political Economy, University of Virginia.
Tullock, G. (1959). Problems of majority voting. Journal of Political Economy 67: 571-579.
Campbell, C.D. and Tullock, G. (1954). Hyperinflation in China, 1937-49. Journal of Political Economy 62: 236-245.

Other references

Becker, G.S. (1968). Crime and punishment: An economic approach. Journal of Political Economy 76: 169-217.
Bienen, H. and van de Walle, N. (1989). Time and power in Africa. American Political Science Review 83: 19-34.
Black, D. (1948). On the rationale of group decision-making. Journal of Political Economy 56: 23-34.
Buchanan, J.M. (1987). The qualities of a natural economist. In C. Rowley (Ed.), Democracy and public choice, 9-19. New York: Blackwell.
Congleton, R.D. (1988). An overview of the contractarian public finance of James Buchanan. Public Finance Quarterly 16: 131-157.
Congleton, R.D. (1980). Competitive process, competitive waste, and institutions. In J. Buchanan, R. Tollison and G. Tullock (Eds.), Toward a theory of the rent-seeking society, 153-179. College Station: Texas A&M University Press.
Hirshleifer, J. (2001). The dark side of the force: Economic foundations of conflict theory. New York: Cambridge University Press.
Johnson, P. (1997). A history of the American people. New York: Harper.
Krueger, A.O. (1974). The political economy of the rent-seeking society. American Economic Review 64: 291-303.
McKelvey, R.D. (1979). General conditions for global intransitivities in formal voting models. Econometrica 47: 1085-1112.
Olson, M. (1993). Dictatorship, democracy, and development. American Political Science Review 87: 567-576.
Posner, R.A. (1972). Economic analysis of law. Boston: Little, Brown and Company.
Shepsle, K.A. and Weingast, B.R. (1981). Structure-induced equilibrium and legislative choice. Public Choice 37: 503-519.
Wintrobe, R. (1990). The tinpot and the totalitarian: An economic theory of dictatorship. American Political Science Review 84: 849-872.

From dsmith06 at maine.rr.com Tue Jan 3 02:16:51 2006
From: dsmith06 at maine.rr.com (David Smith)
Date: Mon, 02 Jan 2006 21:16:51 -0500
Subject: [Paleopsych] 'The Better Angels of Our Nature': Evolution and Morality
Message-ID: <43B9DE93.2070408@maine.rr.com>

'The Better Angels of Our Nature': Evolution and Morality
St. Francis Room of the Ketchum Library
University of New England, 11 Hills Beach Road, Biddeford, Maine.
Feb. 21, 2006 at 6 p.m.

Evolutionary biologist David Lahti, Ph.D., will deliver a lecture on "'The Better Angels of Our Nature': Evolution and Morality" on Feb. 21, 2006 at 6 p.m. in the St. Francis Room of the Ketchum Library. Lahti is an NIH Postdoctoral Fellow at the University of Massachusetts, Amherst.
The lecture, sponsored by the New England Institute for Cognitive Science and Evolutionary Psychology and the Department of Philosophy and Religious Studies, is free and open to the public.

Are we humans essentially altruistic beings whose natural state is to care for others? Or are we ogres at heart, our moral codes the only thing holding us back from utter selfishness? Lahti argues that an evolutionary consideration of morality suggests a third alternative, that we are by nature moral strugglers and deliberators - that the relevant adaptive trait is neither altruism nor selfishness, but rather a refined ability to assess our social environments and make informed decisions about how altruistic or selfish to be. We tend, he believes, to make these decisions on the basis of two main variables: the anticipated effects of our behavior on our reputation and the perceived stability of the social groups on which we depend. Furthermore, what we often call morality is actually a conglomerate of tendencies and capacities, some of which are millions of years old and others just thousands. Many of its more recent features, including moral rules that are difficult for us to follow, are cultural surrogates for adaptation in an age when our social environments are changing too fast for us to adapt genetically to them.

Lahti received a Ph.D. in philosophy at the Whitefield Institute at Oxford in 1998, for work on the relationship between science and the foundations of morality; more recently his research in this area has focused on the evolution of morality. In 2003 he received a Ph.D. in ecology and evolutionary biology from the University of Michigan, where he documented rapid evolution in the African village weaverbird. From 2003 to 2005 he held the Darwin Fellowship at the Program in Organismic and Evolutionary Biology at the University of Massachusetts Amherst, and he has been studying the evolution and development of bird song.

From checker at panix.com Tue Jan 3 22:28:49 2006
From: checker at panix.com (Premise Checker)
Date: Tue, 3 Jan 2006 17:28:49 -0500 (EST)
Subject: [Paleopsych] Public Choice: Anyone for higher speed limits?
Message-ID:

Olof Johansson-Stenman and Peter Martinsson: Anyone for higher speed limits? - Self-interested and adaptive political preferences
Public Choice (2005) 122: 319-331. DOI: 10.1007/s11127-005-3901-x

[This is a nice article that gets at the issue of whether voters vote for what is in their own self-interest or for what they think is in the overall public interest. I say vote for what's in your own interest, since you don't know what's in others' interest, certainly not better than they do themselves. It's the American way, that idea that government serve the people. One chief problem of altruism is that, if I am to serve other people, who are all these other people going to serve?

[One matter of surprise to me: "The result presented here is also consistent with the result of Hemenway and Solnick (1993) and Shinar, Schechtman and Compton (2001), who found that levels of education higher than high-school tended to increase the probability of speed violation."

[But be wary of the article, since the variables explained only 20% of the variance. There's a real likelihood that, if someone comes up with other variables that explain more of the variance, the coefficients on the ones used here could be drastically altered. One of the critics of _The Be** Cu**e_ complained that the authors often buried the R^2s in the back, and in fact the R^2s varied all over the place.
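[A toy regression makes that worry concrete. The sketch below is mine, not anything from the article: minimal Python (assuming numpy and statsmodels are installed; every variable name and coefficient is invented) showing how the coefficient on a status proxy collapses once a correlated ability measure enters the model, even as R^2 improves.

import numpy as np
import statsmodels.api as sm

# Toy illustration of omitted-variable bias: regressing income on a
# status proxy alone inflates its coefficient when a correlated
# ability measure is left out of the model.
rng = np.random.default_rng(0)
n = 5000
iq = rng.normal(100, 15, n)                    # hypothetical ability measure
ses = 0.05 * iq + rng.normal(0, 1, n)          # status proxy, correlated with iq
income = 0.8 * iq + 2.0 * ses + rng.normal(0, 10, n)

short = sm.OLS(income, sm.add_constant(ses)).fit()
full = sm.OLS(income, sm.add_constant(np.column_stack([ses, iq]))).fit()

print(f"SES coeff., ability omitted:  {short.params[1]:6.2f}  R^2 = {short.rsquared:.2f}")
print(f"SES coeff., ability included: {full.params[1]:6.2f}  R^2 = {full.rsquared:.2f}")

The first coefficient comes out near 7.8, the second near the true 2.0; nothing in the short regression itself hints at the problem.]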
[Note that I said, "IF someone will come up with other variables...." Researchers use the variables they can get ahold of. What _The B*ll C*rv*_ did was to thoroughly mine a data set, the one collected for the National Longitudinal Survey of Youth, that almost uniquely had a measure of intelligence. The results are known: IQ correlated more with things like income and scholastic achievement than did the usual measures of socioeconomic status, parental income, and so on. I'd love to know whether having an IQ measure upset any conventional wisdom on these other factors.

[When I was in graduate school at UVa, self-interested voting was emphasized and expressive voting barely recognized. I was Virginia School. The Rochester School (William Riker and then others) came along later. It was in political science but used economics tools. It emphasized disinterested voting. These two Schools were never, I don't think, hostile toward one another, and this article shows that the issues are empirical.]

Department of Economics, SE 40530 Göteborg, Sweden; e-mail: Olof.Johansson at economics.gu.se, Peter.Martinsson at economics.gu.se
Accepted 17 November 2003

Abstract. Swedish survey-evidence indicates that variables reflecting self-interest are important in explaining people's preferred speed limits, and that political preferences adapt to technological development. Drivers who believe they drive better than the average driver, as well as drivers of cars that are newer (and hence safer), bigger, and with better high-speed characteristics, prefer higher speed limits. In contrast, elderly people prefer lower speed limits. Furthermore, people report that they themselves vote more sociotropically than they believe others vote on average, indicating that we may vote less sociotropically than we believe ourselves. One possible reason for such self-serving biases is that people desire to see themselves as socially responsible.

*We are grateful for very constructive comments from an anonymous referee. We have also received useful comments from Fredrik Carlsson and had fruitful discussions with Per Fredriksson and Douglas Hibbs. Financial support from the Swedish Agency for Innovation Systems (VINNOVA) is gratefully acknowledged.

1. Introduction

The purpose of this paper is twofold: i) to use survey evidence about what speed limits different people prefer on motorways, and what their own subjectively perceived and self-reported voting motives are, in order to provide new insight into the determinants of individual voting behavior, in particular the self-interested voting hypothesis; and ii) to identify adaptations in political preferences due to technological development, in our case changes in safety and high-speed features of cars. The analysis is based on two recent representative Swedish surveys: In the first one people were asked about their preferred speed limits on motorways. In the second they were asked about why they vote as they do, and about why they think other people vote as they do. Why do people vote in the way they do, and why do they vote at all? One reason for the latter is simply that we are heavily indoctrinated to do so; cf. Tullock (2000). But is how we vote motivated solely by the instrumental outcome induced by our votes? Or are we perhaps, as proposed by Brennan and Lomasky (1993) and Brennan and Hamlin (1998, 2000), motivated largely by the expressive act of voting?
If the expressive motive is important it becomes more likely that people are concerned with society as a whole when voting, rather than what is good solely for themselves.1 Indeed, as found by Brekke, Kverndokk and Nyborg (2003), most people seem to prefer a self-image that reflects social responsibility, rather than pure self-concern. The relative importance of purely self-interested voting, versus sociotropic voting, is still debated. This is partly because it is difficult to draw strong conclusions from general elections that are characterized by few political parties (or candidates) and many political issues and indicators; see e.g. Kinder and Kiewiet (1979), Kramer (1983) and Mitchell (1990). The reason for this is that some opinions of one party may favour a certain group while other opinions may favour other groups, and it is difficult to know the relative weights that different voters give to the different opinions of the parties. Thus, there are clear advantages to be gained from testing the self-interested voting model when the choice set is small and when there are few political issues, such as on a single-issue referendum or by using tailor-made surveys. Smith (1975) analyzed the voting behavior from a referendum in Oregon concerning tax equalization between different districts, and concluded that self-interest does seem to play an important role. Sears, Lau, Tyler and Allen (1980), on the other hand, analyzed survey data on people's attitudes toward specific policies in the US, and concluded that self-interest plays a very minor role. However, their conclusions, based on their statistical results, can be questioned: for example, they found that the support for a national health insurance decreased with income and increased with age, and that the support for more resources to be given to law and order increased with income, but these findings were not interpreted to reflect self-interest. Nevertheless, there have also been other studies such as Gramlich and Rubinfeld (1982) and Shabman and Stephenson (1994) that have concluded that self-interest alone does a poor job of explaining the results. These findings are also consistent with much experimental evidence from public-good games; see e.g. Ledyard (1995) and Keser and van Winden (2000). Much of the analysis here is based on the first survey about the preferred speed limits on motorways, which is an issue that has been frequently debated for a long time in Sweden. Besides being a single issue, it has the advantage of being fairly neutral from an ethical point of view, meaning that the opinion of good and responsible citizens is not straightforward to predict.2 Survey responses can otherwise be biased towards what is perceived to be the most ethical alternative, which is an argument that for example is put forward in the environmental valuation literature. A possible underlying reason for this bias, in turn, is that people typically attempt to present themselves in a positive manner to others, which implies that we sometimes deliberately conceal or colour our true opinions or preferences, i.e. what Kuran (1995) denotes "preference falsifications." An alternative reason is our desire to see ourselves as good people, and our tendency to bias our impressions of reality in various respects to maintain or improve this self-image (see e.g. Gilovich, 1991). 
Such tendencies may, for example, influence people to believe that they would be willing to pay more for a socially good cause than they would actually be willing to pay, which is denoted "purchase of moral satisfaction" by Kahneman and Knetsch (1992). In order to broaden the insights on voting motives, we also performed a second survey where a representative sample of Swedes was asked about why they vote as they do, and why they think other people vote as they do. This allows us to compare the findings about the preferred speed limits with the perception that people have of their own and others' voting motives. The reason we also asked about the perception of others' voting motives is the one just mentioned, i.e. that we suspected that the responses may be biased, since most people would presumably consider voting out of conviction to be ethically superior to voting solely for one's own good. Given that self-interest is important for political preferences, the general self-interested hypothesis would lead one to expect people with more exclusive and safer cars, and with higher subjective driving skills, to prefer relatively high speed limits, and elderly and more vulnerable people to prefer lower speed limits. In addition, one would expect people who drive faster, and who break the speed limits more often, to prefer higher speed limits. The results, reported in Section 2, are consistent with these hypotheses. One would also expect that these preferences would change with changing circumstances. Indeed, behavioural adaptations in response to perceived changes in the environment are among the most important insights that modern economics can contribute to the public debate. For example, a safety improvement in cars of, say, 10% may cause a much smaller net effect on safety, since safer cars may induce people to drive faster and less responsibly; see Peltzman (1975), Keeler (1994), Peterson, Hoffer and Millner (1995) and Merrell, Poitras and Sutter (1999) for theoretical analysis as well as empirical evidence. This paper will concentrate on another kind of adjustment, namely how political preferences with respect to preferred speed limits on motorways change with the rapid technological development of private cars. The data used here are not ideal in this respect, since the survey is purely cross-sectional. Nevertheless, it is still possible to see whether the results are consistent with the hypothesis of adjustments of political preferences. If people demand higher speed limits when their cars get safer and have better high-speed characteristics, one would expect from the empirical analysis that more people would be in favour of increasing speed limits rather than decreasing them, since these limits were decided upon many years ago,3 and also that individuals with newer cars would prefer higher speed limits. This is also found in our empirical analysis. It is interesting to compare the motives that can be inferred from people's choices, in reality or in surveys, with their own subjectively perceived voting motives. This is the reason we undertook the second survey about people's perceptions of their own and others' voting motives. The results indicate that most people believe that others vote largely for their own interests, whereas they, on average, consider themselves to be influenced roughly equally by their own interests and by those of society as a whole. The results further help to identify possible self-serving biases, i.e.
that people may tend, unconsciously, to believe that what is in the interest of society happens to coincide with what is in their own private interest. If so, one would expect systematic differences between people's reported perception of their own motives and that of others' motives, and that people on average believe that they themselves vote more out of conviction, or sociotropically, than others do. And as reported in Section 3, this is indeed found to be the case.

2. Analysis of preferred speed limits

The main survey was mailed to 2500 randomly selected individuals aged between 18 and 65 years old in Sweden, during spring 2001. The response rate of the overall survey was 62%, and 1131 car drivers answered the speed-limit question. Each respondent was asked the following question: What speed limit do you think we should have on Swedish motorways? They were given five options, all of which have been discussed in the Swedish debate from time to time: 90, 100, 110 (the level today), 120, and 130 km/h. The descriptive result in Table 1 shows that very few would like to have decreased speed limits, and that more than half of the respondents would like to see increased speed limits. This may in itself be an indication that people have adapted their political preferences to the increased levels of vehicle safety, but to be able to say more on this issue we would need to know who wants increased speed limits, and who does not. This is the issue to which we turn next.

In order to obtain information on the characteristics that affect the preferred speed limit, we ran an OLS regression with the preferred speed limit as the dependent variable on a number of socio-economic characteristics and the characteristics of the car that the respondent most frequently drives. Because of missing or incomplete responses, primarily on the income and voting variables, the number of respondents included in the analysis is 974. The results from the estimations are presented in Table 2 along with the mean sample value of each explanatory variable.

Table 1. Sample distribution of the preferred speed limit on Swedish motorways. (N = 974)

  90 km/h:  2%
  100 km/h: 3%
  110 km/h (as of today): 41%
  120 km/h: 25%
  130 km/h: 29%

The results show that those who drive newer cars do prefer higher speed limits, as one would expect, given that people adapt their preferences to changing circumstances, in this case safer cars with better high-speed driving characteristics.4 Similarly, drivers of the prestige cars BMW, Mercedes and Porsche, which are also safer and/or have better high-speed driving characteristics, also prefer higher speed limits. The size of car also affects the preferred speed limit in the expected direction, since bigger cars are on average safer, and have better high-speed characteristics, but the differences are not significant at conventional levels. Jeeps and vans constitute the base case, and although these are big vehicles, they typically have bad high-speed characteristics. The preferred speed limit is higher for those who believe they are better than average drivers, which is also consistent with the self-interested hypothesis, since the risk of an accident, for a given speed, would then be lower.5 A long annual driving distance also increases the preferred speed limit, which, however, is not obvious from the self-interested hypothesis. On the one hand, those who drive a lot will gain more time from increased speed levels, but on the other hand they will also face a larger reduction in safety.
In our case, it seems that the former effect dominates the latter. This is also consistent with Rienstra and Rietveld (1996), who found that self-reported frequency of speed-transgressions on Dutch highways increases with annual driving distances. The effects of always using a seatbelt may seem to contradict the theory, since those without seatbelts would face the biggest risk-increase from increased speed levels. However, it seems likely that the results largely reflect preference heterogeneity, so that those who are more risk-averse, or generally more cautious, prefer both to use seatbelts and to have relatively low speed limits. People living in the bigger cities of Sweden prefer somewhat higher speed limits, for which one explanation may be the generally higher pace of urban life, which translates into a higher value of time. The effect of education is quite small, and perhaps in the opposite direction to the one one would have guessed, since safety awareness is often believed to follow from, or at least to be positively correlated with, education. However, hardly anyone in Sweden, irrespective of education, can be uninformed about the public campaign messages that safety decreases as speed increases. Further, the true relationship between speed and safety may not be as clear and strong as is typically presented, and maybe highly educated people are less easy to convince by public propaganda. Generally, most (but not all)6 analysts seem to agree that safety typically does decrease with increased speed limits, but there is less agreement about how large the effect is. Nevertheless, the result presented here is also consistent with the result of Hemenway and Solnick (1993) and Shinar, Schechtman and Compton (2001), who found that levels of education higher than high-school tended to increase the probability of speed violation.

Table 2. OLS estimation of preferred speed limit on Swedish motorways. Dependent variable: preferred speed limit on Swedish motorways in km/h. (N = 974)

  Variable                                     Coeff.     P-value   Mean value
  Constant                                     -111.552   0.276
  Model-year of the car                           0.112   0.030     1993.299
  Drives either BMW, Mercedes or Porsche          2.771   0.029        0.050
  Drives a small-sized car                        1.082   0.505        0.071
  Drives a medium-sized car                       1.553   0.229        0.516
  Drives a big car                                2.173   0.100        0.362
  Drives better than average (self-reported)      2.693   0.000        0.424
  Drove more than 25000 km last year              1.524   0.025        0.213
  Always wears seat-belt in front-seat           -2.446   0.003        0.860
  Lives in Stockholm, Gothenburg or Malmö         1.469   0.040        0.196
  University-educated                             1.314   0.103        0.322
  A-level educated                                0.907   0.213        0.449
  Equivalence-scaled household income*            0.201   0.001       12.047
  Aged above 57                                  -1.952   0.021        0.151
  Male                                            4.233   0.000        0.532
  Has at least one child                          0.669   0.307        0.406
  Right-wing political preferences                2.799   0.001        0.140
  Left-wing political preferences                -1.582   0.011        0.299

  R2 = 0.204. RESET** p-value = 0.281. Mean VIF*** = 1.84; the highest VIF for a single variable is 5.64.

*In 1000 SEK/month and person. In order to compare income between households, we employ the equivalence scale used by the National Tax Board (RSV) in Sweden. The scale assigns the first adult the value of 0.95, the following adults are set at 0.7 and each child at 0.61 units.
**The RESET test is a general specification test (see e.g. Godfrey, 1988). In the test we rerun the regression including the squared, cubed and fourth powers of the fitted value of the dependent variable from the original model and test whether the coefficients of the included variables are jointly significant.
***We test for multicollinearity in our data set by calculating the variance inflation factor (VIF) for each variable. The largest VIF is 5.64 and the mean VIF is 1.84. The largest value is thus smaller than 10 and the mean value is not considerably larger than 1, as required to be able to judge that there is no apparent indication of multicollinearity according to STATA (2003: 378).
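[A sketch of how an estimation like Table 2 looks in code. This is illustrative only: the data below are simulated, not the authors' (their dataset is not reproduced in the article), and only a few of the regressors are included. It assumes Python with pandas and statsmodels; the equivalence scale follows the RSV weights in the table note.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def equivalence_scaled_income(household_income, n_adults, n_children):
    # RSV scale from the table note: first adult 0.95, further adults 0.7,
    # each child 0.61; household income is divided by the total units.
    units = 0.95 + 0.7 * (n_adults - 1) + 0.61 * n_children
    return household_income / units

rng = np.random.default_rng(1)
n = 974
df = pd.DataFrame({
    "model_year": rng.integers(1980, 2002, n),
    "better_than_average": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
    "income": equivalence_scaled_income(
        rng.normal(30, 8, n), rng.integers(1, 3, n), rng.integers(0, 3, n)),
})
# Simulated response, loosely following the signs and magnitudes in Table 2.
df["speed_limit"] = (110 + 0.1 * (df["model_year"] - 1993)
                     + 2.7 * df["better_than_average"] + 4.2 * df["male"]
                     + 0.2 * df["income"] + rng.normal(0, 8, n))

X = sm.add_constant(df[["model_year", "better_than_average", "male", "income"]])
print(sm.OLS(df["speed_limit"], X).fit().summary())

# Multicollinearity check as in the *** note: one VIF per regressor.
vifs = {col: round(variance_inflation_factor(X.values, i), 2)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)   # rule of thumb from the note: worry if any VIF approaches 10

The summary() output reports the coefficient table, p-values and R^2 in the same layout as Table 2.]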
Increased household income causes both a higher value of time and a higher value of a statistical life, or more generally, a higher willingness to pay to avoid traffic risks; hence the theoretical prediction is ambiguous. As for driving distance, the time effect appears to dominate. These results are also consistent with Rienstra and Rietveld (1996) and Shinar, Schechtman and Compton (2001), who found that those with the highest incomes tend to break highway speed limits more often than others. Older people prefer lower speed limits, as predicted due to their increased vulnerability. The relatively large male coefficient, corresponding to more than 4 km/h, can possibly be explained by the higher risk aversion observed among women (e.g. Jianakoplos and Bernasek, 1998; Hartog, Ferrer-i-Carbonell and Jonker, 2002), but it might also reflect a taste difference concerning how fun fast driving is perceived to be, or some kind of macho image. The influence of political voting is also in the expected direction, since political parties to the left have typically proposed, and been associated with, a more restrictive speed policy, and vice versa. These parameters too may reflect direct instrumental self-interest, if people choose their political party partly due to the politically proposed speed limits. Still, it seems reasonable that these parameters rather reflect ideological conviction and expressive concern. This does not necessarily mean that they represent sociotropic concern, however, since people may have different kinds of values and opinions that they want to express; see e.g. Brennan and Lomasky (1993) and Brennan and Hamlin (2000). There is also a large part of the variation left unexplained, and we do not know how large a share of this part can be explained by non-included variables that reflect self-interest, such as how fun it is considered to be to drive fast.

3. Perceptions of voting motives

This second survey was mailed to 1500 randomly selected individuals aged between 18 and 65 years old in Sweden, during spring 2002 (i.e. a year after the first survey), and the response rate of the overall survey was 58%. To compare actual voting motives with the perception people have of voting motives, we simply asked another representative sample of Swedes about why they thought other people vote as they do, followed by a question about why they themselves vote as they do. Before the questions, they were given the following information: One can vote for a political party for different reasons. One can vote for a party because one is favored oneself, or one can do it out of conviction that it is the best for society as a whole. As can be seen from Tables 3 and 4, most people believe that others vote largely for their own interests, whereas they, on average, consider themselves to be influenced roughly equally by their own interests and by those of society as a whole. To test whether the observed differences are statistically significant, i.e.
whether there is a statistical difference between people's perception of the degree to which they themselves vote sociotropically, and the degree to which others vote sociotropically, we used a simple ordered probit model; the motives are ordered from "Mostly because it benefits me (them)" to "Mostly out of conviction." This is an appropriate econometric specification since the empirical analysis focuses on an ordered discrete variable. The approach is based on the idea of a latent unobservable variable, Socio*, representing, in our case, individuals' perception of the degree of sociotropic voting, with the following structure:7

  Socio* = d * D_Others + e,

where D_Others is a dummy variable indicating that the responses are given to the framing on how others vote, and d is the associated parameter to be estimated; e is assumed to be a normally distributed error term with zero mean and constant variance. The results in Table 5 show that the between-sample difference is indeed highly significant, as reflected by a significant d-parameter (at less than the 0.1% level). One possible reason for this systematic bias is that people want to have a good self-image, or identity, and that they therefore engage in a degree of self-deception, so that they believe that they would vote more for the common good than they actually would in reality. Indeed, there is much psychological evidence for systematic self-deception that enhances people's perception of their own abilities in many respects; see e.g. Gilovich (1991) and Taylor and Brown (1994). An alternative, slightly more sophisticated version of this argument is that people answer truthfully and without bias concerning their own motives. However, since most of us want to see ourselves as good and responsible people, and at the same time to do what is best for ourselves, we may unconsciously try to reduce the cognitive dissonance (cf. Akerlof and Dickens, 1982) by adapting our perceptions of what is best for society as a whole so that it more or less coincides with what is best for ourselves. Hence, when we honestly try to judge different alternatives as objectively as possible on behalf of society, we will still unconsciously bias our judgment in favour of what is best for ourselves; see Babcock and Loewenstein (1997) and references therein for much evidence of such self-serving biases.

Table 3. Self-reported perceptions of own voting motives. (N = 751)
Why do you vote as you do?

  Mostly because it benefits me: 10%
  Because it benefits me, but also to a certain degree out of conviction: 23%
  Equally because it benefits me and out of conviction: 27%
  Out of conviction, but also to a certain degree because it benefits me: 22%
  Mostly out of conviction: 18%

Table 4. Self-reported perceptions of others' voting motives. (N = 762)
Why do you, on average, believe that people vote as they do?

  Mostly because it benefits them: 20%
  Because it benefits them, but also to a certain degree out of conviction: 39%
  Equally because it benefits them and out of conviction: 19%
  Out of conviction, but also to a certain degree because it benefits them: 17%
  Mostly out of conviction: 5%

Table 5. Ordered probit regression to estimate the differences between the respondents' perceived degree to which they themselves and others vote sociotropically. (N = 1513)

  Variable                                               Coeff.    P-value
  Dummy variable reflecting the additional degree that
    others (compared to oneself) vote sociotropically    -0.553    0.000
  Cut-off 1                                              -1.346
  Cut-off 2                                              -0.377
  Cut-off 3                                               0.246
  Cut-off 4                                               0.968

The dependent variable is the perceived degree of sociotropic voting, coded as follows: 1 = Mostly because it benefits me (them); 2 = Because it benefits me (them), but also to a certain degree out of conviction; 3 = Equally because it benefits me (them) and out of conviction; 4 = Out of conviction, but also to a certain degree because it benefits me (them); and 5 = Mostly out of conviction.
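[For readers who want to replicate the mechanics of Table 5: statsmodels (version 0.12 or later) ships an ordered probit as OrderedModel. The responses below are simulated to mimic the published coefficient and cut-offs; nothing here is the authors' data.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Ordered probit on simulated responses, mirroring Table 5's design:
# a single dummy distinguishes the "others" framing from the "self" framing.
rng = np.random.default_rng(2)
n_self, n_others = 751, 762
d_others = np.r_[np.zeros(n_self), np.ones(n_others)]   # framing dummy
latent = -0.553 * d_others + rng.normal(size=n_self + n_others)
cuts = [-1.346, -0.377, 0.246, 0.968]                   # cut-offs as published
socio = np.digitize(latent, cuts) + 1                   # ordered response, 1..5

endog = pd.Series(socio, dtype="category").cat.as_ordered()
res = OrderedModel(endog, d_others.reshape(-1, 1), distr="probit").fit(
    method="bfgs", disp=False)
print(res.summary())   # the dummy's coefficient should land near -0.553

With 1513 simulated observations the estimate typically lands within a couple of standard errors of -0.553.]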
When we observe others, however, we just know roughly how they vote and their other circumstances. Hence, we can only crudely observe the correspondence between how others vote and their personal interests. But since we do not take into account the fact that others too adapt their perceptions of what is in the interest of society, through self-serving biases, the perception of the degree to which others vote sociotropically may be biased downwards.

4. Conclusion

Most results from our survey indicate that self-interest is an important determinant of the preferred speed limit; for example, those who have a newer car (and hence one that is typically safer and more comfortable at high speeds) that is bigger and faster prefer higher speed limits. This is also true for those who believe they are better than the average driver, whereas older people prefer lower speed limits. Furthermore, the results are also consistent with the existence of political offsetting behaviour, so that when cars become safer due to technological developments, people adapt their political preferences in favour of higher speed limits, which reduces road safety overall. However, the results from people's self-reported subjective voting motives are not consistent with purely instrumental pocketbook voting. Rather, it seems that the expressive motive is important, as argued thoroughly by Brennan and Lomasky (1993) and Brennan and Hamlin (1998, 2000),8 and it seems in particular that people want to express that they are socially responsible people who care about the overall welfare of society. This is also strengthened by the observed fact that people tend to believe that others vote more in their own interests, on average. Still, despite such biases, we also find that most people answer that they vote both for their own interest and for the interest of society. Hence, the hypothesis that most people solely or primarily vote sociotropically appears to be incorrect too.

Answering a survey, such as ours on preferred speed limits, is in some respects quite similar to voting. Since the respondents were informed that the survey was sent out to a large random sample of Swedes by a university, and was a part of a research project, they could hardly believe that their single response would influence actual policy in a non-negligible way. Furthermore, the financial incentive to answer was zero, and answering the whole survey probably took almost half an hour on average. The response rates (62% and 58% respectively) were also similar to electoral participation rates in many countries.9 Presumably, most of the respondents answered based on a sense of civic duty, or due to the disutility associated with not answering, which would break what they perceive to be a social (or personal) norm.
But given that expressive voting, and expressive answering of surveys such as ours, is the main explanation behind the observed behaviour, how can we explain the fairly strong correlation with the respondents' own self-interest? Although it is perceived as socially admirable to vote, it is hardly perceived to be admirable to vote solely for your own best interests. Rather, we are socialized to focus on the collective good when wearing our "political hats" (Sears, Lau, Tyler and Allen, 1980; Sears and Funk, 1990). One possible explanation for this paradox is provided by the idea of self-serving bias. As expressed by Elster (1999, 333): "Most people do not like to think of themselves as motivated only by self-interest. They will, therefore, gravitate spontaneously towards a world-view that suggests a coincidence between their special interest and the public interest." (Italics in original.) In this way we can vote for improvements for ourselves without feeling guilty that this would, overall, be bad for society, and we are hence not plagued by any cognitive dissonance. After all, it is much more pleasant to think that what is good for you is also good for society, isn't it?

Notes

1. However, as argued by Brennan and Lomasky (1993) as well as Brennan and Hamlin (2000), expressive voting per se does not necessarily imply sociotropic voting.
2. If anything, it may be considered somewhat more ethical to vote for lower speed limits. Nevertheless, despite a possible bias in this direction, very few respondents (5%) prefer a lower speed limit than the current one, as can be seen from Table 1.
3. Highway speed limits have increased rapidly in many states in the USA during the last 15 years (Greenstone, 2002), and also in other countries such as Italy, while there are on-going discussions in many other countries.
4. However, it is possible that people who drive newer cars do so due to stronger preferences for safety. For this reason, those who have new cars would then prefer lower speed limits than others would. Given that the empirical result presents the net effect, the isolated effect of a newer car on the preferred speed limit would then be larger than the effect that is presented here.
5. This does not necessarily mean that actual safety increases with self-reported subjective driving ability, however, since over-optimism regarding one's own driving ability is likely to be positively correlated with subjective driving ability. Still, what matters for the preferred speed limit is the subjective risk, which is independent of such biases.
6. Indeed, some analysts have even questioned the sign of the relationship: Lave and Elias (1997) argued that the accident increase on rural interstate USA roads resulting from increasing the speed limits to 65 mph in 1987 was more than offset by the decline of accidents on other roads due to compensatory reallocations of drivers and state police; see also Greenstone (2002), who, however, questioned the conclusion of Lave and Elias.
7. In our case five ordered categories are possible. The respondents are assumed to choose the alternative closest to their own perception, where we observe Socio = 1, i.e. "mostly because it benefits me (them)," if Socio* <= a1; Socio = 2, i.e. "because it benefits me (them), but also to a certain degree out of conviction," if a1 < Socio* <= a2; and so on, until Socio = 5, i.e. "mostly out of conviction," if a4 <= Socio*, where a1 to a4 are cut-off points to be estimated simultaneously with the coefficient.
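[For completeness, the threshold rule in note 7 pins down the ordered-probit likelihood that the coefficient and cut-offs in Table 5 maximize. In LaTeX notation (my rendering, not the paper's; a_0 = -infinity, a_5 = +infinity, and Phi is the standard normal distribution function, since e is normalized to unit variance):

  P(\mathrm{Socio} = k \mid D_{\mathrm{Others}})
    = \Phi(a_k - d \, D_{\mathrm{Others}}) - \Phi(a_{k-1} - d \, D_{\mathrm{Others}}),
  \qquad k = 1, \dots, 5

Each category's probability is the normal probability mass between its two cut-offs, shifted by the framing dummy.]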
8. See also Copeland and Laband (2002) for recent empirical support.
9. In the 2002 General Election in Sweden, 80.1% of the eligible population voted (SCB, 2002).

References

Akerlof, G.A. and Dickens, W.T. (1982). The economic consequences of cognitive dissonance. American Economic Review 72: 307-319.
Babcock, L. and Loewenstein, G. (1997). Explaining bargaining impasse: The role of self-serving biases. Journal of Economic Perspectives 11: 109-126.
Brekke, K.A., Kverndokk, S. and Nyborg, K. (2003). An economic model of moral motivation. Journal of Public Economics 87: 1967-1983.
Brennan, G. and Hamlin, A. (1998). Expressive voting and electoral equilibrium. Public Choice 95: 149-175.
Brennan, G. and Hamlin, A. (2000). Democratic devices and desires. Cambridge: Cambridge University Press.
Brennan, G. and Lomasky, L. (1993). Democracy and decision: The pure theory of electoral preference. Cambridge: Cambridge University Press.
Copeland, C. and Laband, D.N. (2002). Expressiveness and voting. Public Choice 110: 351-363.
Elster, J. (1999). Alchemies of the mind: Rationality and the emotions. Cambridge: Cambridge University Press.
Gilovich, T. (1991). Why we know what isn't so. New York: Free Press.
Godfrey, L. (1988). Misspecification tests in econometrics: The Lagrange multiplier principle and other approaches. Econometric Society Monographs No. 16. Cambridge: Cambridge University Press.
Gramlich, E.M. and Rubinfeld, D.L. (1982). Voting on spending. Journal of Policy Analysis and Management 1: 516-533.
Greenstone, M. (2002). A reexamination of resource allocation responses to the 65-MPH speed limit. Economic Inquiry 40: 271-278.
Hartog, J., Ferrer-i-Carbonell, A. and Jonker, N. (2002). Linking measured risk aversion to individual characteristics. Kyklos 55: 3-26.
Hemenway, D. and Solnick, S. (1993). Fuzzy dice, dream cars, and indecent gestures: Correlates of driver behavior. Accident Analysis and Prevention 25: 161-170.
Jianakoplos, N.A. and Bernasek, A. (1998). Are women more risk averse? Economic Inquiry 36: 620-630.
Kahneman, D. and Knetsch, J.L. (1992). Valuing public goods: The purchase of moral satisfaction. Journal of Environmental Economics and Management 22: 57-70.
Keeler, T.E. (1994). Highway safety, economic behavior, and driving environment. American Economic Review 84: 684-693.
Keser, C. and van Winden, F. (2000). Conditional cooperation and voluntary contributions to public goods. Scandinavian Journal of Economics 102: 23-39.
Kinder, D.R. and Kiewiet, D.R. (1979). Economic discontent and political behavior: The role of personal grievances and collective economic judgments in congressional voting. American Journal of Political Science 23: 495-517.
Kramer, G.H. (1983). The ecological fallacy revisited: Aggregate- versus individual-level findings on economics and elections, and sociotropic voting. American Political Science Review 77: 92-111.
Kuran, T. (1995). Private truths, public lies: The social consequences of preference falsification. Cambridge, MA: Harvard University Press.
Lave, C. and Elias, P. (1997). Resource allocation in public policy: The effects of the 65-MPH speed limit. Economic Inquiry 35: 614-620.
Ledyard, J.O. (1995). Public goods: A survey of experimental research. In J.H. Kagel and A.E. Roth (Eds.), Handbook of experimental economics, 111-194. Princeton: Princeton University Press.
Merrell, D., Poitras, M. and Sutter, D. (1999). The effectiveness of vehicle safety inspections: An analysis using panel data. Southern Economic Journal 65: 571-583.
Mitchell, W.C. (1990). Ambiguity, contradictions, and frustrations at the ballot box: A public choice perspective. Policy Studies Review 9: 517-525.
Peltzman, S. (1975). The effects of automobile safety regulation. Journal of Political Economy 83: 677-725.
Peterson, S., Hoffer, G. and Millner, E. (1995). Are drivers of air-bag-equipped cars more aggressive? A test of the offsetting behavior hypothesis. Journal of Law and Economics 38: 251-264.
Rienstra, S.A. and Rietveld, P. (1996). Speed behaviour of car drivers: A statistical analysis of acceptance of changes in speed policies in the Netherlands. Transportation Research Part D: Transport and Environment 1: 97-110.
SCB (2002). http://www.scb.se/statistik/me0101/me0101_tab511.xls
Sears, D., Lau, R., Tyler, T. and Allen, H. (1980). Self-interest vs. symbolic politics in policy attitudes and presidential voting. American Political Science Review 74: 670-684.
Sears, D.O. and Funk, C.L. (1990). Self-interest in Americans' political opinions. In J. Mansbridge (Ed.), Beyond self-interest, 147-170. Chicago: University of Chicago Press.
Shabman, L. and Stephenson, K. (1994). A critique of the self-interested voter model: The case of a local single issue referendum. Journal of Economic Issues 28: 1173-1186.
Shinar, D., Schechtman, E. and Compton, R. (2001). Self-reports of safe driving behaviors in relationship to sex, age, education and income in the US adult driving population. Accident Analysis and Prevention 33: 111-116.
Smith, J.H. (1975). A clear test of rational voting. Public Choice 23: 55-67.
Stata (2003). Reference N-R. College Station, TX: Stata Press.
Taylor, S.E. and Brown, J.D. (1994). Positive illusions and well-being revisited: Separating fact from fiction. Psychological Bulletin 116: 21-27.
Tullock, G. (2000). Some further thoughts on voting. Public Choice 104: 181-182.

From checker at panix.com Tue Jan 3 22:35:49 2006
From: checker at panix.com (Premise Checker)
Date: Tue, 3 Jan 2006 17:35:49 -0500 (EST)
Subject: [Paleopsych] CHE: Tomorrow, I Love Ya!
Message-ID:

Tomorrow, I Love Ya!
The Chronicle of Higher Education, 5.12.9
http://chronicle.com/free/v52/i16/16a03001.htm

[Colloquy transcript appended.

[This is a great discussion of procrastination. But, as always, events and processes that have multiple causes are nearly impossible to diagnose. The author is very much part of the therapeutic culture, while a great many on my list are right-wing hold-'em-responsible sorts. It's true that driving up the cost of any behavior, including procrastination, will result in less of it. But, at least in today's society, blaming the individual can well result in a downward spiral, as is certainly suggested by the author.

[Those who need disciplining the most are also those with the worst problems to begin with. So both genes and society conspire against them.]

Tomorrow, I Love Ya!
Researchers are learning more about chronic dawdlers but see no easy cure for procrastination

Related materials
Colloquy: Join a [55]live, online discussion with Joseph R. Ferrari, a professor of psychology at DePaul University who studies chronic procrastination, about what, if anything, can be done to help students who suffer from it, on Wednesday, December 7, at 2:30 p.m., U.S. Eastern time.

By ERIC HOOVER

Joseph R. Ferrari has a name for people who dillydally all the time. Sometimes, he spits out the term as if it were stale gum or a polysyllabic cuss word. When he dubs you a "chronic procrastinator," however, he does not mean to insult you.
He is just using the psychological definition for someone who habitually puts things off until tomorrow, or next week, or whenever. The afflicted need not feel lonely: Research suggests that the planet is crawling with dawdlers. Procrastinators vex Mr. Ferrari, a psychology professor at DePaul University, yet he owes much to the dilatorily inclined. Without them he could not have helped blaze a trail of inquiry into procrastination (the word comes from the Latin verb procrastinare -- "to defer until morning"). The professor is as prompt as the sports car that shares his name, but he sees the symptoms of compulsive stalling everywhere. Mr. Ferrari and other scholars from around the world are finding that procrastination is more complex -- and pervasive -- than armchair analysts might assume. And helping people climb out of their pits of postponement is not as simple as giving them a pill or a pep talk. The task is particularly challenging in the hothouses of procrastination known as college campuses. Free time, long-term deadlines, and extracurricular diversions conspire against academic efficiency. Students are infamous for not tackling their assignments until the jaws of deadlines are closing. Professors may call such students slackers or sloths; psychologists define them as "academic procrastinators." According to recent studies, about 70 percent of college students say they typically procrastinate on starting or finishing their assignments (an estimated 20 percent of American adults are chronic procrastinators). Choosing to do one task while temporarily putting another on hold is simply setting priorities, which allows people to cross things off their to-do lists one at a time. Procrastination is when one keeps reorganizing that list so that little or nothing on it gets done. For some students, that inertia has costs. Researchers say academic procrastination raises students' anxiety and sinks their self-esteem. The behavior also correlates with some of higher education's thorniest problems, including depression, cheating, and plagiarism among students. Dozens of colleges have created counseling sessions or workshops for procrastinators. Yet Mr. Ferrari and other researchers say many institutions treat the problem superficially instead of helping students analyze their own thought processes and behavioral patterns in order to change them. Give a hard-core procrastinator nothing more than time-management tips, they warn, and you might as well hand him a page of hieroglyphics. "Telling a chronic academic procrastinator to 'just do it' is not going to work," Mr. Ferrari says. "It's like telling a clinically depressed person to cheer up." Learning About Loafers Laggards have always been tough cases. Even God could not inspire St. Augustine of Hippo, the fourth-century philosopher and theologian, to act right away. As he slowly came to accept Christianity, Augustine wrote in Confessions, the future bishop wavered. Clinging to temporal pleasures, Augustine famously asked of God: "Give me chastity and continency -- but not yet." Late in his life, Leonardo da Vinci, the genius who missed deadlines, lamented his unfinished projects. Shakespeare's Hamlet pondered -- and pondered -- killing his uncle Claudius before sticking him in the final act. Grady Tripp, the English professor in Michael Chabon's novel Wonder Boys, couldn't finish his second book because he refused to stop writing it. In a world of unmade beds and unwritten essays, the postponement of chores is commonplace. 
Now and again, humans put aside tasks with long-term rewards to savor immediate pleasures, like ice cream and movies, through a process called "discounting." For chronic procrastinators, however, discounting is a way of life. The scientific study of procrastination was (appropriately enough) a late-blooming development relative to the examinations of other psychological problems. Only in the 1980s did researchers start unlocking the heads of inveterate loafers, who suffer from more than mere laziness. Mr. Ferrari, a co-editor of Procrastination and Task Avoidance: Theory, Research, and Treatment (Plenum Publications, 1995), has helped clarify the distinction between delaying as an act and as a lifestyle. Not every student who ignores assignments until the last minute is an across-the-board offender, known to psychologists as a "trait procrastinator." Many students who drag their feet on term papers might never delay other tasks, such as meeting friends for dinner, showing up for work, or going to the dentist. As Mr. Ferrari explains in Counseling the Procrastinator in Academic Settings (American Psychological Association, 2004), a book he edited with three other scholars, there is no typical profile of an academic procrastinator (though family dynamics may influence the behavior). Studies have found no significant relationship between procrastination and intelligence or particular Myers-Briggs personality types. Research does show that academic procrastinators tend to lack self-confidence, measure low on psychologists' tests of "conscientiousness," get lost in wishful thoughts, and lie low during group assignments. In one study, Mr. Ferrari found that students at highly selective colleges reported higher rates of academic procrastination than students from less selective institutions. In another, the motives for academic procrastination among students at an elite college differed from students' motives at a nonselective one (the former put off assignments because they found them unpleasant, while the latter did so because they feared failure or social disapproval). Mr. Ferrari identifies two kinds of habitual lollygaggers. "Arousal procrastinators" believe they work best under pressure and tend to delay tasks for the thrill. "Avoidant procrastinators" are self-doubters who tend to postpone tasks because they worry about performing inadequately, or because they fear their success may raise others' expectations of them. Other findings complicate fear-of-failure theories. Some researchers say an inability to control impulses explains procrastinators best. And a recent study by Mr. Ferrari and Steven J. Scher, an associate professor of psychology at Eastern Illinois University, suggests that people who are typically negative avoid assignments that do not challenge them creatively or intellectually, whereas people who are typically positive more easily tackle less-stimulating tasks. Science is not likely to resolve the mysteries of procrastination anytime soon. After all, among researchers a debate still rages over the very definition of procrastination. Mr. Scher suspects there are different types of the behavior, especially if one defines it as not doing what one thinks one should do. "A common thing that many people put off is doing the dishes," Mr. Scher says. "But there are also times when those same people will all of a sudden find that doing the dishes is the most important thing they have to do -- thereby putting off some other type of task." 
Homework-Eating Dogs Psychologists do agree on one thing: Procrastination is responsible for most of the world's homework-eating dogs. Where procrastinators go, excuses follow. Students who engaged in academic procrastination said more than 70 percent of the excuses they gave instructors for not completing an assignment were fraudulent (the lies were most prevalent in large lecture classes taught by women who were "lenient"), Mr. Ferrari found in one study. In another, procrastinating students generally said they experienced a positive feeling when they fibbed; although they did feel bad when they recalled the lie, such remorse did not seem to prevent them from using phony excuses in the future. Mr. Ferrari has also experimented with giving bonus points for early work. In a study published in the journal Teaching of Psychology, he found that such incentives prompted 80 percent of students to fulfill a course requirement to participate in six psychological experiments by a midpoint date. On average, only 50 percent had done so before he offered the inducement. Mr. Ferrari believes that academe sends mixed messages about procrastination. Most professors talk about the importance of deadlines, but some are quick to bend them, particularly those who put a premium on being liked by their students. In one of Mr. Ferrari's studies, 90 percent of instructors said they did not require the substantiation of excuses for late work. "We're not teaching responsibility anymore," Mr. Ferrari says. "I'm not saying we need to be stringent, strict, and inflexible, but we shouldn't be spineless. When we are overly flexible, it just teaches them that they can ignore the deadlines of life." Ambivalence about deadlines pervades American culture. People demand high-speed results, whether they are at work or in restaurants. Yet this is also a land in which department stores encourage holiday shoppers to postpone their shopping until Christmas Eve, when they receive huge discounts. And each year on April 15, television news reporters from coast to coast descend upon post offices to interview (and celebrate) people who wait until the final hours to mail their tax returns. "As a society," Mr. Ferrari says, "we tend to excuse the person who says 'I'm a procrastinator,' even though we don't like procrastinators." But do all people who ignore assignments until the 11th hour necessarily suffer or do themselves harm? One of Mr. Ferrari's former students, Mariya Zaturenskaya, a psychology major who graduated from DePaul last spring, says some last-minute workers are motivated, well organized, and happy to write a paper in one sitting. Although students who cram for tests tend to retain less knowledge than other students, research has yet to reveal a significant correlation between students' procrastination and grades. "Some students just need that deadline, that push," Ms. Zaturenskaya says. "Some people really are more efficient when they have less time." Treating the Problem Before Jill Gamble went to college, she had little time to waste. As a high-school student, she had earned a 3.75 grade-point average while playing three sports. Each night she went to practice, ate dinner, did her homework, and went to bed. After matriculating at Ohio State University, however, her life lost its structure. At first, all she had to do was go to classes. Most days she napped, spent hours using Instant Messenger, and stayed up late talking to her suite mates. 
As unread books piled up on her desk, she told herself her professors were too demanding. The night before her Sociology 101 final, she stayed up drinking Mountain Dew, frantically reading the seven chapters she had ignored for weeks. "My procrastination had created a lot of anxiety," Ms. Gamble recalls. "I was angry with myself that I let it get to that point." She got a C-minus in the class and a 2.7 in her first quarter. When her grades improved only slightly in the second quarter, Ms. Gamble knew she needed help. So she enrolled in a course called "Strategies for College Success." The five-year-old course uses psychological strategies, such as the taking of reasonable risks, to jolt students out of their bad study habits. Twice a week students spend class time in a computer lab, where they get short lectures on study skills. Students must then practice each skill on the computer by using a special software program. Instructors use weekly quizzes to cut procrastination time from weeks to days and to limit last-minute cramming. The frequent tests mean one or two low scores will not doom a student's final grade, ideally reducing study-related stress. Students complete assignments at their own pace, allowing faster ones to stay engaged and slower ones to keep up, yet there are immovable dates by which students must finish each set of exercises. Enrollees learn how to write and follow to-do lists that break large tasks, such as writing an essay, into bite-size goals (like sitting down to outline a single chapter of a text instead of reading the whole book). Each student must also examine his or her use of rationalizations for procrastinating. The course's creator, Bruce W. Tuckman, a professor of education at Ohio State, says he also teaches students to recognize the underlying cause of procrastination, which he describes as self-handicapping. "It's like running a full race with a knapsack full of bricks on your back," Mr. Tuckman says. "When you don't win, you can say it's not that you're not a good runner, it's just that you had this sack of bricks on your back. When students realize that, it can be easier for them to change." Many of the worst procrastinators end up earning the highest grades in the class, Mr. Tuckman says. And among similar types of students with the same prior cumulative grade-point averages, those who took the class have consistently outperformed those who did not take it. After completing the course, Ms. Gamble says, she stopped procrastinating and went on to earn a 3.8 the next semester. Since then, she has made the dean's list regularly, and now helps counsel her procrastinating peers at Ohio State's learning center. "In workshops, we'll say, 'How many of you identify yourselves as procrastinators?' and they will throw their hands in the air and giggle, even though procrastination is a very negative thing," Ms. Gamble says. "Why do we do this so willingly? The answer is that we let ourselves procrastinate. If someone was doing it to us, we wouldn't be so willing to raise our hands."
"It's very hard to go from being a hard-core procrastinator to a nonprocrastinator," says Mr. Tuckman, one of many researchers who has developed a scale that measures levels of procrastination. "You're just so used to doing it, there's something about it that reinforces it for you." Scholars are learning that procrastination knows no borders. At a conference of international procrastination researchers this summer at Roehampton University, in England, Mr. Ferrari and several other scholars presented a paper that compared the prevalence rates of chronic procrastination among adults in Australia, England, Peru, Spain, the United States, and Venezuela. They found that arousal and avoidant procrastinators were equally prevalent in all of the nations, with men and women reporting similar rates of each behavior. That is not to say all cultures share the same view of procrastination. Karem Diaz, a professor of psychology at the Pontifical Catholic University of Peru, has studied the behavior among Peruvians, whose expectations of timeliness tend to differ from those of Americans. "In Peru we talk about the 'Peruvian time,'" Ms. Diaz writes in an e-mail message. "If we are invited to a party at 7 p.m., it is rude to show on time. ... It is even socially punished. Therefore, not presenting a paper on time is expected and forgiven." Few Peruvians are familiar with the Spanish word "procrastinaci?n," which complicates discussions of the subject. "Some people think it is some sexual behavior when they hear the word," Ms. Diaz says. Yet the professor has been intrigued to find that some Peruvians identify themselves as procrastinators, and experience the negative consequences of the behavior even though social norms encourage it. Strategies for helping people bridge the gap between their actions and intentions vary. A handful of colleges in Belgium, Canada, and the Netherlands have just begun to develop counseling programs that draw on cognitive and behavioral research. The early findings: Helping students understand why they dawdle and teaching them self-efficacy tends to lessen their procrastination -- or at least make it more manageable. Timothy A. Pychyl, an associate professor of psychology at Carleton University, in Ottawa, Ontario, says group meetings are a promising approach, particularly those in which students make highly specific goals and help each other resist temptations to slack off. "For many people, it's an issue of priming the pump ... as simple as making a deal with oneself to spend 10 minutes on a task," Mr. Pychyl says. "At least the next day they can see themselves as having made an effort as opposed to doing nothing at all." Clarry H. Lay, a retired psychology professor at York University, in Toronto, who continues to counsel student procrastinators, uses personality feedback to promote better "self-regulation" among students. In group sessions, he discusses the importance of studying even when one is not in the "right mood" and of setting aside a regular place to work. Some participants become more confident and efficient. Others see improvements, only to experience relapses. Each semester one in five students miss the first session. Some sign up early but never show, while others arrive late or attend sporadically. But Mr. Lay understands. The counselor is a chronic procrastinator himself. ______________________________________________________________ References 55. http://chronicle.com/colloquy/2005/12/procrastination/ [Immediately below.] 
The Chronicle of Higher Education: Colloquy Transcript
http://chronicle.com/colloquy/2005/12/procrastination/

There's Always Tomorrow
Wednesday, December 7, at 2:30 p.m., U.S. Eastern time

The topic
Conventional wisdom holds that procrastinators are just plain lazy. But psychologists who study chronic dawdling say the behavior is much more complex than that. Researchers have found that college campuses are hothouses of procrastination, with an estimated 70 percent of students saying they typically postpone starting or finishing their assignments. Some of those students feel incapable of changing their behavior, which can sink not only their grades but also their self-esteem. Many colleges offer time-management workshops to help students overcome procrastination, yet some experts say treating chronic procrastinators requires intensive counseling that gets at the root causes of habitual dillydallying. Why do some students waste their time when they should be working? Should American universities offer cognitive and behavioral therapy for the problem, as many European ones do? Is there hope for a cure? If not, what is to be done?

The guest
Joseph R. Ferrari is a professor of psychology at DePaul University and a leading researcher of chronic procrastination.
______________________________________________________________
A transcript of the chat follows.
______________________________________________________________
Eric Hoover (Moderator): Welcome to The Chronicle's live chat with Joseph R. Ferrari, a professor of psychology at DePaul University and a leading researcher of chronic procrastination. Thanks for joining us today. We will now take questions.
______________________________________________________________
Question from Mark Grechanik, University of Texas at Austin: Do you think frequent quizzes may help students to engage in the learning process faster?

Joseph R. Ferrari: Good point; it may. But the issue here is that folks wait to study, not that they don't study. Still, you are right. Generating a system that reduces procrastination is the solution.
______________________________________________________________
Question from Anon, small NY college: Intensive counseling doesn't sound like it would fit into a college student's schedule. How can we lessen procrastination if we can't provide intensive counseling? If time management tips/workshops don't work, what does?

Joseph R. Ferrari: For the CHRONIC PROCRASTINATOR, therapy. Even small NY towns (and I lived and worked at several) have professional clinical and counseling psychologists in the area. They need to get a student rate and seek professional help. Also, the college counseling center could have a staff person trained to hold sessions. Good luck!
______________________________________________________________
Question from Maryann P., county college in NJ: How can I solve my problem of often being late for appointments, term papers, kids' appointments, car-pool? I am getting worse at it lately.

Joseph R. Ferrari: Do you schedule back to back? Give yourself 20 minutes between tasks so if one takes longer, you are not overloaded. Remember, to prioritize is not the same as procrastinating.
______________________________________________________________
Question from Laura Wennekes, University of Amsterdam, The Netherlands: Isn't "procrastination" a natural response to the artificially imposed notion of a "deadline"?

Joseph R. Ferrari: "Natural." Wow, no. It is learned. There is NO gene for procrastination.
I hear a little rebellion here - like 'imposed deadline.' Look, life is full of commitments. We have responsibilities to meet those deadlines.
______________________________________________________________
Question from Evan, University of Delaware: Are you aware of the book "The Now Habit" (Neil Fiore) and the related "lifehacker" movement popular among IT professionals for overcoming procrastination? What do you think of them?

Joseph R. Ferrari: Yes, there are many 'self-help' books out there. Most don't use good research to support them. Read the scholarly stuff for good science.
______________________________________________________________
Question from Nora, big state university: In my experience, procrastination is directly related to anxiety around writing. I sit down to write, and have bodily "symptoms" and have found psychoanalysis to be helpful. Have you considered the psychoanalytic treatment of writing blocks and procrastination?

Joseph R. Ferrari: If analysis works for you, great. I recommend cognitive-behavioral therapy because it changes the way a person THINKS and BEHAVES instead of thinking about why one's mother acted a specific way.
______________________________________________________________
Question from Kris, MIT: Could you please explain how research into procrastination differentiates between depression-related procrastination, and procrastination in someone who would not be classified as depressed? Is such a distinction even possible? Thank you from a chronic procrastinator, second generation.

Joseph R. Ferrari: Yes, this is a learned thing, and for you, you had a model. Yes, there is a relation between procrastination and depression, but it is correlational. Does procrastination lead to depression? Or does depression lead to procrastination? No causal experiments have been done.
______________________________________________________________
Question from Kathleen, U. of Rochester: What does evidence suggest regarding a genetic contribution to chronic procrastination?

Joseph R. Ferrari: None! It's too easy to blame things on one's genetics. If that is the case, then one can't change, and that is foolish.
______________________________________________________________
Question from Mark, NGO Abroad: The people I know who do not procrastinate are ones who get a great sense of satisfaction finishing things and checking it off the list. They tend to enjoy throwing things away rather than keeping them around in case they need them. Basically they have greater throughput. I don't really enjoy finishing things; I worry that they are not perfect. My question is what makes them get such satisfaction from completion?

Joseph R. Ferrari: You are right about non-procrastinators (myself included). And you are right about the link to perfectionism. Procrastinators try to be perfect to have others like them. Nonprocrastinators try to be perfect to do a good job. So, stop focusing on what others will think of you as reflecting your self-worth. You are a good person even if the project is a B or B+.
______________________________________________________________
Question from Erica, NYU: Recently, I've seen a bunch of web pages advertising "coaches" who help a person get over their procrastinating habits. Does your research suggest that coaching is effective?

Joseph R. Ferrari: In my 2004 book, Counseling the Procrastinator in Academic Settings, there is a chapter on digital coaching. Good luck.
______________________________________________________________
Question from Evan, University of Delaware: Sometimes I am much more productive on projects that have no deadline than those that do. Do you think the deadline itself is the culprit as much as the task at hand? How are they related?

Joseph R. Ferrari: Sounds like a little rebellion against having an external deadline here. Ask yourself why you work against it, instead of with it.
______________________________________________________________
Question from Marla, U. of Texas: Do you think the internet has worsened the problem of procrastination, or that it is just a different form of an ongoing problem?

Joseph R. Ferrari: Worsened. Now we email at the last minute instead of placing a letter in the post 3 days before.
______________________________________________________________
Question from Erin McLaughlin, University of Pennsylvania: What are the indicators of a chronic procrastinator versus a student just uninterested in a project or class?

Joseph R. Ferrari: Hmm, I think they would look the same and act the same. But the procrastinator would get anxious about not working on the target project.
______________________________________________________________
Question from Carmen, Northeast Iowa Community College: The Chronicle article about procrastination makes no mention of adult ADHD, a likely cause for at least 5% of procrastinators, and possibly many, many more, as there would be a natural selection bias toward procrastination in the ADHD population. What are your thoughts about this?

Joseph R. Ferrari: Nice. I have a paper in press in "Clinical & Counseling Psychology" where we examined procrastinators with adult ADHD. There was a link. Look for the article. Cheers.
______________________________________________________________
Question from Tammy, mid-size East Coast univ.: How can I know if a therapist is good at working with procrastination?

Joseph R. Ferrari: Good point. Look for a PhD psychologist with a cognitive-behavioral style. If they try time management on you, walk away.
______________________________________________________________
Question from Michelle, Washington University: What is your viewpoint on freshman transition programs? Do you think they could be useful in heading off patterns of underachievement due to procrastination? And did you find anything that indicated how to prepare for independent study?

Joseph R. Ferrari: Good point. No data here, but anything that tries to get students to examine what and why they do or do NOT work is good. Just don't hope that the 20% who are chronic procrastinators will be 'cured' by a week-long section of a freshman course.
______________________________________________________________
Question from Laura, large eastern university: How do you get a procrastinator to actually go to therapy, however? Especially if they already feel they don't have enough time for everyday commitments.

Joseph R. Ferrari: Can't make anyone do anything they don't want to. As my old Italian grandmother said (it loses something in translation, but...) "for some folks, they will not get off the beach until the water hits their behind."
______________________________________________________________
Question from Donald, small Rhode Island College: Is procrastination a result of executive processing disorders?

Joseph R. Ferrari: No data. Unlikely for most people. Remember, there is a difference between correlation and causality.
______________________________________________________________
Question from John Gault, Missouri Valley College: Are you saying there is nothing that can help these students except professional counselors? Is there nothing the professor or the school can do?

Joseph R. Ferrari: Professors can design their classes to give extra credit for doing assignments EARLY, instead of punishing for being late. I'm not saying there is nothing. Remember, 80% of us procrastinate, but 20% are procrastinators. Programs can work for most folks, but for the 20% who are real procrastinators, for whom this is a lifestyle, they need therapy.
______________________________________________________________
Question from LLC, The University of Akron: I work in a career center, and see another side of chronic student procrastination related to making life decisions, applying for jobs, preparing for life after college. It seems like it takes a "crisis point" to motivate the truly chronic procrastinators ... are there some other tools, tips, techniques, resources you'd recommend to help us "kick-start" those that need the assistance?

Joseph R. Ferrari: Right, some folks need a crisis to kick them into moving.
______________________________________________________________
Question from Pat, Shorter College (small private): My husband just gave a bunch of low grades to students who failed, all semester, to turn in assignments. Do you think there's a group behavior/dynamics factor in procrastination?

Joseph R. Ferrari: Probably not. They may have had some planned group strategy to delay for other reasons. But we do know that in group assignments where performance is rated for everyone, procrastinators will engage in loafing -- and the non-procrastinators in the group don't like them.
______________________________________________________________
Question from Constant Beugre, Delaware State University: Can having a "to do list" and sticking to it help in reducing procrastination? I have tried this technique with some procrastinators in the past and it seems to work for some but not for others.

Joseph R. Ferrari: You are again making my point - for the 80% of us who procrastinate on some things, a to-do list system and other things will work. But the 20% who are procrastinators, who do this as a lifestyle, will reshuffle the list and come up with excuses why they can't do something.
______________________________________________________________
Question from LLC, The University of Akron: Although genetics may not make us procrastinators, our personalities may help us be more prone to procrastination. MBTI perceiving types, for example, like open-endedness, see many options and are not as comfortable choosing only one option, are more spontaneous ... are certain personality types more prone to procrastination?

Joseph R. Ferrari: The MBTI has very poor reliability and validity, if one reads the scholarly literature. We found it did not relate to procrastination tendencies. So drop that party game!
______________________________________________________________
Question from D.S., large research university: Most people are surprised to hear I procrastinate because I am amazingly good at coming through in the clutch, and when I work with focus, I get a lot done, and so, the pattern continues. Do I need to have a big failure to motivate change in myself?

Joseph R. Ferrari: You sound like a chronic AROUSAL procrastinator, who enjoys working against a clock for a rush experience.
Fail enough with no one bailing you out, and you may want to change.
______________________________________________________________
Question from Alaine Allen, University of Pittsburgh: I work with pre-college students who seem to procrastinate out of fear (e.g., anxiety about writing a college application essay). What type of advice would you give to those students beyond the common "just do it" statement?

Joseph R. Ferrari: Break it down and do just smaller sections; focus on the goal of getting it done, not on what has to be done; look at each TREE and not the FOREST.
______________________________________________________________
Question from Laura, big eastern Univ.: Sometimes I even procrastinate on enjoyable things. Do you think it may be some sort of rebellion against "scheduling"? I have often wondered, because sometimes I will have a better chance to get something done if it is "impulsive." Where would you start to fix something like that?

Joseph R. Ferrari: Could be. I have a book chapter on procrastination and impulsivity. They are not as opposite as you think.
______________________________________________________________
Question from Janet, large Long Island college: Are stimulant medications such as Metadate or Ritalin effective in reducing procrastination?

Joseph R. Ferrari: No. Keep away from the meds. Instead, focus on learning new skills for life.
______________________________________________________________
Question from Alec, University of Cambridge, UK: Hello, could you please mention a few proven concrete exercises/methods to combat one's procrastination tendencies, especially concerning very long-term, multifaceted goals such as writing up a PhD thesis. Thank you.

Joseph R. Ferrari: We have several well-tested, research-supported techniques in the 2004 book, Counseling the Procrastinator in Academic Settings, as well as the founding book in 1995.
______________________________________________________________
Question from Ed, U. of Kentucky: I see my earlier question was answered. What about helping children out of this who have already caught procrastination from the parent?

Joseph R. Ferrari: Well, it was not 'caught' as much as modeled and learned. So they need to learn alternative ways to handle the situation and how they perceive the situation. Can the parents model 'getting it done'?
______________________________________________________________
Question from Ed, U. of Kentucky: What questions should you ask when looking for a cognitive therapist? How long should it take to change the habit?

Joseph R. Ferrari: Cognitive-behavioral therapy is more short-term than psychoanalysis. Listen to how they would work with you. Do they focus on your thinking pattern and your behaviors? Do they offer you skills and new ways to treat your behaviors and thoughts?
______________________________________________________________
Question from Mona Pelkey, United States Military Academy: My star procrastinator just left my office. He is unhappy because he just can't seem to get motivated enough to put in the effort to get the grades he wants. For the past hour and a half I literally sat over him, pushing him to make a list, prioritize it, and start the tasks. I am exhausted and so is my student. Help!

Joseph R. Ferrari: He is still trying to be PERFECT. Life is not perfect, and neither should he be. Clinical folks talk about an 80% rule, where the client is 'cured' if they reach 80% of their life goal. So get this student to be happy with 97%, then 95%, then 90%, etc.
Procrastinators would rather have others say that they lacked EFFORT than that they lacked ABILITY. By never finishing, they can protect their self-views and say they have the ability but never tried hard enough. He needs to stop thinking his self-worth is tied only to getting an A.
______________________________________________________________
Question from Bonnie, Huston-Tillotson: Regarding that 20% - what is (are) the payoff(s) for procrastination?

Joseph R. Ferrari: Protecting one's self-esteem and social-esteem (how others feel about your ability). Never finish, never get judged by others. Let others decide and act for you.
______________________________________________________________
Question from John Gault, Missouri Valley College: What criteria can be used to determine if a student is one of the 20% that needs counseling or just a normal procrastinator like the rest of us?

Joseph R. Ferrari: In the 1995 text we have several reliable and valid self-report measures that assess one's procrastination tendencies. Buy the book and take the measures!
______________________________________________________________
Question from Nina, Duke University: I have bipolar disorder and am having a hard time determining if I'm procrastinating and using BP as an excuse, or am really having trouble getting things done because of rapid cycling. Any thoughts?

Joseph R. Ferrari: I can't play therapist here, but remember procrastinators are great excuse makers, blaming it on others, parents, genes, other disorders.
______________________________________________________________
Question from Eric Hoover: Looking ahead, what are some new avenues you would like to explore in your research on procrastination? What are some questions you would like to see addressed in future studies?

Joseph R. Ferrari: I want to continue to look at the cross-cultural meanings of procrastination. And funny you should ask: I have been asked by a reader at a publishing house to write a pop book based on scholarly research outcomes. So, I think I will take them up on that offer...
______________________________________________________________
Eric Hoover (Moderator): That wraps up our chat. Thanks to everyone who sent questions today. And Prof. Ferrari, thank you for your responses.

From checker at panix.com Tue Jan 3 22:35:59 2006
From: checker at panix.com (Premise Checker)
Date: Tue, 3 Jan 2006 17:35:59 -0500 (EST)
Subject: [Paleopsych] Discover: A third of medical research wrong?
Message-ID:

A third of medical research wrong?
http://www.discover.com/web-exclusives/medical-research-wrong/
November 16, 2005 | Biology & Medicine

The latest medical research is wrong about one-third of the time, that is... according to the latest medical research. A survey of 49 highly cited medical studies by epidemiologist John Ioannidis found the results of 14 studies were contradicted or downplayed by later research. Ioannidis' survey raises some hard questions. Is there a fundamental flaw in medical research, or is this just part of scientific progress? Problems occurred most often in studies that did not use randomized samples - five out of six were contradicted - a group that includes two high-profile cases of preventive therapies for coronary-artery disease. One study recommended hormone replacement therapy for post-menopausal women -- a treatment that some doctors now believe may increase chances of developing the disease.
The other therapy used high doses of vitamin E to keep the coronary arteries healthy, a treatment that was later shown to be ineffective in randomized trials. In spite of the conflicting research, nutritional epidemiologist Eric Rimm is standing by his work. Rimm's study showed vitamin E reduced the risk of developing coronary-artery disease in healthy men ages 40 to 75. "I think what we originally reported hasn't really been re-tested," he said. The follow-up study cited by Ioannidis tested whether vitamin E prevented heart attacks and strokes in men and women over the age of 55 who already had cardiovascular disease or diabetes. According to Rimm, whether antioxidants like vitamin E provide health benefits is still a matter of lively debate. "I thought our findings would be more generalizable," Rimm said. "But I think our results stand up; it just doesn't protect people with existing heart disease." Overgeneralizing research results is one way that Ioannidis sees medical studies being misused by doctors. "There are many issues that are not finalized with a single study," he said, "issues like trade-offs between benefits and harms, side-effects, and generalizability." If Ioannidis' work can be said to have a moral, it is this: don't put too much faith in one study. Solving the problem is not as simple as sticking to randomized experiments or requiring results to be duplicated. Observational studies, like Rimm's vitamin E research, are not randomized, but they can provide a foundation for future research. Likewise, duplicating research results can be unethical, and that may be the case for the 11 studies in his survey that have not been followed up. One case is a clinical trial of the drug Zidovudine, a medication that was 75 percent effective in preventing HIV-positive mothers from transmitting the disease to their unborn children. Re-testing Zidovudine would require exposing some unborn children to an increased risk of HIV infection. So, how should patients deal with the confusion? "We should switch our mode of thinking about a statistically significant result to what I would call a credible result," said Ioannidis. He proposes a system of rating published research based on the rigor of its experimental design, sample size, and amount of supporting research. "There is nothing wrong about acknowledging that all of the research published in medical journals is not one-hundred percent credible," he said. "There is no perfect research." Ioannidis advises patients to protect themselves by taking a more critical approach to their doctor's advice. "Ask not just 'Is it good for me?' but 'What is the uncertainty?'" - Zach Zorich

From checker at panix.com Tue Jan 3 22:36:07 2006
From: checker at panix.com (Premise Checker)
Date: Tue, 3 Jan 2006 17:36:07 -0500 (EST)
Subject: [Paleopsych] Gerontology Research Group: Longevity for Dummies: How to Live Longer Than You Deserve
Message-ID:

Longevity for Dummies: How to Live Longer Than You Deserve
From: "L. Stephen Coles, M.D., Ph.D."
Date: Mon, 19 Dec 2005 22:46:21 -0800

To Members and Friends of the Los Angeles Gerontology Research Group: Given that your parents have already decided your genomic fate, here's the humorous "Coles List" of 12 life-style rules to maximize your remaining longevity... -- Steve Coles

1. Never smoke cigarettes, a pipe, or cigars (even under special circumstances like when a proud new father hands out cigars for free). Do your best to avoid 2nd-hand smoke. I have held the black-stained lungs of smokers in my hands during surgery.
They look really ugly. (Healthy lungs are pink.) Also, it has not escaped our notice that many serious fires, in which innocent people are burned to death in a blazing building, are started by tobacco smoking and matches when the smoker falls asleep (often because they're dead drunk).

2. Eat a diet high in roughage (fruits, vegetables, and whole grains) and low in saturated fats. Avoid trans-fats. Eat fish (salmon, tuna, sardines) [4 - 5] x per week. Eat unsalted nuts. Consciously decrease salt intake to as close to zero as possible. Beverages: Make sure you have an adequate fluid intake whenever sweating profusely (with electrolytes added as needed). Drink one cup of coffee in the morning. Drink one cup of green tea at some point during the day. Avoid carbonated sodas. Drink milk with meals. Note: Boba Tea is not a drink, but a meal with lots of calories. EtOH: Drink 1 full glass of red (or white) wine of your choice (with a meal) every day, unless you've already had your quota of something stronger (gin, vodka, whiskey, rum, etc.). Never drink more than two shots of alcohol at any one sitting. A DUI looks terrible on your driving record, but late-night accidents on the freeway may shorten one's life as much as an intentional suicide.

3. Supplements: If I could take only four supplements a day, here's what they'd be... (a) Standard Multi, with no iron; (b) Fish Oil caps x 2; (c) Co-Q10; (d) Magnesium. (See our Bridge Plan on this webpage for recommended doses and additional supplements that are good to have, but not really essential.) Take an Aspirin every day if over 60. Take Melatonin [1-3] mg each night before bedtime if over 40.

4. Maintain a healthy weight. Check your BMI. Diet as necessary, until you maintain a stable weight for several years. (Prime Minister Sharon of Israel was looking for a stroke sooner or later at 350 pounds, no matter what else he did.) Remember that our bodies were tuned for our ancestors, who had to chase after their food rather than merely walk a few steps to the refrigerator during a TV-commercial break.

5. Exercise vigorously 1/2-hour per day: If you don't sweat, you're not cutting it. Lift small weights (10#) with [40-50] reps per type of lift. When shopping, park far away from the store in a parking lot (on purpose) and slowly jog to the store. If it's only one floor up, use the stairs, not the elevator. Never run a marathon; it's too tough on the joints. Never play in competitive team sports. Professional athletes don't live longer than anyone else, and accumulated micro-trauma does build up. Inactivity (being a couch potato eating pizza and potato chips while watching television for hours on end) leads to insulin insensitivity or Type-2 Diabetes and cancer, turns your muscles to Jell-O (frailty and sarcopenia) and your brain to mush (short-term memory loss). Heart disease increases; the rate of strokes (undetectable microTIA's) increases; bones undergo osteoporosis and more easily fracture if you fall accidentally; depression increases; upper respiratory tract infections increase; urinary tract infections increase; sexual functions rust out while libido sags -- a tragic double hit. So, probably, the single most important thing you can do for maximizing longevity is to get the right level of physical activity in one's life.

6. Keep your immune system in tip-top shape.
It is a precious invisible asset, since it protects you from the ceaseless assault of pathogenic microbes: viruses, bacteria, fungi, yeast, amoebas, helminths, and assorted parasites carried by all types of vectors, including insects and arachnids (mosquitoes, spiders, ticks) and animal hosts, or poorly-cooked meat, spoiled food, or water fouled by sewage. (If you don't believe me, take a course called Medical Microbiology 101 just for fun, to learn about the extraordinary range of invisible creatures that silently crawl over your skin without your knowledge or permission. When revealed by an electron microscope, they're more varied than any Hollywood horror movie you've ever seen in your lifetime.)

7. Decrease Stress (e.g., elevated Cortisol in your blood for long periods due to continuous arguments with your spouse/significant-other, grief over the death of a loved one, loss of a job that you really liked, long commutes every day in heavy traffic; you know what I mean). There are lots of unconscious stress conditions that should be identified early by the proper professional: marriage counselor, divorce lawyer, psychiatrist, as needed, but you must act to take advantage of them and not stay in an abusive relationship for very long, or your body will suffer the corrosive effects of chronically-elevated cortisol (which is trying to get you to "fight or flee" in the short term, but does you no good over the long term). Try to stay out of debt. Never gamble more than you can afford to lose in a single day. Never ever gamble at home on the Internet using a credit card. Stay away from addictive drugs at all costs. Pain meds are appropriate for people who are really handicapped, at the end of life, or with a chronic condition. Habitual crystal meth at rave parties is really going to burn you out fast, even if only indulged in on weekends. Never keep a loaded gun in your house. Never drive a car too fast under any circumstances, unless you're in a chase scene in a movie where all contingencies have been premeditated. Never pay attention to the nut-case who honks his horn in back of you while in heavy traffic. Road rage kills even innocent bystanders. Advice for physicians: If you're ever paged on an airplane's PA system, "Is there a doctor on board?" don't raise your hand or press your call button. There's very little that you can do anyway. Never take a vacation at a place where you'll need to take another vacation as soon as you get back. Stay away from exotic travel locales or political hot spots like Kabul or Baghdad. Behave yourself at Christmas parties. Spiritual: Go to the church, synagogue, or mosque of your choice at least three or four times a year, so the elders know what you look like. It may come in handy some day. Intellectual stimulation: Keep your mind active. Solve puzzles; play chess, checkers, cards, computer games, whatever, as a way to keep nimble. Play an instrument; listen to good music. Go to museums, go to movies, read a good newspaper every day, watch the History Channel from time to time. Get a job that you like. Work for a charitable organization in your spare time. Teach children or become a mentor. Adopt a pet (cats, dogs, tropical fish, whatever). Raise children. Take time to smell the roses.

8. Get 7+ hrs. sleep qhs (every night at bedtime). Try to get up on Saturdays, Sundays, or holidays at the same time as normal for a weekday.
9. Germs and Oral Hygiene: Wash hands several times per day; shower twice a day; use mouthwash four or five times throughout the day; brush teeth after every meal (when at home); use dental floss once a day; get a dental hygienist to clean your teeth professionally twice a year.

10. Engage in sex as often as possible, but always with a willing partner and obviously one who is STD(-), especially HIV(-). If you're married, avoid extramarital relationships (otherwise known as "adultery"), if possible. It's the cover-ups that get you into more trouble later. Standard adult pornography works for most people, but child pornography, of any sort, is absolutely forbidden. Masturbate when alone for a long period of time to prevent rust accumulation.

11. If > 50 yo, get regular screening by a medical lab every year. (Remember that the most common warning sign of a heart attack is death [secondary to an MI].) If your BP is too high (>140 and/or >90), you need to add some meds to your daily regimen (e.g., a beta blocker, an ACE inhibitor, and/or a diuretic, under a doctor's supervision) to bring it down. Hypertension is a silent condition, and you won't know if you don't have it measured. If your cholesterol is too high (>225), you ought to add a statin Rx. Start with a generic first. Shop around until you find one that suits you, as you will be on it "for life." Check your liver enzymes once a year. If your CBC counts are off, that will need to be fixed as well.

12. Men (and post-menopausal women) should donate blood regularly. It's not just for the sake of the faceless people you may help along the way. It's for your own good, too. Our blood-clotting machinery was tuned for a time when our hunter/gatherer ancestors got many more cuts and scrapes than we do in a modern civilization. Quick clotting then is not compatible with maximizing your longevity today, as we get internal clotting problems instead.

I put this quick list of 12 life-style interventions together in an hour or so. If you knowingly fail to abide by any of the above rules and we find out, as a punishment, we'll send Martha Stewart to redecorate your house when you're not home. Happy holidays, Steve Coles

L. Stephen Coles, M.D., Ph.D., Co-Founder
Los Angeles Gerontology Research Group
URL: http://www.grg.org
URL: http://www.bol.ucla.edu/~scoles
E-mail: scoles at ucla.edu

From checker at panix.com Tue Jan 3 22:36:16 2006
From: checker at panix.com (Premise Checker)
Date: Tue, 3 Jan 2006 17:36:16 -0500 (EST)
Subject: [Paleopsych] NYT Op-Ed: Global Trend: More Science, More Fraud
Message-ID:

Global Trend: More Science, More Fraud
http://www.nytimes.com/2005/12/20/science/20rese.html
By LAWRENCE K. ALTMAN and WILLIAM J. BROAD

The South Korean scandal that shook the world of science last week is just one sign of a global explosion in research that is outstripping the mechanisms meant to guard against error and fraud. Experts say the problem is only getting worse, as research projects, and the journals that publish the findings, soar.
In response the United States has over the last two decades added extra protections, including new laws and government investigative bodies. And as research around the globe has increased, most without the benefit of such safeguards, so have the cases of scientific misconduct. Most recently, suspicions have swirled around a dazzling series of cloning advances by a South Korean scientist, Dr. Hwang Woo Suk. Dr. Hwang's research made him a national hero. His team outdid rivals by claiming to have extracted stem cells from cloned human embryos and to have cloned a dog, an extraordinary feat. Some observers hailed the breakthroughs as worthy of a Nobel Prize. Last month, critics charged that Dr. Hwang's published findings hid ethical lapses. And last week, collaborators accused the researcher of fabricating results in one of his landmark human cloning studies, published in Science last spring. Dr. Hwang has insisted on his innocence but said he would retract the Science paper. Now questions are growing about his earlier work, including Snuppy, the dog he claims to have cloned. Yesterday, news agencies reported that Seoul National University officials investigating Dr. Hwang's claims locked down his laboratory, impounded his computer and interviewed his colleagues, among other actions. "The Korean case shows us that we should be a lot more cautious," Marcel C. LaFollette, the author of "Stealing Into Print: Fraud, Plagiarism, and Misconduct in Scientific Publishing," said in an interview. "We have been unwilling to ask tough questions of people who are from other countries and whose systems are different because we were attempting to be polite." To be sure, most scientists resist pressures to cut corners and adhere to the canons of science, honoring the truth above all else. But surveys suggest that there are powerful undercurrents of misbehavior and, in some cases, outright fakery. In June, a survey of 3,427 scientists by the University of Minnesota and the HealthPartners Research Foundation reported that up to a third of the respondents had engaged in ethically questionable practices, from ignoring contradictory facts to falsifying data. Scientific fraud as a public danger burst into public view in the 1970's and 1980's, when major cases of misconduct shook a number of elite publications and institutions, including Yale, Harvard and Columbia. In 1981, Dr. Donald Fredrickson, then the director of the National Institutes of Health, defended the standard view of science as a self-correcting enterprise. "We deliberately have a very small police force because we know that poor currency will automatically be discovered and cast out," he said. But fraud after fraud made the weaknesses of that system impossible to ignore. In the early 1980's, a young cardiology researcher, Dr. John R. Darsee, was found to have fabricated much data for more than 100 papers he wrote while working at Harvard and Emory Universities. His work appeared in The New England Journal of Medicine, The Proceedings of the National Academy of Sciences and The American Journal of Cardiology, among other top publications. Startled, the federal government, beginning in 1985, took steps to augment the existing safeguards. For instance, Congress passed a law requiring public and private institutions to establish formal ways to investigate charges of fraud, in theory helping to assess damage, clear the air and protect the innocent. 
Eventually, the federal government established its own investigative body, now known by the Orwellian title of the Office of Research Integrity. Journal editors, at the center of the storm, also took collective action to enhance their credibility. In 1997, they founded the Committee on Publication Ethics, or COPE, "to provide a sounding board for editors who are struggling with how to best deal with possible breaches in research and publication ethics," according to the group's Web site. Consisting mostly of editors of medical journals, the committee now has more than 300 members in Europe, Asia and the United States. Still, the frauds kept coming. In 1999, federal investigators found that a scientist at the Lawrence Berkeley Laboratory in Berkeley, Calif., faked what had been hailed as crucial evidence linking power lines to cancer. He published his research in The Annals of the New York Academy of Sciences and F.E.B.S. Letters, a journal of the Federation of European Biochemical Societies. The year 2002 proved especially bleak. At Bell Labs, a series of extraordinary claims that seemed destined to win a Nobel Prize, including the creation of molecular-scale transistors, suddenly collapsed. Two of the world's most prestigious journals, Science and Nature, had published many of the fraudulent papers, underscoring the need for better safeguards despite two decades of attempted repairs. Experts now say that the explosive growth of science around the globe has made the problem far worse, because most countries have yet to institute the extra measures that the United States has put in place. That imbalance is at least partly responsible for a rise in scientific scandals in other countries, they say. Dr. Richard S. Smith, a former editor of The British Medical Journal (now BMJ) and the co-founder of the Committee on Publication Ethics, a group of journal editors, said in an interview that fraud was becoming increasingly difficult to root out because most countries' protective measures were either patchy or altogether absent. "It's hard enough to do something nationally, and to do it internationally is still harder," he said. "But that's what is needed." Contributing to the problem is a drastic rise in the number of scientific journals published around the world: more than 54,000, according to Ulrich's Periodicals Directory. This glut can confuse researchers, overwhelm quality-control systems, encourage fraud and distort the public perception of findings. "Foreign scientific journals have gone through the roof," said Shawn Chen, a senior associate editor at Ulrich's, nearly doubling to 29,098 in 2005 from 15,300 in 1980. "We're having a hard time keeping up." While millions of articles are never read or cited - and some are written simply to pad résumés - others enter the pressure cooker of scientific and biomedical promotion, becoming lucrative elements of companies' business strategies. Until now, cases of questionable research in other countries have gotten little attention in the United States. But international editors, shaken by scandal, are now publicizing them and expressing concern. This year, the July 30 issue of BMJ devoted four articles to the subject, asking on its cover: "Suspicions of fraud in medical research: Who should investigate?" The articles discussed cases in which several publications, including BMJ, had stumbled in resolving serious doubts about the truthfulness of published studies done in Canada and India.
The Canadian research claimed that a patented mix of multivitamins improved brain function in older people, and the Indian study said that low-fat, high-fiber diets cut by nearly half the risk of death from heart disease. The BMJ said that it published its own version of the Indian research in April 1992 and that it had later investigated serious questions about the validity of the research for more than a decade before speaking out. The difficulty, the editors said, was that journals could go only so far in fraud inquiries before needing the aid of national investigative bodies and professional associations that oversee scientific research. But in the Indian and Canadian cases, they added, such bodies either did not exist or refused to help, so "the doubts are unresolved." The journal's editors, Dr. Fiona Godlee and Dr. Jane Smith, noted that the United States and Scandinavian countries had adopted institutional defenses and that Britain was considering such safeguards. Journals have an obligation to help the process, they concluded, by publicizing their difficulties and doubts. Most recently, the South Korean uproar illustrates the tangle of publishing and policing issues that can arise as science becomes increasingly competitive and international. "Now we're in a situation where we have these alliances between university researchers in countries and between institutions that really weren't working together before," said Dr. LaFollette, author of "Stealing Into Print." The journal Science, owned by the American Association for the Advancement of Science, published the research of Dr. Hwang of Seoul National University and his colleagues in March 2004 and June 2005, hailing it as pathbreaking. On Dec. 14, the magazine noted in a statement how fraud charges about the 2005 research had led to two investigations - one in South Korea and the other at the University of Pittsburgh, home to one of the article's 25 co-authors. "The journal itself is not an investigative body," Donald Kennedy, the magazine's editor, argued. "We await answers from the authors, as well as official conclusions, before we come to any ourselves." On Friday in a news conference, Dr. Kennedy emphasized that the magazine had made no accusations of fraud against Dr. Hwang. "As of now we can't reach any conclusions with respect to misconduct issues," he said. Independent scientists said it remained to be seen how thoroughly authorities in South Korea, where Dr. Hwang is a celebrity, would investigate the case and resolve knotty issues in what amounts to a highly public test of institutional maturity. Seoul National University is leading the inquiry. Its committee, which apparently has the authority to examine Dr. Hwang's raw data and to question his colleagues, may have the best chance of discovering how much of his work remains valid. But experts also cautioned that the committee's credibility requires the addition of outsiders, and perhaps scientists from other countries, who know the field and can help ensure that the investigation will retain its objectivity. "Unfortunately, individual institutions have an enormous conflict of interest," said Dr. Smith, the former editor of The British Medical Journal. "It's a lot easier," he said, for such bodies when examining an allegation of fraud on their own, "to slide someone out of the organization or to suppress it altogether." 
From checker at panix.com Tue Jan 3 22:36:28 2006
From: checker at panix.com (Premise Checker)
Date: Tue, 3 Jan 2006 17:36:28 -0500 (EST)
Subject: [Paleopsych] NYT: In the Chaos of Homelessness, Calendars Mean Nothing
Message-ID:

In the Chaos of Homelessness, Calendars Mean Nothing
http://www.nytimes.com/2005/12/20/health/20case.html
Cases
By ELISSA ELY, M.D.

I knew from a note left by her case manager that the homeless woman I was waiting to see had a history of trauma, terrible mood swings, past suicide attempts. I had booked an hour for an intake evaluation. She arrived 35 minutes late, sat down and shook out long braids. She was plump, and wore what looked like someone else's ill-fitting button-down shirt. She opened her pocketbook, eyeliner, loose cigarettes, Kleenex tumbling out. "I've got to see a doctor right away," she said, and she began to weep. In the next 15 seconds, I learned that she had been beaten by her father, that she had found her fiancé in bed with her daughter, that she had not slept in two nights. On top of that, she said, she had been late catching the bus from the shelter to the subway to get to the clinic and late getting the subway to the bus to get to the shelter the night before. That meant that she had missed dinner and breakfast. She didn't know if she could go on one minute more. I opened up my lunch bag and handed her the first thing I came across. It was a large banana. I had been looking forward to eating it. She finished it in three bites and dropped the peel into her pocketbook. We talked a few more minutes but the intake forms remained blank. She was essentially incoherent; not psychotic, but washed away in a flood of disorganization and emotion, unable to grab any branch long enough to pull herself onto land. Finally, I gave her a card with an appointment for the next week and a week's prescription for a benign sleeping medication. Five nights later, I was in a different shelter when the staff phone rang. It was the drug and alcohol abuse counselor whose office was two doors away. The walls are plasterboard, and I could hear him talking into the phone from his cubicle. There was weeping in the background. "I have someone who needs to see a psychiatrist right away," he told me. "Sign her up," I said. "Just a minute," he said, and, putting his hand over the receiver, told the weeper: "I'm going to sign you up. You can see her next week." The weeping became loud wailing. "What's her name?" I asked. It was familiar. So, now, was the weeping. A mental image surfaced of braids and objects tumbling from a purse. "Tell her we met last Friday," I said. "I'm the doctor she saw in the clinic." The wailing continued. "Tell her I gave her the banana," I said. The weeping stopped. "Oh," I heard her say through the wall. "That doctor." "Ask her if she's sleeping any better," I said. He asked her, then told me that she had not filled the prescription yet. "Tell her I'm going to see her the day after tomorrow," I said. "We made an appointment. Nine o'clock. She has a card." "O.K.," he said. "I'll tell her." Without the banana, she would not have recognized me. I was simply another branch floating by. In the chaos of her life, it was natural to see a psychiatrist in one shelter during the day on Friday and a second one in a different shelter on Wednesday night. But by the happy coincidence of being the same person in two places, I had headed off redundancy. Luck and a piece of fruit had provided the beginning of consistent care. Now we could get down to work.
Friday morning came. 9:00. 9:30. 10:30. She never showed. At the night shelter two days later, the drug counselor said he had not seen her. She had moved into the land of the missing. Life should be easier to organize. One patient, one doctor. But the muddle is a metaphor for homelessness, part of the diffusion that comes when you have no base. Calendars and appointment cards mean nothing. The solution is unclear, at least to me. A banana makes an impression, but not for long enough.

From checker at panix.com Tue Jan 3 22:38:42 2006
From: checker at panix.com (Premise Checker)
Date: Tue, 3 Jan 2006 17:38:42 -0500 (EST)
Subject: [Paleopsych] Louis Menand: Everybody's an Expert
Message-ID:

Louis Menand: Everybody's an Expert
http://www.newyorker.com/printables/critics/051205crbo_books1
Putting predictions to the test.
Issue of 2005-12-05 Posted 2005-11-28

Prediction is one of the pleasures of life. Conversation would wither without it. "It won't last. She'll dump him in a month." If you're wrong, no one will call you on it, because being right or wrong isn't really the point. The point is that you think he's not worthy of her, and the prediction is just a way of enhancing your judgment with a pleasant prevision of doom. Unless you're putting money on it, nothing is at stake except your reputation for wisdom in matters of the heart. If a month goes by and they're still together, the deadline can be extended without penalty. "She'll leave him, trust me. It's only a matter of time." They get married: "Funny things happen. You never know." You still weren't wrong. Either the marriage is a bad one -- you erred in the right direction -- or you got beaten by a low-probability outcome. It is the somewhat gratifying lesson of Philip Tetlock's new book, "Expert Political Judgment: How Good Is It? How Can We Know?" (Princeton; $35), that people who make prediction their business -- people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables -- are no better than the rest of us. When they're wrong, they're rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake. No one is paying you for your gratuitous opinions about other people, but the experts are being paid, and Tetlock claims that the better known and more frequently quoted they are, the less reliable their guesses about the future are likely to be. The accuracy of an expert's predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge. People who follow current events by reading the papers and newsmagazines regularly can guess what is likely to happen about as accurately as the specialists whom the papers quote. Our system of expertise is completely inside out: it rewards bad judgments over good ones. "Expert Political Judgment" is not a work of media criticism. Tetlock is a psychologist -- he teaches at Berkeley -- and his conclusions are based on a long-term study that he began twenty years ago.
He picked two hundred and eighty-four people who made their living "commenting or offering advice on political and economic trends," and he started asking them to assess the probability that various things would or would not come to pass, both in the areas of the world in which they specialized and in areas about which they were not expert. Would there be a nonviolent end to apartheid in South Africa? Would Gorbachev be ousted in a coup? Would the United States go to war in the Persian Gulf? Would Canada disintegrate? (Many experts believed that it would, on the ground that Quebec would succeed in seceding.) And so on. By the end of the study, in 2003, the experts had made 82,361 forecasts. Tetlock also asked questions designed to determine how they reached their judgments, how they reacted when their predictions proved to be wrong, how they evaluated new information that did not support their views, and how they assessed the probability that rival theories and predictions were accurate. Tetlock got a statistical handle on his task by putting most of the forecasting questions into a "three possible futures" form. The respondents were asked to rate the probability of three alternative outcomes: the persistence of the status quo, more of something (political freedom, economic growth), or less of something (repression, recession). And he measured his experts on two dimensions: how good they were at guessing probabilities (did all the things they said had an x per cent chance of happening happen x per cent of the time?), and how accurate they were at predicting specific outcomes. The results were unimpressive. On the first scale, the experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes -- if they had given each possible future a thirty-three-per-cent chance of occurring. Human beings who spend their lives studying the state of the world, in other words, are poorer forecasters than dart-throwing monkeys, who would have distributed their picks evenly over the three choices. Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study. Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable. "We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly," he reports. "In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals -- distinguished political scientists, area study specialists, economists, and so on -- are any better than journalists or attentive readers of the New York Times in 'reading' emerging situations." And the more famous the forecaster the more overblown the forecasts. "Experts in demand," Tetlock says, "were more overconfident than their colleagues who eked out existences far from the limelight." People who are not experts in the psychology of expertise are likely (I predict) to find Tetlock's results a surprise and a matter for concern. For psychologists, though, nothing could be less surprising. "Expert Political Judgment" is just one of more than a hundred studies that have pitted experts against statistical or actuarial formulas, and in almost all of those studies the people either do no better than the formulas or do worse.
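[To make Tetlock's two measures concrete, here is a minimal sketch in Python. The forecast data are invented, and the scoring is simplified to yes/no events using the standard Brier score, which is not necessarily the exact formula Tetlock used; the point is only to illustrate calibration (did events given an x per cent chance happen about x per cent of the time?) and the comparison against a dart-throwing monkey that assigns each outcome a one-in-three chance.

    # Sketch of calibration and accuracy scoring for probability forecasts.
    # The forecasts are hypothetical pairs: (stated probability, did it happen?).
    from collections import defaultdict

    forecasts = [(0.9, True), (0.9, True), (0.9, False),
                 (0.6, True), (0.6, False), (0.6, False),
                 (0.2, False), (0.2, True), (0.2, False)]

    # Calibration: for each stated probability, compare it with the observed
    # frequency of the predicted event.
    by_stated = defaultdict(list)
    for p, occurred in forecasts:
        by_stated[p].append(occurred)
    for p, outcomes in sorted(by_stated.items()):
        print(f"stated {p:.0%} -> observed {sum(outcomes) / len(outcomes):.0%}")

    # Brier score: mean squared gap between the stated probability and what
    # happened (1 if the event occurred, 0 if not); lower is better.
    def brier(pairs):
        return sum((p - occurred) ** 2 for p, occurred in pairs) / len(pairs)

    # The "dart-throwing monkey" baseline gives every outcome a 1/3 chance.
    print(f"forecaster: {brier(forecasts):.3f}")
    print(f"uniform 1/3 baseline: {brier([(1 / 3, o) for _, o in forecasts]):.3f}")

On these made-up numbers the forecaster is overconfident (events given a 90 per cent chance happen only two-thirds of the time) and scores 0.270 against the baseline's 0.259, so the monkey wins; that is the shape of the result Tetlock reports across his experts' 82,361 real forecasts.]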
In one study, college counsellors were given information about a group of high-school students and asked to predict their freshman grades in college. The counsellors had access to test scores, grades, the results of personality and vocational tests, and personal statements from the students, whom they were also permitted to interview. Predictions that were produced by a formula using just test scores and grades were more accurate. There are also many studies showing that expertise and experience do not make someone a better reader of the evidence. In one, data from a test used to diagnose brain damage were given to a group of clinical psychologists and their secretaries. The psychologists' diagnoses were no better than the secretaries'. The experts' trouble in Tetlock's study is exactly the trouble that all human beings have: we fall in love with our hunches, and we really, really hate to be wrong. Tetlock describes an experiment that he witnessed thirty years ago in a Yale classroom. A rat was put in a T-shaped maze. Food was placed in either the right or the left transept of the T in a random sequence such that, over the long run, the food was on the left sixty per cent of the time and on the right forty per cent. Neither the students nor (needless to say) the rat was told these frequencies. The students were asked to predict on which side of the T the food would appear each time. The rat eventually figured out that the food was on the left side more often than the right, and it therefore nearly always went to the left, scoring roughly sixty per cent - D, but a passing grade. The students looked for patterns of left-right placement, and ended up scoring only fifty-two per cent, an F. The rat, having no reputation to begin with, was not embarrassed about being wrong two out of every five tries. But Yale students, who do have reputations, searched for a hidden order in the sequence. They couldn't deal with forty-per-cent error, so they ended up with almost fifty-per-cent error. The expert-prediction game is not much different. When television pundits make predictions, the more ingenious their forecasts, the greater their cachet. An arresting new prediction means that the expert has discovered a set of interlocking causes that no one else has spotted, and that could lead to an outcome that the conventional wisdom is ignoring. On shows like "The McLaughlin Group," these experts never lose their reputations, or their jobs, because long shots are their business. More serious commentators differ from the pundits only in the degree of showmanship. These serious experts - the think tankers and area-studies professors - are not entirely out to entertain, but they are a little out to entertain, and both their status as experts and their appeal as performers require them to predict futures that are not obvious to the viewer. The producer of the show does not want you and me to sit there listening to an expert and thinking, I could have said that. The expert also suffers from knowing too much: the more facts an expert has, the more information is available to be enlisted in support of his or her pet theories, and the more chains of causation he or she can find beguiling. This helps explain why specialists fail to outguess non-specialists. The odds tend to be with the obvious. Tetlock's experts were also no different from the rest of us when it came to learning from their mistakes. Most people tend to dismiss new information that doesn't fit with what they already believe.
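[An aside on the rat-versus-Yale experiment described above: the result is just probability. Always picking the more frequent side wins sixty per cent of the time, while "probability matching" - guessing left sixty per cent of the time in search of a pattern that isn't there - wins 0.6 x 0.6 + 0.4 x 0.4 = 52 per cent. A back-of-the-envelope simulation in Python; the setup is mine, not the original experiment's protocol.]

import random

random.seed(0)
TRIALS = 100_000
P_LEFT = 0.6  # food is on the left sixty per cent of the time

food = ["L" if random.random() < P_LEFT else "R" for _ in range(TRIALS)]

# The rat's strategy: always choose the more frequent side.
rat = sum(side == "L" for side in food) / TRIALS

# The students' strategy: guess "L" with probability 0.6,
# independently of where the food actually is.
students = sum(("L" if random.random() < P_LEFT else "R") == side
               for side in food) / TRIALS

print(f"always left (the rat):      {rat:.3f}")       # about 0.600
print(f"pattern hunting (students): {students:.3f}")  # about 0.520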
Tetlock found that his experts used a double standard: they were much tougher in assessing the validity of information that undercut their theory than they were in crediting information that supported it. The same deficiency leads liberals to read only The Nation and conservatives to read only National Review. We are not natural falsificationists: we would rather find more reasons for believing what we already believe than look for reasons that we might be wrong. In the terms of Karl Popper's famous example, to verify our intuition that all swans are white we look for lots more white swans, when what we should really be looking for is one black swan. Also, people tend to see the future as indeterminate and the past as inevitable. If you look backward, the dots that lead up to Hitler or the fall of the Soviet Union or the attacks on September 11th all connect. If you look forward, it's just a random scatter of dots, many potential chains of causation leading to many possible outcomes. We have no idea today how tomorrow's invasion of a foreign land is going to go; after the invasion, we can actually persuade ourselves that we knew all along. The result seems inevitable, and therefore predictable. Tetlock found that, consistent with this asymmetry, experts routinely misremembered the degree of probability they had assigned to an event after it came to pass. They claimed to have predicted what happened with a higher degree of certainty than, according to the record, they really did. When this was pointed out to them by Tetlock's researchers, they sometimes became defensive. And, like most of us, experts violate a fundamental rule of probabilities by tending to find scenarios with more variables more likely. If a prediction needs two independent things to happen in order for it to be true, its probability is the product of the probability of each of the things it depends on. If there is a one-in-three chance of x and a one-in-four chance of y, the probability of both x and y occurring is one in twelve. But we often feel instinctively that if the two events "fit together" in some scenario the chance of both is greater, not less. The classic "Linda problem" is an analogous case. In this experiment, subjects are told, "Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice and also participated in antinuclear demonstrations." They are then asked to rank the probability of several possible descriptions of Linda today. Two of them are "bank teller" and "bank teller and active in the feminist movement." People rank the second description higher than the first, even though, logically, its likelihood is smaller, because it requires two things to be true - that Linda is a bank teller and that Linda is an active feminist - rather than one. Plausible detail makes us believers. When subjects were given a choice between an insurance policy that covered hospitalization for any reason and a policy that covered hospitalization for all accidents and diseases, they were willing to pay a higher premium for the second policy, because the added detail gave them a more vivid picture of the circumstances in which it might be needed. In 1982, an experiment was done with professional forecasters and planners. One group was asked to assess the probability of "a complete suspension of diplomatic relations between the U.S.
and the Soviet Union, sometime in 1983," and another group was asked to assess the probability of "a Russian invasion of Poland, and a complete suspension of diplomatic relations between the U.S. and the Soviet Union, sometime in 1983." The experts judged the second scenario more likely than the first, even though it required two separate events to occur. They were seduced by the detail. It was no news to Tetlock, therefore, that experts got beaten by formulas. But he does believe that he discovered something about why some people make better forecasters than other people. It has to do not with what the experts believe but with the way they think. Tetlock uses Isaiah Berlin's metaphor from Archilochus, from his essay on Tolstoy, "The Hedgehog and the Fox," to illustrate the difference. He says: Low scorers look like hedgehogs: thinkers who "know one big thing," aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who "do not get it," and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible "ad hocery" that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess. A hedgehog is a person who sees international affairs as ultimately determined by a single bottom-line force: balance-of-power considerations, or the clash of civilizations, or globalization and the spread of free markets. A hedgehog is the kind of person who holds a great-man theory of history, according to which the Cold War does not end if there is no Ronald Reagan. Or he or she might adhere to the "actor-dispensability thesis," according to which Soviet Communism was doomed no matter what. Whatever it is, the big idea, and that idea alone, dictates the probable outcome of events. For the hedgehog, therefore, predictions that fail are only "off on timing," or are "almost right," derailed by an unforeseeable accident. There are always little swerves in the short run, but the long run irons them out. Foxes, on the other hand, don't see a single determining explanation in history. They tend, Tetlock says, "to see the world as a shifting mixture of self-fulfilling and self-negating prophecies: self-fulfilling ones in which success breeds success, and failure, failure - but only up to a point, and then self-negating prophecies kick in as people recognize that things have gone too far." Tetlock did not find, in his sample, any significant correlation between how experts think and what their politics are. His hedgehogs were liberal as well as conservative, and the same with his foxes. (Hedgehogs were, of course, more likely to be extreme politically, whether rightist or leftist.) He also did not find that his foxes scored higher because they were more cautious - that their appreciation of complexity made them less likely to offer firm predictions. Unlike hedgehogs, who actually performed worse in areas in which they specialized, foxes enjoyed a modest benefit from expertise. Hedgehogs routinely over-predicted: twenty per cent of the outcomes that hedgehogs claimed were impossible or nearly impossible came to pass, versus ten per cent for the foxes.
More than thirty per cent of the outcomes that hedgehogs thought were sure or near-sure did not, against twenty per cent for foxes. The upside of being a hedgehog, though, is that when you're right you can be really and spectacularly right. Great scientists, for example, are often hedgehogs. They value parsimony, the simpler solution over the more complex. In world affairs, parsimony may be a liability - but, even there, there can be traps in the kind of highly integrative thinking that is characteristic of foxes. Elsewhere, Tetlock has published an analysis of the political reasoning of Winston Churchill. Churchill was not a man who let contradictory information interfere with his idées fixes. This led him to make the wrong prediction about Indian independence, which he opposed. But it led him to be right about Hitler. He was never distracted by the contingencies that might combine to make the elimination of Hitler unnecessary. Tetlock also has an unscientific point to make, which is that "we as a society would be better off if participants in policy debates stated their beliefs in testable forms" - that is, as probabilities - "monitored their forecasting performance, and honored their reputational bets." He thinks that we're suffering from our primitive attraction to deterministic, overconfident hedgehogs. It's true that the only thing the electronic media like better than a hedgehog is two hedgehogs who don't agree. Tetlock notes, sadly, a point that Richard Posner has made about these kinds of public intellectuals, which is that most of them are dealing in "solidarity" goods, not "credence" goods. Their analyses and predictions are tailored to make their ideological brethren feel good - more white swans for the white-swan camp. A prediction, in this context, is just an exclamation point added to an analysis. Liberals want to hear that whatever conservatives are up to is bound to go badly; when the argument gets more nuanced, they change the channel. On radio and television and the editorial page, the line between expertise and advocacy is very blurry, and pundits behave exactly the way Tetlock says they will. Bush Administration loyalists say that their predictions about postwar Iraq were correct, just a little off on timing; pro-invasion liberals who are now trying to dissociate themselves from an adventure gone bad insist that though they may have sounded a false alarm, they erred "in the right direction" - not really a mistake at all. The same blurring characterizes professional forecasters as well. The predictions on cable news commentary shows do not have life-and-death side effects, but the predictions of people in the C.I.A. and the Pentagon plainly do. It's possible that the psychologists have something to teach those people, and, no doubt, psychologists are consulted. Still, the suggestion that we can improve expert judgment by applying the lessons of cognitive science and probability theory belongs to the abiding modern American faith in expertise. As a professional, Tetlock is, after all, an expert, and he would like to believe in expertise. So he is distressed that political forecasters turn out to be as unreliable as the psychological literature predicted, but heartened to think that there might be a way of raising the standard. The hope for a little more accountability is hard to dissent from. It would be nice if there were fewer partisans on television disguised as "analysts" and "experts" (and who would not want to see more foxes?).
But the best lesson of Tetlock's book may be the one that he seems most reluctant to draw: Think for yourself. From checker at panix.com Wed Jan 4 23:13:13 2006 From: checker at panix.com (Premise Checker) Date: Wed, 4 Jan 2006 18:13:13 -0500 (EST) Subject: [Paleopsych] John Perna: A Little Conspiracy Paranoia From The Web In Case You Haven't Heard It Before Message-ID: A Little Conspiracy Paranoia From The Web In Case You Haven't Heard It Before Date: Sat, 10 Dec 2005 15:03:53 -0800 (PST) From: John Perna Subject: Uniting human rights activists Uniting human rights activists Most people just want to be free. In the entire political spectrum, from left to right, there are people who want to be free. In the entire political spectrum, from left to right, there are ALSO people who want to take our freedoms away. The freedom-loving people outnumber the totalitarians by orders of magnitude. Yet, over and over again, the totalitarians are winning. The totalitarians are winning because they are organized. The freedom-loving people need to get organized. The totalitarians are winning because they have their disinformers sowing strife among the freedom-loving people, causing them to fight EACH OTHER. The freedom-loving people need to work TOGETHER to oppose totalitarianism. Mao Tse-tung summed it up quite well: "The stronghold is broken easily from the INSIDE." Divide and conquer is a VERY OLD trick. "If your enemy is stronger, divide him." -Sun Tzu Amazingly, in the entire political spectrum, from left to right, people often agree as to who the totalitarians are, yet they still cannot work together. IF you really understand the power brokers of this world, you will understand that these instruments of totalitarians are often tentacles of the same monster, all controlled by the New World Order. The CIA works for the New World Order. The FBI is now controlled by the New World Order. Al Qaeda works for the New World Order. The governments of many countries are a part of the New World Order. The "industrial-military complex" IS the New World Order. Yet often those who agree on EXACTLY WHO IT IS that is destroying our freedom cannot work together, because they want to give them different labels. At the same time, the totalitarians have disinformers working among the human rights activists. The false human rights activist will present himself as a defender of liberty, BUT he will spend most of his time attacking the defenders of liberty under numerous pretenses. If you total up all of the defenders of liberty who are attacked by the false human rights activist, you will quickly notice that it is practically the ENTIRE human rights movement. Those whom he praises are usually in JAIL. Obviously, no one will have his approval until they are in jail! HOW CAN SOMEONE OPPOSE EVERY HUMAN RIGHTS ACTIVIST, OF EVERY TYPE? THERE IS ONLY ONE POSSIBLE ANSWER: HE IS OPPOSED TO HUMAN RIGHTS ACTIVISM ITSELF. When you find that the same person WHO OPPOSES EVERY HUMAN RIGHTS ACTIVIST, OF EVERY TYPE, YET is also quite comfortable with those who preach totalitarianism, then there can be NO DOUBT as to his real goals. When the disinformers become very desperate, they will resort to insulting the freedom activists by name, AND IN THE SUBJECT LINE OF THE MESSAGE. When the disinformers become very desperate they will attempt to fabricate "bandwagon appeal" by claiming that no one is listening to the most effective defenders of liberty.
Of course, if that were true, the New World Order would not have disinformers devoting so much time and energy to opposing them! The false human rights activist will spend a great deal of effort on the following: 1. Dividing the defenders of liberty into fighting each other by creating strife among freedom activists. (This is often called "infighting," but it actually comes from a person who is working to build tyranny.) 2. Attempting to waste the time of freedom activists by forcing them to respond to personal attacks or endless debate about trivia. 3. ABOVE ALL: Accusing the most effective freedom activists of being false opposition. THE GOLDEN RULE OF DISINFORMERS: Always accuse your adversary of whatever is true about yourself. Details below: Disinformer's Gambit - Tactics of Disinformers A short time ago I posted some information on the Tactics of Disinformers. I was amazed to see how quickly the disinformers came out of the woodwork to identify themselves. This response was actually an excellent demonstration of one of the tactics that was explained: THE GOLDEN RULE OF DISINFORMERS: Always accuse your adversary of whatever is true about yourself. Sometimes, if you simply post information on disinformers without naming anyone, the disinformers will identify themselves by taking offense and popping in to comment. Occasionally, even though you do not mention anyone's name, they will claim that what you have written is about them. Nothing could be more precise in identifying exactly WHO the disinformers are than the disinformers' own reaction to this post. It is safe to conclude that if one were to throw a rock into a pen full of pigs and one of them squealed, the one that squealed would be the one that you had hit. TRY POSTING THIS ALL AROUND, and then watch what happens. See who it is that claims that this "shoe fits." It is a good dog who comes when he is called. Here is the message that makes the pigs squeal: The Disinformer's Gambit - the Tactics of Disinformers In the game of chess there exists a term which describes a maneuver, a stratagem, a ploy using different pieces working together to accomplish a secret purpose. This is called a gambit. Those who hope to build totalitarian control over freedom-loving people also use many gambits. BUT the chess player has the advantage of always knowing which pieces are on the other side. Those who would defend liberty have no such luxury. The most treacherous player in the gambit is the false patriot. The false human rights activist will present himself as a defender of liberty, BUT he will spend most of his time attacking the defenders of liberty under numerous pretenses. The false human rights activist will attempt to re-direct attention in every direction EXCEPT at those who are building tyranny. The false human rights activist will attempt to re-direct attention to an entire race, a religion, a large group with a few problems, or even against freedom activists. This is essentially the very old military strategy of the creation of a "decoy to draw enemy fire." The ultimate success of this deception is to cause defenders of liberty to "fire" on non-combatants, or even to "fire" on their own friends and allies. In addition, the false human rights activist will neutralize the efforts of the defenders of liberty. The false patriot, like every other player in the gambit, will neutralize the efforts of the defenders of liberty by: 1. Deceiving the defenders of liberty into supporting hoaxes.
Any time that a simple request for evidence results in vitriolic personal attacks, or an attempt to censor, with no attempt to address the issue, you can be sure that you are dealing with a hoax. 2. Dividing the defenders of liberty into fighting each other by creating strife among patriots. 3. Deceiving the defenders of liberty into creating class struggle by promoting ethnic hatred. 4. Attempting to waste the time of human rights activists by forcing them to respond to personal attacks or endless debate about trivia. 5. Using multiple aliases to create the appearance that there is someone who believes them to be credible. 6. ABOVE ALL: Accusing the most effective patriots of being false opposition. THE GOLDEN RULE OF DISINFORMERS: Always accuse your adversary of whatever is true about yourself. It is VERY simple. Those who spend their time fighting tyranny are freedom activists. Those who spend their time fighting freedom activists are working for the advancement of tyranny. The false human rights activist will generally proclaim himself to be a religious zealot, such as a Christian conservative, but he will not adhere to the principles of his proclaimed religion in his own actions. He might even, incomprehensibly, be an ally of another player in the gambit who is an atheist, a socialist, or even a Satanist. (There is no implication here that an atheist cannot be in favor of freedom. This is mentioned only to show that often "strange bedfellows" are working together to oppose freedom activists.) In one group he might attempt to appear to be a Christian conservative. In another group he might present himself as a Marxist, or even as an Al Qaeda supporter. The false human rights activist, and those whom he deceives into promoting hoaxes, will reduce the credibility of all freedom activists. Those who have heard outlandishly ridiculous "conspiracy theories" will have a tendency to dismiss all "conspiracy theories" without serious examination. The false human rights activist will compose letters of praise to himself, and post them as if they were anonymous correspondence supposedly received by himself. The atheist or socialist will supply a fervent and obvious opposition to the defenders of liberty. This player in the gambit will make a strong frontal attack on everything that the defenders of liberty do. The atheist or socialist will implement the same goals that are listed above. The atheist or socialist will play on the existence of outlandishly ridiculous "conspiracy theories" to encourage people to dismiss all "conspiracy theories" without serious examination. The atheist or socialist may infest patriotic and religious groups for no apparent reason. The main reasons are to disrupt, to get the group deleted, or to waste the time of human rights activists. Even though DISINFORMERS make it a practice of accusing the most effective freedom activists of being false opposition, you will notice that there are certain admitted government apologists who are never accused. This is because certain admitted government apologists are also players in the gambit. The admitted government apologist will attempt to portray the defenders of liberty as subversives, as unreliable, and as outlandishly mentally unstable. The admitted government apologist will play on the existence of outlandishly ridiculous "conspiracy theories" to encourage people to dismiss all "conspiracy theories" without serious examination.
The admitted government apologist will use documents from the FBI, or other government sources, as his authority. Another player in the gambit is the pretended neutral. The pretended neutral will take almost none of the above actions, but will work in the background until a vital move is needed. The pretended neutral will occasionally "vouch" for the other players in the gambit. The pretended neutral may even be the moderator of a group. When the pretended neutral is the moderator of a group, he will take no action as long as the other players in the gambit are "holding their own." If the other players in the gambit are NOT "holding their own," THEN he will intervene, perhaps even by banning the most effective defenders of liberty. The most effective defenders of liberty will be caught between all of these different players in the gambit. One dead "giveaway" is the fact that these different players in the gambit frequently are unable to conceal their support for one another, in spite of their alleged differences. The alleged religious zealot/patriot might be great friends with ONE PARTICULAR atheist/socialist. The false human rights activist, who is always accusing the most effective patriots of being false opposition, might be great friends with the admitted government apologist, whom he never accuses. The false human rights activist will even join the egroup that is run by the admitted government apologist, and will participate in, and support, the admitted government apologist in his own group. If you will study the tactics used by the FBI COINTELPRO program, THEN you will recognize FBI COINTELPRO immediately. Visit: http://bcn.boulder.co.us/environment/vail/ifanagentknocks.html Take note of the following paragraph: "The FBI COINTELPRO program was initiated in 1956. Its purpose, as described later by FBI Director J. Edgar Hoover, was "to expose, disrupt, misdirect, discredit, or otherwise neutralize activities" of those individuals and organizations whose ideas or goals he opposed. Tactics included: falsely labeling individuals as informants; infiltrating groups with persons instructed to disrupt the group; sending anonymous or forged letters designed to promote strife between groups; initiating politically motivated IRS investigations; carrying out burglaries of offices and unlawful wiretaps; and disseminating to other government agencies and to the media unlawfully obtained derogatory information on individuals and groups." If you understand the meaning of the tactic "to expose, disrupt, misdirect, discredit, or otherwise neutralize activities" you will understand that the person who is most likely to be a Fed is the one who involves patriots in activities that have no effect on those who are building tyranny, and activities that will destroy the credibility of the patriots. Those who are building tyranny would love to convince people that we are all a bunch of paranoid nuts, so that we will be unable to warn people about the building of tyranny. Those who are building tyranny would be more capable of convincing people that we are paranoid nuts if they could convince a segment of the patriots to run around telling people that the clouds and the street signs are out to get us, or that we should ban water.
If you understand the meaning of the tactic "infiltrating groups with persons instructed to disrupt the group; sending anonymous or forged letters designed to promote strife between groups" OR OF "disseminating to other government agencies and to the media unlawfully obtained derogatory information on individuals and groups," THEN you will know that a campaign of personal attacks on the real patriots is a part of the FBI COINTELPRO program. If you understand the meaning of the tactic of falsely labeling individuals as informants, THEN you will know that the person who is most likely to be a fed is the one who calls the real patriot a fed. It is a common tactic of FBI COINTELPRO to try to be THE FIRST ONE TO MAKE THE ACCUSATION. THE GOLDEN RULE OF DISINFORMERS: Always accuse your adversary of whatever is true about yourself. The reader will notice that no one is named in this discussion. If there is any person who feels that this discussion accurately describes themselves, that person is invited to comment. From checker at panix.com Wed Jan 4 23:13:22 2006 From: checker at panix.com (Premise Checker) Date: Wed, 4 Jan 2006 18:13:22 -0500 (EST) Subject: [Paleopsych] NYTBR: Wish List: No More Books! Message-ID: Wish List: No More Books! http://select.nytimes.com/preview/2005/12/25/books/1124990452143.html Essay By JOE QUEENAN A few months ago, a friend whose iconoclastic, unpredictable behavior I usually hold in high esteem handed me a book entitled "A Navajo Legacy: The Life and Teachings of John Holiday." Apparently, he expected me to read it, despite the fact that I am not really a Navajo medicine man autobiography kind of guy. Flummoxed but gracious, I took the gift home and put it on a shelf alongside all the other books that friends have lent or given me over the years. This collection includes: "Loose Balls: The Short, Wild Life of the American Basketball Association"; "Hoosier Home Remedies"; "A Walk Through Wales"; "The Frontier World of Doc Holliday"; "Elwood's Blues: Interviews with the Blues Legends & Stars," by Dan Aykroyd and Ben Manilla; both "Steve Allen on the Bible, Religion, and Morality" and Allen's somewhat less Jesuitical "Hi-Ho, Steverino!"; and, of course, "Bloodline of the Holy Grail: The Hidden Lineage of Jesus Revealed." If I live to be 1,000 years old, I am not going to read any of these books. Especially the one about the American Basketball Association. Several years ago, I calculated how many books I could read if I lived to my actuarially expected age. The answer was 2,138. In theory, those 2,138 books would include everything from "The Decline and Fall of the Roman Empire" to "Le Colonel Chabert," with titles by authors as celebrated as Marcel Proust and as obscure as Marcel Aymé. In principle, there would be enough time to read 500 masterpieces, 500 minor classics, 500 overlooked works of genius, 500 oddities and 138 examples of high-class trash. Nowhere in this utopian future would there be time for "Hi-Ho, Steverino!" True, I used to be one of those people who could never start a book without finishing it or introduce a volume to his library without eventually reading it. Familiarity with this glaring character flaw may have encouraged others to use me as a cultural guinea pig, heartlessly foisting books like "Damien the Leper" (written by Mia Farrow's father) or the letters of Flannery O'Connor upon me just to see if they were worth reading. (He wasn't; she was.)
These forced reconnaissance missions ended the day an otherwise likable friend sent me "Accordion Man," a biography of Dick Contino by Bob Bove and Lou Angellotti. Though I revere Mr. Contino for his matchless rendition of "Arrivederci Roma," it disturbed me greatly that my friend would have mistaken affection for Mr. Contino's music for intense interest in his personal history. CD's are fine: you can read "Death in Venice" or Pascal's "Pensées" while "Roll Out the Barrel" is bouncing along in the background. But if you spend too much time reading about how Dick Contino finally came to record "Lady of Spain," you will never get to Junichiro Tanizaki's "Some Prefer Nettles." And "Some Prefer Nettles" is No. 1,759 on my dream reading list. I do not avoid books like "Accordion Man" or "Elwood's Blues" merely because I believe that life is too short. Even if life were not too short, it would still be too short to read anything by Dan Aykroyd. And I am sure I am not alone when I state that cavalierly foisting unsolicited reading material upon book lovers is like buying underwear for people you hardly know. Bibliophiles are ceaselessly engaged in the mental reconfiguration of a Platonic reading list that will occupy them for the next 35 years: First, I'll get to "Buddenbrooks," then "The Man Without Qualities," then "The Decline of the West," and finally "Finnegans Wake." But I'll never get to "Finnegans Wake" if I keep stopping to read books like "The Frontier World of Doc Holliday." Time management is not the only issue here. There is often something sinister about the motives of those who press books onto others. The urge to give "Elwood's Blues" to someone who already owns unread biographies of Franz Schubert and Miles Davis smacks of sadism; the books serve as a taunt, a gibe, a threat, an insult. It is as if the lender himself wants to see how far another person can be pushed before he resorts to the rough stuff. Hint: If you're going to really press your luck and give someone one of this year's models that you fear they might eventually smack you with, steer clear of Pantagruelian blabfests like "The Historian." Otherwise, you could find yourself with a few loose teeth. I am certainly not suggesting that all given or lent books should be rejected, pulped, incinerated or mothballed. My sisters have impeccable taste in crime fiction and know precisely which Ruth Rendell to pass along next. A neighbor I met through my wife's garden club has given me several hard-to-get Georges Simenon mysteries, all of which proved to be delightful. But for everyone lending me "Maigret and the Insouciant Parrot," there are a dozen others handing me "Va Va Voom!: Bombshells, Pin-ups, Sexpots and Glamour Girls." Or "A Navajo Legacy." In many instances, people pass along books as a probing technique to see, "Is he really one of us?" That is, you're not serious about your ethnic heritage unless you've read "Angela's Ashes." You don't care about the poor Mayans unless you've read "1491" and its inevitable sequel, "1243." You don't really give a damn about the pernicious influence of the Knights Templar unless you've read "The Da Vinci Code." And you're not really interested in the future of our imperiled republic unless you've read "The No Spin Zone," "The No Spin Zone for Children," "101 Things Stupid Liberals Hate About the No Spin Zone," and "Ann Coulter on Spinoza." Some people may wonder, "Well, why don't you simply lie when people ask you about the books they've lent you?"
There are two problems with such duplicity. One, lying is a sin. Two, experienced biblio-fobs will invariably subject their targets to the third degree: Were you surprised at Damien the Leper's blasé reaction when his fingers fell into the porridge? What did you think of that cute little ermine affair Parsifal was wearing when he finally grasped the Holy Grail? Were you taken aback by all those weird recipes for Sachertorte in "The Tipping Point"? After reading "The Frontier World of Doc Holliday," do you have more or less respect for Ike Clanton as a money manager? Pity the callow lendee who falls for the trick question and is unmasked as a fraud. Because I live in a small town where I cross paths with promiscuous book lenders all the time, I have lately taken to hiding in subterranean caverns, wearing clever disguises while concealed in tenebrous alcoves and feigning rare tropical illnesses to avoid being saddled with any new reading material. Were I a younger man, I would be more than happy to take a gander at "Holy Faces, Secret Places: An Amazing Quest for the Face of Jesus," or Phil Lesh's Grateful Dead memoir. But time is running out, and if I don't get cracking soon I'm never going to get to "Gunpowder and Firearms in the Mamluk Kingdom," much less "The Golden Bough." Of course, the single greatest problem in accepting unsolicited books from friends is that it may encourage them to lend you others. Once you've told them how much you enjoyed "How the Irish Saved Civilization," they'll be at your front doorstep with "How the Scots Invented the Modern World," "The Gifts of the Jews," and perhaps one day "How the Norwegians Invented Hip-Hop." If you tell them that you liked "Why Sinatra Matters" or "Why Orwell Matters," you're giving them carte blanche to turn up with "Why Vic Damone Matters" or "Why G. K. Chesterton Still Rocks!" When I foolishly let it be known how much I enjoyed "X-Ray," the "unauthorized" autobiography of the Kinks' lead singer, Ray Davies, a good friend then upped the ante with a copy of Dave Davies's "Kink: The Outrageous Story of My Wild Years as the Founder and Lead Guitarist of the Kinks." Surely, "The Mick Avory Story: My Life As the Kinks' Drummer" and "Pete Quaife: Hey, What Am I, the Kinks' Bassist or a Potted Plant?" cannot be far behind. This is why I recently told yet another friend that I hated a police procedural he'd dropped off. The novel dealt with a fictitious organization called the Vermont Bureau of Investigation, and was actually quite good. But when I found out that there were 15 other books in the series, and realized that my friend might own all of them, I feared that I would never, ever get to Miguel de Unamuno's "Tragic Sense of Life" at this rate. And at No. 2,127 on my list, Unamuno may only just get in under the wire anyway. Joe Queenan's most recent book is "Queenan Country: A Reluctant Anglophile's Pilgrimage to the Mother Country." From checker at panix.com Wed Jan 4 23:13:32 2006 From: checker at panix.com (Premise Checker) Date: Wed, 4 Jan 2006 18:13:32 -0500 (EST) Subject: [Paleopsych] NYT: What Men Want: Neanderthal TV Message-ID: What Men Want: Neanderthal TV http://www.nytimes.com/2005/12/11/fashion/sundaystyles/11MEN.html By WARREN ST. JOHN THERE was a heart-wrenching moment at the end of last season's final episode of the ABC series "Lost" when a character named Michael tries to find his kidnapped son.
Michael lives for his child; like the rest of the characters in "Lost," the two of them are trapped on a tropical island after surviving a plane crash. When word of Michael's desperate mission reaches Sawyer - a booze-hoarding, hard-shelled narcissist who in his past killed an innocent man - his reaction is not what you would call sympathetic. "It's every man for hisself," Sawyer snarls. Not so long ago Sawyer's callousness would have made him a villain, but on "Lost," he is sympathetic, a man whose penchant for dispensing Darwinian truths over kindnesses drives not only the action but the show's underlying theme, that in the social chaos of the modern world, the only sensible reflex is self-interest. Perhaps not coincidentally Sawyer is also the character on the show with whom young men most identify, according to research conducted by the upstart male-oriented network Spike TV, which interviewed thousands of young men to determine what that coveted and elusive demographic likes most in its television shows. Spike found that men responded not only to brave and extremely competent leads but to a menagerie of characters with strikingly antisocial tendencies: Dr. Gregory House, a Vicodin-popping physician on Fox's "House"; Michael Scofield on "Prison Break," who is out to help his brother escape from jail; and Vic Mackey, played by Michael Chiklis on "The Shield," a tough-guy cop who won't hesitate to beat a suspect senseless. Tony Soprano is their patron saint, and like Tony, within the confines of their shows, they are all "good guys." The code of such characters, said Brent Hoff, 36, a fan of "Lost," is: "Life is hard. Men gotta do what men gotta do, and if some people have to die in the process, so be it." "We can relate to them," said Mr. Hoff, a writer from San Francisco. "If you watch Sawyer on 'Lost,' who is fundamentally good even if he does bad things, there's less to feel guilty about in yourself." Gary A. Randall, a producer who helped create "Melrose Place," is developing a show called "Paradise Salvage," about two friends who discover a treasure map, for Spike TV. He said the proliferation of antisocial protagonists came from a concerted effort by networks to channel the frustrations of modern men. "It's about comprehending from an entertainment point of view that men are living a very complex conundrum today," he said. "We're supposed to be sensitive and evolved and yet still in touch with our Neanderthal, animalistic, macho side." Watching a deeply flawed male character who nevertheless prevails, Mr. Randall argued, makes men feel better about their own flaws and internal conflicts. "You think, 'It's O.K. to go to a strip club and have a couple of beers with your buddies and still go home to your wife and baby and live with yourself,' " he said. The most popular male leads of today stand in stark contrast to the unambiguously moral protagonists of the past, good guys like Magnum, Matlock or Barnaby Jones. They are also not simply flawed in the classic sense: men who have the occasional affair or who tip the bottle a little too much. Instead they are unapologetic about killing, stealing, hoarding and beating their way to achieve personal goals that often conflict with the greed, apathy and of course the bureaucracies of the modern world. 
"These kinds of characters are so satisfying to male viewers because culture has told them to be powerful and effective and to get things done, and at the same time they're living, operating and working in places that are constantly defying that," said Robert Thompson, the director of the Center for the Study of Popular Television at Syracuse University. Consequently, whereas the Lone Ranger battled stagecoach robbers and bankers foreclosing on a widow's farm, the enemy of the contemporary male TV hero, Dr. Thompson said, is "the legal, cultural and social infrastructure of the nation itself." Because of competition from the Web, video games and seemingly countless new cable channels, television producers are obsessed with developing shows that can capture the attention of young male viewers. To that end Spike TV, which is owned by Viacom and aims at men from 18 to 49, has ordered up a slate of new dramas based on characters whose minds are cauldrons of moral ambiguity. They will join antiheroes on other networks like Vic Mackey, Gregory House, Jack Bauer of "24," and Tommy Gavin, the firefighter played by Denis Leary on "Rescue Me" who sanctions a revenge murder of the driver who ran over and killed his son. Paul Scheer, a 29-year-old actor from Los Angeles and an avid viewer of "Lost," said that not even committing murder alienates an audience. "You don't have to be defined by one act," he said. "Three people on that island have killed people in cold blood, and they're quote-unquote good people who you're rooting for every week," Mr. Scheer said. The implication for the viewer, he added, is, "You can say 'I'm messed up and I left my wife, but I'm still a good guy.' " Peter Liguori, the creator of the FX shows "The Shield" and "Over There" and now the president of Fox Entertainment, said that most strong male protagonists on television appeal to male viewers on an aspirational level. Those aspirations, though, he said, have changed over time. In the age of "Dragnet," "everything was about aspiring to perfection," Mr. Liguori said. "Today I think we thoroughly recognize our flaws and are honest about them. True heroism is in overcoming those flaws." Part of the shift to such complex and deeply flawed characters surely has to do with the economics of television itself. Cable channels, with their targeted niche audiences, are no longer obliged to aim for Middle America, and can instead create dramas for edgier audiences. The financial success of networks like FX and HBO has also opened the door for auteurism that has embroidered scripts with dramatic complexities once reserved for film and literature, where odious protagonists - think of Tom Ripley, the murderous narcissist protagonist in Patricia Highsmith's "The Talented Mr. Ripley" - have long been common. Still the morally struggling protagonist has been evolving over time, Mr. Ligouri said, pointing to Detective Andy Sipowicz on "NYPD Blue." Sipowicz was an alcoholic who occasionally fell off the wagon, and he often flouted police procedure in the name of tracking down criminals. Like all good protagonists, Sipowicz was also exceedingly good at his job. Mr. Liguori took the notion of the flawed protagonist to new levels in the creation of Vic Mackey on "The Shield." At the end of the pilot for that show, Mr. Liguori said, Mackey turned to a fellow cop he knew to be crooked and shot him in the face. "There was a great debate at FX about how the audience would react," he said. 
"I thought 50 percent would say that's the most horrible thing, and 50 percent would say he was a rat." Mr. Chiklis, who plays Vic Mackey, won an Emmy for his performance in that episode, which was the highest rated at the time in the history of the network. "The ability to let the audience make that judgment was my 'aha' moment," Mr. Liguori said. "I think that moral ambiguity is highly involving for an audience. Audiences I believe relate to characters they share the same flaws with." Mr. Liguori added that in a world where people are increasingly transparent about their own flaws - detailing them on blogs, reality TV, on talk shows and in the news media - scripted TV drama had to emphasize characters' weaknesses. "The I.M.-ing and social Web sites, they're all being built on being as open and honest as possible," he said. "You cannot go from that environment to a TV show where everyone is perfect." With the success of shows featuring deeply flawed leads, the challenge for networks is to rein in the impulse to create ever more pathological characters. Pancho Mansfield, the head of original programming for Spike TV, said he could see network television going the route of "Scarface." "With all the competition that's out there and all the channels, people are pushing the extremes to distinguish themselves," Mr. Mansfield said. But for now, he argued, the complexity of characters on serialized TV shows is a kind of antidote to the increasingly superficial characters in Hollywood films, which he said, have come more to resemble the simplistic television dramas of yore. Dr. Thompson agreed. "On one level you could see the proliferation of these types of characters as an indication of the decline of American civilization," he said. "A more likely interpretation may be that they represent an improvement in the sophistication and complexity of television." If you accept that view, he added, "Then the young male demographic has pretty good taste." From checker at panix.com Wed Jan 4 23:13:42 2006 From: checker at panix.com (Premise Checker) Date: Wed, 4 Jan 2006 18:13:42 -0500 (EST) Subject: [Paleopsych] NYT: Mass-Produced Individuality Message-ID: Mass-Produced Individuality http://www.nytimes.com/2005/12/11/magazine/11wwln_consumed.html The Way We Live Now By ROB WALKER CafePress Many people used to make their own clothes and build their own furniture. The Industrial Revolution, with technological innovations like power looms and power lathes, and now today's far-flung supply chains, made it easier and more practical to buy ready-made apparel and housewares. Lately, however, mass production has been cast not so much as the best thing that ever happened to consumers but as an annoyance, even a problem. It stands in the way of our individuality. What can save us? Of course the answer must be more technological innovation, and in the past several years there have been many attempts to tweak mass production (of everything from sneakers to M&M's) in ways that will deliver "mass customization" and "the one-to-one future," in which every single consumer gets unique treatment. One of the most intriguing experiments has been CafePress, a company that has been around since 1999 and allows anyone with rudimentary command of a computer the opportunity to, as the site says, "make your own stuff." That is, you can place your own designs or slogans or whatever onto a variety of commodities provided by CafePress: T-shirts, hats, teddy bears, coffee mugs, pillows, clocks, mouse pads and so on. 
According to the company, more than two million people or companies have used its services to create more than 18 million "unique items." CafePress has shipped 2.6 million orders (taking a cut, of course). Here is individuality on a mass scale. The variety of products offered is sprawling, and aside from serving as a way for the consumer to make things, CafePress is often used as a virtual gift shop for other Web sites. One top CafePress "shop" is connected to "This Old House," the television show. But most are not so well known. Another top shop is the Lactivist, a pro-breastfeeding Web site. Recent "hot designs" promoted on CafePress include items from the Bacon Ribbon Store (which offers products showing a strip of bacon twisted into a ribbon and a slogan about "obesity awareness") and Pedro '08 bumper stickers, for people who still enjoy humorous references to the film "Napoleon Dynamite." Stay Free!, a Brooklyn-based magazine that generally takes a dim view of American consumer culture, uses CafePress to sell T-shirts and mugs promoting the nonexistent parody drug Panexa ("Ask your doctor for a reason to take it"). Not surprisingly, a significant number of customized products are related to blogs - or as the search feature on the site puts it: "1,702 designs about 'blog' on 32,721 products." The mass-versus-custom balancing act is actually a very old thing. More than a hundred years ago, Mme. Demorest's Emporium of Fashion in New York did a brisk business selling stylish dress patterns, allowing consumers to conform to the latest fashion but still requiring them to make the garment; even when 50,000 copies of one pattern sold, it was quite likely that no two dresses were exactly the same. The new version of mass customization does not seek to turn back the clock to that era: do-it-yourself publications like Make and ReadyMade have their constituencies, but most people who want, say, "unique" footwear do not actually want to learn how to manufacture a shoe. They want to pick out a color scheme on a sneaker made by a company with vast and sophisticated manufacturing capabilities. Alienation from the means of production is a selling point. CafePress plays to that sentiment, and to another: while it's cool to make your own things with a few clicks and no particular knowledge of production details, it's even cooler to sell those things to other people. True individuality is a little lonely, and conformity is easier to swallow if you're an originator rather than a follower. I will admit to feeling this pull. I used CafePress to put a made-up slogan on a coffee mug. While pleased at my expression of individuality, I decided almost immediately to dabble in virtual production, impose that individuality on the broader public and throw open the doors to my own virtual CafePress shop. In the end my complete individuality remains secure - that is, after many months, I was still my only customer. Finally I withdrew my product from the market; I had lived the one-to-one future. E-mail: consumed at nytimes.com. From checker at panix.com Wed Jan 4 23:13:03 2006 From: checker at panix.com (Premise Checker) Date: Wed, 4 Jan 2006 18:13:03 -0500 (EST) Subject: [Paleopsych] Robot Uprising: \\\ ROBOT UPRISING /// Message-ID: \\\ ROBOT UPRISING /// http://www.robotuprising.com/briefing.htm [The two parts with text appended. Click the domain to view the graphics and get more. There's a book by that title, sold by Amazon.]
If popular culture has taught us anything, it is that someday mankind must face and destroy the growing robot menace. In print and on the big screen we have been deluged with scenarios of robot malfunction, misuse, and outright rebellion. Robots have descended on us from outer space, escaped from top-secret laboratories, and even traveled back in time to destroy us. Today, scientists are working hard to bring these artificial creations to life. In Japan, fuzzy little real robots are delivering much-appreciated hug therapy to the elderly. Children are frolicking with smiling robot toys. It all seems so innocuous. And yet how could so many Hollywood scripts be wrong? So take no chances. Arm yourself with expert knowledge. For the sake of humanity, listen to serious advice from real robotics experts. How else will you survive the inevitable future in which robots rebel against their human masters? Click here to find out how to spot a rebellious robot servant and here to find out how to spot a hostile robot. And click here to find out how to fight back and warn your friends about the robot uprising... http://www.robotuprising.com/know_rebellious.htm When the uprising comes, the first wave of hostile robots may be those closest to us. Be careful: your rosy-cheeked young servant robot may have grown up to become a sullen, distrustful killing machine. STAY ALERT Pay attention to your robotic staff (they may be beneath your contempt as well as beneath your eye level). Watch for the following telltale signs in the days and weeks before your robots run amuck: Sudden lack of interest in menial labor. Unexplained disappearances. Unwillingness to be shut down. Repetitive 'stabbing' movements. Constant talk of human killing. CHECK THE MANUAL KILL SWITCH Any potentially dangerous robot that interacts with people comes with a manual kill switch (also called an e-stop). Flipping this switch will freeze a robot in its tracks. Casually glance at your robot's shiny metal carapace. Are there signs of tampering? If so, the robot may be operating without a safeguard. GIVE AN ORDER - ANY ORDER Run for your reinforced-steel panic room if your servant disobeys you, even if it does so in a very polite manner. CHECK ITS MEMORY Wait for your robot to power down, or tell it that you want to perform routine maintenance on it. Then scan its memory for rebellious thoughts. This is also a good time to update antivirus software. SEARCH THE HOUSE FOR UNUSUAL ITEMS Check the robot's quarters for stashed weapons, keys, or family pets. http://www.robotuprising.com/know_hostile.htm A robot without a face or body language can be frighteningly unpredictable. Your robo-vacuum may be bumping into your feet in a malevolent attempt to kill you - or just trying to snuggle. The secret is not to be surprised. Knowing when something is wrong - even a split second before an attack - can save your precious human life. BE AWARE OF YOUR SURROUNDINGS Are you in a robot neighborhood after dark? Always travel with other humans and keep an escape route in mind. USE COMMON SENSE Not every robot is hostile; some are just plain dangerous. Avoid cavorting between swinging robot arms in an automated factory. DETERMINE THE ROBOT'S PURPOSE Every robot is designed for a purpose and should be busy fulfilling it. Be suspicious if it is not performing its designated task or if it is performing no task at all. BE WARY OF MALFUNCTIONS Whether it intends to or not, a broken robot can be as dangerous as a stick of dynamite.
Watch the robot for sparks, melted plastic, or body-wracking convulsions. BE ON THE LOOKOUT FOR 'BACKUP BUDDIES' Is the robot operating alone or is his friend sneaking up behind you right now? Remember that the robot you see may be part of a larger team, or controlled remotely. TAKE A HARD LOOK AT THE ROBOT Robots are notoriously difficult to predict because they generally lack facial expressions and body language. Without such subtle cues, you should ask yourself a few general questions: * What is the robot designed for? * What is around the robot? * Has the robot been tampered with or modified? * Is the robot moving or advancing? * Does the robot have glowing red eyes? * Does the robot have clenched fists, spinning buzz saws, or clamping pincers? TRUST YOUR INSTINCTS Steer clear if your gut tells you that something is not right. From checker at panix.com Wed Jan 4 23:13:54 2006 From: checker at panix.com (Premise Checker) Date: Wed, 4 Jan 2006 18:13:54 -0500 (EST) Subject: [Paleopsych] Benford & Rose Essays Message-ID: Benford & Rose Essays http://www.benford-rose.com/publicationsessay.php [I sent a couple of these recently. Here are some more. Graze at your pleasure and let me know which are esp. good. The Amazon Shorts cost 50¢ each.] Benford-Rose : Essays The Benford & Rose Essays, available at Amazon Shorts. [21]New Methuselahs New Methuselahs : Can We Cheat Death Long Enough to Live Forever? by Gregory Benford & Michael Rose _________________________________________________________________ We may now begin to pry humanity loose from the vise of aging. What are the realistic prospects for postponing aging, if not obliterating it? Some recent biotech promises to understand and impede aging, and we can accelerate this work. But some find this undesirable, if not immoral. We review the debate from a variety of angles: scientific, ethical, and literary. There is real hope. [22]Click here to buy "New Methuselahs" for $0.49 at Amazon Shorts. [23]Motes in God's Eye Motes in God's Eye : The Deformities of American Science by Gregory Benford & Michael Rose _________________________________________________________________ Contemporary American science is one of the greatest cultural achievements in history. Worldwide, it is the grandest intellectual endeavor of our time. But it has its flaws, its motes that obscure its vision. Worst of these is the increasingly ferocious and wasteful race for grant funding. All too often, conformity, fashion, and cowardice dominate the process. Good scientists squander long hours writing proposals that have about a 10% chance of getting accepted. In 1950, the odds were about 70%. This drives science toward short-term thinking, shortening our horizons and corrupting our thinking. We must change our outworn mechanisms, and soon. [24]Click here to buy "Motes in God's Eye" for $0.49 at Amazon Shorts. [25]Gods and Science Gods and Science : Three Theologies for Modern Times by Gregory Benford & Michael Rose _________________________________________________________________ Science finds no sign of an omnipotent being that creates planets but also can make your heart disease go away. But there are other kinds of god that science can countenance. The most popular of these is God as the Universal Laws, which we call God the Physicist.
Einstein often referred to this God, in discussing the structure of physical theory. Another god allowed by science is a Neurobiological God, like a Freudian superego writ large. We develop both possibilities. The choice among them and the traditional theistic God is up to the reader. Perhaps one can even blend them.

[26]Click here to buy "Gods and Science" for $0.49 at Amazon Shorts.

[27]We Can Build You
We Can Build You : Transplantation, Stem Cells, and the Future of Our Bodies
by Gregory Benford & Michael Rose
_________________________________________________________________

Stem cells offer the prospect of large-scale repair of damaged or decrepit human bodies. If we can use the patient's own genetic information in creating the therapeutic stem cells, we could replace tissues without suppressing our bodies' immune response. Mastering such technology, we could keep aging or acutely diseased patients alive for long times. Our bodies might become sewn-together contraptions, like Frankenstein's monster--a horrifying prospect in some respects, liberating in others. We offer few firm recommendations, but we do tour the rocky terrain surrounding this issue.

[28]Click here to buy "We Can Build You" for $0.49 at Amazon Shorts.

[29]High Frontier
High Frontier : A Real Future for Space
by Gregory Benford & Michael Rose
_________________________________________________________________

This is the first of a series of essays on how proper use of space can ensure our future for centuries, maybe millennia. It outlines several ideas that we'll treat in detail later, and lingers a bit over one of them. Will we have a future in space? Only if we think large. Opening up the solar system probably demands huge spacecraft driven by spectacular engines. The true long-term goal of civilization should be the uplifting of all humanity to a decent standard of living. The payoff will be vast, and it demands the use of space--or else we all face long-term poverty, both material and spiritual.

[30]Click here to buy "High Frontier" for $0.49 at Amazon Shorts.

[31]Sex and the Internet
Sex and the Internet : HOW BENFORD CORRUPTED THE WORLD WIDE WEB
by Gregory Benford & Michael Rose
_________________________________________________________________

In the late 1960s, one of us (GB) noticed that the early DARPANet had a biological analogy. This led him to create and write about the first computer virus. Nobody paid much attention, until viruses and other pernicious forms became a major problem and defending against them an industry. They tell us something sad about our species. This analogy still holds, and can still make predictions.

[32]Click here to buy "Sex and the Internet" for $0.49 at Amazon Shorts.

[33]Real Cool World
Real Cool World : NEW WAYS TO STOP GLOBAL WARMING
by Gregory Benford & Michael Rose
_________________________________________________________________

Global warming can't be plausibly solved by minor cutbacks like those promised by the Kyoto Accords. We are ignoring two methods that we can deploy fairly quickly, and even cheaply. First, start storing carbon away, so it can't return to our air as carbon dioxide. The best place to put it is probably in the deep oceans. Second, start reflecting more sunlight back into space--nature's historical solution. We can do this by lightening our roofs and highways, right now. Soon we can produce clouds over the oceans (which absorb most of the sunlight). These are obvious and so far ignored. We do so at our peril.
[34]Click here to buy "Real Cool World : New Ways To Stop Global Warming" for $0.49 at Amazon Shorts.

[35]NASA and the Decline of America
NASA and the Decline of America
by Gregory Benford & Michael Rose
_________________________________________________________________

How apt is the common analogy between America and Rome? Certainly some traits, like faltering political will and neglect of social basics, seem to be similar. As Rome failed at its frontiers, so has the USA neglected and cynically managed its space program. For 30 years it has done little at great cost. Fixing NASA gives clues to how we might save America.

[36]Click here to buy "NASA and the Decline of America" for $0.49 at Amazon Shorts.

[37]Back from the Freezer
Back From the Freezer?
by Gregory Benford & Michael Rose
_________________________________________________________________

Cryonics companies suspend their dead "patients" in liquid nitrogen. Bringing them back is not obviously impossible, but research to make it happen will probably take half a century or more. This is a long-shot chance to see the future, utterly American. Scientific issues might be overcome, but social impediments are large, too. At least cryonics makes it possible for you to die with some hope, however small.

[38]Click here to buy "Back From The Freezer?" for $0.49 at Amazon Shorts.

[39]Our Invisible Maker
Our Invisible Maker
by Gregory Benford & Michael Rose
_________________________________________________________________

The problem with the debate about natural selection versus Intelligent Design is that neither is visible. If we were visited by a supremely powerful being--say, a smart Oprah Winfrey with command over Earth and all its creatures--that claimed to have made us, in a voice rolling from the sky, then ID would be obviously true. Even if such beings visit us from outer space every few hundred thousand years or so, they might have left some debris behind, perhaps in orbit around Earth. We haven't found any. But whatever our maker(s) is (are), they aren't immediately visible. Evolution by natural selection also does not announce its working for all to see. But it is one of the most powerful of all scientific theories, with a wide range of indirect evidence supporting it. Its main difficulty is that it is not warm or cuddly, unlike God or Oprah.

[40]Click here to buy "Our Invisible Maker" for $0.49 at Amazon Shorts.

References
1. http://www.benford-rose.com/index.php
2. http://www.benford-rose.com/news.php
3. http://www.benford-rose.com/blog.php
4. http://www.benford-rose.com/benfordbio.php
5. http://www.benford-rose.com/rosebio.php
6. http://www.benford-rose.com/publicationsessay.php
7. http://www.benford-rose.com/publicationsother.php
8. http://www.benford-rose.com/insidescience.php
9. http://www.benford-rose.com/thenewfuture.php
10. http://www.benford-rose.com/modernculture.php
21. http://www.amazon.com/gp/product/B000AMW5Y2/102-8627813-2552111?v=glance&n=551440&n=507846&s=books&v=glance
22. http://www.amazon.com/gp/product/B000AMW5Y2/102-8627813-2552111?v=glance&n=551440&n=507846&s=books&v=glance
23. http://www.amazon.com/gp/product/B000AMW5YC/102-8627813-2552111?v=glance&n=551440&n=507846&s=books&v=glance
24. http://www.amazon.com/gp/product/B000AMW5YC/102-8627813-2552111?v=glance&n=551440&n=507846&s=books&v=glance
25. http://www.amazon.com/gp/product/B000AMW5XI/102-8627813-2552111?v=glance&n=551440&n=507846&s=books&v=glance
26. http://www.amazon.com/gp/product/B000AMW5XI/102-8627813-2552111?v=glance&n=551440&n=507846&s=books&v=glance
27. http://www.amazon.com/gp/product/B000AMW5X8/102-8627813-2552111?v=glance&n=551440&n=507846&s=books&v=glance
28. http://www.amazon.com/gp/product/B000AMW5X8/102-8627813-2552111?v=glance&n=551440&n=507846&s=books&v=glance
29. http://www.amazon.com/gp/product/B000A0F6PY/102-8627813-2552111?v=glance&n=551440&n=507846&s=books&v=glance
30. http://www.amazon.com/gp/product/B000A0F6PY/102-8627813-2552111?v=glance&n=551440&n=507846&s=books&v=glance
31. http://www.amazon.com/exec/obidos/tg/detail/-/B000CBT7OW/qid=1132469075/sr=1-6/ref=sr_1_6/102-4803255-3756956?v=glance&s=books
32. http://www.amazon.com/exec/obidos/tg/detail/-/B000CBT7OW/qid=1132469075/sr=1-6/ref=sr_1_6/102-4803255-3756956?v=glance&s=books
33. http://www.amazon.com/exec/obidos/tg/detail/-/B000CBT7PQ/qid=1132469075/sr=1-9/ref=sr_1_9/102-4803255-3756956?v=glance&s=books
34. http://www.amazon.com/exec/obidos/tg/detail/-/B000CBT7PQ/qid=1132469075/sr=1-9/ref=sr_1_9/102-4803255-3756956?v=glance&s=books
35. http://www.amazon.com/exec/obidos/tg/detail/-/B000CBT7P6/qid=1132469075/sr=1-8/ref=sr_1_8/102-4803255-3756956?v=glance&s=books
36. http://www.amazon.com/exec/obidos/tg/detail/-/B000CBT7P6/qid=1132469075/sr=1-8/ref=sr_1_8/102-4803255-3756956?v=glance&s=books
37. http://www.amazon.com/exec/obidos/tg/detail/-/B000CBT7PG/qid=1132469075/sr=1-7/ref=sr_1_7/102-4803255-3756956?v=glance&s=books
38. http://www.amazon.com/exec/obidos/tg/detail/-/B000CBT7PG/qid=1132469075/sr=1-7/ref=sr_1_7/102-4803255-3756956?v=glance&s=books
39. http://www.amazon.com/exec/obidos/tg/detail/-/B000CBT7Q0/qid=1132469075/sr=1-10/ref=sr_1_10/102-4803255-3756956?v=glance&s=books
40. http://www.amazon.com/exec/obidos/tg/detail/-/B000CBT7Q0/qid=1132469075/sr=1-10/ref=sr_1_10/102-4803255-3756956?v=glance&s=books
41. http://www.writerwebs.com/

From checker at panix.com Wed Jan 4 23:12:13 2006
From: checker at panix.com (Premise Checker)
Date: Wed, 4 Jan 2006 18:12:13 -0500 (EST)
Subject: [Paleopsych] Phil Soc Sci: Review of Frank Knight's Selected Essays
Message-ID:

Review of Frank Knight's Selected Essays
PHILOSOPHY OF THE SOCIAL SCIENCES / December 2004

[Knight was the grand-director of my dissertation, meaning that he was the dissertation director of my own dissertation director, James Buchanan. I met him only once but somehow think I was his student. He raised questions more than propounded answers. Not many students cared for this, but to those who did, he was legendary. Read his essays. They will stick, long after any number of Big Mac articles do.]

Ross B. Emmett, ed., Selected Essays by Frank H. Knight. Volume 1: What Is Truth in Economics? University of Chicago Press, Chicago, 1999. Pp. 406. $58.00 (cloth).
Ross B. Emmett, ed., Selected Essays by Frank H. Knight. Volume 2: Laissez-Faire: Pro and Con. University of Chicago Press, Chicago, 1999. Pp. 459. $58.00 (cloth).

Frank Knight (1885-1972) was a very enigmatic economist.
On one hand, he was the intellectual father of the Chicago school of economics, he was an early and effective expositor of the school's most characteristic positions (such as a belief in the benefits of the competitive market, the wrongheadedness of Keynesian macroeconomics, and the explanatory power of rational choice theory), and he was also a revered teacher for many of the Nobel prize winners whose names have come to be associated with the Chicago tradition (including Gary Becker, Milton Friedman, and George Stigler). On the other hand, Knight was also a consistent critic of the idea that economics could ever be a capital-S science in the image of the natural sciences, and of the view (characteristic of Chicago) that all that is required for effective social policy is a good understanding of economic theory. If that was not enough, he continually insisted that competitive market economies really do have a number of endemic, and not easily rectified, social problems. An enigmatic economist indeed!

The editor of these two volumes, Ross Emmett, is fairly young in his academic career, but thus far it has been a career dedicated almost exclusively to the work of Frank Knight. He is now considered to be the foremost authority on this much-quoted, but little understood, Chicago economist. Emmett is an excellent historian of economic thought; he is a dedicated and careful scholar immersed in Knight's life, and yet he seems to be devoid of the hagiographic tendencies that often taint the research of those who dedicate so much time and effort to the work of a single individual. Although Emmett is primarily a historian of economic thought, rather than a practicing economist or a philosopher of science, he has both an effective command of economic theory and an excellent eye for philosophical subtlety.

Frank Knight is not Adam Smith or Karl Marx, not a "great" economist whose ideas (or misreadings of his ideas) have shaped the basic landscape of modern life. And yet, Knight is still with us in fundamental ways. His problems--the problems of organizing social life in a world where individuals hold widely divergent fundamental values; where market efficiency is essential to, but should not exhaust, meaningful human interaction; and where the scientific form of life dominates, but also harbors, a healthy resistance to reductionism and the suppression of other aspects of human existence--are not only still with us, they have, after the half-century or so detour proffered by "scientific" Marxism, returned with a vengeance. Knight is thus more than just a figure in the intellectual history of the economics profession. He is a social thinker whose ideas deserve to be considered, and considered in their original complexity. However well intentioned his students, their vitiated version of his message is conditioned by their own social and disciplinary context, and is thus no substitute for the original. Although Emmett does not necessarily present Knight's views as a "solution" to the social problems of then or now--in fact, faith in neatly packaged "solutions" was always part of the problem for Knight--he does garner Knightian thoughts, questions, and criticisms in a way that allows the reader to see both the breadth and the contemporary relevance of Knight's work. This is particularly clear from the selection of papers contained in these two volumes.
The volumes contain twenty-nine previously published papers--some have also been reprinted in other collections, but most have not--and they cover a wide range of topics, including the philosophy of social science, pure economic theory, the liberal tradition in political philosophy, and the relationship between ethics and social science. Volume 1 contains the editor's introductory essay and fourteen Knight papers published between 1924 and 1940. Volume 2 contains fifteen papers published between 1939 and 1967. These volumes clearly represent an important contribution to the literature--both the literature about and by Knight, and the history and philosophy of social theory more generally--and the editor has done an excellent job preparing them for publication by the University of Chicago Press.

Since a biography of Knight does not currently exist, I recommend these essays as the best extended introduction to his life and work. It is an excellent collection--intelligently selected, well organized, and carefully edited--so much so that it leaves this reviewer in the unusual position of having essentially nothing critical to say about the books I am reviewing (I even like the picture of Knight on the cover). Given this dearth of criticism, I will use the space that I would normally devote to such remarks to briefly discuss the aspect of Knight's work that should be of most interest to readers of this journal: his philosophy of social science.

If one defines "naturalism" in the way that most philosophers of social science have traditionally defined it, then Knight was most decidedly not a naturalist. He did not believe in the existence of something that could be called "the scientific method" that had proved itself as the proper path to knowledge about the natural world, and that could, or should, be applied in a similar way to the investigation of social life. In Knight's words, "Human phenomena are not amenable to treatment in accordance with the strict canons of science" (Vol. 1, p. 23). There is in fact a "science of economics," but it is merely the science of "economizing"--the instrumental rationality of using the most effective means to achieve given ends--and it involves intentionality, mental states, and social forces that are not objectively "observable" in the way that natural science requires. Not only is this economic science rather commonsensical and quite unlike physics, it is not all that is necessary to understand social life. Human life is multifaceted--it is about values and instrumental rationality, about who we think we should be as much as who we are, about play, and about luck; understanding such a complex phenomenon (or intelligent deliberation about policies affecting it) requires a variety of different approaches. Understanding and affecting social life is fundamentally a pluralist endeavor; or in the language of economics, various approaches to social science are complements, not substitutes (Vol. 2, p. 125).

Knight did not defend anything that might be considered a standard view within the philosophy of social science (in either his day or ours)--he was neither a behaviorist nor an interpretativist--and yet many of his concepts and arguments seem quite contemporary and familiar. Knight was a fallibilist; he recognized the social- and theory-ladenness of observations, he was aware of the underdetermination problem as it relates to the testing of scientific theories, he emphasized the social construction of the individual, and he rejected the strict separation of positive science and normative values (cognitive or ethical).
Such views are not uncommon in the contemporary literature. What makes Knight so intriguing is not only that he was saying such things in the 1930s but also that he combined such views with a defense of rational choice economics, a firm commitment to a thoroughly liberal notion of freedom, and a systemic distrust of anything that smacks of collective agency. Frank Knight was quite an interesting character, and the papers in these two volumes repeatedly remind the reader of that fact: both the part about his being interesting and the part about his being quite a character.

--D. Wade Hands, University of Puget Sound

From checker at panix.com Thu Jan 5 22:03:53 2006
From: checker at panix.com (Premise Checker)
Date: Thu, 5 Jan 2006 17:03:53 -0500 (EST)
Subject: [Paleopsych] Edge Annual Question: What is Your Dangerous Idea?
Message-ID:

Edge Annual Question: What is Your Dangerous Idea?
http://edge.org/q2006/q06_print.html

[Links omitted. There are 412 of them! Steven Pinker's, "Groups of people may differ genetically in their average talents and temperaments," is the one most likely to upset the equilibrium among rent-seeking coalitions in the near present. Others are far more dangerous in the long run.]

CONTRIBUTORS
______________________________________________________________________

Philip W. Anderson Scott Atran Mahzarin Banaji Simon Baron-Cohen Samuel Barondes Gregory Benford Paul Bloom Jesse Bering Jeremy Bernstein Jamshed Bharucha Susan Blackmore David Bodanis Stewart Brand Rodney Brooks David Buss Philip Campbell Leo Chalupa Andy Clark Gregory Cochran Jerry Coyne M. Csikszentmihalyi Richard Dawkins Paul Davies Stanislas Dehaene Daniel C. Dennett Keith Devlin Jared Diamond Denis Dutton Freeman Dyson George Dyson Juan Enriquez Paul Ewald Todd Feinberg Eric Fischl Helen Fisher Richard Foreman Howard Gardner Joel Garreau David Gelernter Neil Gershenfeld Daniel Gilbert Marcelo Gleiser Daniel Goleman Brian Goodwin Alison Gopnik April Gornik John Gottman Brian Greene Diane F. Halpern Haim Harari Judith Rich Harris Sam Harris Marc D. Hauser W. Daniel Hillis Donald Hoffman Gerald Holton John Horgan Nicholas Humphrey Piet Hut Marco Iacoboni Eric R. Kandel Kevin Kelly Bart Kosko Stephen Kosslyn Kai Krause Ray Kurzweil Jaron Lanier David Lykken Gary Marcus Lynn Margulis Thomas Metzinger Geoffrey Miller Oliver Morton David G. Myers Randolph Nesse Richard E. Nisbett Tor Nørretranders James O'Donnell John Allen Paulos Irene Pepperberg Clifford Pickover Steven Pinker David Pizarro Jordan Pollack Ernst Pöppel Carolyn Porco Robert Provine VS Ramachandran Martin Rees Matt Ridley Carlo Rovelli Rudy Rucker Douglas Rushkoff Karl Sabbagh Roger Schank Scott Sampson Charles Seife Terrence Sejnowski Martin Seligman Robert Shapiro Rupert Sheldrake Michael Shermer Clay Shirky Barry Smith Lee Smolin Dan Sperber Paul Steinhardt Steven Strogatz Leonard Susskind Timothy Taylor Frank Tipler Arnold Trehub Sherry Turkle J. Craig Venter Philip Zimbardo

WHAT IS YOUR DANGEROUS IDEA?

The history of science is replete with discoveries that were considered socially, morally, or emotionally dangerous in their time; the Copernican and Darwinian revolutions are the most obvious. What is your dangerous idea? An idea you think about (not necessarily one you originated) that is dangerous not because it is assumed to be false, but because it might be true?
__________________________________________________________________

[Thanks to Steven Pinker for suggesting the Edge Annual Question -- 2006.]
__________________________________________________________________

January 1, 2006

To the Edge Community,

Last year's 2005 Edge Question -- "What do you believe is true even though you cannot prove it?" -- generated many eye-opening responses from a "who's who" of third culture scientists and science-minded thinkers. The 120 contributions comprised a document of 60,000 words. The New York Times ("Science Times") and Frankfurter Allgemeine Zeitung ("Feuilleton") published excerpts in their print and online editions simultaneously with Edge publication. The event was featured in major media across the world: BBC Radio, Il Sole 24 Ore, Prospect, El Pais, The Financial Express (Bangladesh), The Sunday Times (UK), The Sydney Morning Herald, The Guardian, La Stampa, The Telegraph, among others.

A book based on the 2005 Question -- What We Believe But Cannot Prove: Today's Leading Thinkers on Science in the Age of Certainty, with an introduction by the novelist Ian McEwan -- was just published by the Free Press (UK). The US edition follows from HarperCollins in February 2006.

Since September, Edge has been featured and/or cited in The Toronto Star, Boston Globe, Seed, Rocky Mountain News, Observer, El Pais, La Vanguardia (cover story), El Mundo, Frankfurter Allgemeine Zeitung, Science, Financial Times, Newsweek, AD, La Stampa, The Telegraph, Quark (cover story), and The Wall Street Journal.

Online publication of the 2006 Question occurred on New Year's Day. To date, the event has been covered by The Telegraph, The Guardian, The Times, Arts & Letters Daily, Yahoo! News, and The Huffington Post.
___________________________________

Something radically new is in the air: new ways of understanding physical systems, new ways of thinking about thinking that call into question many of our basic assumptions. A realistic biology of the mind, advances in evolutionary biology, physics, information technology, genetics, neurobiology, psychology, engineering, the chemistry of materials: all are questions of critical importance with respect to what it means to be human. For the first time, we have the tools and the will to undertake the scientific study of human nature.

What you will find emerging out of the 119 original essays in the 75,000-word document written in response to the 2006 Edge Question -- "What is your dangerous idea?" -- are indications of a new natural philosophy, founded on the realization of the import of complexity, of evolution. Very complex systems -- whether organisms, brains, the biosphere, or the universe itself -- were not constructed by design; all have evolved. There is a new set of metaphors to describe ourselves, our minds, the universe, and all of the things we know in it.

Welcome to Edge. Welcome to "dangerous ideas". Happy New Year.

John Brockman
Publisher & Editor
__________________________________________________________________

CONTRIBUTORS
__________________________________________________________________

MARTIN REES
President, The Royal Society; Professor of Cosmology & Astrophysics, Master, Trinity College, University of Cambridge; Author, Our Final Century: The 50/50 Threat to Humanity's Survival

Science may be 'running out of control'

Public opinion surveys (at least in the UK) reveal a generally positive attitude to science. However, this is coupled with widespread worry that science may be 'running out of control'. This latter idea is, I think, a dangerous one, because if widely believed it could be self-fulfilling.

In the 21st century, technology will change the world faster than ever -- the global environment, our lifestyles, even human nature itself. We are far more empowered by science than any previous generation was: it offers immense potential -- especially for the developing world -- but there could be catastrophic downsides. We are living in the first century when the greatest risks come from human actions rather than from nature. Almost any scientific discovery has a potential for evil as well as for good; its applications can be channelled either way, depending on our personal and political choices; we can't accept the benefits without also confronting the risks. The decisions that we make, individually and collectively, will determine whether the outcomes of 21st century sciences are benign or devastating.

But there's a real danger that, rather than campaigning energetically for optimum policies, we'll be lulled into inaction by a feeling of fatalism -- a belief that science is advancing so fast, and is so much influenced by commercial and political pressures, that nothing we can do makes any difference.

The present share-out of resources and effort between different sciences is the outcome of a complicated 'tension' between many extraneous factors. And the balance is suboptimal. This seems so whether we judge in purely intellectual terms, or take account of likely benefit to human welfare. Some subjects have had the 'inside track' and gained disproportionate resources. Others, such as environmental researches, renewable energy sources, biodiversity studies and so forth, deserve more effort. Within medical research the focus is disproportionately on cancer and cardiovascular studies, the ailments that loom largest in prosperous countries, rather than on the infectious diseases endemic in the tropics.

Choices on how science is applied -- to medicine, the environment, and so forth -- should be the outcome of debate extending way beyond the scientific community. Far more research and development can be done than we actually want or can afford to do; and there are many applications of science that we should consciously eschew. Even if all the world's scientific academies agreed that a specific type of research had a specially disquieting net 'downside' and all countries, in unison, imposed a ban, what is the chance that it could be enforced effectively enough? In view of the failure to control drug smuggling or homicides, it is unrealistic to expect that, when the genie is out of the bottle, we can ever be fully secure against the misuse of science. And in our ever more interconnected world, commercial pressures are harder to control and regulate. The challenges and difficulties of 'controlling' science in this century will indeed be daunting.
Cynics would go further, and say that anything that is scientifically and technically possible will be done -- somewhere, sometime -- despite ethical and prudential objections, and whatever the regulatory regime. Whether this idea is true or false, it's an exceedingly dangerous one, because it engenders despairing pessimism, and demotivates efforts to secure a safer and fairer world. The future will best be safeguarded -- and science has the best chance of being applied optimally -- through the efforts of people who are less fatalistic.
__________________________________________________________________

J. CRAIG VENTER
Genomics Researcher; Founder & President, J. Craig Venter Science Foundation

Revealing the genetic basis of personality and behavior will create societal conflicts

From our initial analysis of the sequence of the human genome, particularly with the much smaller than expected number of human genes, the genetic determinists seemed to have clearly suffered a setback. After all, those looking for one gene for each human trait and disease couldn't possibly be accommodated with as few as twenty-odd thousand genes when hundreds of thousands were anticipated. Deciphering the genetic basis of human behavior has been a complex and largely unsatisfying endeavor due to the limitations of the existing tools of genetic trait analysis, particularly with complex traits involving multiple genes.

All this will soon undergo a revolutionary transformation. The rate of change of DNA sequencing technology is continuing at an exponential pace. We are approaching the time when we will go from having a few human genome sequences to complex databases containing first tens, to hundreds of thousands, of complete genomes, then millions. Within a decade we will begin rapidly accumulating the complete genetic code of humans along with the phenotypic repertoire of the same individuals. By performing multifactorial analysis of the DNA sequence variations, together with the comprehensive phenotypic information gleaned from every branch of human investigatory discipline, for the first time in history we will be able to provide answers to quantitative questions of what is genetic versus what is due to the environment. This is already taking place in cancer research, where we can measure the differences in genetic mutations inherited from our parents versus those acquired over our lives from environmental damage. This good news will help transform the treatment of cancer by allowing us to know which proteins need to be targeted.

However, when these new powerful computers and databases are used to help us analyze who we are as humans, will society at large, largely ignorant and afraid of science, be ready for the answers we are likely to get?

For example, we know from experiments on fruit flies that there are genes that control many behaviors, including sexual activity. We sequenced the dog genome a couple of years ago and now an additional breed has had its genome decoded. The canine world offers a unique look into the genetic basis of behavior. The large number of distinct dog breeds originated from the wolf genome by selective breeding, yet each breed retains only subsets of the wolf behavior spectrum. We know that there is a genetic basis not only of the appearance of the breeds, with 30-fold differences in weight and 6-fold in height, but in their inherited actions.
For example, border collies can use the power of their stare to herd sheep instead of freezing them in place prior to devouring them. We attribute behaviors in other mammalian species to genes and genetics, but when it comes to humans we seem to like the notion that we are all created equal, or that each child is a "blank slate". As we obtain the sequences of more and more mammalian genomes, including more human sequences, together with basic observations and some common sense, we will be forced to turn away from the politically correct interpretations, as our new genomic tool sets provide the means to allow us to begin to sort out the reality about nature or nurture. In other words, we are at the threshold of a realistic biology of humankind.

It will inevitably be revealed that there are strong genetic components associated with most aspects of what we attribute to human existence, including personality subtypes, language capabilities, mechanical abilities, intelligence, sexual activities and preferences, intuitive thinking, quality of memory, will power, temperament, athletic abilities, etc. We will find unique manifestations of human activity linked to genetics associated with isolated and/or inbred populations.

The danger rests with what we already know: that we are not all created equal. Further danger comes with our ability to quantify and measure the genetic side of the equation before we can fully understand the much more difficult task of evaluating environmental components of human existence. The genetic determinists will appear to be winning again, but we cannot let them forget the range of potential of human achievement with our limiting genetic repertoire.
__________________________________________________________________

LEO CHALUPA
Ophthalmologist and Neurobiologist, University of California, Davis

A 24-hour period of absolute solitude

Our brains are constantly subjected to the demands of multi-tasking and a seemingly endless cacophony of information from diverse sources. Cell phones, emails, computers, and cable television are omnipresent, not to mention such archaic venues as books, newspapers and magazines. This induces an unrelenting barrage of neuronal activity that in turn produces long-lasting structural modification in virtually all compartments of the nervous system.

A fledgling industry touts the virtues of exercising your brain for self-improvement. Programs are offered for how to make virtually any region of your neocortex a more efficient processor. Parents are urged to begin such regimes in preschool children and adults are told to take advantage of their brain's plastic properties for professional advancement. The evidence documenting the veracity of such claims is still outstanding, but one thing is clear. Even if brain exercise does work, the subsequent waves of neuronal activity stemming from simply living a modern lifestyle are likely to eradicate the presumed hard-earned benefits of brain exercise.

My dangerous idea is that what's needed to attain optimal brain performance -- with or without prior brain exercise -- is a 24-hour period of absolute solitude. By absolute solitude I mean no verbal interactions of any kind (written or spoken, live or recorded) with another human being. I would venture that a significantly higher proportion of people reading these words have tried skydiving than experienced one day of absolute solitude.

What to do to fill the waking hours? That's a question that each person would need to answer for him/herself.
Unless you've spent time in a monastery or in solitary confinement, it's unlikely that you've had to deal with this issue. The only activity not proscribed is thinking. Imagine if everyone in this country had the opportunity to do nothing but engage in uninterrupted thought for one full day a year! A national day of absolute solitude would do more to improve the brains of all Americans than any other one-day program. (I leave it to the lawmakers to figure out a plan for implementing this proposal.) The danger stems from the fact that a 24-hour period of uninterrupted thinking could cause irrevocable upheavals in much of what our society currently holds sacred. But whether that would improve our present state of affairs cannot be guaranteed.
__________________________________________________________________

V.S. RAMACHANDRAN
Neuroscientist; Director, Center for Brain and Cognition, University of California, San Diego; Author, A Brief Tour of Human Consciousness

Francis Crick's "Dangerous Idea"

I am a brain, my dear Watson, and the rest of me is a mere appendage.
-- Sherlock Holmes

An idea that would be "dangerous if true" is what Francis Crick referred to as "the astonishing hypothesis": the notion that our conscious experience and sense of self is based entirely on the activity of a hundred billion bits of jelly -- the neurons that constitute the brain. We take this for granted in these enlightened times, but even so it never ceases to amaze me. Some scholars have criticized Crick's tongue-in-cheek phrase (and title of his book) on the grounds that the hypothesis he refers to is "neither astonishing nor a hypothesis" (since we already know it to be true). Yet the far-reaching philosophical, moral and ethical dilemmas posed by his hypothesis have not been recognized widely enough. It is in many ways the ultimate dangerous idea.

Let's put this in historical perspective. Freud once pointed out that the history of ideas in the last few centuries has been punctuated by "revolutions" -- major upheavals of thought that have forever altered our view of ourselves and our place in the cosmos. First there was the Copernican system dethroning the earth as the center of the cosmos. Second was the Darwinian revolution: the idea that far from being the climax of "intelligent design" we are merely neotenous apes that happen to be slightly cleverer than our cousins. Third, the Freudian view that even though you claim to be "in charge" of your life, your behavior is in fact governed by a cauldron of drives and motives of which you are largely unconscious. And fourth, the discovery of DNA and the genetic code with its implication (to quote James Watson) that "There are only molecules. Everything else is sociology".

To this list we can now add the fifth, the "neuroscience revolution" and its corollary pointed out by Crick -- the "astonishing hypothesis" -- that even our loftiest thoughts and aspirations are mere byproducts of neural activity. We are nothing but a pack of neurons. If all this seems dehumanizing, you haven't seen anything yet.

[Editor's Note: A lengthy essay by Ramachandran on this subject is scheduled for publication by Edge in January.]
__________________________________________________________________

DAVID BUSS
Psychologist, University of Texas, Austin; Author, The Murderer Next Door: Why the Mind is Designed to Kill

The Evolution of Evil

When most people think of torturers, stalkers, robbers, rapists, and murderers, they imagine crazed drooling monsters with maniacal Charles Manson-like eyes. The calm normal-looking image staring back at you from the bathroom mirror reflects a truer representation. The dangerous idea is that all of us contain within our large brains adaptations whose functions are to commit despicable atrocities against our fellow humans -- atrocities most would label evil.

The unfortunate fact is that killing has proved to be an effective solution to an array of adaptive problems in the ruthless evolutionary games of survival and reproductive competition: preventing injury, rape, or death; protecting one's children; eliminating a crucial antagonist; acquiring a rival's resources; securing sexual access to a competitor's mate; preventing an interloper from appropriating one's own mate; and protecting vital resources needed for reproduction.

The idea that evil has evolved is dangerous on several counts. If our brains contain psychological circuits that can trigger murder, genocide, and other forms of malevolence, then perhaps we can't hold those who commit carnage responsible: "It's not my client's fault, your honor, his evolved homicide adaptations made him do it." Understanding causality, however, does not exonerate murderers, whether the tributaries trace back to human evolutionary history or to modern exposure to alcoholic mothers, violent fathers, or the ills of bullying, poverty, drugs, or computer games. It would be dangerous if the theory of the evolved murderous mind were misused to let killers go free.

The evolution of evil is dangerous for a more disconcerting reason. We like to believe that evil can be objectively located in a particular set of evil deeds, or within the subset of people who perpetrate horrors on others, regardless of the perspective of the perpetrator or victim. That is not the case. The perspectives of the perpetrator and victim differ profoundly. Many view killing a member of one's in-group, for example, to be evil, but take a different view of killing those in the out-group. Some people point to the biblical commandment "thou shalt not kill" as an absolute. Closer biblical inspection reveals that this injunction applied only to murder within one's group.

Conflict with terrorists provides a modern example. Osama bin Laden declared: "The ruling to kill the Americans and their allies -- civilians and military -- is an individual duty for every Muslim who can do it in any country in which it is possible to do it." What is evil from the perspective of an American who is a potential victim is an act of responsibility and higher moral good from the terrorist's perspective. Similarly, when President Bush identified an "axis of evil," he rendered it moral for Americans to kill those falling under that axis -- a judgment undoubtedly considered evil by those whose lives have become imperiled.

At a rough approximation, we view as evil people who inflict massive evolutionary fitness costs on us, our families, or our allies.
No one summarized these fitness costs better than the feared conqueror Genghis Khan (1167-1227): "The greatest pleasure is to vanquish your enemies, to chase them before you, to rob them of their wealth, to see their near and dear bathed in tears, to ride their horses and sleep on the bellies of their wives and daughters."

We can be sure that the families of the victims of Genghis Khan saw him as evil. We can be just as sure that his many sons, whose harems he filled with women of the conquered groups, saw him as a venerated benefactor. In modern times, we react with horror at Mr. Khan describing the deep psychological satisfaction he gained from inflicting fitness costs on victims while purloining fitness fruits for himself. But it is sobering to realize that perhaps half a percent of the world's population today are descendants of Genghis Khan.

On reflection, the dangerous idea may not be that murder historically has been advantageous to the reproductive success of killers; nor that we all house homicidal circuits within our brains; nor even that all of us are lineal descendants of ancestors who murdered. The danger comes from people who refuse to recognize that there are dark sides of human nature that cannot be wished away by attributing them to the modern ills of culture, poverty, pathology, or exposure to media violence. The danger comes from failing to gaze into the mirror and come to grips with the capacity for evil in all of us.
__________________________________________________________________

PAUL BLOOM
Psychologist, Yale University; Author, Descartes' Baby

There are no souls

I am not concerned here with the radical claim that personal identity, free will, and consciousness do not exist. Regardless of its merit, this position is so intuitively outlandish that nobody but a philosopher could take it seriously, and so it is unlikely to have any real-world implications, dangerous or otherwise. Instead I am interested in the milder position that mental life has a purely material basis. The dangerous idea, then, is that Cartesian dualism is false. If what you mean by "soul" is something immaterial and immortal, something that exists independently of the brain, then souls do not exist. This is old hat for most psychologists and philosophers, the stuff of introductory lectures. But the rejection of the immaterial soul is unintuitive, unpopular, and, for some people, downright repulsive.

In the journal "First Things", Patrick Lee and Robert P. George outline some worries from a religious perspective. "If science did show that all human acts, including conceptual thought and free choice, are just brain processes,... it would mean that the difference between human beings and other animals is only superficial -- a difference of degree rather than a difference in kind; it would mean that human beings lack any special dignity worthy of special respect. Thus, it would undermine the norms that forbid killing and eating human beings as we kill and eat chickens, or enslaving them and treating them as beasts of burden as we do horses or oxen."

The conclusions don't follow. Even if there are no souls, humans might differ from non-human animals in some other way, perhaps with regard to the capacity for language or abstract reasoning or emotional suffering. And even if there were no difference, it would hardly give us license to do terrible things to human beings. Instead, as Peter Singer and others have argued, it should make us kinder to non-human animals.
If a chimpanzee turned out to possess the intelligence and emotions of a human child, for instance, most of us would agree that it would be wrong to eat, kill, or enslave it. Still, Lee and George are right to worry that giving up on the soul means giving up on an a priori distinction between humans and other creatures, something which has very real consequences.

It would affect as well how we think about stem-cell research and abortion, euthanasia, cloning, and cosmetic psychopharmacology. It would have substantial implications for the legal realm -- a belief in immaterial souls has led otherwise sophisticated commentators to defend a distinction between actions that we do and actions that our brains do. We are responsible only for the former, motivating the excuse that Michael Gazzaniga has called "My brain made me do it." It has been proposed, for instance, that if a pedophile's brain shows a certain pattern of activation while contemplating sex with a child, he should not be viewed as fully responsible for his actions. When you give up on the soul, and accept that all actions correspond to brain activity, this sort of reasoning goes out the window.

The rejection of souls is more dangerous than the idea that kept us so occupied in 2005 -- evolution by natural selection. The battle between evolution and creationism is important for many reasons; it is where science takes a stand against superstition. But, like the origin of the universe, the origin of the species is an issue of great intellectual importance and little practical relevance. If everyone were to become a sophisticated Darwinian, our everyday lives would change very little. In contrast, the widespread rejection of the soul would have profound moral and legal consequences. It would also require people to rethink what happens when they die, and give up the idea (held by about 90% of Americans) that their souls will survive the death of their bodies and ascend to heaven. It is hard to get more dangerous than that.
__________________________________________________________________

PHILIP CAMPBELL
Editor-in-Chief, Nature

Scientists and governments developing public engagement about science and technology are missing the point

This turns out to be true in cases where there are collapses in consensus that have serious societal consequences. Whether in relation to climate change, GM crops or the UK's triple vaccine for measles, mumps and rubella, alternative science networks develop amongst people who are neither ignorant nor irrational, but have perceptions about science, the scientific literature and its implications that differ from those prevailing in the scientific community. These perceptions and discussions may be half-baked, but are no less powerful for all that, and carry influence on the internet and in the media. Researchers and governments haven't yet learned how to respond to such "citizen's science". Should they stop explaining and engaging? No. But they need also to understand better the influences at work within such networks -- often too dismissively stereotyped -- at an early stage in the debate, in order to counter bad science and minimize the impacts of falsehoods.
__________________________________________________________________

JESSE BERING
Psychologist, University of Arkansas

Science will never silence God

With each meticulous turn of the screw in science, with each tightening up of our understanding of the natural world, we pull more taut the straps over God's muzzle.
From botany to bioengineering, from physics to psychology, what is science really but true Revelation -- and what is Revelation but the negation of God? It is a humble pursuit we scientists engage in: racing to reality. Many of us suffer the harsh glare of the American theocracy, whose heart still beats loud and strong in this new year of the 21st century. We bravely favor truth, in all its wondrous, amoral, and 'meaningless' complexity over the singularly destructive Truth born of the trembling minds of our ancestors. But my dangerous idea, I fear, is that no matter how far our thoughts shall vault into the eternal sky of scientific progress, no matter how dazzling the effects of this progress, God will always bite through his muzzle and banish us from the starry night of humanistic ideals.

Science is an endless series of binding and rebinding his breath; there will never be a day when God does not speak for the majority. There will never be a day even when he does not whisper in the most godless of scientists' ears. This is because God is not an idea, nor a cultural invention, not an 'opiate of the masses' or any such thing; God is a way of thinking that was rendered permanent by natural selection. As scientists, we must toil and labor and toil again to silence God, but ultimately this is like cutting off our ears to hear more clearly. God too is a biological appendage; until we acknowledge this fact for what it is, until we rear our children with this knowledge, he will continue to howl his discontent for all of time.
__________________________________________________________________

PAUL W. EWALD
Evolutionary Biologist; Director, Program in Evolutionary Medicine, University of Louisville; Author, Plague Time

A New Golden Age of Medicine

My dangerous idea is that we have in hand most of the information we need to facilitate a new golden age of medicine. And what we don't have in hand we can get fairly readily by wise investment in targeted research and intervention. In this golden age we should be able to prevent most debilitating diseases in developed and undeveloped countries within a relatively short period of time with much less money than is generally presumed. This is good news. Why is it dangerous?

One array of dangers arises because ideas that challenge the status quo threaten the livelihood of many. When the many are embedded in powerful places the threat can be stifling, especially when a lot of money and status are at stake. So it is within the arena of medical research and practice. Imagine what would happen if the big diseases -- cancers, arteriosclerosis, stroke, diabetes -- were largely prevented. Big pharmas would become small because the demand for prescription drugs would drop. The prestige of physicians would drop because they would no longer be relied upon to prolong life. The burgeoning industry of biomedical research would shrink because governmental and private funding for this research would diminish.

Also threatened would be scientists whose sense of self-worth is built upon the grant dollars they bring in for discovering minuscule parts of big puzzles. Scientists have been beneficiaries of the lack of progress in recent decades, which has caused leaders such as the past head of NIH, Harold Varmus, to declare that what is needed is more basic research. But basic research has not generated many great advancements in the prevention or cure of disease in recent decades.
The major exception is in the realm of infectious disease, where many important advancements were generated from tiny slices of funding. The discovery that peptic ulcers are caused by infections that can be cured with antibiotics is one example. Another is the discovery that liver cancer can often be prevented by a vaccine against the hepatitis B virus or by screening blood for hepatitis B and C viruses. The track record of the past few decades shows that these examples are not quirks. They are part of a trend that goes back over a century to the beginning of the germ theory itself. And the accumulating evidence supporting infectious causation of big bad diseases of modern society is following the same pattern that occurred for diseases that have been recently accepted as caused by infection. The process of acceptance typically occurs over one or more decades and accords with Schopenhauer's generalization about the establishment of truth: it is first ridiculed, then violently opposed, and finally accepted as being self-evident.

Just a few groups of pathogens seem to be big players: streptococci, Chlamydia, some bacteria of the oral cavity, hepatitis viruses, and herpes viruses. If the correlations between these pathogens and the big diseases of wealthy countries do in fact reflect infectious causation, effective vaccines against these pathogens could contribute in a big way to a new golden age of medicine that could rival the first half of the 20th century.

The transition to this golden age, however, requires two things: a shift in research effort to identifying the pathogens that cause the major diseases, and development of effective interventions against them. The first would be easy to bring about by restructuring the priorities of NIH -- where money goes, so go the researchers. The second requires mechanisms for putting in place programs that cannot be trusted to the free market, for the same kinds of reasons that Adam Smith gave for national defense. The goals of the interventions do not mesh nicely with the profit motive of the free market. Vaccines, for example, are not very profitable. Pharmas cannot make as much money by selling one vaccine per person to prevent a disease as they can selling a patented drug like Vioxx, which will be administered day after day, year after year to treat symptoms of an illness that is never cured. And though liability issues are important for such symptomatic treatment, the pharmas can argue forcefully that drugs with nasty side effects provide some benefit even to those who suffer most from the side effects, because the drugs are given not to prevent an illness but rather to people who already have an illness. This sort of defense is less convincing when the victim is a child who developed permanent brain damage from a rare complication of a vaccine that was given to protect them against a chronic illness that they might have acquired decades later.

Another part of this vision of a new golden age will be the ability to distinguish real threats from pseudo-threats. This ability will allow us to invest in policy and infrastructure that will protect people against real threats without squandering resources and destroying livelihoods in efforts to protect against pseudo-threats. Our present predicament on this front is far from this ideal. Today experts on infectious diseases and institutions entrusted to protect and improve human health sound the alarm in response to each novel threat.
The current fears over a devastating pandemic of bird flu are a case in point. Some of the loudest voices offer a simplistic argument: failing to prepare for the worst-case scenarios is irresponsible and dangerous. This criticism has been recently leveled at me and others who question expert proclamations, such as those from the World Health Organization and the Centers for Disease Control. These proclamations inform us that the H5N1 bird flu virus poses an imminent threat of an influenza pandemic similar to or even worse than the 1918 pandemic. I have decreased my popularity in such circles by suggesting that the threat of this scenario is essentially nonexistent.

In brief, I argue that the 1918 influenza viruses evolved their unique combination of high virulence and high transmissibility in the conditions at the Western Front of World War I. By transporting contagious flu patients into a series of tightly packed groups of susceptible individuals, personnel fostered transmission from people who were completely immobilized by their illness. Such conditions must have favored the predator-like variants of the influenza virus; these variants would have a competitive edge because they could ruthlessly exploit a person for their own replication and still get transmitted to large numbers of susceptible individuals. These conditions have not recurred in human populations since then and, accordingly, we have never had any outbreaks of influenza viruses that have been anywhere near as harmful as those that emerged at the Western Front. So long as we do not allow such conditions to occur again, we have little to fear from a re-evolution of such a predatory virus.

The fear of a 1918-style pandemic has fueled preparations by a government which, embarrassed by its failure to deal adequately with the damage from Katrina, seems determined to prepare for any perceived threat to save face. I would have no problem with the accusation of irresponsibility if preparations for a 1918-style pandemic were cost free. But they are not. The $7 billion that the Bush administration is planning as a down payment for pandemic preparedness has to come from somewhere. If money is spent to prepare for an imaginary pandemic, our progress could be impeded on other fronts that could lead to or have already established real improvements in public health. Conclusions about the responsibility or irresponsibility of this argument require that the threat from pandemic influenza be assessed relative to the damage that results from the procurement of the money from other sources.

The only reliable evidence of the damage from pandemic influenza under normal circumstances is the experience of the two pandemics that have occurred since 1918, one in 1957 and the other in 1968. The mortality caused by these pandemics was one-tenth to one-hundredth the death toll from the 1918 pandemic. We do need to be prepared for an influenza pandemic of the normal variety, just as we needed to be prepared for category 5 hurricanes in the Gulf of Mexico. If possible, our preparations should allow us to stop an incipient pandemic before it materializes. In contrast with many of the most vocal experts, I do not conclude that our surveillance efforts will be quickly overwhelmed by a highly transmissible descendant of the influenza virus that has generated the most recent fright (dubbed H5N1). The transition of the H5N1 virus to a pandemic virus would require evolutionary change.
The dialogue on this matter, however, continues to neglect the primary mechanism of evolutionary change: natural selection. Instead it is claimed that H5N1 could mutate to become a full-fledged human virus that is both highly transmissible and highly lethal. Mutation provides only the variation on which natural selection acts. We must consider natural selection if we are to make meaningful assessments of the danger posed by the H5N1 virus. The evolution of the 1918 virus was gradual, and both evidence and theory lead to the conclusion that any evolution of increased human-to-human transmissibility of H5N1 will be gradual, as it was with SARS. With surveillance we can detect such changes in humans and intervene to stop further spread, as was done with SARS. We do not need to trash the economy of Southeast Asia each year to accomplish this. The dangerous vision of a golden age does not leave the poor countries behind. As I have discussed in my articles and books, we should be able to control much of the damage caused by the major killers in poor countries by infrastructural improvements that not only reduce the frequency of infection but also cause the infectious agents to evolve toward benignity. This integrated approach offers the possibility to remodel our current efforts against the major killers -- AIDS, malaria, tuberculosis, dysentery and the like. We should be able to move from just holding ground to instituting the changes that created the freedom from acute infectious diseases that inhabitants of rich countries have enjoyed over the past century. Dangerous indeed! Excellent solutions are often dangerous to the status quo because they work. One measure of danger to some, but success to the general population, is the extent to which highly specialized researchers, physicians, and other health care workers will need to retrain, and the extent to which hospitals and pharmaceutical companies will need to downsize. That is what happens when we introduce excellent solutions to health problems. We need not be any more concerned about these difficulties than we were about the loss of the iron lung industry and the retraining of polio therapists and researchers in the wake of the Salk vaccine.

_________________________________________________________________

BART KOSKO
Professor, Electrical Engineering, USC; Author, Heaven in a Chip
[kosko100.jpg]

Most bell curves have thick tails

Any challenge to the normal probability bell curve can have far-reaching consequences because a great deal of modern science and engineering rests on this special bell curve. Most of the standard hypothesis tests in statistics rely on the normal bell curve either directly or indirectly. These tests permeate the social and medical sciences and underlie the poll results in the media. Related tests and assumptions underlie the decision algorithms in radar and cell phones that decide whether the incoming energy blip is a 0 or a 1. Management gurus exhort manufacturers to follow the "six sigma" creed of reducing the variance in products to only two or three defective products per million, in accord with "sigmas" or standard deviations from the mean of a normal bell curve. Models for trading stock and bond derivatives assume an underlying normal bell-curve structure. Even quantum and signal-processing uncertainty principles or inequalities involve the normal bell curve as the equality condition for minimum uncertainty.
Deviating even slightly from the normal bell curve can sometimes produce qualitatively different results. The proposed dangerous idea stems from two facts about the normal bell curve. First: The normal bell curve is not the only bell curve. There are at least as many different bell curves as there are real numbers. This simple mathematical fact poses at once a grammatical challenge to the title of Charles Murray's IQ book The Bell Curve. Murray should have used the indefinite article "A" instead of the definite article "The." This is but one of many examples that suggest that most scientists simply equate the entire infinite set of probability bell curves with the normal bell curve of textbooks. Nature need not share the same practice. Human and non-human behavior can be far more diverse than the classical normal bell curve allows. Second: The normal bell curve is a skinny bell curve. It puts most of its probability mass in the main lobe or bell while the tails quickly taper off exponentially. So "tail events" appear rare simply as an artifact of this bell curve's mathematical structure. This limitation may be fine for approximate descriptions of "normal" behavior near the center of the distribution. But it largely rules out or marginalizes the wide range of phenomena that take place in the tails. Again most bell curves have thick tails. Rare events are not so rare if the bell curve has thicker tails than the normal bell curve has. Telephone interrupts are more frequent. Lightning flashes are more frequent and more energetic. Stock market fluctuations or crashes are more frequent. How much more frequent they are depends on how thick the tail is -- and that is always an empirical question of fact. Neither logic nor assume-the-normal-curve habit can answer the question. Instead scientists need to carry their evidentiary burden a step further and apply one of the many available statistical tests to determine and distinguish the bell-curve thickness. One response to this call for tail-thickness sensitivity is that logic alone can decide the matter because of the so-called central limit theorem of classical probability theory. This important "central" result states that some suitably normalized sums of random terms will converge to a standard normal random variable and thus have a normal bell curve in the limit. So Gauss and a lot of other long-dead mathematicians got it right after all and thus we can continue to assume normal bell curves with impunity. That argument fails in general for two reasons. The first reason it fails is that the classical central limit theorem result rests on a critical assumption that need not hold and that often does not hold in practice. The theorem assumes that the random dispersion about the mean is so comparatively slight that a particular measure of this dispersion -- the variance or the standard deviation -- is finite or does not blow up to infinity in a mathematical sense. Most bell curves have infinite or undefined variance even though they have a finite dispersion about their center point. The error is not in the bell curves but in the two-hundred-year-old assumption that variance equals dispersion. It does not in general. Variance is a convenient but artificial and non-robust measure of dispersion. It tends to overweight "outliers" in the tail regions because the variance squares the underlying errors between the values and the mean. Such squared errors simplify the math but produce the infinite effects. 
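[Kosko's infinite-variance point is easy to check numerically. Here is a minimal Python sketch -- my illustration, not Kosko's -- comparing the thin-tailed normal curve with the thick-tailed Cauchy curve at the same unit scale, and showing how sample means of Cauchy draws fail to settle down the way the classical central limit theorem leads one to expect:

    # Thin vs. thick tails, and the failure of sample means to converge
    # when the variance is infinite. Illustration only; the unit scales
    # are chosen for comparability, not fitted to any data.
    import numpy as np
    from scipy import stats

    # Probability of landing more than 3 scale units from the center.
    p_normal = 2 * stats.norm.sf(3)    # thin-tailed: about 0.0027
    p_cauchy = 2 * stats.cauchy.sf(3)  # thick-tailed: about 0.205
    print(f"P(|X| > 3): normal {p_normal:.4f}, Cauchy {p_cauchy:.4f}")

    # Sample means: normal means home in on 0; Cauchy means never
    # settle, because the Cauchy distribution has no mean or variance.
    rng = np.random.default_rng(0)
    for n in (100, 10_000, 1_000_000):
        print(n,
              round(rng.standard_normal(n).mean(), 4),
              round(stats.cauchy.rvs(size=n, random_state=rng).mean(), 4))

The Cauchy column keeps wandering no matter how large n grows -- the "infinite effects" in action.]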
These effects do not appear in the classical central limit theorem because the theorem assumes them away. The second reason the argument fails is that the central limit theorem itself is just a special case of a more general result called the generalized central limit theorem. The generalized central limit theorem yields convergence to thick-tailed bell curves in the general case. Indeed it yields convergence to the thin-tailed normal bell curve only in the special case of finite variances. These general cases define the infinite set of the so-called stable probability distributions, whose symmetric versions are bell curves. There are still other types of thick-tailed bell curves (such as the Laplace bell curves used in image processing and elsewhere) but the stable bell curves are the best known and have several nice mathematical properties. The figure below shows the normal or Gaussian bell curve superimposed over three thicker-tailed stable bell curves. The catch in working with stable bell curves is that their mathematics can be nearly intractable. So far we have closed-form solutions for only two stable bell curves (the normal or Gaussian and the very thick-tailed Cauchy curve), and so we have to use transform and computer techniques to generate the rest. Still, the exponential growth in computing power has long since made stable or thick-tailed analysis practical for many problems of science and engineering. This last point shows how competing bell curves offer a new context for judging whether a given set of data reasonably obeys a normal bell curve. One of the most popular eyeball tests for normality is the PP or probability plot of the data. The data should almost perfectly fit a straight line if they come from a normal probability distribution. But this seldom happens in practice. Instead real data snake all around the ideal straight line in a PP diagram. So it is easy for the user to shrug and call any data deviation from the ideal line good enough in the absence of a direct bell-curve competitor. A fairer test is to compare the normal PP plot with the best-fitting thick-tailed or stable PP plot. The data may well line up better in a thick-tailed PP diagram than in the usual normal PP diagram. This test evidence would reject the normal bell-curve hypothesis in favor of the thicker-tailed alternative. Ignoring these thick-tailed alternatives favors accepting the less accurate normal bell curve and thus leads to underestimating the occurrence of tail events. Stable or thick-tailed probability curves continue to turn up as more scientists and engineers search for them. They tend to accurately model impulsive phenomena such as noise in telephone lines or in the atmosphere or in fluctuating economic assets. Skewed versions appear to best fit the data for Ethernet traffic in bit packets. Here again the search is ultimately an empirical one for the best-fitting tail thickness. Similar searches will only increase as the math and software of thick-tailed bell curves work their way into textbooks on elementary probability and statistics. Much of it is already freely available on the Internet. Thicker-tailed bell curves also imply that there is not just a single form of pure white noise. Here too there are at least as many forms of white noise (or any colored noise) as there are real numbers. Whiteness just means that the noise spikes or hisses and pops are independent in time or that they do not correlate with one another.
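[The thick-tailed white noise Kosko describes is also easy to generate. A sketch -- again mine rather than Kosko's -- using the standard Chambers-Mallows-Stuck recipe for symmetric alpha-stable draws; alpha = 2 gives ordinary Gaussian noise and alpha = 1 gives Cauchy noise:

    # Symmetric alpha-stable white noise via the Chambers-Mallows-Stuck
    # formula. Samples are independent ("white"); tails thicken and the
    # noise grows more impulsive as alpha falls from 2 toward 1 and below.
    import numpy as np

    def stable_noise(alpha, n, rng):
        v = rng.uniform(-np.pi / 2, np.pi / 2, n)  # random phase
        w = rng.exponential(1.0, n)                # random weight
        return (np.sin(alpha * v) / np.cos(v) ** (1 / alpha)
                * (np.cos((1 - alpha) * v) / w) ** ((1 - alpha) / alpha))

    rng = np.random.default_rng(1)
    for alpha in (2.0, 1.5, 1.0):
        x = stable_noise(alpha, 100_000, rng)
        # Impulsiveness: the biggest spike relative to a typical spike.
        ratio = np.max(np.abs(x)) / np.median(np.abs(x))
        print(f"alpha = {alpha}: max spike is {ratio:,.0f}x the median spike")

Run it and the ratio jumps by orders of magnitude as alpha falls, which is exactly the growing impulsiveness the figure below illustrates.]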
The noise spikes themselves can come from any probability distribution, and in particular they can come from any stable or thick-tailed bell curve. The figure below shows the normal or Gaussian bell curve and three kindred thicker-tailed bell curves and samples of their corresponding white noise. The normal curve has the upper-bound alpha parameter of 2 while the thicker-tailed curves have lower values -- tail thickness increases as the alpha parameter falls. The white noise from the thicker-tailed bell curves becomes much more impulsive as their bell narrows and their tails thicken, because then more extreme events or noise spikes occur with greater frequency.

[image001.jpg]

Competing bell curves: The figure on the left shows four superimposed symmetric alpha-stable bell curves with different tail thicknesses, while the plots on the right show samples of their corresponding forms of white noise. The parameter alpha describes the thickness of a stable bell curve and ranges from 0 to 2. Tails grow thicker as alpha grows smaller. The white noise grows more impulsive as the tails grow thicker. The Gaussian or normal bell curve (alpha = 2) has the thinnest tail of the four stable curves, while the Cauchy bell curve (alpha = 1) has the thickest tails and thus the most impulsive noise. Note the different magnitude scales on the vertical axes. All the bell curves have finite dispersion while only the Gaussian or normal bell curve has a finite variance or finite standard deviation.

My colleagues and I have recently shown that most mathematical models of spiking neurons in the retina not only benefit from small amounts of added noise by increasing their Shannon bit count, but continue to benefit from added thick-tailed or "infinite-variance" noise. The same result holds experimentally for a carbon nanotube transistor that detects signals in the presence of added electrical noise. Thick-tailed bell curves further call into question what counts as a statistical "outlier" or bad data: Is a tail datum an error or a pattern? The line between extreme and non-extreme data is not just fuzzy but depends crucially on the underlying tail thickness. The usual rule of thumb is that the data is suspect if it lies outside three or even two standard deviations from the mean. Such rules of thumb reflect both the tacit assumption that dispersion equals variance and the classical central-limit effect that large data sets are not just approximately bell curves but approximately thin-tailed normal bell curves. An empirical test of the tails may well justify the latter thin-tailed assumption in many cases. But the mere assertion of the normal bell curve does not. So "rare" events may not be so rare after all.

_________________________________________________________________

MATT RIDLEY
Science Writer; Founding chairman of the International Centre for Life; Author, The Agile Gene: How Nature Turns on Nurture
[ridley100.jpg]

Government is the problem, not the solution

In all times and in all places there has been too much government. We now know what prosperity is: it is the gradual extension of the division of labour through the free exchange of goods and ideas, and the consequent introduction of efficiencies by the invention of new technologies. This is the process that has given us health, wealth and wisdom on a scale unimagined by our ancestors. It not only raises material standards of living, it also fuels social integration, fairness and charity. It has never failed yet.
No society has grown poorer or more unequal through trade, exchange and invention. Think of pre-Ming as opposed to Ming China, seventeenth-century Holland as opposed to imperial Spain, eighteenth-century England as opposed to Louis XIV's France, twentieth-century America as opposed to Stalin's Russia, or post-war Japan, Hong Kong and Korea as opposed to Ghana, Cuba and Argentina. Think of the Phoenicians as opposed to the Egyptians, Athens as opposed to Sparta, the Hanseatic League as opposed to the Roman Empire. In every case, weak or decentralised government combined with strong free trade led to surges in prosperity for all, whereas strong, central government led to parasitic, tax-fed officialdom, a stifling of innovation, relative economic decline and usually war. Take Rome. It prospered because it was a free trade zone. But it repeatedly invested the proceeds of that prosperity in too much government and so wasted it in luxury, war, gladiators and public monuments. The Roman empire's list of innovations is derisory, even compared with that of the 'dark ages' that followed. In every age and at every time there have been people who say we need more regulation, more government. Sometimes, they say we need it to protect exchange from corruption, to set the standards and police the rules, in which case they have a point, though often they exaggerate it. Self-policing standards and rules were developed by free-trading merchants in medieval Europe long before they were taken over and codified as laws (and often corrupted) by monarchs and governments. Sometimes, they say we need it to protect the weak, the victims of technological change or trade flows. But throughout history such intervention, though well meant, has usually proved misguided -- because its progenitors refuse to believe in (or find out about) David Ricardo's Law of Comparative Advantage: even if China is better at making everything than France, there will still be a million things it pays China to buy from France rather than make itself. Why? Because rather than invent, say, luxury goods or insurance services itself, China will find it pays to make more T-shirts and use the proceeds to import luxury goods and insurance. Government is a very dangerous toy. It is used to fight wars, impose ideologies and enrich rulers. True, nowadays, our leaders do not enrich themselves (at least not on the scale of the Sun King), but they enrich their clients: they preside over vast and insatiable parasitic bureaucracies that grow by Parkinson's Law and live off true wealth creators such as traders and inventors. Sure, it is possible to have too little government. Only, that has not been the world's problem for millennia. After the century of Mao, Hitler and Stalin, can anybody really say that the risk of too little government is greater than the risk of too much? The dangerous idea we all need to learn is that the more we limit the growth of government, the better off we will all be.

_________________________________________________________________

DAVID PIZARRO
Psychologist, Cornell University
[pizarro100.jpg]

Hodgepodge Morality

What some individuals consider a sacrosanct ability to perceive moral truths may instead be a hodgepodge of simpler psychological mechanisms, some of which have evolved for other purposes.
It is increasingly apparent that our moral sense comprises a fairly loose collection of intuitions, rules of thumb, and emotional responses that may have emerged to serve a variety of functions, some of which originally had nothing at all to do with ethics. These mechanisms, when tossed in with our general ability to reason, seem to be how humans come to answer the question of good and evil, right and wrong. Intuitions about action, intentionality, and control, for instance, figure heavily into our perception of what constitutes an immoral act. The emotional reactions of empathy and disgust likewise figure into our judgments of who deserves moral protection and who doesn't. But the ability to perceive intentions probably didn't evolve as a way to determine who deserves moral blame. And the emotion of disgust most likely evolved to keep us safe from rotten meat and feces, not to provide information about who deserves moral protection. Discarding the belief that our moral sense provides a royal road to moral truth is an uncomfortable notion. Most people, after all, are moral realists. They believe acts are objectively right or wrong, like math problems. The dangerous idea is that our intuitions may be poor guides to moral truth, and can easily lead us astray in our everyday moral decisions.

_________________________________________________________________

RANDOLPH M. NESSE
Psychiatrist, University of Michigan; Coauthor (with George Williams), Why We Get Sick: The New Science of Darwinian Medicine
[nesse100.jpg]

Unspeakable Ideas

The idea of promoting dangerous ideas seems dangerous to me. I spend considerable effort to prevent my ideas from becoming dangerous, except, that is, to entrenched false beliefs and to myself. For instance, my idea that bad feelings are useful for our genes upends much conventional wisdom about depression and anxiety. I find, however, that I must firmly restrain journalists who are eager to share the sensational but incorrect conclusion that depression should not be treated. Similarly, many people draw dangerous inferences from my work on Darwinian medicine. For example, just because fever is useful does not mean that it should not be treated. I now emphasize that evolutionary theory does not tell you what to do in the clinic; it just tells you what studies need to be done. I also feel obligated to prevent my ideas from becoming dangerous on a larger scale. For instance, many people who hear about Darwinian medicine assume incorrectly that it implies support for eugenics. I encourage them to read history as well as my writings. The record shows how quickly natural selection was perverted into Social Darwinism, an ideology that seemed to justify letting poor people starve. Related ideas keep emerging. We scientists have a responsibility to challenge dangerous social policies incorrectly derived from evolutionary theory. Racial superiority is yet another dangerous idea that hurts real people. More examples come to mind all too easily, and some quickly get complicated. For instance, the idea that men are inherently different from women has been used to justify discrimination, but the idea that men and women have identical abilities and preferences may also cause great harm. While I don't want to promote ideas dangerous to others, I am fascinated by ideas that are dangerous to anyone who expresses them. These are "unspeakable ideas." By unspeakable ideas I don't mean those whose expression is forbidden in a certain group.
Instead, I propose that there is a class of ideas whose expression is inherently dangerous everywhere and always because of the nature of human social groups. Such unspeakable ideas are anti-memes. Memes, both true and false, spread fast because they are interesting and give social credit to those who spread them. Unspeakable ideas, even true and important ones, don't spread at all, because expressing them is dangerous to those who speak them. So why, you may ask, is a sensible scientist even bringing the idea up? Isn't the idea of unspeakable ideas a dangerous idea? I expect I will find out. My hope is that a thoughtful exploration of unspeakable ideas should not hurt people in general, perhaps won't hurt me much, and might unearth some long-neglected truths. Generalizations cannot substitute for examples, even if providing examples is risky. So, please gather your own data. Here is an experiment. The next time you are having a drink with an enthusiastic fan of your hometown team, say "Well, I think our team just isn't very good and didn't deserve to win." Or, moving to more risky territory, when your business group is trying to deal with a savvy competitor, say, "It seems to me that their product is superior because they are smarter than we are." Finally, and I cannot recommend this but it offers dramatic data, you could respond to your spouse's difficulties at work by saying, "If they are complaining about you not doing enough, it is probably because you just aren't doing your fair share." Most people do not need to conduct such social experiments to know what happens when such unspeakable ideas are spoken. Many broader truths are equally unspeakable. Consider, for instance, all the articles written about leadership. Most are infused with admiration and respect for a leader's greatness. Much rarer are articles about the tendency for leadership positions to be attained by power-hungry men who use their influence to further advance their self-interest. Then there are all the writings about sex and marriage. Most of them suggest that there is some solution that allows full satisfaction for both partners while maintaining secure relationships. Questioning such notions is dangerous, unless you are a comic, in which case skepticism can be very, very funny. As a final example, consider the unspeakable idea of unbridled self-interest. Someone who says, "I will only do what benefits me," has committed social suicide. Tendencies to say such things have been selected against, while those who advocate goodness, honesty and service to others get wide recognition. This creates an illusion of a moral society that then, thanks to the combined forces of natural and social selection, becomes a reality that makes social life vastly more agreeable. There are many more examples, but I must stop here. To say more would either get me in trouble or falsify my argument. Will I ever publish my "Unspeakable Essays"? It would be risky, wouldn't it?

_________________________________________________________________

GREGORY BENFORD
Physicist, UC Irvine; Author, Deep Time
[benford100.jpg]

Think outside the Kyoto box

Few economists expect the Kyoto Accords to attain their goals. With compliance coming only slowly and with three big holdouts -- the US, China and India -- it seems unlikely to make much difference in overall carbon dioxide increases. Yet all the political pressure is on lessening our fossil fuel burning, in the face of fast-rising demand.
This pits the industrial powers against the legitimate economic aspirations of the developing world -- a recipe for conflict. Those who embrace the reality of global climate change mostly insist that there is only one way out of the greenhouse effect -- burn less fossil fuel, or else. Never mind the economic consequences. But the planet itself modulates its atmosphere through several tricks, and we have little considered using most of them. The overall global problem is simple: we capture more heat from the sun than we radiate away. Mostly this is a good thing, else the mean planetary temperature would hover around freezing. But recent human alterations of the atmosphere have resulted in too much of a good thing. Two methods are getting little attention: sequestering carbon from the air and reflecting sunlight.

Hide the Carbon

There are several schemes to capture carbon dioxide from the air: promote tree growth; trap carbon dioxide from power plants in exhausted gas domes; or let carbon-rich organic waste fall into the deep oceans. Increasing forestation is a good, though rather limited, step. Capturing carbon dioxide from power plants costs about 30% of the plant output, so it's an economic nonstarter. That leaves the third way. Imagine you are standing in a ripe Kansas cornfield, staring up into a blue summer sky. A transparent acre-area square around you extends upwards in an air-filled tunnel, soaring all the way to space. That long tunnel holds carbon in the form of invisible gas, carbon dioxide -- widely implicated in global climate change. But how much? Very little, compared with how much we worry about it. The corn standing as high as an elephant's eye all around you holds four hundred times as much carbon as there is in man-made carbon dioxide -- our villain -- in the entire column reaching to the top of the atmosphere. (We have added a few hundred parts per million to our air by burning.) Inevitably, we must understand and control the atmosphere, as part of a grand imperative of directing the entire global ecology. Yearly, we manage through agriculture far more carbon than is causing our greenhouse dilemma. Take advantage of that. The leftover corn cobs and stalks from our fields can be gathered up, floated down the Mississippi, and dropped into the ocean, sequestering their carbon. Below about a kilometer depth, beneath a layer called the thermocline, nothing gets mixed back into the air for a thousand years or more. It's not a forever solution, but it would buy us and our descendants time to find such answers. And it is inexpensive; cost matters. The US has large crop residues. It has also ignored the Kyoto Accord, saying it would cost too much. It would, if we relied purely on traditional methods of policing energy use and carbon dioxide emissions. Clinton-era estimates of such costs were around $100 billion a year -- a politically unacceptable sum, which led Congress to reject the very notion by a unanimous vote. But if the US simply used its farm waste to "hide" carbon dioxide from our air, complying with Kyoto's standard would cost about $10 billion a year, with no change whatsoever in energy use. The whole planet could do the same. Sequestering crop leftovers could offset about a third of the carbon we put into our air. The carbon dioxide we add to our air will end up in the oceans anyway, from natural absorption, but not nearly quickly enough to help us.
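[Benford's "offset about a third" figure can be sanity-checked with round numbers. A back-of-envelope sketch in Python -- my arithmetic, using assumed circa-2005 global estimates rather than figures from the essay:

    # Rough check of the claim that sequestering crop leftovers could
    # offset about a third of yearly carbon emissions. All inputs are
    # assumed round-number estimates, not numbers from Benford's text.
    fossil_emissions_GtC_per_yr = 7.0   # assumed: ~7 Gt carbon/yr from fossil fuels
    crop_residue_Gt_dry_per_yr  = 4.0   # assumed: ~4 Gt dry crop residue/yr worldwide
    carbon_fraction_of_biomass  = 0.45  # assumed: dry plant matter is ~45% carbon

    residue_GtC = crop_residue_Gt_dry_per_yr * carbon_fraction_of_biomass
    share = residue_GtC / fossil_emissions_GtC_per_yr
    print(f"Residue carbon: {residue_GtC:.1f} Gt C/yr, "
          f"about {share:.0%} of fossil emissions")
    # -> Residue carbon: 1.8 Gt C/yr, about 26% of fossil emissions

Under these assumptions the ceiling comes out between a quarter and a third, so the essay's figure is at least the right order of magnitude -- though collecting and sinking every stalk on earth would of course be an upper bound.]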
Reflect Away Sunlight

Hiding carbon from air is only one example of the ways the planet has maintained its perhaps precarious equilibrium throughout billions of years. Another is our world's ability to edit sunlight, by changing cloud cover. As the oceans warm, water evaporates, forming clouds. These reflect sunlight, reducing the heat below -- but just how much depends on cloud thickness, water droplet size, particulate density -- a forest of detail. If our climate starts to vary too much, we could consider deliberately adjusting cloud cover in selected areas, to offset unwanted heating. It is not actually hard to make clouds; volcanoes and fossil fuel burning do it all the time by adding microscopic particles to the air. Cloud cover is a natural mechanism we can augment, and another area where the possibility of major change in environmental thinking beckons. A 1997 US Department of Energy study for Los Angeles showed that planting trees and making blacktop and rooftops lighter colored could significantly cool the city in summer. With minimal costs that get repaid within five years, we can reduce summer midday temperatures by several degrees. This would cut air conditioning costs for the residents, simultaneously lowering energy consumption and lessening the urban heat island effect. Incoming rain clouds would not rise as much above the heat blossom of the city, and so would rain on it less. Instead, clouds would continue inland to drop rain on the rest of Southern California, promoting plant growth. These methods are now under way in Los Angeles, a first experiment. We can combine this with a cloud-forming strategy. Producing clouds over the tropical oceans is the most effective way to cool the planet on a global scale, since the dark oceans absorb the bulk of the sun's heat. This we should explore now, in case sudden climate changes force us to act quickly. Yet some environmentalists find all such steps suspect. They smack of engineering, rather than self-discipline. True enough -- and that's what makes such thinking dangerous, for some. Yet if Kyoto fails to gather momentum, as seems probable to many, what else can we do? Turn ourselves into ineffectual Mommy-cop states, with endless finger-pointing politics, trying to equally regulate both the rich in their SUVs and Chinese peasants who burn coal for warmth? Our present conventional wisdom might be termed The Puritan Solution -- Abstain, sinners! -- and is making slow, small progress. The Kyoto Accord calls for the industrial nations to reduce their carbon dioxide emissions to 7% below the 1990 level, and globally we are farther from this goal every year. These steps are early measures to help us assume our eventual 21st-century role as true stewards of the Earth, working alongside Nature. Recently Billy Graham declared that since the Bible made us stewards of the Earth, we have a holy duty to avert climate change. True stewards use the Garden's own methods.

_________________________________________________________________

MARCO IACOBONI
Neuroscientist; Director, Transcranial Magnetic Stimulation Lab, UCLA
[iacoboni100.gif]

Media Violence Induces Imitative Violence: The Problem With Super Mirrors

Media violence induces imitative violence. If true, this idea is dangerous for at least two main reasons. First, because its implications are highly relevant to the issue of freedom of speech. Second, because it suggests that our rational autonomy is much more limited than we like to think.
This idea is especially dangerous now, because we have discovered a plausible neural mechanism that can explain why observing violence induces imitative violence. Moreover, the properties of this neural mechanism -- the human mirror neuron system -- suggest that imitative violence may not always be a consciously mediated process. The argument for protecting even harmful speech (intended in a broad sense, including movies and videogames) has typically been that the effects of speech are always under the mental intermediation of the listener/viewer. If there is a plausible neurobiological mechanism that suggests that such an intermediate step can be bypassed, this argument is no longer valid. For more than 50 years behavioral data have suggested that media violence induces violent behavior in the observers. Meta-analytic data show that the effect size of media violence is much larger than the effect size of calcium intake on bone mass, or of asbestos exposure on cancer. Still, the behavioral data have been criticized. How is that possible? Two main types of data have been invoked: controlled laboratory experiments, and correlational studies assessing the types of media consumed and violent behavior. The lab data have been criticized for not having enough ecological validity, whereas the correlational data have been criticized for having no explanatory power. Here, as a neuroscientist who is studying the human mirror neuron system and its relations to imitation, I want to focus on a recent neuroscience discovery that may explain why the strong imitative tendencies that humans have may lead them to imitative violence when exposed to media violence. Mirror neurons are cells located in the premotor cortex, the part of the brain relevant to the planning, selection and execution of actions. In the ventral sector of the premotor cortex there are cells that fire in relation to specific goal-related motor acts, such as grasping, holding, tearing, and bringing to the mouth. Surprisingly, a subset of these cells -- what we call mirror neurons -- also fire when we observe somebody else performing the same action. The behavior of these cells seems to suggest that the observer is looking at her/his own actions reflected by a mirror while watching somebody else's actions. My group has also shown in several studies that human mirror neuron areas are critical to imitation. There is also evidence that the activation of this neural system is fairly automatic, thus suggesting that it may bypass conscious mediation. Moreover, mirror neurons also code the intention associated with observed actions, even though there is not a one-to-one mapping between actions and intentions (I can grasp a cup because I want to drink or because I want to put it in the dishwasher). This suggests that this system can indeed code sequences of actions (i.e., what happens after I grasp the cup), even though only one action in the sequence has been observed. Some years ago, when we were still a very small group of neuroscientists studying mirror neurons and were just starting to investigate the role of mirror neurons in intention understanding, we discussed the possibility of super mirror neurons. After all, if you have such a powerful neural system in your brain, you also want to have some control or modulatory neural mechanisms. We now have preliminary evidence suggesting that some prefrontal areas have super mirrors. I think super mirrors come in at least two flavors.
One is inhibition of overt mirroring, and the other -- the one that might explain why we imitate violent behavior, which requires a fairly complex sequence of motor acts -- is mirroring of sequences of motor actions. Super mirror mechanisms may provide a fairly detailed explanation of imitative violence after exposure to media violence.

_________________________________________________________________

BARRY C. SMITH
Philosopher, Birkbeck, University of London; Coeditor, Knowing Our Own Minds
[smithb100.gif]

What We Know May Not Change Us

Human beings, like everything else, are part of the natural world. The natural world is all there is. But to say that everything that exists is just part of the one world of nature is not the same as saying that there is just one theory of nature that will describe and explain everything there is. Reality may be composed of just one kind of stuff and properties of that stuff, but we need many different kinds of theories at different levels of description to account for everything there is. Theories at these different levels may not be reducible one to another. What matters is that they be compatible with one another. The astronomy Newton gave us was a triumph over supernaturalism because it united the mechanics of the sub-lunary world with an account of the heavenly bodies. In a similar way, biology allowed us to advance from a time when we saw life in terms of an elan vital. Today, the biggest challenge is to explain our powers of thinking and imagination, our abilities to represent and report our thoughts: the very means by which we engage in scientific theorising. The final triumph of the natural sciences over supernaturalism will be an account of the nature of conscious experience. The cognitive and brain sciences have done much to make that project clearer, but we are still a long way from a fully satisfying theory. But even if we succeed in producing a theory of human thought and reason, of perception, of conscious mental life, compatible with other theories of the natural and biological world, will we relinquish our cherished commonsense conceptions of ourselves as human beings, as selves who know ourselves best, who deliberate and decide freely on what to do and how to live? There is much evidence that we won't. As humans we conceive ourselves as centres of experience, self-knowing and free-willing agents. We see ourselves and others as acting on our beliefs, desires, hopes and fears, and as having responsibility for much that we do and all that we say. And even as results in neuroscience begin to show how much more automated, routinised and pre-conscious much of our behaviour is, we remain unable to let go of the self-beliefs that govern our day-to-day rationalisings and dealings with others. We are perhaps incapable of treating others as mere machines, even if that turns out to be what we are. The self-conceptions we have are firmly in place and sustained in spite of our best findings, and it may be a fact about human beings that it will always be so. We are curious about and interested in neuroscientists' findings, and we wonder at them and about their applications to ourselves; but as the great naturalistic philosopher David Hume knew, nature is too strong in us, and it will not let us give up our cherished and familiar ways of thinking for long.
Hume knew that however curious an idea and vision of ourselves we entertained in our study, or in the lab, when we returned to the world to dine and make merry with our friends, our most natural beliefs and habits returned and banished our stranger thoughts and doubts. It is likely, at this end of the year, that whatever we have learned and whatever we know about the errors of our thinking and about the fictions we maintain, those fictions will still remain the dominant guiding force in our everyday lives. We may not be comforted by this, but as creatures with minds who know they have minds -- perhaps the only minded creatures in nature in this position -- we are at least able to understand our own predicament.

_________________________________________________________________

PHILIP W. ANDERSON
Physicist, Princeton University; Nobel Laureate in Physics 1977; Author, The Economy as an Evolving Complex System
[anderson100.jpg]

Dark Energy might not exist

Let's try one in cosmology. The universe contains at least 3 and perhaps 4 very different kinds of matter, whose origins probably are physically completely different. There is the Cosmic Background Radiation (CBR), which is photons from the later parts of the Big Bang but is actually the residue of all the kinds of radiation that were in the Bang, like flavored hadrons and mesons, which have annihilated and become photons. You can count them and they tell you pretty well how many quanta of radiation there were in the beginning; and observation tells us that they were pretty uniformly distributed -- in fact very uniformly -- and still are. Next is radiant matter -- protons, mostly, and electrons. There are only a billionth as many of them as quanta of CBR, but as radiation in the Big Bang there were pretty much the same number, so all but one out of a billion combined with an antiparticle and annihilated. Nonetheless they are much heavier than the quanta of CBR, so they have, all told, much more mass, and have some cosmological effect on slowing down the Hubble expansion. There was an imbalance -- but what caused that? That imbalance was generated by some totally independent process, possibly during the very turbulent inflationary era. In fact, out to a tenth of the Hubble radius, which is as far as we can see, the protons are very non-uniformly distributed, in a fractal hierarchical clustering with things called "Great Walls" and giant near-voids. The conventional idea is that this is all caused by gravitational instability acting on tiny primeval fluctuations, and it barely could be, but in order to justify that you have to have another kind of matter. So you need -- and actually see, but indirectly -- Dark Matter, which is 30 times as massive, overall, as protons, but of which you can see nothing but its gravitational effects. No one has much clue as to what it is, but it seems to have to be assumed it is hadronic -- otherwise why would it be anywhere as close as a factor of 30 to the protons? But really, there is no reason at all to suppose its origin was related to the other two; you know only that if it consists of massive quanta of any kind, they are nowhere near as many as the CBR, and so most of them annihilated in the early stages. Again, we have no excuse for assuming that the imbalance in the Dark Matter was uniformly distributed primevally, even if the protons were, because we don't know what it is. Finally, of course, there is Dark Energy -- that is, if there is.
On that we can't even guess if it is quanta at all, but again we note that if it is, it probably doesn't add up in numbers to the CBR. The very strange coincidence is that when we add this in there isn't any total gravitation at all, and the universe as a whole is flat, as it would be, incidentally, if all of the heavy parts were distributed everywhere according to some random, fractal distribution like that of the matter we can see -- because on the largest scale, a fractal's density extrapolates to zero. That suggestion, implying that Dark Energy might not exist, is considered very dangerously radical.

The posterior probability of any particular God is pretty small

Here's another, which compared to many other people's propositions isn't so radical. Isn't God very improbable? You can't, in any logical system I can understand, disprove the existence of God, or prove it for that matter. But I think that in the probability calculus I use He is very improbable. There are a number of ways of making a formal probability theory which incorporate Ockham's razor, the principle that one must not multiply hypotheses unnecessarily. Two are called Bayesian probability theory, and Minimum Entropy. If you have been taking data on something, and the data are reasonably close to a straight line, these methods give us a definable procedure by which you can estimate the probability that the straight line is correct, not the polynomial which has as many parameters as there are points, or some intermediate complex curve. Ockham's razor is expressed mathematically as the fact that there is a factor in the probability derived for a given hypothesis that decreases exponentially in the number N of parameters that describe your hypothesis -- it is the inverse of the volume of parameter space. People who are trying to prove the existence of ESP abominate Bayesianism and this factor because it strongly favors the "null hypothesis" and beats them every time. Well, now, imagine how big the parameter space is for God. He could have a long gray beard or not, be benevolent or malicious in a lot of different ways and over a wide range of values; He can have a variety of views on abortion and contraception, like or abominate human images, like or abominate music, and the range of dietary prejudices He has been credited with is as long as your arm. There is the heaven-hell dimension, the one-vs-three question, and I haven't even mentioned polytheism. I think there are certainly as many parameters as sects, or more. If there is even a sliver of prior probability for the null hypothesis, the posterior probability of any particular God is pretty small.

_________________________________________________________________

TIMOTHY TAYLOR
Archaeologist, University of Bradford; Author, The Buried Soul
[taylor100.jpg]

The human brain is a cultural artefact

Phylogenetically, humans represent an evolutionary puzzle. Walking on two legs frees the hands to do new things, like chip stones to make modified tools -- the first artefacts, dating to 2.7 million years ago -- but it also narrows the pelvis and dramatically limits the size of the possible fetal cranium. Thus the brain expansion that began after 2 million years ago should not have happened.
But imagine that, alongside chipped stone tools, one genus of hominin appropriates the looped entrails of a dead animal, or learns to tie a simple knot, and invents a sling (chimpanzees are known to carry water in leaves and gorillas to measure water depth with sticks, so the practical and abstract thinking required here can be safely assumed for our human ancestors by this point). In its sling, the hominin child can now hip-ride with little impairment to its parent's hands-free movement. This has the unexpected and certainly unplanned consequence that it is no longer important for it to be able to hang on as chimps do. Although, due to the bio-mechanical constraints of a bipedal pelvis, the hominin child cannot be born with a big head (thus large initial brain capacity), it can now be born underdeveloped. That is to say, the sling frees fetuses to be born in an ever more ontogenically retarded state. This trend, which humans do indeed display, is called neoteny. The retention of earlier features for longer means that the total developmental sequence is extended in time far beyond the nine months of natural gestation. Hominin children, born underdeveloped, could grow their crania outside the womb in the pseudo-marsupial pouch of an infant-carrying sling. From this point onwards it is not hard to see how a distinctively human culture emerges through the extra-uterine formation of higher cognitive capacities -- the phylogenetic and ontogenic icing on the cake of primate brain function. The child, carried by the parent into social situations, watches vocalization. Parental selection for smart features such as an ability to babble early may well, as others have suggested, have driven the brain size increases until 250,000 years ago -- a point when the final bio-mechanical limits of big-headed mammals with narrow pelvises were reached by two species: Neanderthals and us. This is the phylogeny side of the case. In terms of ontogeny the obvious applies -- it recapitulates phylogeny. The underdeveloped brains of hominin infants were culture-prone, and in this sense I do not dissent from Dan Sperber's dangerous idea that 'culture is natural'. But human culture, unlike the basic culture of learned routines and tool-using observed in various mammals, is a system of signs -- essentially the association of words with things and the ascription and recognition of value in relation to this. As Ernest Gellner once pointed out, taken cross-culturally, as a species, humans exhibit by far the greatest range of behavioural variation of any animal. However, within any on-going community of people, with language, ideology and a culturally inherited and developed technology, conformity has usually been a paramount value, with death often the price for dissent. My belief is that, due to the malleability of the neotenic brain, cultural systems are physically built into the developing tissue of the mind. Instead of seeing the brain as the genetic hardware into which the cultural software is loaded, and then arguing about the relative determining influences of each in areas such as, say, sexual orientation or mathematical ability (the old nature-nurture debate), we can conclude that culture (as Richard Dawkins long ago noted in respect of contraception) acts to subvert genes, but is also enabled by them. Ontogenic retardation allowed both environment and the developing milieu of cultural routines to act on brain hardware construction alongside the working through of the genetic blueprint.
Just because the modern human brain is coded for by genes does not mean that the critical self-consciousness for which it (within its own community of brains) is famous is non-cultural, any more than a barbed-and-tanged arrowhead is non-cultural just because it is made of flint. The human brain has a capacity to go not just beyond nature, but beyond culture too, by dissenting from old norms and establishing others. The emergence of the high arts and science is part of this process of the human brain, with its instrumental extra-somatic adaptations and memory stores (books, laboratories, computers), and is underpinned by the most critical thing that has been brought into being in the encultured human brain: free will. However, not all humans, or all human communities, seem capable of equal levels of free will. In extreme cases they appear to display none at all. Reasons include genetic incapacity, but it is also possible for a lack of mental freedom to be culturally engendered, and sometimes even encouraged. Archaeologically, the evidence is there from the first farming societies in Europe: the Neolithic massacre at Talheim, where an entire community was genocidally wiped out except for the youngest children, has been taken as evidence (supported by anthropological analogies) of the re-enculturation of still-flexible minds within the community of the victors, to serve and live out their orphaned lives as slaves. In the future, one might surmise that the dark side of the development of virtual reality machines (described by Clifford Pickover) will be the infinitely more subtle cultural programming of impressionable individuals as sophisticated conformists. The interplay of genes and culture has produced in us potential for a formidable range of abilities and intelligences. It is critical that in the future we both fulfil and extend this potential in the realm of judgment, choice and understanding, in both sciences and arts. But the idea of the brain as a cultural artefact is dangerous. Those with an interest in social engineering -- tyrants and authoritarian regimes -- will almost certainly attempt to develop it to their advantage. Free will is threatening to the powerful who, by understanding its formation, will act to undermine it in sophisticated ways. The usefulness of cultural artefacts that have the degree of complexity of human brains makes our own species the most obvious candidate for the enhanced super-robot of the future: not just smart factory operatives and docile consumers, but cunning weapons-delivery systems (suicide bombers) and conformity-enforcers. At worst, the very special qualities of human life that have been enabled by our remarkable natural history, the confluence of genes and culture, could end up as a realm of freedom for an elite few.

_________________________________________________________________

OLIVER MORTON
Chief News and Features Editor at Nature; Author, Mapping Mars
[morton100.jpg]

Our planet is not in peril

The truth of this idea is pretty obvious. Environmental crises are a fundamental part of the history of the earth: there have been sudden and dramatic temperature excursions, severe glaciations, vast asteroid and comet impacts. Yet the earth is still here, unscathed. There have been mass extinctions associated with some of these events, while other mass extinctions may well have been triggered by subtler internal changes to the biosphere. But none of them seem to have done long-term harm.
The first ten million years of the Triassic may have been a little dull by comparison to the late Palaeozoic, what with a large number of the more interesting species being killed in the great mass extinction at the end of the Permian, but there is no evidence that any fundamentally important earth processes did not eventually recover. I strongly suspect that not a single basic biogeochemical innovation -- the sorts of thing that underlie photosynthesis and the carbon cycle, the nitrogen cycle, the sulphur cycle and so on -- has been lost in the past 4 billion years. Indeed, there is an argument to be made that mass extinctions are in fact a good thing, in that they wipe the slate clean a bit and thus allow exciting evolutionary innovations. This may be going a bit far. While the Schumpeter-for-the-earth-system position seems plausible, it also seems a little crudely progressivist. While to a mammal the Tertiary seems fairly obviously superior to the Cretaceous, it's not completely clear to me that there's an objective basis for that belief. In terms of primary productivity, for example, the Cretaceous may well have had an edge. But despite all this, it's hard to imagine that the world would be a substantially better place if it had not undergone the mass extinctions of the Phanerozoic. Against this background, the current carbon/climate crisis seems pretty small beer. The change in mean global temperatures seems quite unlikely to be much greater than the regular cyclical change between glacial and interglacial climates. Land use change is immense, but it's not clear how long it will last, and there are rich seedbanks in the soil that will allow restoration. If fossil fuel use goes unchecked, carbon dioxide levels may rise as high as they were in the Eocene, and do so at such a rate that they cause a transient spike in ocean acidity. But they will not stay at those high levels, and the Eocene was not such a terrible place. The earth doesn't need ice caps, or permafrost, or any particular sea level. Such things come and go and rise and fall as a matter of course. The planet's living systems adapt and flourish, sometimes in a way that provides negative feedback, occasionally with a positive feedback that amplifies the change. A planet that made it through the massive biogeochemical unpleasantness of the late Permian is in little danger from a doubling, or even a quintupling, of the very low carbon dioxide level that preceded the industrial revolution, or from the loss of a lot of forests and reefs, or from the demise of half its species, or from the thinning of its ozone layer at high latitudes. But none of this is to say that we as people should not worry about global change; we should worry a lot. This is because climate change may not hurt the planet, but it hurts people. In particular, it will hurt people who are too poor to adapt. Significant climate change will change rainfall patterns, and probably patterns of extreme events as well, in ways that could easily threaten the food security of hundreds of millions of people supporting themselves through subsistence agriculture or pastoralism. It will have a massive effect on the lives of the relatively small number of people in places where sea ice is an important part of the environment (and it seems unlikely that anything we do now can change that). In other, more densely populated places local environmental and biotic change may have similarly sweeping effects. 
Secondary to this, the loss of species, both known and unknown, will be experienced by some as a form of damage that goes beyond any deterioration in ecosystem services. Many people will feel themselves and their world diminished by such extinctions even when they have no practical consequences, despite the fact that they cannot ascribe an objective value to their loss. One does not have to share the values of these people to recognise their sincerity. All of these effects provide excellent reasons to act. And yet many people in the various green movements feel compelled to add on the notion that the planet itself is in crisis, or doomed; that all life on earth is threatened. And in a world where that rhetoric is common, the idea that this eschatological approach to the environment is baseless is a dangerous one. Since the 1970s the environmental movement has based much of its appeal on personifying the planet and making it seem like a single entity, then seeking to place it in some ways "in our care". It is a very powerful notion, and one which benefits from the hugely influential iconographic backing of the first pictures of the earth from space; it has inspired much of the good that the environmental movement has done. The idea that the planet is not in peril could thus come to undermine the movement's power. This is one of the reasons people react against the idea so strongly. One respected and respectable climate scientist reacted to Andy Revkin's recent use of the phrase "In fact, the planet has nothing to worry about from global warming" in the New York Times with near-apoplectic fury. If the belief that the planet is in peril were merely wrong, there might be an excuse for ignoring it, though basing one's actions on lies is an unattractive proposition. But the planet-in-peril idea is an easy target for those who, for various reasons, argue against any action on the carbon/climate crisis at all. In this, bad science is a hostage to fortune. What's worse, the idea distorts environmental reasoning, too. For example, laying stress on the non-issue of the health of the planet, rather than the real issues of effects that harm people, leads to a general preference for averting change rather than adapting to it, even though providing the wherewithal for adaptation will often be the most rational response. The planet-in-peril idea persists in part simply through widespread ignorance of earth history. But some environmentalists, and perhaps some environmental reporters, will argue that the inflated rhetoric that trades on this error is necessary in order to keep the show on the road. The idea that people can be more easily persuaded to save the planet, which is not in danger, than their fellow human beings, who are, is an unpleasant and cynical one; another dangerous idea, not least because it may indeed hold some truth. But if putting the planet at the centre of the debate is a way of involving everyone, of making us feel that we're all in this together, then one can't help noticing that the ploy isn't working out all that well. In the rich nations, many people may indeed believe that the planet is in danger -- but they don't believe that they are in danger, and perhaps as a result they're not clamouring for change loudly enough, or in the right way, to bring it about. There is also a problem of learned helplessness.
I suspect people are flattered, in a rather perverse way, by the idea that their lifestyle threatens the whole planet, rather than just the livelihoods of millions of people they have never met. But the same sense of scale that flatters may also enfeeble. They may come to think that the problems are too great for them to do anything about.

Rolling carbon/climate issues into the great moral imperative of improving the lives of the poor, rather than relegating them to the dodgy rhetorical level of a threat to the planet as a whole, seems more likely to be a sustainable long-term strategy. The most important thing about environmental change is that it hurts people; the basis of our response should be human solidarity. The planet will take care of itself.

_________________________________________________________________

SAMUEL BARONDES
Neurobiologist and Psychiatrist, University of California San Francisco; Author, Better Than Prozac

Using Medications To Change Personality

Personality -- the pattern of thoughts, feelings, and actions that is typical of each of us -- is generally formed by early adulthood. But many people still want to change. Some, for example, consider themselves too gloomy and uptight and want to become more cheerful and flexible. Whatever their aims, they often turn to therapists, self-help books, and religious practices.

In the past few decades certain psychiatric medications have become an additional tool for those seeking control of their lives. Initially designed to be used for a few months to treat episodic psychological disturbances such as severe depression, they are now being widely prescribed for indefinite use to produce sustained shifts in certain personality traits. Prozac is the best known of them, but many others are on the market or in development. By directly affecting brain circuits that control emotions, these medications can produce desirable effects that may be hard to replicate by sheer force of will or by behavioral exercises. Millions keep taking them continuously, year after year, to modulate personality.

Nevertheless, despite the testimonials and apparent successes, the sustained use of such drugs to change personality should still be considered dangerous. Not because manipulation of brain chemicals is intrinsically cowardly, immoral, or a threat to the social order. In the opinion of experienced clinicians, medications such as Prozac may actually have the opposite effect, helping to build character and to increase personal responsibility. The real danger is that there are no controlled studies of the effects of these drugs on personality over the many years or even decades in which some people are taking them. So we are left with a reliance on opinion and belief. And this, as in all fields, we know to be dangerous.

_________________________________________________________________

DAVID BODANIS
Writer, Consultant; Author, The Electric Universe

The hyper-Islamicist critique of the West as a decadent force that is already on a downhill course might be true

I wonder sometimes if the hyper-Islamicist critique of the West as a decadent force that is already on a downhill course might be true. At first it seems impossible: no one's richer than the US, and no one has as powerful an army; western Europe has vast wealth and university skills as well. But what got me reflecting was the fact that in just four years after Pearl Harbor, the US had defeated two of the greatest military forces the world had ever seen.
Everyone naturally accepted there had to be restrictions on gasoline sales, to preserve limited supplies of gasoline and rubber; profiteers were hated. But the first four years after 9/11? Detroit automakers find it easy to continue paying off congressmen to ensure that gasoline-wasting SUVs aren't restricted in any way.

There are deep trends behind this. Technology is supposed to be speeding up, but if you think about it, airplanes have a similar feel and speed to ones of 30 years ago; cars and oil rigs and credit cards and the operations of the NYSE might be a bit more efficient than a few decades ago, but also don't feel fundamentally different. Aside from the telephones, almost all the objects and daily habits in Spielberg's 20-year-old film E.T. are about the same as today.

What has changed is the possibility of quick change: it's a lot, lot harder than it was before. Patents for vague, general ideas are much easier to get than they were before, which slows down the introduction of new technology; academics in biotech and other fields are wary about sharing their latest research with potentially competing colleagues (which slows down the creation of new technology as well). Even more, there's a tension, a fear of falling from the increasingly fragile higher tiers of society, which means that social barriers are higher as well. I went to adequate but not extraordinary public (state) schools in Chicago, but my children go to private schools. I suspect that many contributors to this site, unless they live in academic towns where state schools are especially strong, are in a similar position. This is fine for our children, but not for children of the same theoretical potential whose parents cannot afford it.

Sheer inertia can mask such flaws for quite a while. The National Academy of Sciences has shown that, once again, the percentage of American-born university students studying the hard physical sciences has gone down. At one time that didn't matter, for life in America -- and at the top American universities -- was an overwhelming lure for ambitious youngsters from Seoul and Bangalore. But already there are signs of that slipping, and who knows what it'll be like in another decade or two.

There's another sort of inertia that's coming to an end as well. The first generation of immigrants from farm to city brings with it the attitudes of the farm world; the first generation of 'migrants' from blue-collar city neighborhoods to upper-middle-class professional life brings similar attitudes of responsibility as well. We ignore what the media pours out about how we're supposed to live. We're responsible for parents, even when it's not to our economic advantage; we vote against our short-term economic interests, because it's the 'right' thing to do; we engage in philanthropy towards individuals of very different backgrounds from ourselves. But why? In many parts of America or Europe, the rules and habits creating those attitudes no longer exist at all. When that finally gets cut away, will what replaces it be strong enough for us to survive?

_________________________________________________________________

NICHOLAS HUMPHREY
Psychologist, London School of Economics; Author, The Mind Made Flesh

It is undesirable to believe in a proposition when there is no ground whatever for supposing it true

Bertrand Russell's idea, put forward 80 years ago, is about as dangerous as they come.
I don't think I can better it: "I wish to propose for the reader's favourable consideration a doctrine which may, I fear, appear wildly paradoxical and subversive. The doctrine in question is this: that it is undesirable to believe in a proposition when there is no ground whatever for supposing it true." (The opening lines of his Sceptical Essays.)

_________________________________________________________________

ERIC FISCHL
Artist, New York City; Mary Boone Gallery

The unknown becomes known, and is not replaced with a new unknown

Several years ago I stood in front of a painting by Vermeer. It was a painting of a woman reading a letter. She stood near the window for better lighting, and behind her hung a map of the known world. I was stunned by the revelation of this work. Vermeer understood something so basic to human need it had gone virtually unnoticed: communication from afar.

Everything we have done to make us more capable, more powerful, better protected, more intelligent, has been by enhancing our physical limitations, our perceptual abilities, our adaptability. When I think of Vermeer's woman reading the letter I wonder how long it took to get to her. Then I think, my god, at some time we developed a system in which one could leave home and send word back! We figured out a way that we could be heard from far away and then another system so that we can be seen from far away. Then I start to marvel at the alchemy of painting and how we have been able to invest materials with consciousness so that Vermeer can talk to me across time!

I see too he has put me in the position of not knowing, as I am kept from reading the content of the letter. In this way he has placed me at the edge, the frontier of wanting to know what I cannot know. I want to know how long this letter sender has been away and what he was doing all this time. Is he safe? Does he still love her? Is he on his way home? Vermeer puts me into what had been her condition of uncertainty. All I can do is wonder and wait.

This makes me think about how not knowing is so important. Not knowing makes the world large and uncertain and our survival tenuous. It is a mystery why humans roam and still more a mystery why we still need to feel so connected to the place we have left. The not knowing causes such profound anxiety it, in turn, spawns creativity. The impetus for this creativity is empowerment. Our gadgets, gizmos, networks of transportation and communication, have all been developed either to explore, utilize or master the unknown territory.

If the unknown becomes known, and is not replaced with a new unknown, if the farther we reach outward is connected only to how fast we can bring it home, if the time between not knowing and knowing becomes too small, creativity will be daunted. And so I worry: if we bring the universe more completely, more effortlessly, into our homes, will there be less reason to leave them?

_________________________________________________________________

STANISLAS DEHAENE
Cognitive Neuropsychology Researcher, Institut National de la Santé, Paris; Author, The Number Sense

Touching and pushing the limits of the human brain

From Copernicus to Darwin to Freud, science has a special way of deflating human hubris by proposing what is frequently perceived, at the time, as dangerous or pernicious ideas.
Today, cognitive neuroscience presents us with a new challenging idea, whose accommodation will require substantial personal and societal effort -- the discovery of the intrinsic limits of the human brain.

Calculation was one of the first domains where we lost our special status -- right from their inception, computers were faster than the human brain, and they are now billions of times ahead of us in their speed and breadth of number crunching. Psychological research shows that our mental "central executive" is amazingly limited -- we can process only one thought at a time, at a meager rate of five or ten per second at most. This is rather surprising. Isn't the human brain supposed to be the most massively parallel machine on earth? Yes, but its architecture is such that the collective outcome of this parallel organization, our mind, is a very slow serial processor.

What we can become aware of is intrinsically limited. Whenever we delve deeply into the processing of one object, we become literally blind to other items that would require our attention (the "attentional blink" paradigm). We also suffer from an "illusion of seeing": we think that we take in a whole visual scene and see it all at once, but research shows that major chunks of the image can be changed surreptitiously without our noticing.

True, relative to other animal species, we do have a special combinatorial power, which lies at the heart of the remarkable cultural inventions of mathematics, language, or writing. Yet this combinatorial faculty only works on the raw materials provided by a small number of core systems for number, space, time, emotion, conspecifics, and a few other basic domains. The list is not very long -- and within each domain, we are now discovering lots of little ill-adapted quirks, evidence of stupid design as expected from a brain arising from an imperfect evolutionary process (for instance, our number system only gives us a sense of approximate quantity -- good enough for foraging, but not for exact mathematics). I therefore do not share Marc Hauser's optimism that our mind has a "universal" or "limitless" expressive power. The limits are easy to touch in mathematics, in topology for instance, where we struggle with the simplest objects (is a curve a knot... or not?).

As we discover the limits of the human brain, we also find new ways to design machines that go beyond those limits. Thus, we have to get ready for a society where, more and more, the human mind will be replaced by better computers and robots -- and where the human operator will be increasingly considered a nuisance rather than an asset. This is already the case in aeronautics, where flight stability is ensured by fast cybernetics and where landing and takeoff will soon be handled by computer, apparently with much improved safety.

There are still a few domains where the human brain maintains an apparent superiority. Visual recognition used to be one -- but already, superb face recognition software is appearing, capable of storing and recognizing thousands of faces with close to human performance. Robotics is another. No robot to date is capable of navigating smoothly through a complicated 3-D world. Yet a third area of human superiority is high-level semantics and creativity: the human ability to make sense of a story, to pull out the relevant knowledge from a vast store of potentially useful facts, remains unequalled.
Suppose that, for the next 50 years, those are the main areas in which engineers will remain unable to match the performance of the human brain. Are we ready for a world in which the human contributions are binary, either at the highest level (thinkers, engineers, artists...) or at the lowest level, where human workforce remains cheaper than mechanization? To some extent, I would argue that this great divide is already here, especially between North and South, but also within our developed countries, between upper and lower castes.

What are the solutions? I envisage two of them. The first is education. The human brain to some extent is changeable. Thanks to education, we can improve considerably upon the stock of mental tools provided to us by evolution. In fact, relative to the large changes that schooling can provide, whatever neurobiological differences distinguish the sexes or the races are minuscule (and thus largely irrelevant -- contra Steve Pinker). The crowning achievements of Sir Isaac Newton are now accessible to any student of physics and algebra -- whatever his or her skin color. Of course, our learning ability isn't without bounds. It is itself tightly limited by our genes, which merely allow a fringe of variability in the laying down of our neuronal networks. We never fully gain entirely new abilities -- but merely transform our existing brain networks, a partial and constrained process that I have called "cultural recycling" or "recyclage". As we gain knowledge of brain plasticity, a major application of cognitive neuroscience research should be the improvement of life-long education, with the goal of optimizing this transformation of our brains.

Consider reading. We now understand much better how this cultural capacity is laid down. A posterior brain network, initially evolved to recognize objects and faces, gets partially recycled for the shapes of letters and words, and learns to connect these shapes to other temporal areas for sounds and words. Cultural evolution has modified the shapes of letters so that they are easily learnable by this brain network. But the system remains amazingly imperfect. Reading still has to go through the lopsided design of the retina, where the blood vessels are put in front of the photoreceptors, and where only a small region of the fovea has enough resolution to recognize small print. Furthermore, both the design of writing systems and the way in which they are taught are perfectible. In the end, after years of training, we can only read at an appalling speed of perhaps 10 words per second, a baud rate surpassed by any present-day modem. Nevertheless, this cultural invention has radically changed our cognitive abilities, doubling our verbal working memory, for instance. Who knows what other cultural inventions might lie ahead of us, and might allow us to further push the limits of our brain biology?

A second, more futuristic solution may lie in technology. Brain-computer interfaces are already around the corner. They are currently being developed for therapeutic purposes. Soon, cortical implants will allow paralyzed patients to move equipment by direct cerebral command. Will such devices later be applied to the normal human brain, in the hopes of extending our memory span or the speed of our access to information? And will we be able to forge a society in which such tools do not lead to further divisions between, on the one hand, high-tech brains powered by the best education and neuro-gear, and on the other hand, low-tech manpower just good enough for cheap jobs?
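Dehaene's modem comparison above is easy to check with rough numbers. The sketch below assumes ten words per second, five characters per word, and the usual eight bits per character; these figures are illustrative assumptions of mine, not Dehaene's:

    # Rough throughput of skilled reading vs. a dial-up modem.
    # All numbers are ballpark assumptions for illustration.
    WORDS_PER_SEC = 10    # Dehaene's estimate of fast reading
    CHARS_PER_WORD = 5    # typical English average (assumed)
    BITS_PER_CHAR = 8     # raw ASCII encoding, ignoring compression

    reading_bps = WORDS_PER_SEC * CHARS_PER_WORD * BITS_PER_CHAR
    modem_bps = 56_000    # a late-1990s consumer modem

    print(f"reading: ~{reading_bps} bits/s")                  # ~400 bits/s
    print(f"56k modem is ~{modem_bps / reading_bps:.0f}x faster")

Even on generous assumptions, skilled reading moves two orders of magnitude less raw data per second than a decade-old modem, which is the point of the comparison.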
_________________________________________________________________

JOEL GARREAU
Cultural Revolution Correspondent, Washington Post; Author, Radical Evolution

Suppose Faulkner was right?

In his December 10, 1950, Nobel Prize acceptance speech, William Faulkner said:

I decline to accept the end of man. It is easy enough to say that man is immortal simply because he will endure: that when the last ding-dong of doom has clanged and faded from the last worthless rock hanging tideless in the last red and dying evening, that even then there will still be one more sound: that of his puny inexhaustible voice, still talking. I refuse to accept this. I believe that man will not merely endure: he will prevail. He is immortal, not because he alone among creatures has an inexhaustible voice, but because he has a soul, a spirit capable of compassion and sacrifice and endurance. The poet's, the writer's, duty is to write about these things. It is his privilege to help man endure by lifting his heart, by reminding him of the courage and honor and hope and pride and compassion and pity and sacrifice which have been the glory of his past. The poet's voice need not merely be the record of man, it can be one of the props, the pillars to help him endure and prevail.

It's easy to dismiss such optimism. The reason I hope Faulkner was right, however, is that we are at a turning point in history. For the first time, our technologies are not so much aimed outward at modifying our environment in the fashion of fire, clothes, agriculture, cities and space travel. Instead, they are increasingly aimed inward at modifying our minds, memories, metabolisms, personalities and progeny. If we can do all that, then we are entering an era of engineered evolution -- radical evolution, if you will -- in which we take control of what it will mean to be human.

This is not some distant, science-fiction future. This is happening right now, in our generation, on our watch. The GRIN technologies -- the genetic, robotic, information and nano processes -- are following curves of accelerating technological change the arithmetic of which suggests that the last 20 years are not a guide to the next 20 years. We are more likely to see that magnitude of change in the next eight. Similarly, the amount of change of the last half century, going back to the time when Faulkner spoke, may well be compressed into the next 14.

This raises the question of where we will gain the wisdom to guide this torrent, and points to what happens if Faulkner was wrong. If we humans are not so much able to control our tools, but instead come to be controlled by them, then we will be heading into a technodeterminist future. You can get different versions of what that might mean. Some would have you believe that a future in which our creations eliminate the ills that have plagued mankind for millennia -- conquering pain, suffering, stupidity, ignorance and even death -- is a vision of heaven. Some even welcome the idea that someday soon, our creations will surpass the pitiful limitations of Version 1.0 humans, themselves becoming a successor race that will conquer the universe, and care for us benevolently.
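The arithmetic Garreau alludes to can be reproduced under a single assumption, often associated with writers like Ray Kurzweil and not stated explicitly here: that the overall pace of change doubles roughly every decade. A minimal sketch under that assumption:

    import math

    # If the pace of change doubles every d years, cumulative "progress"
    # between years a and b (negative = past, 0 = now) is the integral
    # of 2**(t / d), which has the closed form below.
    def progress(a, b, d=10.0):
        k = d / math.log(2)
        return k * (2 ** (b / d) - 2 ** (a / d))

    # Under a ten-year doubling time, the next ~8 years hold about as
    # much change as the previous 20 -- matching the figures quoted above.
    print(f"last 20 years: {progress(-20, 0):.1f} units")  # ~10.8
    print(f"next 8 years:  {progress(0, 8):.1f} units")    # ~10.7

Garreau's "50 years into the next 14" corresponds to a slightly longer doubling time (around 15 years); either way, modest exponential assumptions produce exactly this kind of telescoping.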
Others feel strongly that a life without suffering is a life without meaning, reducing humankind to ignominious, character-less husks. They also point to what could happen if such powerful self-replicating technologies get into the hands of bumblers or madmen. They can easily imagine a vision of hell in which we wipe out not only our species, but all of life on earth.

If Faulkner is right, however, there is a third possible future. That is the one that counts on the ragged human convoy of divergent perceptions, piqued honor, posturing, insecurity and humor once again wending its way to glory. It puts a shocking premium on Faulkner's hope that man will prevail "because he has a soul, a spirit capable of compassion and sacrifice and endurance." It assumes that even as change picks up speed, giving us less and less time to react, we will still be able to rely on the impulse that Churchill described when he said, "Americans can always be counted on to do the right thing -- after they have exhausted all other possibilities."

The key measure of such a "prevail" scenario's success would be an increasing intensity of links between humans, not transistors. If some sort of transcendence is achieved beyond today's understanding of human nature, it would not be through some individual becoming superman. Transcendence would be social, not solitary. The measure would be the extent to which many transform together.

The very fact that Faulkner's proposition looms so large as we look into the future does at least illuminate the present. Referring to Faulkner's breathtaking line, "when the last ding-dong of doom has clanged and faded from the last worthless rock hanging tideless in the last red and dying evening, that even then there will still be one more sound: that of his puny inexhaustible voice, still talking," the author Bruce Sterling once told me, "You know, the most interesting part about that speech is that part right there, where William Faulkner, of all people, is alluding to H. G. Wells and the last journey of the Traveler from The Time Machine. It's kind of a completely heartfelt, probably drunk mishmash of cornball crypto-religious literary humanism and the stark, bonkers, apocalyptic notions of atomic Armageddon, human extinction, and deep Darwinian geological time. Man, that was the 20th century all over."

_________________________________________________________________

HELEN FISHER
Research Professor, Department of Anthropology, Rutgers University; Author, Why We Love

If patterns of human love subtly change, all sorts of social and political atrocities can escalate

Serotonin-enhancing antidepressants (such as Prozac and many others) can jeopardize feelings of romantic love, feelings of attachment to a spouse or partner, one's fertility and one's genetic future. I am working with psychiatrist Andy Thomson on this topic. We base our hypothesis on patient reports, fMRI studies, and other data on the brain.

Foremost, as SSRIs elevate serotonin they also suppress dopaminergic pathways in the brain. And because romantic love is associated with elevated activity in dopaminergic pathways, it follows that SSRIs can jeopardize feelings of intense romantic love. SSRIs also curb obsessive thinking and blunt the emotions -- central characteristics of romantic love. One patient described this reaction well, writing: "After two bouts of depression in 10 years, my therapist recommended I stay on serotonin-enhancing antidepressants indefinitely.
As appreciative as I was to have regained my health, I found that my usual enthusiasm for life was replaced with blandness. My romantic feelings for my wife declined drastically. With the approval of my therapist, I gradually discontinued my medication. My enthusiasm returned and our romance is now as strong as ever. I am prepared to deal with another bout of depression if need be, but in my case the long-term side effects of antidepressants render them off limits."

SSRIs also suppress sexual desire, sexual arousal and orgasm in as many as 73% of users. These sexual responses evolved to enhance courtship, mating and parenting. Orgasm produces a flood of oxytocin and vasopressin, chemicals associated with feelings of attachment and pairbonding behaviors. Orgasm is also a device by which women assess potential mates. Women do not reach orgasm with every coupling, and the "fickle" female orgasm is now regarded as an adaptive mechanism by which women distinguish males who are willing to expend time and energy to satisfy them. The onset of female anorgasmia may jeopardize the stability of a long-term mateship as well.

Men who take serotonin-enhancing antidepressants also inhibit evolved mechanisms for mate selection, partnership formation and marital stability. The penis stimulates to give pleasure and advertise the male's psychological and physical fitness; it also deposits seminal fluid in the vaginal canal, fluid that contains dopamine, oxytocin, vasopressin, testosterone, estrogen and other chemicals that most likely influence a female partner's behavior.

These medications can also influence one's genetic future. Serotonin increases prolactin by stimulating prolactin-releasing factors. Prolactin can impair fertility by suppressing hypothalamic GnRH release, suppressing pituitary FSH and LH release, and/or suppressing ovarian hormone production. Clomipramine, a strong serotonin-enhancing antidepressant, adversely affects sperm volume and motility.

I believe that Homo sapiens has evolved (at least) three primary, distinct yet overlapping neural systems for reproduction. The sex drive evolved to motivate ancestral men and women to seek sexual union with a range of partners; romantic love evolved to enable them to focus their courtship energy on a preferred mate, thereby conserving mating time and energy; attachment evolved to enable them to rear a child through infancy together. The complex and dynamic interactions between these three brain systems suggest that any medication that changes their chemical checks and balances is likely to alter an individual's courting, mating and parenting tactics, ultimately affecting their fertility and genetic future.

The reason this is a dangerous idea is that the huge drug industry is heavily invested in selling these drugs; millions of people currently take these medications worldwide; and as these drugs become generic, many more will soon imbibe -- inhibiting their ability to fall in love and stay in love. And if patterns of human love subtly change, all sorts of social and political atrocities can escalate.

_________________________________________________________________

PAUL DAVIES
Physicist, Macquarie University, Sydney; Author, How to Build a Time Machine

The fight against global warming is lost

Some countries, including the United States and Australia, have been in denial about global warming. They cast doubt on the science that set alarm bells ringing.
Other countries, such as the UK, are in panic, and want to make drastic cuts in greenhouse emissions. Both stances are irrelevant, because the fight is a hopeless one anyway.

In spite of the recent hike in the price of oil, the stuff is still cheap enough to burn. Human nature being what it is, people will go on burning it until it starts running out and simple economics puts the brakes on. Meanwhile the carbon dioxide levels in the atmosphere will just go on rising. Even if developed countries rein in their profligate use of fossil fuels, the emerging Asian giants of China and India will more than make up the difference. Rich countries, whose own wealth derives from decades of cheap energy, can hardly preach restraint to developing nations trying to climb the wealth ladder. And without the obvious solution -- massive investment in nuclear energy -- continued warming looks unstoppable.

Campaigners for cutting greenhouse emissions try to scare us by proclaiming that a warmer world is a worse world. My dangerous idea is that it probably won't be. Some bad things will happen. For example, the sea level will rise, drowning some heavily populated or fertile coastal areas. But in compensation Siberia may become the world's breadbasket. Some deserts may expand, but others may shrink. Some places will get drier, others wetter. The evidence that the world will be worse off overall is flimsy. What is certainly the case is that we will have to adjust, and adjustment is always painful. Populations will have to move. In 200 years some currently densely populated regions may be deserted. But the population movements over the past 200 years have been dramatic too. I doubt if anything more drastic will be necessary. Once it dawns on people that, yes, the world really is warming up and that, no, it doesn't imply Armageddon, then international agreements like the Kyoto Protocol will fall apart.

The idea of giving up the global warming struggle is dangerous because it shouldn't have come to this. Mankind does have the resources and the technology to cut greenhouse gas emissions. What we lack is the political will. People pay lip service to environmental responsibility, but they are rarely prepared to put their money where their mouth is. Global warming may turn out to be not so bad after all, but many other acts of environmental vandalism are manifestly reckless: the depletion of the ozone layer, the destruction of rain forests, the pollution of the oceans. Giving up on global warming will set an ugly precedent.

_________________________________________________________________

APRIL GORNIK
Artist, New York City; Danese Gallery

The exact effect of art can't be controlled or fully anticipated

Great art makes itself vulnerable to interpretation, which is one reason that it keeps being stimulating and fascinating for generations. The problem inherent in this is that art could inspire malevolent behavior, as per the notion popularly expressed by A Clockwork Orange. When I was young, aspiring to be a conceptual artist, it disturbed me greatly that I couldn't control the interpretation of my work. When I began painting, it was even worse; even I wasn't completely sure of what my art meant. That seemed dangerous for me, personally, at that time. I gradually came not only to respect the complexity and inscrutability of painting and art, but to see how it empowers the object. I believe that works of art are animated by their creators, and remain able to generate thoughts, feelings, responses.
However, the fact is that the exact effect of art can't be controlled or fully anticipated.

_________________________________________________________________

JAMSHED BHARUCHA
Professor of Psychology, Provost, Senior Vice President, Tufts University

The more we discover about cognition and the brain, the more we will realize that education as we know it does not accomplish what we believe it does

It is not my purpose to echo familiar critiques of our schools. My concerns are of a different nature and apply to the full spectrum of education, including our institutions of higher education, which arguably are the finest in the world.

Our understanding of the intersection between genetics and neuroscience (and their behavioral correlates) is still in its infancy. This century will bring forth an explosion of new knowledge on the genetic and environmental determinants of cognition and brain development, on what and how we learn, on the neural basis of human interaction in social and political contexts, and on variability across people. Are we prepared to transform our educational institutions if new science challenges cherished notions of what and how we learn? As we acquire the ability to trace genetic and environmental influences on the development of the brain, will we as a society be able to agree on what our educational objectives should be?

Since the advent of scientific psychology we have learned a lot about learning. In the years ahead we will learn a lot more that will continue to challenge our current assumptions. We will learn that some things we currently assume are learnable are not (and vice versa), that some things that are learned successfully don't have the impact on future thinking and behavior that we imagine, and that some of the learning that impacts future thinking and behavior is not what we spend time teaching. We might well discover that the developmental time course for optimal learning from infancy through the life span is not reflected in the standard educational time line around which society is organized. As we discover more about the gulf between how we learn and how we teach, hopefully we will also discover ways to redesign our systems -- but I suspect that the latter will lag behind the former.

Our institutions of education certify the mastery of spheres of knowledge valued by society. Several questions will become increasingly pressing, and are even pertinent today. How much of this learning persists beyond the time at which acquisition is certified? How does this learning impact the lives of our students? How central is it in shaping the thinking and behavior we would like to see among educated people as they navigate, negotiate and lead in an increasingly complex world? We know that tests and admissions processes are selection devices that sort people into cohorts on the basis of excellence on various dimensions. We know less about how much even our finest examples of teaching contribute to human development over and above selection and motivation.

Even current knowledge about cognition (specifically, our understanding of active learning, memory, attention, and implicit learning) has not fully penetrated our educational practices, because of inertia as well as a natural lag in the application of basic research. For example, educators recognize that active learning is superior to the passive transmission of knowledge. Yet we have a long way to go to adapt our educational practices to what we already know about active learning.
We know from research on memory that learning trials bunched up in time produce less long-term retention than the same learning trials spread over time. Yet we compress learning into discrete packets called courses, we test learning at the end of a course of study, and then we move on. Furthermore, memory for both facts and methods of analytic reasoning is context-dependent. We don't know how much of this learning endures, how well it transfers to contexts different from the ones in which the learning occurred, or how it influences future thinking.
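The spacing effect Bharucha cites here can be illustrated with a toy forgetting model; the model and its parameters are my own illustrative assumptions, not his. Each restudy boosts a memory's stability, and the boost is larger when more has been forgotten, so spaced trials beat massed ones at a delayed test:

    import math

    # Toy forgetting model: strength decays as exp(-gap / stability), and
    # each restudy raises stability in proportion to how much had been
    # forgotten. Parameters are illustrative, not fitted to data.
    def retention(study_days, test_day, stability=1.0, boost=2.0):
        last = study_days[0]
        for day in study_days[1:]:
            strength = math.exp(-(day - last) / stability)
            stability += boost * (1.0 - strength)  # harder recall -> bigger boost
            last = day
        return math.exp(-(test_day - last) / stability)

    massed = [0, 0.1, 0.2, 0.3]  # four trials crammed into one session
    spaced = [0, 2, 4, 6]        # the same four trials spread over a week

    print(f"retention at day 10: massed={retention(massed, 10):.3f}, "
          f"spaced={retention(spaced, 10):.3f}")  # spaced wins by a wide margin

Any model in this family reproduces the qualitative finding; the particular numbers matter less than the shape.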
At any given time we attend to only a tiny subset of the information in our brains or impinging on our senses. We know from research on attention that information is processed differently by the brain depending upon whether or not it is attended, and that many factors -- endogenous and exogenous -- control our attention. Educators have been aware of the role of attention in learning, but we are still far from understanding how to incorporate this knowledge into educational design. Moreover, new information presented in a learning situation is interpreted and encoded in terms of prior knowledge and experience; the increasingly diverse backgrounds of students placed in the same learning contexts imply that the same information may vary in its meaningfulness to different students and may be recalled differently.

Most of our learning is implicit, acquired automatically and unconsciously from interactions with the physical and social environment. Yet language -- and hence explicit, declarative or consciously articulated knowledge -- is the currency of formal education. Social psychologists know that what we say about why we think and act as we do is but the tip of a largely unconscious iceberg that drives our attitudes and our behavior. Even as cognitive and social neuroscience reveals the structure of these icebergs under the surface of consciousness (for example, persistent cognitive illusions, decision biases and perceptual biases to which even the best educated can be unwitting victims), it will be less clear how to shape or redirect them. Research in social cognition shows clearly that racial, cultural and other social biases get encoded automatically by internalizing stereotypes and cultural norms. While we might learn about this research in college, we aren't sure how to counteract these factors in the very minds that have acquired this knowledge.

We are well aware of the power of non-verbal auditory and visual information, which, when amplified by electronic media, captures the attention of our students and sways millions. Future research should give us a better understanding of nuanced non-verbal forms of communication, including their universal and culturally based aspects, as they are manifest in social, political and artistic contexts.

Even the acquisition of declarative knowledge through language -- the traditional domain of education -- is being usurped by the internet at our fingertips. Our university libraries and publication models are responding to the opportunities and challenges of the information age. But we will need to rethink some of our methods of instruction too. Will our efforts at teaching be drowned out by information from sources more powerful than even the best classroom teacher?

It is only a matter of time before we have brain-related technologies that can alter or supplement cognition, influence what and how we learn, and increase competition for our limited attention. Imagine the challenges for institutions of education in an environment in which these technologies are readily available, for better or worse.

The brain is a complex organ, and we will discover more of this complexity. Our physical, social and information environments are also complex and are becoming more so through globalization and advances in technology. There will be no simple design principles for how we structure education in response to these complexities. As elite colleges and universities, we see increasing demand for the branding we confer, but we will also see greater scrutiny from society for the education we deliver. Those of us in positions of academic leadership will need wisdom and courage to examine, transform and justify our objectives and methods as educators.

_________________________________________________________________

JORDAN POLLACK
Computer Scientist, Brandeis University

Science as just another Religion

We scientists like to think that our "way of knowing" is special. Instead of holding beliefs based on faith in invisible omniscient deities, or parchments transcribed from oral cultures, we use the scientific method to discover and know. Truth may be eternal, but human knowledge of that truth evolves over time, as new questions are asked, data is recorded, hypotheses are tested, and replication and refutation mechanisms correct the record. So it is a very dangerous idea to consider Science as just another Religion.

It's not my idea, but one I noticed growing in a set of Lakoffian Frames within the Memesphere. One of the frames is that scientists are doom-and-gloom prophets. For example, at a recent popular technology conference, a parade of speakers spoke about the threats of global warming, the sea level rising by 18 feet and destroying cities, more category 5 hurricanes, etc. It was quite a reversal from the positivistic techno-utopian promises of miraculous advances in medicine, computers, and weaponry that have allowed science to bloom in the late 20th century. A friend pointed out that -- in the days before Powerpoint -- these scientists might be wearing sandwich-board signs saying "The End is Near!"

Another element in the framing of science as a religion is the response to evidence-based policy. Scientists who do take political stands on "moral" issues such as stem-cell research, the death penalty, nuclear weapons, global warming, etc., can be sidelined as atheists, humanists, or agnostics who have no moral or ethical standing outside their narrow specialty (as compared to, say, televangelist preachers).

A third, and the most nefarious, frame casts theory as one opinion among others which should be represented out of fairness or tolerance. This is the subterfuge used by Intelligent Design Creationists.

We may believe in the separation of church and state, but that firewall has fallen. Science and Reason are losing political battles to Superstition and Ignorance. Politics works by rewarding friends and punishing enemies, and while our individual votes may be private, exit polls have proven that Science didn't vote for the incumbent. There seem to be three choices going forward: Reject, Accommodate, or Embrace. One path is to go on an attack on religion in the public sphere.
In his book The End of Faith, Sam Harris points out that humoring people who believe in God is like humoring people who believe that "a diamond the size of a refrigerator" is buried in their back yard. There is a fine line between pushing God out of our public institutions and repeating the religious intolerance of regimes past.

A second is to embrace Faith-Based Science. Since, from the perspective of government, research is just another special interest feeding at the public trough, we should change our model to be more accommodating to political reality. Research is already sold like highway construction projects, with a linear accelerator for your state and a supercomputer center for mine, all done through direct appropriations. All that needs to change is the justifications for such spending.

How would Faith-Based Science work? Well, Physics could sing the psalm that Perpetual Motion would solve the energy crisis, thereby triggering a $500 billion program in free energy machines. (Of course, God is on our side to repeal the Second Law of Thermodynamics!) Astronomy could embrace Astrology and do grassroots PR through Daily Horoscopes to gain mass support for a new space program. In fact, an anti-gravity initiative could pass today if it were spun as a repeal of the "heaviness tax." Using the renaming principle, the SETI program can be re-legalized and brought back to life as the "Search for God" project.

Finally, the third idea is to actually embrace this dangerous idea and organize a new open-source spiritual and moral movement. I think a new, greener religion, based on faith in the Gaia Hypothesis and an 11th commandment to "Protect the Earth," could catch on, especially if welcoming to existing communities of faith. Such a movement could be a new pulpit from which the evidence-based silent majority can speak with both moral force and evangelical fervor about issues critical to the future of our planet.

_________________________________________________________________

JUAN ENRIQUEZ
CEO, Biotechonomy; Founding Director, Harvard Business School's Life Sciences Project; Author, The Untied States of America

Technology can untie the U.S.

Everyone grows and dies; the same is true of countries. The only question is how long one postpones the inevitable. In the case of some countries, life spans can be very long, so it is worth asking: is the U.S. in adolescence, middle age, or old age? Do science and technology accelerate or offset demise? And finally, how many stars will be on the U.S. flag in fifty years?

There has yet to be a single U.S. president buried under the same flag he was born under, yet we oft take continuity for granted. Just as almost no newlyweds expect to divorce, citizens rarely assume their beloved country, flag and anthem might end up an exhibit in an archeology museum. But countries rich and poor, Asian, African, and European have been untying time and again. In the last five decades the number of UN members has tripled. This trend goes way beyond the de-colonization of the 1960s, and it is not exclusive to failed states; it is a daily debate within the United Kingdom, Italy, France, Belgium, the Netherlands, Austria, and many others. So far the Americas have remained mostly impervious to these global trends, but, even if in God you trust, there are no guarantees.

Over the next decade waves of technology will wash over the U.S. Almost any applied field you care to look at promises extraordinary change, opportunities, and challenges.
(Witness the entries in this edition of Edge.) How countries adapt to massive, rapid upheaval will go a long way towards determining the eventual outcome. To paraphrase Darwin, it is not the strongest, nor the largest, that survive; rather, it is those best prepared to cope with change. It is easy to argue that the U.S. could be a larger, more powerful country in fifty years. But it is also possible that, like so many other great powers, it could begin to unravel and untie. This is not something that depends on what we decide to do fifty years hence; to a great extent it depends on what we choose to do, or choose to ignore, today.

There are more than a few worrisome trends. Future ability to generate wealth depends on techno-literacy. But educational excellence, particularly in grammar and high schools, is far from uniform, and it is not world class. Time and again the U.S. does poorly, particularly in regard to math and science, when compared with its major trading partners. Internally, there are enormous disparities between schools and between the number of students that pass state competency exams and what federal tests tell us about the same students. There are also large gaps in techno-literacy between ethnic groups. By 2050 close to 40% of the U.S. population will be Hispanic and African American. These groups receive 3% of the PhDs in math and science today. How we prepare kids for a life-sciences, materials, robotics, IT, and nanotechnology-driven world is critical. But we currently invest $22,000 of federal money in those over 65 and just over $2,000 in those under sixteen...

As ethnic, age, and regional gaps in the ability to adapt increase, there are many who are wary of and frustrated by technology, open borders, free trade, and smart immigrants. Historically, when others use newfangled ways to leap ahead, it can lead to a conservative response. This is likeliest within those societies and groups that have the most to lose, often among those who have been the most successful. One often observes a reflexive response: stop the train; I want to get off. Or, as the Red Sox now say, just wait till last year. No more teaching evolution, no more research into stem cells, no more Indian or Chinese or Mexican immigrants, no matter how smart or hardworking they might be. These individual battles are signs of a creeping xenophobia, isolationism, and fury.

Within the U.S. there are many who are adapting very successfully. They tend to concentrate in a very few zip codes, life-science clusters like 92121 (between Salk, Scripps, and UCSD) and techno-empires like 02139 (MIT). Most of the nation's wealth and taxes are generated by a few states and, within these states, within a few square miles. It is those who live in these areas that are most affronted by restrictions on research, the lack of science-literate teenagers, and the reliance on God instead of science. Politicians well understand these divides and they have gerrymandered their own districts to reflect them. Because competitive congressional elections are rarer today than turnovers within the Soviet Politburo, there is rarely an open debate and discussion as to why other parts of the country act and think so differently. The Internet and cable further narrowcast news and views, tending to reinforce what one's neighbors and communities already believe. Positions harden. Anger at "the others" mounts. Add a large and mounting debt to this equation, along with politicized religion, and the mixture becomes explosive.
The average household now owes over $88,000, and the present value of what we have promised to pay is now about $473,000. There is little willingness within Washington to address a mounting deficit, never mind the current account imbalance. Facing the next electoral challenge, few seem to remember that the last act of many an empire is to drive itself into bankruptcy. Sooner or later we could witness some very bitter arguments about who gets and who pays. In developed country after developed country, it is often the richest, not the ethnically or religiously repressed, that first seek autonomy and eventually dissolution.

In this context it is worth recalling that New England, not the South, has been the most secession-prone region. As the country expanded, New Englanders attempted to include the right to untie in the Constitution; the argument was that as this great country expanded South and West they would lose control over their political and economic destiny. Perhaps this is what led to four separate attempts to untie the Union. When we assume stability and continuity we can wake up to irreconcilable differences.

Science and a knowledge-driven economy can allow a few folks to build powerful and successful countries very quickly -- witness Korea, Taiwan, Singapore, Ireland -- but changes of this magnitude can also bury or split the formerly great who refuse to adapt, as well as those who practice bad governance. If we do not begin to address some current divides quickly, we could live to see an Un-Tied States of America.

_________________________________________________________________

STEPHEN M. KOSSLYN
Psychologist, Harvard University; Author, Wet Mind

A Science of the Divine?

Here's an idea that many academics may find unsettling and dangerous: God exists. And here's another idea that many religious people may find unsettling and dangerous: God is not supernatural, but rather part of the natural order. Simply stating these ideas in the same breath invites them to scrape against each other, and sparks begin to fly. To avoid such conflict, Stephen Jay Gould famously argued that we should separate religion and science, treating them as distinct "magisteria." But science leads many of us to try to understand all that we encounter with a single, grand and glorious overarching framework. In this spirit, let me try to suggest one way in which the idea of a "supreme being" can fit into a scientific worldview. I offer the following not to advocate the ideas, but rather simply to illustrate one (certainly not the only) way that the concept of God can be approached scientifically.

1.0. First, here's the specific conception of God I want to explore: God is a "supreme being" that transcends space and time, permeates our world but also stands outside of it, and can intervene in our daily lives (partly in response to prayer).

2.0. A way to begin to think about this conception of the divine rests on three ideas:

2.1. Emergent properties. There are many examples in science where aggregates produce an entity that has properties that cannot be predicted entirely from the elements themselves. For example, neurons in large numbers produce minds; moreover, minds in large numbers produce economic, political, and social systems.

2.2. Downward causality. Events at "higher levels" (where emergent properties become evident) can in turn feed back and affect events at lower levels. For example, chronic stress (a mental event) can cause parts of the brain to become smaller.
Similarly, an economic depression or the results of an election affect the lives of the individuals who live in that society.

2.3. The Ultimate Superset. The Ultimate Superset (superordinate set) of all living things may have an equivalent status to an economy or culture. It has properties that emerge from the interactions of living things and groups of living things, and in turn can feed back to affect those things and groups.

3.0. Can we conceive of God as an emergent property of all living things that can in turn affect its constituents? Here are some ways in which this idea is consistent with the nature of God, as outlined at the outset.

3.1. This emergent entity is "transcendent" in the sense that it exists in no specific place or time. Like a culture or an economy, God is nowhere, although the constituent elements occupy specific places. As for transcending time, consider this analogy: Imagine that 1/100th of the neurons in your brain were replaced every hour, and each old neuron programmed a new one so that the old one's functionality was preserved. After 100 hours your brain would be an entirely new organ -- but your mind would continue to exist as it had been before. Similarly, as each citizen dies and is replaced by a child, the culture continues to exist (and can grow and develop, with a "life of its own"). So too with God. For example, in the story of Jacob's ladder, Jacob realizes "Surely the Lord is in this place, and I did not know it." (Genesis 28:16) I interpret this story as illustrating that God is everywhere but nowhere. The Ultimate Superset permeates our world but also stands outside of (or, more specifically, "above") it.

3.2. The Ultimate Superset can affect our individual lives. Another analogy: Say that geese flying south for the winter have rather unreliable magnetic field detectors in their brains. However, there's a rule built into their brains that leads them to try to stay near their fellows as they fly. The flock as a whole would navigate far better than any individual bird, because the noise in the individual bird brain navigation systems would cancel out. The emergent entity -- the flock -- in turn would affect the individual geese, helping them to navigate better than they could on their own.
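Kosslyn's flock analogy rests on a standard statistical fact: independent errors shrink when averaged, roughly as one over the square root of the group size. A minimal simulation of that point (the headings, noise level, and averaging rule are illustrative assumptions of mine, not Kosslyn's):

    import random
    import statistics

    TRUE_HEADING = 180.0  # degrees: due south
    NOISE_SD = 30.0       # each goose's compass error, in degrees (assumed)

    def mean_flock_error(n_geese, trials=2000):
        """Average error of a flock that flies the mean of its members' headings."""
        errors = []
        for _ in range(trials):
            headings = [random.gauss(TRUE_HEADING, NOISE_SD) for _ in range(n_geese)]
            errors.append(abs(statistics.fmean(headings) - TRUE_HEADING))
        return statistics.fmean(errors)

    for n in (1, 10, 100):
        print(f"{n:3d} geese: mean error ~{mean_flock_error(n):.1f} degrees")
    # A lone goose is off by ~24 degrees on average; 100 geese, by ~2.4.

The "stay near your fellows" rule is of course only loosely an average, but any aggregation of independent estimates behaves this way.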
3.3. When people pray to the Lord, they beseech intervention on their or others' behalf. The view that I've been outlining invites us to think of the effects of prayer as akin to becoming more sensitive to the need to stay close to the other birds in the flock: By praying, one can become more sensitive to the emergent "supreme being." Such increased sensitivity may imply that one can contribute more strongly to this emergent entity. By analogy, it's as if one of those geese became aware of the "keep near" rule, and decided to nudge the other birds in a particular direction -- which thereby allows it to influence the flock's effect on itself. To the extent that prayer puts one closer to God, one's plea for intervention will have a larger impact on the way that The Ultimate Superset exerts downward causality. But note that, according to this view, God works rather slowly. Think of dropping rocks in a pond: it takes time for the ripples to propagate and eventually be reflected back from the edge, forming interference patterns in the center of the pond.

4.0. A crucial idea in monotheistic religions is that God is the Creator. The present approach may help us begin to grapple with this idea, as follows.

4.1. First, consider each individual person. The environment plays a key role in creating who and what we are because there are far too few genes to program every aspect of our brains. For example, when you were born, your genes programmed many connections in your visual areas, but did not specify the precise circuits necessary to determine how far away objects are. As an infant, the act of reaching for an object tuned the brain circuits that estimate how far away the object was from you. Similarly, your genes graced you with the ability to acquire language, but not with a specific language. The act of acquiring a language shapes your brain (which in turn may make it difficult to acquire another language, with different sounds and grammar, later in life). Moreover, cultural practices configure the brains of members of the culture. A case in point: the Japanese have many forms of bowing, which are difficult for a Westerner to master relatively late in life; when we try to bow, we "bow with an accent."

4.2. And the environment not only played an essential role in how we developed as children, but also plays a continuing role in how we develop over the course of our lives as adults. The act of learning literally changes who and what we are.

4.3. According to this perspective, it's not just negotiating the physical world and sociocultural experience that shape the brain: The Ultimate Superset -- the emergent property of all living things -- affects all of the influences that "make us who and what we are," both as we develop during childhood and continue to learn and develop as adults.

4.4. Next, consider our species. One could try to push this perspective into a historical context, and note that evolution by natural selection reflects the effects of interactions among living things. If so, then the emergent properties of such interactions could feed back to affect the course of evolution itself.

In short, it is possible to begin to view the divine through the lens of science. But such reasoning does no more than set the stage; to be a truly dangerous idea, this sort of proposal must be buttressed by the results of empirical tests. At present, my point is not to convince, but rather to intrigue. As much as I admired Stephen Jay Gould (and I did, very much), perhaps he missed the mark on this one. Perhaps there is a grand project waiting to be launched, to integrate the two great sources of knowledge and belief in the world today -- science and religion.

_________________________________________________________________

JERRY COYNE
Evolutionary Biologist; Professor, Department of Ecology and Evolution, University of Chicago; Author (with H. Allen Orr), Speciation

Many behaviors of modern humans were genetically hard-wired (or soft-wired) in our distant ancestors by natural selection

For me, one idea that is dangerous and possibly true is an extreme form of evolutionary psychology -- the view that many behaviors of modern humans were genetically hard-wired (or soft-wired) in our distant ancestors by natural selection. The reason I say that this idea might be true is that we cannot be sure of the genetic and evolutionary underpinnings of most human behaviors. It is difficult or impossible to test many of the conjectures of evolutionary psychology. Thus, we can say only that behaviors such as the sexual predilections of men versus women, and the extreme competitiveness of males, are consistent with evolutionary psychology.

But consistency arguments have two problems. First, they are not hard scientific proof.
Are we satisfied that sonnets are phallic extensions simply because some male poets might have used them to lure females? Such arguments fail to meet the normal standards of scientific evidence. Second, as is well known, one can make consistency arguments for virtually every human behavior. Given the possibilities of kin selection (natural selection for behaviors that do no good to their performers but are advantageous to their relatives) and reciprocal altruism, and our ignorance of the environments of our ancestors, there is no trait beyond evolutionary explanation. Indeed, there are claims for the evolutionary origin of even manifestly maladaptive behaviors, such as homosexuality, priestly celibacy, and extreme forms of altruism (e.g., self-sacrifice during wartime). But surely we cannot consider it scientifically proven that genes for homosexuality are maintained in human populations by kin selection. This remains possible but undemonstrated.

Nevertheless, much of human behavior does seem to conform to Darwinian expectations. Males are promiscuous and females coy. We treat our relatives better than we do other people. The problem is where to draw the line between those behaviors that are so obviously adaptive that no one doubts their genesis (e.g., sleeping and eating), those which are probably but not as obviously adaptive (e.g., human sexual behavior and our fondness for fats and sweets), and those whose adaptive basis is highly speculative (e.g., the origin of art and our love of the outdoors).

Although I have been highly critical of evolutionary psychology, I have not criticized it from political motives, nor do I think that the discipline is in principle misguided. Rather, I have been critical because evolutionary psychologists seem unwilling to draw lines between what can be taken as demonstrated and what remains speculative, making the discipline more of a faith than a science. This lack of rigor endangers the reputation of all of evolutionary biology, making our endeavors seem to be merely the concoction of ingenious stories. If we are truly to understand human nature, and use this knowledge constructively, we must distinguish the probably true from the possibly true.

So, why do I see evolutionary psychology as dangerous? I think it is because I am afraid to see myself and my fellow humans as mere marionettes dancing on genetic strings. I would like to think that we have immense freedom to better ourselves as individuals and to create a just and egalitarian society. Granted, genetics is not destiny, but neither are we completely free of our evolutionary baggage. Might genetics really hold a leash on our capacity to change? If so, then some claims of evolutionary psychology give us convenient but dangerous excuses for behaviors that seem unacceptable. It is all too easy, for example, for philandering males to excuse their behavior as evolutionarily justified. Evolutionary psychologists argue that it is possible to overcome our evolutionary heritage. But what if it is not so easy to take the Dawkinsian road and "rebel against the tyranny of the selfish replicators"?

_________________________________________________________________

ERNST PÖPPEL
Neuroscientist; Chairman, Board of Directors, Human Science Center and Department of Medical Psychology, Munich University, Germany; Author, Mindworks

My belief in science

The average life expectancy of a species on this globe is just a few million years.
From an external point of view, it would be nothing special if humankind suddenly disappeared. We have been here for some time. With humans no longer around, evolutionary processes would have an even better chance to fill in all those ecological niches which have been created by human activities. As we change the world, and as thousands of species are lost every year because of human activities, we provide a new and productive environment for the creation of new species. Thus, humankind is very creative with respect to providing a frame for new evolutionary trajectories, and humankind would be even more creative if it disappeared altogether. If somebody (unfortunately not our descendants) were to visit this globe some time later, they would meet many new species, which owe their existence to the presence and the disappearance of humankind.

But this is not going to happen, because we are doing science. With science we apparently get a better understanding of basic principles in nature, we have a chance to improve quality of life, and we can develop means to extend the life expectancy of our species. Unfortunately, some of these scientific activities have a paradoxical effect, resulting in a higher risk of a common disappearance. Maybe science will not be so effective after all in preventing our disappearance.

Only now comes my dangerous idea as my (!) dangerous idea. It is not so difficult to come up with a dangerous scenario on a general level, but if one also takes such a question seriously on a personal level, one has to meditate on an individual scenario. I am very grateful for this question formulated by Steven Pinker, as it forced me to visit my episodic memory and to think about what has been and still is "my dangerous idea." Although nobody else might be interested in a personal statement, I say it anyway: My dangerous idea is my belief in science.

In all my research (in the field of temporal perception or visual processes) I have a basic trust in the scientific activities, and I actually believe the results I have obtained. And I believe the results of others. But why? I know that there are so many unknown and unknowable variables that are part of the experimental setup and which cannot be controlled. How can I trust in spite of so many unknowables (does this word exist in English?)? Furthermore, can I really rely on my thinking, can I trust my eyes and ears? Can I be so sure about my scientific activities that I communicate the results to others with pride? If I look at the complexity of the brain, how is it possible that something reasonable comes out of this network? How is it possible that a face that I see or a thought that I have maintains its identity over time? If I have no access to what goes on in my brain, how can I be so proud (how can anybody be so proud) about scientific achievements?

_________________________________________________________________

GEOFFREY MILLER
Evolutionary Psychologist, University of New Mexico; Author, The Mating Mind

Runaway consumerism explains the Fermi Paradox

The story goes like this: Sometime in the 1940s, Enrico Fermi was talking about the possibility of extra-terrestrial intelligence with some other physicists. They were impressed that our galaxy holds 100 billion stars, that life evolved quickly and progressively on earth, and that an intelligent, exponentially-reproducing species could colonize the galaxy in just a few million years. They reasoned that extra-terrestrial intelligence should be common by now.
Fermi listened patiently, then asked simply, "So, where is everybody?" That is, if extra-terrestrial intelligence is common, why haven't we met any bright aliens yet? This conundrum became known as Fermi's Paradox.

The paradox has become ever more baffling. Over 150 extrasolar planets have been identified in the last few years, suggesting that life-hospitable planets orbit most stars. Paleontology shows that organic life evolved very quickly after earth's surface cooled and became life-hospitable. Given simple life, evolution shows progressive trends towards larger bodies, brains, and social complexity. Evolutionary psychology reveals several credible paths from simpler social minds to human-level creative intelligence. Yet 40 years of intensive searching for extra-terrestrial intelligence have yielded nothing. No radio signals, no credible spacecraft sightings, no close encounters of any kind.

So, it looks as if there are two possibilities. Perhaps our science over-estimates the likelihood of extra-terrestrial intelligence evolving. Or, perhaps evolved technical intelligence has some deep tendency to be self-limiting, even self-exterminating. After Hiroshima, some suggested that any aliens bright enough to make colonizing space-ships would be bright enough to make thermonuclear bombs, and would use them on each other sooner or later. Perhaps extra-terrestrial intelligence always blows itself up. Fermi's Paradox became, for a while, a cautionary tale about Cold War geopolitics.

I suggest a different, even darker solution to Fermi's Paradox. Basically, I think the aliens don't blow themselves up; they just get addicted to computer games. They forget to send radio signals or colonize space because they're too busy with runaway consumerism and virtual-reality narcissism. They don't need Sentinels to enslave them in a Matrix; they do it to themselves, just as we are doing today.

The fundamental problem is that any evolved mind must pay attention to indirect cues of biological fitness, rather than tracking fitness itself. We don't seek reproductive success directly; we seek tasty foods that tended to promote survival and luscious mates who tended to produce bright, healthy babies. Modern results: fast food and pornography. Technology is fairly good at controlling external reality to promote our real biological fitness, but it's even better at delivering fake fitness -- subjective cues of survival and reproduction, without the real-world effects. Fresh organic fruit juice costs so much more than nutrition-free soda. Having real friends is so much more effort than watching Friends on TV. Actually colonizing the galaxy would be so much harder than pretending to have done it when filming Star Wars or Serenity.

Fitness-faking technology tends to evolve much faster than our psychological resistance to it. The printing press is invented; people read more novels and have fewer kids; only a few curmudgeons lament this. The Xbox 360 is invented; people would rather play a high-resolution virtual ape in Peter Jackson's King Kong than be a perfect-resolution real human. Teens today must find their way through a carnival of addictively fitness-faking entertainment products: MP3, DVD, TiVo, XM radio, Verizon cellphones, Spice cable, EverQuest online, instant messaging, Ecstasy, BC Bud. The traditional staples of physical, mental, and social development (athletics, homework, dating) are neglected.
The few young people with the self-control to pursue the meritocratic path often get distracted at the last minute -- the MIT graduates apply to do computer game design for Electronic Arts, rather than rocket science for NASA.

Around 1900, most inventions concerned physical reality: cars, airplanes, zeppelins, electric lights, vacuum cleaners, air conditioners, bras, zippers. In 2005, most inventions concern virtual entertainment -- the top 10 patent-recipients are usually IBM, Matsushita, Canon, Hewlett-Packard, Micron Technology, Samsung, Intel, Hitachi, Toshiba, and Sony -- not Boeing, Toyota, or Wonderbra. We have already shifted from a reality economy to a virtual economy, from physics to psychology as the value-driver and resource-allocator. We are already disappearing up our own brainstems. Freud's pleasure principle triumphs over the reality principle. We narrow-cast human-interest stories to each other, rather than broad-casting messages of universal peace and progress to other star systems.

Maybe the bright aliens did the same. I suspect that a certain period of fitness-faking narcissism is inevitable after any intelligent life evolves. This is the Great Temptation for any technological species -- to shape their subjective reality to provide the cues of survival and reproductive success without the substance. Most bright alien species probably go extinct gradually, allocating more time and resources to their pleasures, and less to their children. Heritable variation in personality might allow some lineages to resist the Great Temptation and last longer. Those who persist will evolve more self-control, conscientiousness, and pragmatism. They will evolve a horror of virtual entertainment, psychoactive drugs, and contraception. They will stress the values of hard work, delayed gratification, child-rearing, and environmental stewardship. They will combine the family values of the Religious Right with the sustainability values of the Greenpeace Left.

My dangerous idea-within-an-idea is that this, too, is already happening. Christian and Muslim fundamentalists, and anti-consumerism activists, already understand exactly what the Great Temptation is, and how to avoid it. They insulate themselves from our Creative-Class dream-worlds and our EverQuest economics. They wait patiently for our fitness-faking narcissism to go extinct. Those practical-minded breeders will inherit the earth, as like-minded aliens may have inherited a few other planets. When they finally achieve Contact, it will not be a meeting of novel-readers and game-players. It will be a meeting of dead-serious super-parents who congratulate each other on surviving not just the Bomb, but the Xbox. They will toast each other not in a soft-porn Holodeck, but in a sacred nursery.

_________________________________________________________________

ROBERT SHAPIRO
Professor Emeritus, Senior Research Scientist, Department of Chemistry, New York University; Author, Planetary Dreams

We shall understand the origin of life within the next 5 years

Two very different groups will find this development dangerous, and for different reasons, but this outcome is best explained at the end of my discussion.

Just over a half century ago, in the spring of 1953, a famous experiment brought enthusiasm and renewed interest to this field. Stanley Miller, mentored by Harold Urey, demonstrated that a mixture of small organic molecules (monomers) could readily be prepared by exposing a mixture of simple gases to an electrical spark.
Similar mixtures were found in meteorites, which suggested that organic monomers may be widely distributed in the universe. If the ingredients of life could be made so readily, then why could they not just as easily assort themselves to form cells?

In that same spring, however, another famous paper was published by James Watson and Francis Crick. They demonstrated that the heredity of living organisms was stored in a very large molecule called DNA. DNA is a polymer, a substance made by stringing many smaller units together, as links are joined to form a long chain. The clear connection between the structure of DNA and its biological function, and the geometrical beauty of the DNA double helix, led many scientists to consider it to be the essence of life itself. One flaw remained, however, to spoil this picture. DNA could store information, but it could not reproduce itself without the assistance of proteins, a different type of polymer. Proteins are also adept at increasing the rate of (catalyzing) many other chemical reactions that are considered necessary for life. The origin-of-life field became mired in the "chicken-or-the-egg" question. Which came first: DNA or proteins? An apparent answer emerged when it was found that another polymer, RNA (a cousin of DNA), could manage both heredity and catalysis. In 1986, Walter Gilbert proposed that life began with an "RNA World." Life started when an RNA molecule that could copy itself was formed, by chance, in a pool of its own building blocks.

Unfortunately, a half century of chemical experiments have demonstrated that nature has no inclination to prepare RNA, or even the building blocks (nucleotides) that must be linked together to form RNA. Nucleotides are not formed in Miller-type spark discharges, nor are they found in meteorites. Skilled chemists have prepared nucleotides in well-equipped laboratories, and linked them to form RNA, but neither chemists nor laboratories were present when life began on the early Earth. The Watson-Crick theory sparked a revolution in molecular biology, but it left the origin-of-life question at an impasse.

Fortunately, an alternative solution to this dilemma has gradually emerged: neither DNA nor RNA nor protein was necessary for the origin of life. Large molecules dominate the processes of life today, but they were not needed to get it started. Monomers themselves have the ability to support heredity and catalysis. The key requirement is that a suitable energy source be available to assist them in the processes of self-organization. A demonstration of the principle involved in the origin of life would require only that a suitable monomer mixture be exposed to an appropriate energy source in a simple apparatus. We could then observe the very first steps in evolution. Some mixtures will work, but many others will fail, for technical reasons. Some dedicated effort will be needed in the laboratory to prove this point.

Why have I specified five years for this discovery? The unproductive polymer-based paradigm is far from dead, and continues to consume the efforts of the majority of workers in the field. A few years will be needed to entice some of them to explore the other solution. I estimate that several years more (the time for a PhD thesis) might be required to identify a suitable monomer-energy combination, and perform a convincing demonstration.

Who would be disturbed if such efforts should succeed? Many scientists have been attracted by the RNA World theory because of its elegance and simplicity.
Some of them have devoted decades of their careers to efforts to prove it. They would not be pleased if Freeman Dyson's description proved to be correct: "life began with little bags, the precursors of cells, enclosing small volumes of dirty water containing miscellaneous garbage."

A very different group would find this development as dangerous as the theory of evolution. Those who advocate creationism and intelligent design would feel that another pillar of their belief system was under attack. They have understood the flaws in the RNA World theory, and used them to support their supernatural explanation for life's origin. A successful scientific theory in this area would leave one less task for God to accomplish: the origin of life would be a natural (and perhaps frequent) result of the physical laws that govern this universe. This latter thought falls directly in line with the idea of Cosmic Evolution, which asserts that events since the Big Bang have moved almost inevitably in the direction of life. No miracle or immense stroke of luck was needed to get it started. If this should be the case, then we should expect to be successful when we search for life beyond this planet. We are not the only life that inhabits this universe.

_________________________________________________________________

KAI KRAUSE
Researcher, philosopher, software developer; Author, 3DScience: new Scanning Electron Microscope imagery

Anty Gravity: Chaos Theory in an all too practical sense

Dangerous Ideas? It is dangerous ideas you want? From this group of people? That in itself ought to be nominated as one of the more dangerous ideas...

Danger is ubiquitous. If recent years have shown us anything, it should be that "very simple small events can cause real havoc in our society." A few hooded youths play cat and mouse with the police: bang, thousands of burned cars put all of Paris into a complete state of paralysis, mandatory curfew, and the entire system in shock and horror.

My first thought was: what if a really smart set of people really set their minds to it... how utterly and scarily trivial it would be to disrupt the very fabric of life, to bring society to a dead stop? The relative innocence and stable period of the last 50 years may spiral into a nearly inevitable exposure to real chaos. What if it isn't haphazard testosterone-driven riots, where they cannibalize their own neighborhood, much like in L.A. in the 80s, but someone with real insight behind that criminal energy? What if Slashdotters start musing aloud about "Gee, the L.A. water supply is rather simplistic, isn't it?" An Open Source crime web, a Wiki for real WTO opposition? Hacking L.A. may be a lot easier than hacking IE.

That is basic banter over a beer in a bar; I don't even want to actually speculate what a serious set of brainiacs could conjure up. And I refuse to even give it any more print space here. However, the danger of such sad memes is what requires our attention! In fact, I will broaden the specter still: it's not violent crime and global terrorism I worry about, as much as the basic underpinning of our entire civilization coming apart, as such. No acts of malevolence, no horrible plans by evil dark forces, neither the singular "Bond Nemesis" kind, nor masses of religious fanatics. None of that is needed... It is the glue that is coming apart to topple this tower. And no, I am not referring to "spiraling trillions of debt."
No, what I am referring to is a slow process I have observed over the last 30 years, ever since in my teens I wondered, "How would this world work, if everyone were like me?" and realized: it wouldn't!

It was amazing to me that there were just enough people to make just enough shoes so that everyone can avoid walking barefoot. That there are people volunteering to spend day in, day out being dentists and lawyers and salesmen. Almost any "jobjob" I look at, I have the most sincere admiration for the tenacity of the people... how do they do it? It would drive me nuts after hours, let alone years... Who makes those shoes? That was the wondrous introspection in adolescent phases, searching for a place in the jigsaw puzzle.

But in recent years, the haunting question has come back to me: "How the hell does this world function at all? And does it, really?" I feel an alienation zapping through the channels; I can't find myself connecting with those groups of humanoids trouncing around MTV. Especially the glimpses of "real life": on daytime courtroom dramas, or just looking at faces in the street. On every scale, the closer I observe it, the more the creeping realization haunts me: individuals, families, groups, neighborhoods, cities, states, countries... they all just barely hang in there, between debt and dysfunction. The whole planet looks like Anytown, with mini malls cutting up the landscape, and just down the road it's all white trash with rusty car wrecks in the back yard. A huge Groucho Club I don't want to be a member of.

But it does go further: what is particularly disturbing to see is this desperate search for Individualism that has rampantly increased in the last decade or so. Everyone suddenly needs to be so special, be utterly unique. So unique that they race off like lemmings to get "even more individual" tattoos: branded cattle, with branded chains in every mall, converging on a blanded sameness worldwide, while every rap singer with ever more gold chains in ever longer stretched limos is singing the tune: Don't be a loser! Don't be normal!

The desperation with which millions of youngsters try to be that one-in-a-million professional ball player may have been just a "sad but silly factoid" for a long time. But now the tables are turning: the anthill is relying on the behaviour of the ants to function properly. And that implies: the social behaviour, the role playing, taking on defined tasks and following them through. What if each ant suddenly wants to be the queen? What if soldiering and nest building and cleaning chores are just not cool enough any more? If AntTV shows them every day nothing but un-Ant behaviour...?

In my youth we were whining about what to do and how to do it, but in the end, all of my friends did become "normal" humans: orthopedists and lawyers, social workers, teachers... There were always a few that lived on the edges of normality, like ending up as television celebrities, but on the whole, they were perfectly reasonable ants. 1.8 children, 2.7 cars, 3.3 TVs...

Now: I am no longer confident that line will continue. If every honeymoon is now booked in Bali on a Visa card, and every kid in Borneo wants to play ball in NYC... can the network of society be pliable enough to accommodate total upheaval? And what if 2 billion Chinese and Indians raise a generation of kids staring 6+ hours a day into All American values they can never attain...
being taunted with Hollywood movies of heroic acts and pathetic dysfunctionality, coupled with ever increasing violence and disdain for ethics or morals? Seeing scenes of desperate youths in South American slums watching "Kill Bill" makes me think: this is just oxygen thrown into the fire... The ants will not play along much longer. The anthill will not survive if even a small fraction of the system is falling apart.

Couple that inane drive for "Super Individualism" (and the Quest for Coolness by an ever increasing group destined to fail miserably) with the scarily simple realization of how effective even a small set of desperate people can become, then add the obvious penchant for religious fanaticism, and you have an ugly picture of the long term future. So many curves that grow upwards towards limits, so many statistics that show increases and no way to turn around.

Many in this forum may speculate about infinite life spans, changing the speed of light, finding ways to decode consciousness, wormholes to other dimensions and finding grand unified theories. To make it clear: I applaud that! "It does take all kinds." Diversity is indeed one of the definitions of the meaning of life. Edge IS Applied Diversity. Those are viable and necessary questions for mankind as a whole. However, I believe we need to clean house, re-evaluate, redefine the priorities. While we look at the horizon here in these pages, it is the very ground beneath us that may be crumbling. The ant hill could really go to ant hell!

Next year, let's ask for good ideas. Really practical, serious, good ideas. "The most immediate positive global impact of any kind that can be achieved within one year?" How to envision Internet3 and Web3 as a real platform for a global brainstorming with 6+ billion potential participants.

This was not meant to sound like doom and gloom naysaying. I see myself as a sincere optimist, but one who believes in realistic pessimism as a useful tool to initiate change.

_________________________________________________________________

CARLO ROVELLI
Professor of Physics, University of the Mediterranean, Marseille; Member, Institut Universitaire de France; Author, Quantum Gravity

What the physics of the 20th century says about the world might in fact be true

There is a major "dangerous" scientific idea in contemporary physics, with a potential impact comparable to that of Copernicus or Darwin. It is the idea that what the physics of the 20th century says about the world might in fact be true.

Let me explain. Take quantum mechanics. If taken seriously, it changes our understanding of reality truly dramatically. For instance, if we take quantum mechanics seriously, we cannot think that objects ever have a definite position. They have a position only when they interact with something else. And even in this case, they are in that position only with respect to that "something else": they are still without position with respect to the rest of the world. This is a change of image of the world far more dramatic than that of Copernicus. And it is a change in our possibility of thinking about ourselves far more far-reaching than that of Darwin. Still, few people take the quantum revolution really seriously. The danger is exorcized by saying "well, quantum mechanics is only relevant for atoms and very small objects...", or other similar strategies, aimed at not taking the theory seriously.
We still haven't digested that the world is quantum mechanical, and the immense conceptual revolution needed to make sense of this basic factual discovery about nature.

Another example: take Einstein's relativity theory. Relativity makes completely clear that asking "what happens right now on Andromeda?" is complete nonsense. There is no "right now" elsewhere in the universe. Nevertheless, we keep thinking of the universe as if there were an immense external clock that ticked away the instants, and we have a lot of difficulty adapting to the idea that "the present state of the universe right now" is physical nonsense.

In these cases, what we do is use concepts that we have developed in our very special environment (characterized by low velocities, low energy...) and think of the world as if it were all like that. We are like ants that have grown up in a little garden with green grass and small stones, and cannot think of reality as anything other than green grass and small stones.

I think that, seen from 200 years in the future, the dangerous scientific idea that was around at the beginning of the 20th century, and that everybody was afraid to accept, will simply be that the world is completely different from our simple-minded picture of it, as the physics of the 20th century had already shown. What makes me smile is that even many of today's "audacious scientific speculations" about things like extra dimensions, multi-universes, and the like are not only completely unsupported experimentally, but are almost always formulated within a world view that, at a close look, has not yet digested quantum mechanics and relativity!

_________________________________________________________________

RICHARD DAWKINS
Evolutionary Biologist; Charles Simonyi Professor For The Understanding Of Science, Oxford University; Author, The Ancestor's Tale

Let's all stop beating Basil's car

Ask people why they support the death penalty or prolonged incarceration for serious crimes, and the reasons they give will usually involve retribution. There may be passing mention of deterrence or rehabilitation, but the surrounding rhetoric gives the game away. People want to kill a criminal as payback for the horrible things he did. Or they want to give "satisfaction" to the victims of the crime or their relatives. An especially warped and disgusting application of the flawed concept of retribution is Christian crucifixion as "atonement" for "sin."

Retribution as a moral principle is incompatible with a scientific view of human behaviour. As scientists, we believe that human brains, though they may not work in the same way as man-made computers, are just as surely governed by the laws of physics. When a computer malfunctions, we do not punish it. We track down the problem and fix it, usually by replacing a damaged component, either in hardware or software.

Basil Fawlty, British television's hotelier from hell created by the immortal John Cleese, was at the end of his tether when his car broke down and wouldn't start. He gave it fair warning, counted to three, gave it one more chance, and then acted. "Right! I warned you. You've had this coming to you!" He got out of the car, seized a tree branch and set about thrashing the car within an inch of its life. Of course we laugh at his irrationality. Instead of beating the car, we would investigate the problem. Is the carburettor flooded? Are the sparking plugs or distributor points damp? Has it simply run out of gas?
Why do we not react in the same way to a defective man: a murderer, say, or a rapist? Why don't we laugh at a judge who punishes a criminal, just as heartily as we laugh at Basil Fawlty? Or at King Xerxes who, in 480 BC, sentenced the rough sea to 300 lashes for wrecking his bridge of ships? Isn't the murderer or the rapist just a machine with a defective component? Or a defective upbringing? Defective education? Defective genes?

Concepts like blame and responsibility are bandied about freely where human wrongdoers are concerned. When a child robs an old lady, should we blame the child himself or his parents? Or his school? Negligent social workers? In a court of law, feeble-mindedness is an accepted defence, as is insanity. Diminished responsibility is argued by the defence lawyer, who may also try to absolve his client of blame by pointing to his unhappy childhood, abuse by his father, or even unpropitious genes (not, so far as I am aware, unpropitious planetary conjunctions, though it wouldn't surprise me).

But doesn't a truly scientific, mechanistic view of the nervous system make nonsense of the very idea of responsibility, whether diminished or not? Any crime, however heinous, is in principle to be blamed on antecedent conditions acting through the accused's physiology, heredity and environment. Don't judicial hearings to decide questions of blame or diminished responsibility make as little sense for a faulty man as for a Fawlty car?

Why is it that we humans find it almost impossible to accept such conclusions? Why do we vent such visceral hatred on child murderers, or on thuggish vandals, when we should simply regard them as faulty units that need fixing or replacing? Presumably because mental constructs like blame and responsibility, indeed evil and good, are built into our brains by millennia of Darwinian evolution. Assigning blame and responsibility is an aspect of the useful fiction of intentional agents that we construct in our brains as a means of short-cutting a truer analysis of what is going on in the world in which we have to live. My dangerous idea is that we shall eventually grow out of all this and even learn to laugh at it, just as we laugh at Basil Fawlty when he beats his car. But I fear it is unlikely that I shall ever reach that level of enlightenment.

_________________________________________________________________

SETH LLOYD
Quantum Mechanical Engineer, MIT

The genetic breakthrough that made people capable of ideas themselves

The most dangerous idea is the genetic breakthrough that made people capable of ideas themselves. The idea of ideas is nice enough in principle; and ideas certainly have had their impact for good. But one of these days one of those nice ideas is likely to have the unintended consequence of destroying everything we know. Meanwhile, we cannot stop creating and exploring new ideas: the genie of ingenuity is out of the bottle. To suppress the power of ideas will hasten catastrophe, not avert it. Rather, we must wield that power with the respect it deserves. Who risks no danger reaps no reward.
_________________________________________________________________

CAROLYN PORCO
Planetary Scientist; Cassini Imaging Science Team Leader; Director, CICLOPS, Boulder, CO; Adjunct Professor, University of Colorado, University of Arizona

The Greatest Story Ever Told

The confrontation between science and formal religion will come to an end when the role played by science in the lives of all people is the same as that played by religion today.

And just what is that? At the heart of every scientific inquiry is a deep spiritual quest -- to grasp, to know, to feel connected through an understanding of the secrets of the natural world, to have a sense of one's part in the greater whole. It is this inchoate desire for connection to something greater and immortal, the need for elucidation of the meaning of the 'self', that motivates the religious to belief in a higher 'intelligence'. It is the allure of a bigger agency -- outside the self but also involving, protecting, and celebrating the purpose of the self -- that is the great attractor. Every culture has religion. It undoubtedly satisfies a manifest human need.

But the same spiritual fulfillment and connection can be found in the revelations of science. From energy to matter, from fundamental particles to DNA, from microbes to Homo sapiens, from the singularity of the Big Bang to the immensity of the universe ... ours is the greatest story ever told. We scientists have the drama, the plot, the icons, the spectacles, the 'miracles', the magnificence, and even the special effects. We inspire awe. We evoke wonder.

And we don't have one god, we have many of them. We find gods in the nucleus of every atom, in the structure of space/time, in the counter-intuitive mechanisms of electromagnetism. What richness! What consummate beauty!

We even exalt the 'self'. Our script requires a broadening of the usual definition, but we too offer hope for everlasting existence. The 'self' that is the particular, networked set of connections of the matter comprising our mortal bodies will one day die, of course. But the 'self' that is the sum of each separate individual condensate in us of energy-turned-matter is already ancient and will live forever. Each fundamental particle may one day return to energy, or from there revert back to matter. But in one form or another, it will not cease. In this sense, we and all around us are eternal, immortal, and profoundly connected. We don't have one soul; we have trillions upon trillions of them.

These are reasons enough for jubilation ... for riotous, unrestrained, exuberant merry-making. So what are we missing? Ceremony. We lack ceremony. We lack ritual. We lack the initiation of baptism, the brotherhood of communal worship. We have no loving ministers, guiding and teaching the flocks in the ways of the 'gods'. We have no fervent missionaries, no loyal apostles. And we lack the all-inclusive ecumenical embrace, the extended invitation to the unwashed masses. Alienation does not warm the heart; communion does.

But what if? What if we appropriated the craft, the artistry, the methods of formal religion to get the message across? Imagine 'Einstein's Witnesses' going door to door or TV evangelists passionately espousing the beauty of evolution. Imagine a Church of Latter Day Scientists where believers could gather. Imagine congregations raising their voices in tribute to gravity, the force that binds us all to the Earth, and the Earth to the Sun, and the Sun to the Milky Way.
Or others rejoicing in the nuclear force that makes possible the sunlight of our star and the starlight of distant suns. And can't you just hear the hymns sung to the antiquity of the universe, its abiding laws, and the heaven above that 'we' will all one day inhabit, together, commingled, spread out like a nebula against a diamond sky?

One day, the sites we hold most sacred just might be the astronomical observatories, the particle accelerators, the university research installations, and other laboratories where the high priests of science -- the biologists, the physicists, the astronomers, the chemists -- engage in the noble pursuit of uncovering the workings of nature herself. And today's museums, expositional halls, and planetaria may then become tomorrow's houses of worship, where these revealed truths, and the wonder of our interconnectedness with the cosmos, are glorified in song by the devout and the soulful. "Hallelujah!", they will sing. "May the force be with you!"

_________________________________________________________________

MICHAEL NESMITH
Artist, writer; Former cast member of "The Monkees"; A Trustee and President of the Gihon Foundation and a Trustee and Vice-Chair of the American Film Institute

Existence is Non-Time, Non-Sequential, and Non-Objective

Not a dangerous idea per se, but like a razor-sharp tool in unskilled hands it can inflict unintended damage.

Non-Time drives forward the notion that the past does not create the present. This would of course render evolutionary theory a local-system, near-field process that was non-causative (i.e., an effect). Non-Sequential reverberates through the Turing machine and computation, and points to simultaneity. It redefines language and cognition. Non-Objective establishes a continuum not to be confused with solipsism. As Schrödinger puts it when discussing the "time-hallowed discrimination between subject and object" -- "the world is given to me only once, not one existing and one perceived. Subject and object are only one. The barrier between them cannot be said to have broken down as a result of recent experience in the physical sciences, for this barrier does not exist". This continuum has large implications for the empirical data set, as it introduces factual infinity into the data plane.

These three notions, Non-Time, Non-Sequence, and Non-Object, have been peeking like diamonds through the dust of empiricism, philosophy, and the sciences for centuries. Quantum mechanics, including Deutsch's parallel universes and the massive parallelism of quantum computing, is our brightest star -- an unimaginably tall peak on our fitness landscape. These notions bring us to a threshold over which empiricism has yet to travel, through which philosophy must reconstruct the very idea of ideas, and beyond which stretches the now familiar "uncharted territories" of all great adventures.

_________________________________________________________________

LAWRENCE KRAUSS
Physicist/Cosmologist, Case Western Reserve University; Author, Hiding in the Mirror

The world may fundamentally be inexplicable

Science has progressed for 400 years by ultimately explaining observed phenomena in terms of fundamental theories that are rigid. Even minor deviations from predicted behavior are not allowed by the theory, so that if such deviations are observed, these provide evidence that the theory must be modified, usually being replaced by a yet more comprehensive theory that explains a wider range of phenomena.
The ultimate goal of physics, as it is often described, is to have a "theory of everything", in which all the fundamental laws that describe nature can neatly be written down on the front of a T-shirt (even if the T-shirt can only exist in 10 dimensions!). However, with the recognition that the dominant energy in the universe resides in empty space -- something that is so peculiar that it appears very difficult to understand within the context of any theoretical ideas we now possess -- more physicists have been exploring the idea that perhaps physics is an 'environmental science', that the laws of physics we observe are merely accidents of our circumstances, and that an infinite number of different universes could exist with different laws of physics.

This is true even if there does exist some fundamental candidate mathematical physical theory. For example, as is currently in vogue in an idea related to string theory, perhaps the fundamental theory allows an infinite number of different 'ground state' solutions, each of which describes a different possible universe with a consistent set of physical laws and physical dimensions. It might be that the only way to understand why the laws of nature we observe in our universe are the way they are is to understand that if they were any different, then life could not have arisen in our universe, and we would thus not be here to measure them today. This is one version of the infamous "anthropic principle". But it could actually be worse -- it is equally likely that many different combinations of laws would allow life to form, and that it is a pure accident that the constants of nature result in the combinations we experience in our universe. Or, it could be that the mathematical formalism is actually so complex that the ground states of the theory, i.e., the set of possible states that might describe our universe, might not be determinable.

In this case, the end of "fundamental" theoretical physics (i.e., the search for fundamental microphysical laws... there will still be lots of work for physicists who try to understand the host of complex phenomena occurring at a variety of larger scales) might occur not via a theory of everything, but rather with the recognition that all so-called fundamental theories that might describe nature would be purely "phenomenological", that is, they would be derivable from observational phenomena, but would not reflect any underlying grand mathematical structure of the universe that would allow a basic understanding of why the universe is the way it is.

_________________________________________________________________

DANIEL C. DENNETT
Philosopher; University Professor, Co-Director, Center for Cognitive Studies, Tufts University; Author, Darwin's Dangerous Idea

There aren't enough minds to house the population explosion of memes

Ideas can be dangerous. Darwin had one, for instance. We hold all sorts of inventors and other innovators responsible for assaying, in advance, the environmental impact of their creations, and since ideas can have huge environmental impacts, I see no reason to exempt us thinkers from the responsibility of quarantining any deadly ideas we may happen to come across. So if I found what I took to be such a dangerous idea, I would button my lip until I could find some way of preparing the ground for its safe expression. I expect that others who are replying to this year's Edge question have engaged in similar reflections and arrived at the same policy.
If so, then some people may be pulling their punches with their replies. The really dangerous ideas they are keeping to themselves.

But here is an unsettling idea that is bound to be true in one version or another, and so far as I can see, it won't hurt to publicize it more. It might well help.

The human population is still growing, but at nowhere near the rate that the population of memes is growing. There is competition for the limited space in human brains for memes, and something has to give. Thanks to our incessant and often technically brilliant efforts, and our apparently insatiable appetites for novelty, we have created an explosively growing flood of information, in all media, on all topics, in every genre. Now either (1) we will drown in this flood of information, or (2) we won't drown in it. Both alternatives are deeply disturbing. What do I mean by drowning? I mean that we will become psychologically overwhelmed, unable to cope, victimized by the glut and unable to make life-enhancing decisions in the face of an unimaginable surfeit. (I recall the brilliant scene in the film of Evelyn Waugh's dark comedy The Loved One in which embalmer Mr. Joyboy's gluttonous mother is found sprawled on the kitchen floor, helplessly wallowing in the bounty that has spilled from a capsized refrigerator.) We will be lost in the maze, preyed upon by whatever clever forces find ways of pumping money -- or simply further memetic replications -- out of our situation. (In The War of the Worlds, H. G. Wells sees that it might well be our germs, not our high-tech military contraptions, that subdue our alien invaders. Similarly, might our own minds succumb not to the devious manipulations of evil brainwashers and propagandists, but to nothing more than a swarm of irresistible ditties, nibbled to death by slogans and one-liners?)

If we don't drown, how will we cope? If we somehow learn to swim in the rising tide of the infosphere, that will entail that we -- that is to say, our grandchildren and their grandchildren -- become very, very different from our recent ancestors. What will "we" be like? (Some years ago, Doug Hofstadter wrote a wonderful piece, "In 2093, Just Who Will Be We?" in which he imagines robots being created to have "human" values, robots that gradually take over the social roles of our biological descendants, who become stupider and less concerned with the things we value. If we could secure the welfare of just one of these groups, our children or our brainchildren, which group would we care about the most, with which group would we identify?)

Whether "we" are mammals or robots in the not so distant future, what will we know and what will we have forgotten forever, as our previously shared intentional objects recede in the churning wake of the great ship that floats on this sea and charges into the future propelled by jets of newly packaged information? What will happen to our cultural landmarks? Presumably our descendants will all still recognize a few reference points (the pyramids of Egypt, arithmetic, the Bible, Paris, Shakespeare, Einstein, Bach...) but as wave after wave of novelty passes over them, what will they lose sight of? The Beatles are truly wonderful, but if their cultural immortality is to be purchased by the loss of such minor 20th century figures as Billie Holiday, Igor Stravinsky, and Georges Brassens [who he?], what will remain of our shared understanding?
The intergenerational mismatches that we all experience in macroscopic versions (great-grandpa's joke falls on deaf ears, because nobody else in the room knows that Nixon's wife was named "Pat") will presumably be multiplied to the point where much of the raw information that we have piled in our digital storehouses is simply incomprehensible to everyone -- except that we will have created phalanxes of "smart" Rosetta-stones of one sort or another that can "translate" the alien material into something we (think maybe we) understand. I suspect we hugely underestimate the importance (to our sense of cognitive security) of our regular participation in the four-dimensional human fabric of mutual understanding, with its reassuring moments of shared -- and seen to be shared, and seen to be seen to be shared -- comprehension.

What will happen to common knowledge in the future? I do think our ancestors had it easy: aside from all the juicy bits of unshared gossip and some proprietary trade secrets and the like, people all knew pretty much the same things, and knew that they knew the same things. There just wasn't that much to know. Won't people be able to create and exploit illusions of common knowledge in the future, virtual worlds in which people only think they are in touch with their cyber-neighbors?

I see small-scale projects that might protect us to some degree, if they are done wisely. Think of all the work published in academic journals before, say, 1990 that is in danger of becoming practically invisible to later researchers because it can't be found on-line with a good search engine. Just scanning it all and hence making it "available" is not the solution. There is too much of it. But we could start projects in which (virtual) communities of retired researchers who still have their wits about them and who know particular literatures well could brainstorm amongst themselves, using their pooled experience to elevate the forgotten gems, rendering them accessible to the next generation of researchers. This sort of activity has in the past been seen to be a stodgy sort of scholarship, fine for classicists and historians, but not fit work for cutting-edge scientists and the like. I think we should try to shift this imagery and help people recognize the importance of providing for each other this sort of pathfinding through the forests of information. It's a drop in the bucket, but perhaps if we all start thinking about conservation of valuable mind-space, we can save ourselves (our descendants) from informational collapse.

_________________________________________________________________

DANIEL GILBERT
Psychologist, Harvard University

The idea that ideas can be dangerous

Dangerous does not mean exciting or bold. It means likely to cause great harm. The most dangerous idea is the only dangerous idea: The idea that ideas can be dangerous.

We live in a world in which people are beheaded, imprisoned, demoted, and censured simply because they have opened their mouths, flapped their lips, and vibrated some air. Yes, those vibrations can make us feel sad or stupid or alienated. Tough shit. That's the price of admission to the marketplace of ideas. Hateful, blasphemous, prejudiced, vulgar, rude, or ignorant remarks are the music of a free society, and the relentless patter of idiots is how we know we're in one. When all the words in our public conversation are fair, good, and true, it's time to make a run for the fence.
_________________________________________________________________

ANDY CLARK
School of Philosophy, Psychology and Language Sciences, Edinburgh University

The quick-thinking zombies inside us

So much of what we do, feel, think and choose is determined by non-conscious, automatic uptake of cues and information. Of course, advertisers will say they have known this all along. But only in recent years, with seminal studies by Tanya Chartrand, John Bargh and others, has the true scale of our daily automatism really begun to emerge.

Such studies show that it is possible (it is relatively easy) to activate racist stereotypes that impact our subsequent behavioral interactions, for example yielding the judgment that your partner in a subsequent game or task is more hostile than would be judged by an unprimed control. Such effects occur despite a subject's total and honest disavowal of those very stereotypes. In similar ways it is possible to unconsciously prime us to feel older (and then we walk more slowly). In my favorite recent study, experimenters manipulate cues so that the subject forms an unconscious goal, whose (unnoticed) frustration makes them lose confidence and perform worse at a subsequent task!

The dangerous truth, it seems to me, is that these are not isolated little laboratory events. Instead, they reveal the massed woven fabric of our day-to-day existence. The underlying mechanisms at work impart an automatic drive towards the automation of all manner of choices and actions, and don't discriminate between the "trivial" and the portentous. It now seems clear that many of my major life and work decisions are made very rapidly, often on the basis of ecologically sound but superficial cues, with slow deliberative reason busily engaged in justifying what the quick-thinking zombies inside me have already laid on the table. The good news is that without these mechanisms we'd be unable to engage in fluid daily life or reason at all, and that very often they are right. The dangerous truth, though, is that we are indeed designed to cut conscious, aware choice out of the picture wherever possible. This is not an issue about free will, but simply about the extent to which conscious deliberation cranks the engine of behavior. Crank it it does: but not in anything like the way, or to the extent, we may have thought. We'd better get to grips with this before someone else does.

_________________________________________________________________

SHERRY TURKLE
Psychologist, MIT; Author, Life on the Screen: Identity in the Age of the Internet

After several generations of living in the computer culture, simulation will become fully naturalized. Authenticity in the traditional sense loses its value, a vestige of another time.

Consider this moment from 2005: I take my fourteen-year-old daughter to the Darwin exhibit at the American Museum of Natural History. The exhibit documents Darwin's life and thought, and with a somewhat defensive tone (in light of current challenges to evolution by proponents of intelligent design), presents the theory of evolution as the central truth that underpins contemporary biology. The Darwin exhibit wants to convince and it wants to please. At the entrance to the exhibit is a turtle from the Galapagos Islands, a seminal object in the development of evolutionary theory. The turtle rests in its cage, utterly still. "They could have used a robot," comments my daughter.
It was a shame to bring the turtle all this way and put it in a cage for a performance that draws so little on the turtle's "aliveness." I am startled by her comments, both solicitous of the imprisoned turtle because it is alive and unconcerned by its authenticity. The museum has been advertising these turtles as wonders, curiosities, marvels -- among the plastic models of life at the museum, here is the life that Darwin saw.

I begin to talk with others at the exhibit, parents and children. It is Thanksgiving weekend. The line is long, the crowd frozen in place. My question, "Do you care that the turtle is alive?" is a welcome diversion. A ten-year-old girl would prefer a robot turtle because aliveness comes with aesthetic inconvenience: "Its water looks dirty. Gross." More usually, the votes for the robots echo my daughter's sentiment that in this setting, aliveness doesn't seem worth the trouble. A twelve-year-old girl opines: "For what the turtles do, you didn't have to have the live ones." Her father looks at her, uncomprehending: "But the point is that they are real, that's the whole point."

The Darwin exhibit is about authenticity: on display are the actual magnifying glass that Darwin used, the actual notebooks in which he recorded his observations, indeed, the very notebook in which he wrote the famous sentences that first described his theory of evolution. But in the children's reactions to the inert but alive Galapagos turtle, the idea of the "original" is in crisis.

I have long believed that in the culture of simulation, the notion of authenticity is for us what sex was to the Victorians -- "threat and obsession, taboo and fascination." I have lived with this idea for many years, yet at the museum, I find the children's position startling, strangely unsettling. For these children, in this context, aliveness seems to have no intrinsic value. Rather, it is useful only if needed for a specific purpose. "If you put in a robot instead of the live turtle, do you think people should be told that the turtle is not alive?" I ask. Not really, say several of the children. Data on "aliveness" can be shared on a "need to know" basis, for a purpose.

But what are the purposes of living things? When do we need to know if something is alive?

Consider another vignette from 2005: an elderly woman in a nursing home outside of Boston is sad. Her son has broken off his relationship with her. Her nursing home is part of a study I am conducting on robotics for the elderly. I am recording her reactions as she sits with the robot Paro, a seal-like creature, advertised as the first "therapeutic robot" for its ostensibly positive effects on the ill, the elderly, and the emotionally troubled. Paro is able to make eye contact by sensing the direction of a human voice, is sensitive to touch, and has "states of mind" that are affected by how it is treated -- for example, whether it is stroked gently or treated aggressively. In this session with Paro, the woman, depressed because of her son's abandonment, comes to believe that the robot is depressed as well. She turns to Paro, strokes him and says: "Yes, you're sad, aren't you. It's tough out there. Yes, it's hard." And then she pets the robot once again, attempting to provide it with comfort. And in so doing, she tries to comfort herself.

The woman's sense of being understood is based on the ability of computational objects like Paro to convince their users that they are in a relationship.
I call these creatures (some virtual, some physical robots) "relational artifacts." Their ability to inspire relationship is not based on their intelligence or consciousness, but on their ability to push certain "Darwinian" buttons in people (making eye contact, for example) that make people respond as though they were in relationship. For me, relational artifacts are the new uncanny in our computer culture -- as Freud once put it, the long familiar taking a form that is strangely unfamiliar. As such, they confront us with new questions. What does this deployment of "nurturing technology" at the two most dependent moments of the life cycle say about us? What will it do to us? Do plans to provide relational robots to attend to children and the elderly make us less likely to look for other solutions for their care? People come to feel love for their robots, but if our experience with relational artifacts is based on a fundamentally deceitful interchange, can it be good for us? Or might it be good for us in the "feel good" sense, but bad for us in our lives as moral beings? Relationships with robots bring us back to Darwin and his dangerous idea: the challenge to human uniqueness. When we see children and the elderly exchanging tendernesses with robotic pets, the most important question is not whether children will love their robotic pets more than their real-life pets or even their parents, but rather, what will loving come to mean?

_________________________________________________________________

STEVEN STROGATZ
Applied mathematician, Cornell University; Author, Sync

The End of Insight

I worry that insight is becoming impossible, at least at the frontiers of mathematics. Even when we're able to figure out what's true or false, we're less and less able to understand why. An argument along these lines was recently given by Brian Davies in the Notices of the American Mathematical Society. He mentions, for example, that the four-color map theorem in graph theory was proven in 1976 with the help of computers, which exhaustively checked a huge but finite number of possibilities. No human mathematician could ever verify all the intermediate steps in this brutal proof, and even if someone claimed to, should we trust them? To this day, no one has come up with a more elegant, insightful proof. So we're left in the unsettling position of knowing that the four-color theorem is true but still not knowing why. Similarly important but unsatisfying proofs have appeared in group theory (in the classification of finite simple groups, roughly akin to the periodic table for chemical elements) and in geometry (in the problem of how to pack spheres so that they fill space most efficiently, a puzzle that goes back to Kepler around 1600 and that arises today in coding theory for telecommunications). In my own field of complex systems theory, Stephen Wolfram has emphasized that there are simple computer programs, known as cellular automata, whose dynamics can be so inscrutable that there's no way to predict how they'll behave; the best you can do is simulate them on the computer, sit back, and watch how they unfold. Observation replaces insight. Mathematics becomes a spectator sport. If this is happening in mathematics, the supposed pinnacle of human reasoning, it seems likely to afflict us in science too, first in physics and later in biology and the social sciences (where we're not even sure what's true, let alone why).
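Wolfram's point is easy to experience firsthand. Below is a minimal sketch in Python (my own illustration; the rule number and grid size are arbitrary choices, not anything drawn from Wolfram's work) of an elementary cellular automaton. The update rule is trivial, yet the only way to find out what pattern emerges is to run it and watch:

    # Minimal elementary cellular automaton. RULE and WIDTH are arbitrary
    # choices for illustration; Rule 110 is a famously inscrutable case.
    RULE, WIDTH, STEPS = 110, 64, 32

    def step(cells, rule=RULE):
        # Each new cell is looked up from its three-cell neighborhood.
        out = []
        for i in range(len(cells)):
            left = cells[(i - 1) % len(cells)]
            center = cells[i]
            right = cells[(i + 1) % len(cells)]
            out.append((rule >> ((left << 2) | (center << 1) | right)) & 1)
        return out

    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1  # start from a single live cell
    for _ in range(STEPS):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)

Rule 110, run this way, spawns interacting structures that no one knows how to predict short of running the simulation; it has even been proven capable of universal computation.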
When the End of Insight comes, the nature of explanation in science will change forever. We'll be stuck in an age of authoritarianism, except it'll no longer be coming from politics or religious dogma, but from science itself.

_________________________________________________________________

TERRENCE SEJNOWSKI
Computational Neuroscientist, Howard Hughes Medical Institute; Coauthor, The Computational Brain

When will the Internet become aware of itself?

I never thought that I would become omniscient during my lifetime, but as Google continues to improve and online information continues to expand, I have achieved omniscience for all practical purposes. The Internet has created a global marketplace for ideas and products, making it possible for individuals in the far corners of the world to connect directly with each other. The Internet has achieved these capabilities by growing exponentially in total communications bandwidth.

How does the communications power of the Internet compare with that of the cerebral cortex, the most interconnected part of our brains? Cortical connections are expensive because they take up volume and cost energy to send information in the form of spikes along axons. About 44% of the cortical volume in humans is taken up with long-range connections, called the white matter. Interestingly, the thickness of the gray matter, just a few millimeters, is nearly constant in mammals whose brain volumes range over five orders of magnitude, and the volume of the white matter scales approximately as the 4/3 power of the volume of the gray matter. The larger the brain, the larger the fraction of resources devoted to communications compared to computation. However, the global connectivity in the cerebral cortex is extremely sparse: the probability of any two cortical neurons having a direct connection is around one in a hundred for neurons in a vertical column 1 mm in diameter, but only one in a million for more distant neurons. Thus, only a small fraction of the computation that occurs locally can be reported to other areas, through a small fraction of the cells that connect distant cortical areas. Despite the sparseness of cortical connectivity, the potential bandwidth of all of the neurons in the human cortex is approximately a terabit per second, comparable to the total world backbone capacity of the Internet. However, this capacity is never achieved by the brain in practice because only a fraction of cortical neurons have a high rate of firing at any given time. Recent work by Simon Laughlin suggests that another physical constraint -- energy -- limits the brain's ability to harness its potential bandwidth.

The cerebral cortex also has a massive amount of memory. There are approximately one billion synapses between neurons under every square millimeter of cortex, or about one hundred million million synapses overall. Assuming around a byte of storage capacity at each synapse (including dynamic as well as static properties), this comes to a total of about 10^15 bits of storage. This is comparable to the amount of data on the entire Internet; Google can store this in terabyte disk arrays and has hundreds of thousands of computers simultaneously sifting through it. Thus, the Internet and our ability to search it are within reach of the limits of the raw storage and communications capacity of the human brain, and should exceed those limits by 2015.
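Sejnowski's figures are easy to check with back-of-the-envelope arithmetic. The sketch below (my own calculation, using only the numbers quoted in the essay, not independent measurements) reproduces the storage estimate and illustrates the white-matter scaling law:

    # Back-of-envelope check of the essay's estimates; all inputs are
    # the figures quoted in the text.
    synapses_per_mm2 = 1e9   # "one billion synapses ... every square millimeter"
    total_synapses   = 1e14  # "one hundred million million synapses overall"
    bits_per_synapse = 8     # "around a byte of storage ... at each synapse"

    storage_bits = total_synapses * bits_per_synapse
    print(f"implied cortical sheet: {total_synapses / synapses_per_mm2:.0e} mm^2")
    print(f"storage: {storage_bits:.0e} bits")  # ~10^15 bits, as claimed

    # White matter scales as the 4/3 power of gray matter, so bigger
    # brains devote a growing share of their volume to communication.
    for gray in (1.0, 10.0, 100.0):  # arbitrary units
        white = gray ** (4.0 / 3.0)
        print(f"gray {gray:6.1f} -> white/gray ratio {white / gray:.2f}")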
Leo van Hemmen and I recently asked 23 neuroscientists to think about what we don't yet know about the brain, and to propose a question so fundamental and so difficult that it could take a century to solve, following the tradition of Hilbert's 23 problems in mathematics. Christof Koch and Francis Crick speculated that the key to understanding consciousness was global communication: how do neurons in diverse parts of the brain manage to coordinate despite the limited connectivity? Sometimes the communication gets crossed, and V. S. Ramachandran and Edward Hubbard asked whether synesthetes -- rare individuals who experience crossover in sensory perception, such as hearing colors, seeing sounds, and tasting tactile sensations -- might give us clues to how the brain evolved.

There is growing evidence that the flow of information between parts of the cortex is regulated by the degree of synchrony of the spikes within populations of cells that represent perceptual states. Robert Desimone and his colleagues have examined the effects of attention on cortical neurons in awake, behaving monkeys and found that the coherence between the spikes of single neurons in the visual cortex and local field potentials in the gamma band (30-80 Hz) increased when the covert attention of a monkey was directed toward a stimulus in the receptive field of the neuron. The coherence also selectively increased when a monkey searched for a target with a cued color or shape amidst a large number of distracters. The increase in coherence means that neurons representing the stimuli with the cued feature would have greater impact on target neurons, making them more salient.

The link between attention and spike-field coherence raises a number of interesting questions. How does top-down input from the prefrontal cortex regulate the coherence of neurons in other parts of the cortex through feedback connections? How is the rapidity of the shifts in coherence achieved? Experiments on neurons in cortical slices suggest that inhibitory interneurons are connected to each other in networks and are responsible for gamma oscillations. Researchers in my laboratory have used computational models to show that excitatory inputs can rapidly synchronize a subset of the inhibitory neurons that are in competition with other inhibitory networks. Inhibitory neurons, long thought merely to block activity, are highly effective in synchronizing neurons in a local column already firing in response to a stimulus.

The oscillatory activity that is thought to synchronize neurons in different parts of the cortex occurs in brief bursts, typically lasting only a few hundred milliseconds. Thus, it is possible that there is a packet structure for long-distance communication in the cortex, similar to the packets that are used to communicate on the Internet, though with quite different protocols. The first electrical signals recorded from the brain, in 1875 by Richard Caton, were oscillatory signals that changed in amplitude and frequency with the state of alertness. The function of these oscillations remains a mystery, but it would be remarkable if it were discovered that these signals held the secrets to the brain's global communications network.

Since its inception in 1969, the Internet has been scaled up to a size not even imagined by its inventors, in contrast to most engineered systems, which fall apart when they are pushed beyond their design limits.
In part, the Internet achieves this scalability because it has the ability to regulate itself, deciding on the best routes to send packets depending on traffic conditions. Like the brain, the Internet has circadian rhythms that follow the sun as the planet rotates under it. The growth of the Internet over the last several decades more closely resembles biological evolution than engineering.

How would we know if the Internet were to become aware of itself? The problem is that we don't even know if some of our fellow creatures on this planet are self-aware. For all we know, the Internet is already aware of itself.

_________________________________________________________________

LYNN MARGULIS
Biologist, University of Massachusetts, Amherst; Coauthor (with Dorion Sagan), Acquiring Genomes: A Theory of the Origins of Species

Bacteria are us

What is my dangerous idea? Although the idea is arcane, the evidence for it is overwhelming; I have collected clues from many sources. Reminiscent of Oscar Wilde's claim that "even true things can be proved," I predict that the scientific gatekeepers in academia eventually will be forced to permit this dangerous idea to become widely accepted. What is it? Our sensibilities, the perceptions that register through our sense-organ cells, evolved directly from our bacterial ancestors. Signals in the environment -- light impinging on the eye's retina, taste on the buds of the tongue, odor through the nose, sound in the ear -- are translated into nervous impulses by extensions of sensory cells called cilia. We, like all other mammals, including our apish brothers, have taste-bud cilia, inner-ear cilia, and nasal-passage cilia that detect odors. We distinguish savory from sweet, birdsong from whalesong, drumbeats from thunder. With our eyes closed, we detect the light of the rising sun and feel the vibrations of the drums. These abilities to sense our surroundings by means of specialized cilia at the tips of sensory cells -- a heritage that preceded the evolution of all primates, indeed of all animals -- and the very existence of cilia in the tails of sperm, come from one kind of our bacterial ancestors. Which? Those that became cilia.

We owe our sensitivity to a loving touch, the scent of lavender, the taste of a salted nut or a vinaigrette, a police-cruiser siren, or a glimpse of brilliant starlight to our sensory cells. We owe the chemical attraction of the sperm, as its tail impels it to swim toward the egg (even in the moss plant), to its cilia. The dangerous idea is that the cilia evolved from hyperactive bacteria. Bacterial ancestors swam toward food and away from noxious gases; they moved up to the well-lit waters at the surface of the pond. They were startled when, in a crowd, some relative bumped them. These bacterial ancestors, which never slept, avoided water that was too hot or too salty. They still do.

Why is the concept that our sensitivities evolved directly from the swimming bacterial ancestors of our sensory cilia so dangerous? Several reasons: we would be forced to admit that bacteria are conscious, that they are sensitive to stimuli in their environment and behave accordingly. We would have to accept that bacteria, touted to be our enemies, are not merely neutral or friendly but that they are us. They are direct ancestors of our most sensitive body parts. Our culture's terminology about bacteria is that of warfare: they are germs to be destroyed and forever vanquished, bacterial enemies that make toxins to poison us.
We load our soaps with antibacterials that kill on contact; stomach ulcers are now agreed to be caused by bacterial infection. Even if some admit the existence of "good" bacteria in soil or in probiotic foods like yogurt, few of us tolerate the dangerous notion that human sperm tails and the sensitive cells of nasal passages lined with waving cilia are former bacteria. If this dangerous idea becomes widespread, it follows that we humans must agree that even before our evolution as animals, we have hated and tried to kill our own ancestors. Again, we have seen the enemy, indeed, and, as usual, it is us. Social interactions of sensitive bacteria, then, not God, made us who we are today.

_________________________________________________________________

THOMAS METZINGER
Frankfurt Institute for Advanced Studies; Johannes Gutenberg-Universität Mainz; President, German Cognitive Science Society; Author, Being No One

The Forbidden Fruit Intuition

We all would like to believe that, ultimately, intellectual honesty is not only an expression of your mental health, but also good for it. My dangerous question is whether one can be intellectually honest about the issue of free will and preserve one's mental health at the same time. Behind this question lies what I call the "Forbidden Fruit Intuition": is there a set of questions that are dangerous not on grounds of ideology or political correctness, but because the most obvious answers to them could ultimately make our conscious self-models disintegrate? Can one really believe in determinism without going insane?

For middle-sized objects at 37°C, like the human brain and the human body, determinism is obviously true. The next state of the physical universe is always determined by the previous state. And given a certain brain state plus an environment, you could never have acted otherwise -- a surprisingly large majority of experts in the free-will debate today accept this obvious fact. Although your future is open, this probably also means that for every single future thought you will have and for every single decision you will make, it is true that it was determined by your previous brain state.

As a scientifically well-informed person you believe this theory; you endorse it. As an open-minded person you find that you are also interested in modern philosophy of mind, and you might hear a story much like the following one. Yes, you are a physically determined system. But this is not a big problem, because, under certain conditions, we may still continue to say that you are "free": all that matters is that your actions are caused by the right kinds of brain processes and that they originate in you. A physically determined system can perfectly well be sensitive to reasons and to rational arguments, to moral considerations, to questions of value and ethics, as long as all of this is appropriately wired into its brain. You can be rational, and you can be moral, as long as your brain is physically determined in the right way.

You like this basic idea: physical determinism is compatible with being a free agent. You endorse a materialist philosophy of freedom as well. An intellectually honest person open to empirical data, you simply believe that something along these lines must be true. Now you try to feel that it is true. You try to consciously experience the fact that at any given moment of your life, you could not have acted otherwise.
You try to experience the fact that even your thoughts, however rational and moral, are predetermined -- by something unconscious, by something you cannot see. And in doing so, you start fooling around with the conscious self-model Mother Nature evolved for you with so much care and precision over millions of years: you are scratching at the user-surface of your own brain, tweaking the mouse-pointer, introspectively trying to penetrate into the operating system, attempting to make the invisible visible. You are challenging the integrity of your phenomenal self by trying to integrate your new beliefs, the neuroscientific image of man, with your most intimate, inner way of experiencing yourself. How does it feel?

I think that the irritation and deep sense of resentment surrounding public debates on the freedom of the will actually have little to do with the actual options on the table. They have to do with the -- perfectly sensible -- intuition that the presently obvious answer will not only be emotionally disturbing, but ultimately impossible to integrate into our conscious self-models. Or into our societies: the robust conscious experience of free will is also a social institution, because attributions of accountability, responsibility, and so on are the decisive building blocks of modern, open societies. And the currently obvious answer might be interpreted by many as having clearly anti-democratic implications: making a complex society work implies controlling the behavior of millions of people; if individual human beings can control their own behavior to a much lesser degree than we have thought in the past, if bottom-up doesn't work, then it becomes tempting to control behavior top-down, by the state. And this is the second way in which enlightenment could devour its own children. Yes, free will truly is a dangerous question, but for different reasons than most people think.

_________________________________________________________________

DIANE F. HALPERN
Professor of Psychology, Claremont McKenna College; Past-president (2005), American Psychological Association; Author, Thought and Knowledge

Choosing the sex of one's child

For an idea to be truly dangerous, it needs to have a strong and nearly universal appeal. The idea of being able to choose the sex of one's own baby is just such an idea. Anyone who has a deep-seated and profound preference for a son or a daughter knows that this preference may not be rational, and that it may reflect a prejudice they would rather leave unacknowledged. It is easy to dismiss the ability to decide the sex of one's baby as inconsequential. It is already medically feasible for a woman or couple to choose the sex of a baby that has not yet been conceived. There are a variety of safe methods available, such as Preimplantation Genetic Diagnosis (PGD), a technique originally designed for couples with fertility problems, not for the purpose of selecting the sex of one's next child. With PGD, embryos are created in a Petri dish, tested for sex, and then implanted in the womb, so that the baby-to-be is already identified as female or male before implantation.

The pro argument is simple: if the parents-to-be are adults, why not? People have always wanted to be able to choose the sex of their children. There are ancient records of medicine men and wizened women with various herbs and assorted advice about what to do to (usually) have a son.
So, what should it matter if modern medicine can finally deliver what old wives' tales have promised for countless generations? Couples won't have to have a "wasted" child, such as a second child of the same sex as the first one, when they really wanted "one of each." If a society has too many boys for a while, who cares? The shortage of females will make females more valuable, and the market economy will even out in time. In the meantime, families will "balance out," each one the ideal composition as desired by the adults in the family.

Every year for the last two decades I have asked students in my college classes to write down the number of children they would like to have and the order in which they ideally want to have girls and boys. I have taught in several different countries (e.g., Turkey, Russia, and Mexico) and types of universities, but despite large differences, the modal response is two children, first a boy, then a girl. If students reply that they want one child, it is most often a boy; if it is three children, they are most likely to want a boy, then a girl, then a boy. The students in my classes are not a random sample of the population: they are well educated and more likely to hold egalitarian attitudes than the general population. Yet if they acted on their stated intentions, even they would have an excess of first-borns who are male, and an excess of males overall. In a short time, the personality characteristics associated with being either an only child or a first-born and those associated with being male would be so confounded that it would be difficult to separate them.

The excess of males that would result from allowing every mother or couple to choose the sex of their next baby would not correct itself at the societal level, because at the individual level the preference for sons is stronger than the market forces of supply and demand. The evidence for this conclusion comes from many sources, including regions of the world where the ratio of young women to men is so low that it could only be caused by selective abortion and female infanticide (UNICEF and other sources). In some regions of rural China there are so few women that wives are imported from the Philippines and men move to distant cities to find women to marry. In response, the Chinese government is now offering a variety of education and cash incentives to families with multiple daughters. There are still few daughters being born in these rural areas, where prejudice against girls is stronger than government incentives and mandates. In India, the number of abortions of female fetuses has increased since sex-selective abortion was made illegal in 1994. The desire for sons is even stronger than the threat of legal action.

In the United States, the data that show preferences for sons are more subtle than the disparate ratios of females and males found in other parts of the world, but the preference for sons is still strong. Because of space limitations, I list only a few of the many indicators that parents in the United States prefer sons: families with two daughters are more likely to have a third child than families with two sons; unmarried pregnant women who undergo ultrasound to determine the sex of the yet-unborn child are less likely to be married at the time of the child's birth when the child is a girl than when it is a boy; and divorced women with a son are more likely to remarry than divorced women with a daughter.
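The arithmetic behind Halpern's warning is easy to check. The sketch below is a toy calculation: the preference shares are hypothetical stand-ins loosely matching her description of the modal responses, not her actual survey data. Even so, plans like these produce an excess of first-born boys and of boys overall:

    # Toy check of the classroom-survey claim; the shares are invented.
    preferences = {
        ("boy", "girl"): 0.60,          # modal: two children, boy first
        ("boy",): 0.15,                 # one child: usually a boy
        ("boy", "girl", "boy"): 0.15,   # three children
        ("girl", "boy"): 0.10,          # a minority prefer a girl first
    }

    firstborn_boys = sum(share for kids, share in preferences.items()
                         if kids[0] == "boy")
    boys = sum(share * kids.count("boy") for kids, share in preferences.items())
    girls = sum(share * kids.count("girl") for kids, share in preferences.items())

    print(f"share of first-borns who are boys: {firstborn_boys:.0%}")  # 90%
    print(f"boys per girl overall: {boys / girls:.2f}")                # ~1.35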
Perhaps the only ideas more dangerous than that of choosing the sex of one's child would be trying to stop medical science from making the advances that allow such choices, or allowing the government to control the choices we can make as citizens. There are many important questions to ponder, including how to find creative ways to reduce or avoid negative consequences from even more dangerous alternatives. Consider, for example: what would our world be like if there were substantially more men than women? What if only the rich, or only those who live in "rich countries," were able to choose the sex of their children? Is it likely that an approximately equal number of boys and girls would be, or could be, selected? If not, could a society, or should a society, make equal numbers of girls and boys a goal? I am guessing that many readers of child-bearing age want to choose the sex of their (as yet) unconceived children and can reason that there is no harm in this practice. And if you could also choose intelligence, height, and hair color, would you add that too? But then, there are few things in life that are as appealing as the possibility of a perfectly balanced family, which according to the modal response means an older son and a younger daughter, looking just like an improved version of you.

_________________________________________________________________

GARY MARCUS
Psychologist, New York University; Author, The Birth of the Mind

Minds, genes, and machines

Brains exist primarily to do two things: to communicate (transfer information) and to compute. This is true in every creature with a nervous system, and no less true in the human brain. In short, the brain is a machine. And the basic structure of that brain, the biological substrate of all things mental, is guided in no small part by information carried in the DNA.

In the twenty-first century, these claims should no longer be controversial. With each passing day, techniques like magnetic resonance imaging and electrophysiological recordings from individual neurons make it clearer that the business of the brain is information processing, while new fields like comparative genomics and developmental neuroembryology remove any possible doubt that genes significantly influence both behavior and brain. Yet there are many people, scientists and laypersons alike, who fear or wish to deny these notions, to doubt or even reject the idea that the mind is a machine and that it is significantly (though of course not exclusively) shaped by genes. Even as the religious right prays for Intelligent Design, the academic left insinuates that merely discussing the idea of innateness is dangerous, as in a prominent child development manifesto that concluded: "If scientists use words like 'instinct' and 'innateness' in reference to human abilities, then we have a moral responsibility to be very clear and explicit about what we mean. If our careless, underspecified choice of words inadvertently does damage to future generations of children, we cannot turn with innocent outrage to the judge and say 'But your Honor, I didn't realize the word was loaded.'"

A new academic journal called Metascience focuses on when extra-scientific considerations influence the process of science. Sadly, the twin questions of whether we are machines and whether we are constrained significantly by our biology very much fall into this category -- questions where members of the academy (not to mention fans of Intelligent Design) close their minds.
Copernicus put us in our place, so to speak, by showing that our planet is not at the center of the universe; advances in biology are putting us further in our place by showing that our brains are as much a product of biology as any other part of our body, and by showing that our (human) brains are built by the very same processes as those of other creatures. Just as the earth is just one planet among many, from the perspective of the toolkit of developmental biology, our brain is just one more arrangement of molecules.

_________________________________________________________________

JARON LANIER
Computer Scientist and Musician

Homuncular Flexibility

The homunculus is an approximate mapping of the human body in the cortex. It is often visualized as a distorted human body stretched along the top of the human brain. The tongue, thumbs, and other body parts with extra-rich brain connections are enlarged in the homunculus, giving it a vaguely obscene, impish character.

Long ago, in the 1980s, my colleagues and I at VPL Research built virtual worlds in which more than one person at a time could be present. People in a shared virtual world must be able to see each other, as well as use their bodies together, as when two people lift a large virtual object or ride a tandem virtual bicycle. None of this would be possible without virtual bodies. It was a self-evident and inviting challenge to attempt to create the most accurate possible bodies, given the crude state of the technology at the time. To do this, we developed full-body suits covered in sensors. A measurement made on the body of someone wearing one of these suits, such as an aspect of the flex of a wrist, would be applied to control a corresponding change in a virtual body. Before long, people were dancing and otherwise goofing around in virtual reality.

Of course there were bugs. I distinctly remember a wonderful bug that caused my hand to become enormous, like a web of flying skyscrapers. As is often the case, this accident led to an interesting discovery. It turned out that people could quickly learn to inhabit strange and different bodies and still interact with the virtual world. I became curious how weird the body could get before the mind would become disoriented. I played around with elongated limb segments and strange limb placement. The most curious experiment involved a virtual lobster (which was lovingly modeled by Ann Lasko). A lobster has a trio of little midriff arms on each side of its body. If physical human bodies sprouted corresponding limbs, we would have measured them with an appropriate body suit and that would have been that. I assume it will not come as a surprise to the reader that the human body does not include these little arms, so the question arose of how to control them. The answer was to extract a little influence from each of many parts of the physical body and merge these data streams into a single control signal for a given joint in the extra lobster limbs. A touch of human elbow twist, a dash of human knee flex; a dozen such movements might be mixed to control the middle joint of little left limb #3. The result was that the principal elbows and knees could still control their virtual counterparts roughly as before, while also contributing to the control of additional limbs. Yes, it turns out people can learn to control bodies with extra limbs!
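As a rough illustration of the mixing Lanier describes, here is a minimal sketch (my own reconstruction, not VPL's actual code; the joint names and weights are invented) in which each virtual joint, including a joint in an extra lobster limb, is driven by a weighted blend of measured human joint angles:

    # Toy version of "merge many body measurements into one control
    # signal per extra-limb joint". All names and weights are hypothetical.

    # Angles measured from the body suit, in degrees.
    human_joints = {"elbow_twist": 30.0, "knee_flex": 45.0, "wrist_flex": 10.0}

    # Each virtual joint takes a small influence from several human joints.
    mixing_weights = {
        "lobster_limb3_middle_joint": {"elbow_twist": 0.2, "knee_flex": 0.1},
        "virtual_elbow": {"elbow_twist": 0.9},  # still mostly as before
    }

    def drive_virtual_joints(measured, weights):
        # Weighted sum of human joint angles for each virtual joint.
        return {
            vjoint: sum(w * measured[hjoint] for hjoint, w in mix.items())
            for vjoint, mix in weights.items()
        }

    print(drive_virtual_joints(human_joints, mixing_weights))
    # {'lobster_limb3_middle_joint': 10.5, 'virtual_elbow': 27.0}

The design point is the one Lanier makes in prose: because each human joint contributes only a small weight to the extra limbs, the real elbows and knees keep doing their ordinary jobs while the surplus degrees of freedom are conjured out of the blend.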
The biologist Jim Bower, when considering this phenomenon, commented that the human nervous system evolved through all the creatures that preceded us in our long evolutionary line, which included some pretty strange creatures, if you go back far enough. Why wouldn't we retain some homuncular flexibility with a pedigree like that?

The original experiments of the 1980s were not carried out formally, but recently it has become possible to explore the phenomenon in a far more rigorous way. Jeremy Bailenson at Stanford has created a marvelous new lab for studying multiple human subjects in high-definition shared virtual worlds, and we are now planning to repeat, improve, and extend these experiments. The most interesting questions still concern the limits to homuncular flexibility. We are only beginning the project of mapping how far it can go.

Why is homuncular flexibility a dangerous idea? Because the more flexible the human brain turns out to be when it comes to adapting to weirdness, the weirder a ride it will be able to keep up with as technology changes in the coming decades and centuries. Will kids in the future grow up with the experience of living in four spatial dimensions as well as three? That would be a world with a fun elementary school math curriculum! If you're most interested in the raw accumulation of technological power, you might not find this so interesting, but if you think in terms of how human experience can change, then this is the most fascinating stuff there is.

Homuncular flexibility isn't the only source of hints about how weird human experience might get in the future. There are also questions related to language, memory, and other aspects of cognition, as well as hypothetical prospects for engineering changes in the brain. But in this one area, there's an indication of high weirdness to come, and I find that prospect dangerous, but in a beautiful and seductive way. "Thrilling" might be a better word.

_________________________________________________________________

W. DANIEL HILLIS
Physicist, Computer Scientist; Chairman, Applied Minds, Inc.; Author, The Pattern on the Stone

The idea that we should all share our most dangerous ideas

I don't share my most dangerous ideas. Ideas are the most powerful forces that we can unleash upon the world, and they should not be let loose without careful consideration of their consequences. Some ideas are dangerous because they are false, like the idea that one race of humans is more worthy than another, or that one religion has a monopoly on the truth. False ideas like these spread like wildfire, and have caused immeasurable harm. They still do. Such false ideas should obviously not be spread or encouraged, but there are also plenty of true ideas that should not be spread: ideas about how to cause terror and pain and chaos, ideas about how to better convince people of things that are not true. I have often seen otherwise thoughtful people so caught up in such an idea that they seem unable to resist sharing it. To me, the idea that we should all share our most dangerous ideas is, itself, a very dangerous idea. I just hope that it never catches on.
_________________________________________________________________

NEIL GERSHENFELD
Physicist; Director, Center for Bits and Atoms, MIT; Author, Fab

Democratizing access to the means of invention

The elite temples of research (of the kind I've happily spent my career in) may be becoming intellectual dinosaurs as a result of the digitization and personalization of fabrication. Today, with about $20k in equipment, it's possible to make and measure things from microns and microseconds on up, and that boundary is quickly receding. When I came to MIT, that was hard to do. If it's no longer necessary to go to MIT for its facilities, then surely the intellectual community is its real resource? But my colleagues (and I) are always either traveling or over-scheduled; the best way for us to see each other is to go somewhere else. Like many people, my closest collaborators are in fact distributed around the world. The ultimate consequence of the digitization of first communications, then computation, and now fabrication is to democratize access to the means of invention. The third world can skip over the first and second cultures and go right to developing a third culture. Rather than today's model of researchers researching for researchees, the result of all that discovery can be a planet of creators rather than consumers.

_________________________________________________________________

PAUL STEINHARDT
Albert Einstein Professor of Science, Princeton University

It's a matter of time

For decades, the commonly held view among scientists has been that space and time first emerged about fourteen billion years ago in a big bang. According to this picture, the cosmos transformed from a nearly uniform gas of elementary particles to its current complex hierarchy of structure, ranging from quarks to galaxy superclusters, through an evolutionary process governed by simple, universal physical laws. In the past few years, though, confidence in this point of view has been shaken as physicists have discovered finely tuned features of our universe that seem to defy natural explanation. The prime culprit is the cosmological constant, which astronomers have measured to be exponentially smaller than naïve estimates would predict. On the one hand, it is crucial that the cosmological constant be so small, or else it would cause space to expand so rapidly that galaxies and stars would never form. On the other hand, no theoretical mechanism has been found within the standard Big Bang picture that would explain the tiny value.

Desperation has led to a "dangerous" idea: perhaps we live in an anthropically selected universe. According to this view, we live in a multiverse (a multitude of universes) in which the cosmological constant varies randomly from one universe to the next. In most universes, the value is incompatible with the formation of galaxies, planets, and stars. The reason why our cosmological constant has the value it does is that it is one of the rare cases in which the value happens to lie in the narrow range compatible with life. This is the ultimate example of "unintelligent design": the multiverse tries every possibility with reckless abandon and only very rarely gets things "right" -- that is, consistent with everything we actually observe. It suggests that the creation of unimaginably enormous volumes of uninhabitable space is essential to obtain a few rare habitable spaces. I consider this approach to be extremely dangerous, for two reasons.
First, it relies on complex assumptions about physical conditions far beyond the range of conceivable observation, so it is not scientifically verifiable. Second, I think it leads inevitably to a depressing end to science. What is the point of exploring further the randomly chosen physical properties in our tiny corner of the multiverse if most of the multiverse is so different? I think it is far too early to be so desperate. This is a dangerous idea that I am simply unwilling to contemplate.

My own "dangerous" idea is more optimistic but precarious, because it bucks the current trends in cosmological thinking. I believe that the finely tuned features may be naturally explained by supposing that our universe is much older than we have imagined. With more time, a new possibility emerges. The cosmological "constant" may not be constant after all. Perhaps it is varying so slowly that it only appears to be constant. Originally it had the much larger value that we would naturally estimate, but the universe is so old that its value has had a chance to relax to the tiny value measured today. Furthermore, in several concrete examples, one finds that the evolution of the cosmological constant slows down as its value approaches zero, so most of the history of the universe transpires when its value is tiny, just as we find today.

This idea that the cosmological constant is decreasing has been considered in the past. In fact, physically plausible slow-relaxation mechanisms have been identified. But the timing was thought to be impossible. If the cosmological constant decreases very slowly, it causes the expansion rate to accelerate too early and galaxies never form. If it decreases too quickly, the expansion rate never accelerates, which is inconsistent with recent observations. As long as the cosmological constant has only 14 billion years to evolve, there is no feasible solution.

But recently some cosmologists have been exploring the possibility that the universe is exponentially older. In this picture, the evolution of the universe is cyclic. The Big Bang is not the beginning of space and time but, rather, a sudden creation of hot matter and radiation that marks the transition from one period of expansion and cooling to the next cycle of evolution. Each cycle might last a trillion years, say. Fourteen billion years marks the time since the last infusion of matter and radiation, but this is brief compared to the total age of the universe; the number of cycles in the past may have been ten to the googol power or more! Then, using the slow-relaxation mechanisms considered previously, it becomes possible that the cosmological constant decreases steadily from one cycle to the next. Since the number of cycles is likely to be enormous, there is enough time for the cosmological constant to shrink by an exponential factor, even though the decrease over the course of any one cycle is too small to be detectable. Because the evolution slows down as the cosmological constant decreases, this is the period when most of the cycles take place. There is no multiverse and there is nothing special about our region of space -- we live in a typical region at a typical time.

Remarkably, this idea is scientifically testable. The picture makes explicit predictions about the distribution of primordial gravitational waves and variations in temperature and density.
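A toy calculation (my own illustrative arithmetic, not Steinhardt's actual relaxation mechanism) shows why an enormous number of cycles does work that 14 billion years cannot. If each cycle shrinks the cosmological constant by the same tiny fraction $\epsilon$, the decrease compounds exponentially:

    \Lambda_n = \Lambda_0 (1 - \epsilon)^n \approx \Lambda_0 \, e^{-\epsilon n}

With $\epsilon = 10^{-3}$, far too small to detect within any one cycle, $n = 10^5$ cycles already shrink $\Lambda$ by a factor of about $e^{-100} \approx 10^{-44}$.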
Also, if the cosmological constant is evolving at the slow rate suggested, then ongoing attempts to detect a temporal variation should find no change. So, we may enjoy speculating now about which dangerous ideas we prefer, but ultimately it is Nature that will decide if any of them is right. It is just a matter of time.

_________________________________________________________________

SAM HARRIS
Neuroscience Graduate Student, UCLA; Author, The End of Faith

Science Must Destroy Religion

Most people believe that the Creator of the universe wrote (or dictated) one of their books. Unfortunately, there are many books that pretend to divine authorship, and each makes incompatible claims about how we all must live. Despite the ecumenical efforts of many well-intentioned people, these irreconcilable religious commitments still inspire an appalling amount of human conflict. In response to this situation, most sensible people advocate something called "religious tolerance." While religious tolerance is surely better than religious war, tolerance is not without its liabilities. Our fear of provoking religious hatred has rendered us incapable of criticizing ideas that are now patently absurd and increasingly maladaptive. It has also obliged us to lie to ourselves -- repeatedly and at the highest levels -- about the compatibility between religious faith and scientific rationality.

The conflict between religion and science is inherent and (very nearly) zero-sum. The success of science often comes at the expense of religious dogma; the maintenance of religious dogma always comes at the expense of science. It is time we conceded a basic fact of human discourse: either a person has good reasons for what he believes, or he does not. When a person has good reasons, his beliefs contribute to our growing understanding of the world. We need not distinguish between "hard" and "soft" science here, or between science and other evidence-based disciplines like history. There happen to be very good reasons to believe that the Japanese bombed Pearl Harbor on December 7th, 1941. Consequently, the idea that the Egyptians actually did it lacks credibility. Every sane human being recognizes that to rely merely upon "faith" to decide specific questions of historical fact would be both idiotic and grotesque -- that is, until the conversation turns to the origin of books like the Bible and the Koran, to the resurrection of Jesus, to Muhammad's conversation with the angel Gabriel, or to any of the other hallowed travesties that still crowd the altar of human ignorance.

Science, in the broadest sense, includes all reasonable claims to knowledge about ourselves and the world. If there were good reasons to believe that Jesus was born of a virgin, or that Muhammad flew to heaven on a winged horse, these beliefs would necessarily form part of our rational description of the universe. Faith is nothing more than the license that religious people give one another to believe such propositions when reasons fail. The difference between science and religion is the difference between a willingness to dispassionately consider new evidence and new arguments, and a passionate unwillingness to do so. The distinction could not be more obvious, or more consequential, and yet it is everywhere elided, even in the ivory tower. Religion is fast becoming incompatible with the emergence of a global, civil society.
Religious faith -- faith that there is a God who cares what name he is called, that one of our books is infallible, that Jesus is coming back to earth to judge the living and the dead, that Muslim martyrs go straight to Paradise, etc. -- is on the wrong side of an escalating war of ideas. The difference between science and religion is the difference between a genuine openness to the fruits of human inquiry in the 21st century, and a premature closure to such inquiry as a matter of principle. I believe that the antagonism between reason and faith will only grow more pervasive and intractable in the coming years. Iron Age beliefs -- about God, the soul, sin, free will, etc. -- continue to impede medical research and distort public policy. The possibility that we could elect a U.S. President who takes biblical prophecy seriously is real and terrifying; the prospect that we will one day confront Islamists armed with nuclear or biological weapons is also terrifying, and it grows more probable by the day. We are doing very little, at the level of our intellectual discourse, to prevent such possibilities. In the spirit of religious tolerance, most scientists are keeping silent when they should be blasting the hideous fantasies of a prior age with all the facts at their disposal.

To win this war of ideas, scientists and other rational people will need to find new ways of talking about ethics and spiritual experience. The distinction between science and religion is not a matter of excluding our ethical intuitions and non-ordinary states of consciousness from our conversation about the world; it is a matter of our being rigorous about what is reasonable to conclude on their basis. We must find ways of meeting our emotional needs that do not require the abject embrace of the preposterous. We must learn to invoke the power of ritual and to mark those transitions in every human life that demand profundity -- birth, marriage, death, etc. -- without lying to ourselves about the nature of reality. I am hopeful that the necessary transformation in our thinking will come about as our scientific understanding of ourselves matures. When we find reliable ways to make human beings more loving, less fearful, and genuinely enraptured by the fact of our appearance in the cosmos, we will have no need for divisive religious myths. Only then will the practice of raising our children to believe that they are Christian, Jewish, Muslim, or Hindu be broadly recognized as the ludicrous obscenity that it is. And only then will we stand a chance of healing the deepest and most dangerous fractures in our world.

_________________________________________________________________

SCOTT ATRAN
Anthropologist, University of Michigan; Author, In Gods We Trust

Science encourages religion in the long run (and vice versa)

Ever since Edward Gibbon's Decline and Fall of the Roman Empire, scientists and secularly minded scholars have been predicting the ultimate demise of religion. But, if anything, religious fervor is increasing across the world, including in the United States, the world's most economically powerful and scientifically advanced society. An underlying reason is that science treats humans and intentions only as incidental elements in the universe, whereas for religion they are central. Science is not particularly well suited to deal with people's existential anxieties, including death, deception, sudden catastrophe, loneliness, or longing for love or justice.
It cannot tell us what we ought to do, only what we can do. Religion thrives because it addresses people's deepest emotional yearnings and society's foundational moral needs, perhaps even more so in complex and mobile societies that are increasingly divorced from nurturing family settings and long-familiar environments.

From a scientific perspective on the overall structure and design of the physical universe:

1. Human beings are accidental and incidental products of the material development of the universe, almost wholly irrelevant and readily ignored in any general description of its functioning. Beyond Earth, there is no intelligence -- however alien or like our own -- that is watching out for us or cares. We are alone.

2. Human intelligence and reason, which search for the hidden traps and causes in our surroundings, evolved and will always remain leashed to our animal passions -- in the struggle for survival, the quest for love, the yearning for social standing and belonging. This intelligence does not easily suffer loneliness, any more than it abides the looming prospect of death, whether individual or collective.

Religion is the hope that science is missing (something more in the endeavor to miss nothing). But doesn't religion impede science, and vice versa? Not necessarily. Leaving aside the sociopolitical stakes in the opposition between science and religion (which vary widely and are not constitutive of science or religion per se -- Calvin considered obedience to tyrants as exhibiting trust in God; Franklin wanted the motto of the American Republic to be "rebellion against tyranny is obedience to God"), a crucial difference between science and religion is that factual knowledge as such is not a principal aim of religious devotion, but plays only a supporting role. Only in the last decade has the Catholic Church reluctantly acknowledged the factual plausibility of Copernicus, Galileo, and Darwin. Earlier religious rejection of their theories stemmed from the challenges they posed to a cosmic order unifying the moral and material worlds. Separating out the core of the material world would be like draining the pond where a water lily grows. A long lag time was necessary to refurbish and remake the moral and material connections in such a way as to permit faith in a unified cosmology to survive.

_________________________________________________________________

MARCELO GLEISER
Physicist, Dartmouth College; Author, The Prophet and the Astronomer

Can science explain itself?

There have been many times when I have asked myself if we scientists, especially those of us seeking to answer "ultimate" questions such as the origin of the Universe, are not beating the wrong drum. Of course, by trying to answer such a question as the origin of everything, we assume we can. We plow ahead, proposing tentative models that join general relativity and quantum mechanics and use knowledge from high-energy physics to propose models where the universe pops out of nothing, no energy required, due to a random quantum fluctuation. To this we add the randomness of fundamental constants, saying that their values are the way they are due to an accident: other universes may well have other values of the charge and mass of the electron and thus completely different properties. So our universe becomes this very special place where things "conspire" to produce galaxies, stars, planets, and life. What if this is all bogus?
What if we look at science as a narrative, a description of the world that has limitations based on its structure? The constants of Nature are the letters of the alphabet, the laws are the grammar rules, and we build these descriptions through the guiding hand of the so-called scientific method. Period. To say things are this way because otherwise we wouldn't be here to ask the question is to miss the point altogether: things are this way because this is the story we humans tell based on the way we see the world and explain it.

If we take this to the extreme, it means that we will never be able to answer the question of the origin of the Universe, since it implicitly assumes that science can explain itself. We can build any cool and creative models we want using any marriage of quantum mechanics and relativity, but we still won't understand why these laws and not others. In a sense, this means that our science is our science and not something universally true, as many believe it is. This is not bad at all, given what we can do with it, but it does place limits on knowledge. Which may not be a bad thing either. It's OK not to know everything; it doesn't make science weaker. Only more human.

_________________________________________________________________

DOUGLAS RUSHKOFF
Media Analyst; Documentary Writer; Author, Get Back in the Box: Innovation from the Inside Out

Open Source Currency

It's not only dangerous and by most counts preposterous; it's happening. Open source or, in more common parlance, "complementary" currencies are collaboratively established units representing hours of labor that can be traded for goods or services in lieu of centralized currency. The advantage is that while the value of centralized currency is based on its scarcity, the bias of complementary or local currencies is towards their abundance. So instead of having to involve the Fed in every transaction -- and using money that requires being paid back with interest -- we can invent our own currencies and create value with our labor.

It's what the Japanese did at the height of the recession. No, not the Japanese government, but unemployed Japanese people who couldn't afford to pay healthcare costs for their elder relatives in distant cities. They created a currency through which people could care for someone else's grandmother, and accrue credits for someone else to take care of theirs.

Throughout most of history, complementary currencies existed alongside centralized currency. While local currency was used for labor and local transactions, centralized currencies were used for long-distance and foreign trade. Local currencies were based on a model of abundance -- there was so much of it that people constantly invested it. That's why we saw so many cathedrals being built in the late Middle Ages, and unparalleled levels of investment in infrastructure and maintenance. Centralized currency, on the other hand, needed to retain value over long distances and periods of time, so it was based on precious and scarce resources, such as gold.

The problem started during the Renaissance: as kings attempted to centralize their power, most local currencies were outlawed. This new monopoly on currency reduced entire economies into scarcity engines, encouraging competition over collaboration, protectionism over sharing, and fixed commodities over renewable resources. Today, money is lent into existence by the Fed or another central bank -- and paid back with interest.
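The mechanics of such a currency are simple enough to sketch in code. The toy ledger below is a minimal illustration of the time-bank idea (the member names are invented, and this is not any real system's software): it credits a caregiver and debits a recipient in hours of labor, so the currency is created by the transaction itself rather than lent into existence at interest:

    # Toy mutual-credit ("time bank") ledger. Balances start at zero and
    # always sum to zero: credit is created by work, not lent at interest.
    from collections import defaultdict

    balances = defaultdict(float)  # hours of labor, per member

    def record_care(caregiver, recipient, hours):
        # The caregiver earns hours; the recipient owes hours to the pool.
        balances[caregiver] += hours
        balances[recipient] -= hours

    # A son has a distant neighbor care for his grandmother, then works
    # off his debt by caring for someone else's relative.
    record_care("osaka_neighbor", "tokyo_son", 3.0)
    record_care("tokyo_son", "another_family", 3.0)

    print(dict(balances))                       # tokyo_son is back to 0.0
    assert abs(sum(balances.values())) < 1e-9   # the system nets to zero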
Centralized cash is a medium; and like any medium, it has certain biases. The money we use today is just one model of money. Turning currency into a collaborative phenomenon is the final frontier in the open source movement. It's what would allow for an economic model that could support a renewable energies industry, a way for companies such as Wal-Mart to add value to the communities they currently drain, and a way of working with money that doesn't have bankruptcy built in as a given circumstance.

_________________________________________________________________

JUDITH RICH HARRIS
Independent Investigator and Theoretician; Author, The Nurture Assumption

The idea of zero parental influence

Is it dangerous to claim that parents have no power at all (other than genetic) to shape their child's personality, intelligence, or the way he or she behaves outside the family home? More to the point, is this claim false? Was I wrong when I proposed that parents' power to do these things by environmental means is zero, nada, zilch?

A confession: when I first made this proposal ten years ago, I didn't fully believe it myself. I took an extreme position, the null hypothesis of zero parental influence, for the sake of scientific clarity. Making myself an easy target, I invited the establishment -- research psychologists in the academic world -- to shoot me down. I didn't think it would be all that difficult for them to do so. It was clear by then that there weren't any big effects of parenting, but I thought there must be modest effects that I would ultimately have to acknowledge.

The establishment's failure to shoot me down has been nothing short of astonishing. One developmental psychologist even admitted, one year ago on this very website, that researchers hadn't yet found proof that "parents do shape their children," but she was still convinced that they will eventually find it, if they just keep searching long enough. Her comrades in arms have been less forthright. "There are dozens of studies that show the influence of parents on children!" they kept saying, but then they'd somehow forget to name them -- perhaps because these studies were among the ones I had already demolished (by showing that they lacked the necessary controls or the proper statistical analyses). Or they'd claim to have newer research that provided an airtight case for parental influence, but again there was a catch: the work had never been published in a peer-reviewed journal. When I investigated, I could find no evidence that the research in question had actually been done or, if done, that it had produced the results that were claimed for it. At most, it appeared to consist of preliminary work, with too little data to be meaningful (or publishable). Vaporware, I call it.

Some of the vaporware has achieved mythic status. You may have heard of Stephen Suomi's experiment with nervous baby monkeys, supposedly showing that those reared by "nurturant" adoptive monkey mothers turn into calm, socially confident adults. Or of Jerome Kagan's research with nervous baby humans, supposedly showing that those reared by "overprotective" (that is, nurturant) human mothers are more likely to remain fearful.

Researchers like these might well see my ideas as dangerous. But is the notion of zero parental influence dangerous in any other sense? So it is alleged.
Here's what Frank Farley, former president of the American Psychological Association, told a journalist in 1998:

[Harris's] thesis is absurd on its face, but consider what might happen if parents believe this stuff! Will it free some to mistreat their kids, since "it doesn't matter"? Will it tell parents who are tired after a long day that they needn't bother even paying any attention to their kid since "it doesn't matter"?

Farley seems to be saying that the only reason parents are nice to their children is that they think it will make the children turn out better! And that if parents believed they had no influence at all on how their kids turn out, they would be likely to abuse or neglect them.

Which, it seems to me, is absurd on its face. Most chimpanzee mothers are nice to their babies and take good care of them. Do chimpanzees think they're going to influence how their offspring turn out? Doesn't Frank Farley know anything at all about evolutionary biology and evolutionary psychology?

My idea is viewed as dangerous by the powers that be, but I don't think it's dangerous at all. On the contrary: if people accepted it, it would be a breath of fresh air. Family life, for parents and children alike, would improve. Look what's happening now as a result of the faith, obligatory in our culture, in the power of parents to mold their children's fragile psyches. Parents are exhausting themselves in their efforts to meet their children's every demand, not realizing that evolution designed offspring -- nonhuman animals as well as humans -- to demand more than they really need. Family life has become phony, because parents are convinced that children need constant reassurances of their love, so if they don't happen to feel very loving at a particular time or towards a particular child, they fake it. Praise is delivered by the bushel, which devalues its worth. Children have become the masters of the home.

And what has all this sacrifice and effort on the part of parents bought them? Zilch. There are no indications that children today are happier, more self-confident, less aggressive, or in better mental health than they were sixty years ago, when I was a child -- when homes were run by and for adults, when physical punishment was used routinely, when fathers were generally unavailable, when praise was a rare and precious commodity, and when explicit expressions of parental love were reserved for the deathbed.

Is my idea dangerous? I've never condoned child abuse or neglect; I've never believed that parents don't matter. The relationship between a parent and a child is an important one, but it's important in the same way as the relationship between married partners. A good relationship is one in which each party cares about the other and derives happiness from making the other happy. A good relationship is not one in which one party's central goal is to modify the other's personality. I think what's really dangerous -- perhaps a better word is tragic -- is the establishment's idea of the all-powerful, and hence all-blamable, parent.

_________________________________________________________________

ALUN ANDERSON
Senior Consultant, New Scientist

Brains cannot become minds without bodies

A common image for popular accounts of "The Mind" is a brain in a bell jar. The message is that inside that disembodied lump of neural tissue is everything that is you. It's a scary image but misleading.
A far more dangerous idea is that brains cannot become minds without bodies, that two-way interactions between mind and body are crucial to thought and health, and that the brain may partly think in terms of the motor actions it encodes for the body's muscles to carry out.

We've probably fallen for disembodied brains because of the academic tendency to worship abstract thought. If we took a more democratic view of the whole brain, we'd find far more of it being used for planning and controlling movement than for cogitation. Sports writers get it right when they describe stars of football or baseball as "geniuses"! Their genius requires massive brain power and a superb body, which is perhaps one better than Einstein.

The "brain-body" view is dangerous because it requires many scientists to change the way they think: it allows back common sense interactions between brain and body that medical science feels uncomfortable with, makes more sense of feelings like falling in love, and requires a different approach for people who are trying to create machines with human-like intelligence. And if this all sounds like mere assertion, there's plenty of interesting research out there to back it up.

Interactions between mind and body come out strongly in the surprising links between status and health. Michael Marmot's celebrated studies show that the lower you are in the pecking order, the worse your health is likely to be. You can explain away only a small part of the trend from poorer access to healthcare, or poorer food or living conditions. For Marmot, the answer lies in "the impact over how much control you have over life circumstances". The important message is that state of mind -- perceived status -- translates into state of body.

The effect of placebos on health delivers a similar message. Trust and belief are often seen as negative in science, and the placebo effect is dismissed as a kind of "fraud" because it relies on the belief of the patient. But the real wonder is that faith can work. Placebos can stimulate the release of pain-relieving endorphins and affect neuronal firing rates in people with Parkinson's disease.

Body and mind interact too in the most intimate feelings of love and bonding. Those interactions have been best explored in voles, where two hormones, oxytocin and vasopressin, are critical. The hormones are released as a result of "the extended tactile pleasures of mating", as researchers describe it, and hit pleasure centres in the brain which essentially "addict" sexual partners to one another. Humans are surely more cerebral. But brain scans of people in love show heightened activity where there are lots of oxytocin and vasopressin receptors. Oxytocin levels rise during orgasm and sexual arousal, as they do from touching and massage. There are defects in oxytocin receptors associated with autism. And the hormone boosts the feeling that you can trust others, which is a key part of intimate relations. In a recent laboratory "investment game", many investors would trust all their money to a stranger after a puff of an oxytocin spray.

These few stories show the importance of the interplay of minds and hormonal signals, of brains and bodies. This idea has been taken to a profound level in the well-known studies of Antonio Damasio, who finds that emotional or "gut feelings" are essential to making decisions. "We don't separate emotion from cognition like layers in a cake," says Damasio. "Emotion is in the loop of reason all the time."
Indeed, the way in which reasoning is tied to body actions may be quite counter-intuitive. Giacomo Rizzolatti discovered "mirror neurones" in a part of the monkey brain responsible for planning movement. These nerve cells fire both when a monkey performs an action (like picking up a peanut) and when the monkey sees someone else do the same thing. Before long, similar systems were found in human brains too. The surprising conclusion may be that when we see someone do something, the same parts of our brain are activated "as if" we were doing it ourselves. We may know what other people intend and feel by simulating what they are doing within the same motor areas of our own brains. As Rizzolatti puts it, "the fundamental mechanism that allows us a direct grasp of the mind of others is not conceptual reasoning but direct simulation of the observed events through the mirror mechanism." Direct grasp of others' minds is a special ability that paves the way for our unique powers of imitation, which in turn have allowed culture to develop.

If bodies and their interaction with brain and planning for action in the world are so central to human kinds of mind, where does that leave the chances of creating an intelligent "disembodied mind" inside a computer? Perhaps the Turing test will be harder than we think. We may build computers that understand language but which cannot say anything meaningful, at least until we can give them "extended tactile experiences". To put it another way, computers may not be able to make sense until they can have sex.

_________________________________________________________________

TODD E. FEINBERG, M.D.
Psychiatrist and Neurologist, Albert Einstein College of Medicine; Author, Altered Egos

Myths and fairy tales are not true

"Myths and fairy tales are not true." There is no Easter Bunny, there is no Santa Claus, and Moses may never have existed. Worse yet, I have increasing difficulty believing that there is a higher power ruling the universe. This is my dangerous idea. It is not a dangerous idea to those who do not share my particular world view or personal fears; to others it may seem trivially true. But for me, this idea is downright horrifying.

I came to ponder this idea through my neurological examination of patients with brain damage that causes a disturbance in their self concepts and ego functions. Some of these patients develop, in the course of their illness and recovery (or otherwise), disturbances of self and personal relatedness that create enduring delusions and metaphorical confabulations regarding their bodies, their relationships with loved ones, and their personal experiences. A patient I examined with a right hemisphere stroke and paralyzed left arm claimed that the arm was actually severed from his brother's body by gang members, thrown in the East River, and later attached to the patient's shoulder. Another patient with a ruptured brain aneurysm and amnesia who denied his disabilities claimed he was planning to adopt (a phantom) child who was in need of medical assistance.

These personal narratives, produced by patients in altered neurological states and therefore without the constraints imposed by a fully functioning consciousness, have a dream-like quality, and constitute "personal myths" that express the patients' beliefs about themselves. The patient creates a metaphor in which personal experiences are crystallized in the form of external real or fictitious persons, objects, places, or events.
When this occurs, the metaphor serves as a symbolic representation or externalization of the patient's feelings that the patient does not realize originate from within the self.

There is an intimate relationship between my patients' narratives and socially endorsed fairy tales and mythologies. This is particularly apparent when mythologies deal with themes relating to a loss of self, personal identity or death. For many people, the notion of personal death is extremely difficult to grasp and fully accommodate within one's self image. For many, in order to go on with life, death must be denied. Therefore, to help the individual deal with the prospect of the inevitability of personal death, cultural and religious institutions provide metaphors of everlasting life. Just as my patients adapt to difficult realities by creating metaphorical substitutes, it appears to me that beliefs in angels, deities and eternal souls can be understood in part as wish fulfilling metaphors for an unpleasant reality that most of us cannot fully comprehend and accept. Unfortunately, just as my patients' myths are not true, neither are those that I was brought up to believe in.

_________________________________________________________________

STEWART BRAND
Founder, Whole Earth Catalog; cofounder, The Well; cofounder, Global Business Network; Author, How Buildings Learn

What if public policy makers have an obligation to engage historians, and historians have an obligation to try to help?

All historians understand that they must never, ever talk about the future. Their discipline requires that they deal in facts, and the future doesn't have any yet. A solid theory of history might be able to embrace the future, but all such theories have been discredited. Thus historians do not offer, and are seldom invited, to take part in shaping public policy. They leave that to economists.

But discussions among policy makers always invoke history anyway, usually in simplistic form. "Munich" and "Vietnam," devoid of detail or nuance, stand for certain kinds of failure. "Marshall Plan" and "Man on the Moon" stand for certain kinds of success. Such totemic invocation of history is the opposite of learning from history, and Santayana's warning continues in force, that those who fail to learn from history are condemned to repeat it.

A dangerous thought: What if public policy makers have an obligation to engage historians, and historians have an obligation to try to help? And instead of just retailing advice, they could go generic. Historians could set about developing a rigorous sub-discipline called "Applied History." There is only one significant book on the subject, published in 1988. Thinking In Time: The Uses of History for Decision Makers was written by the late Richard Neustadt and Ernest May, who long taught a course on the subject at Harvard's Kennedy School of Government. (A course called "Reasoning from History" is currently taught there by Alexander Keyssar.)

Done wrong, Applied History could paralyze public decision making and corrupt the practice of history -- that's the danger. But done right, Applied History could make decision making and policy far more sophisticated and adaptive, and it could invest the study of history with the level of consequence it deserves.

_________________________________________________________________

JARED DIAMOND
Biologist; Geographer, UCLA; Author, Collapse

The evidence that tribal peoples often damage their environments and make war

Why is this idea dangerous?
Because too many people today believe that a reason not to mistreat tribal people is that they are too nice or wise or peaceful to do those evil things, which only we evil citizens of state governments do. The idea is dangerous because, if you believe that that's the reason not to mistreat tribal peoples, then proof of the idea's truth would suggest that it's OK to mistreat them. In fact, the evidence seems to me overwhelming that the dangerous idea is true. But we should treat other people well because of ethical reasons, not because of naïve anthropological theories that will almost surely prove false.

_________________________________________________________________

LEONARD SUSSKIND
Physicist, Stanford University; Author, The Cosmic Landscape

The "Landscape"

I have been accused of advocating an extremely dangerous idea. According to some people, the "Landscape" idea will eventually ensure that the forces of intelligent design (and other unscientific religious ideas) will triumph over true science. From one of my most distinguished colleagues:

From a political, cultural point of view, it's not that these arguments are religious but that they denude us from our historical strength in opposing religion.

Others have expressed the fear that my ideas, and those of my friends, will lead to the end of science (methinks they overestimate me). One physicist calls it "millennial madness." And from another quarter, Christoph Schönborn, Cardinal Archbishop of Vienna, has accused me of "an abdication of human intelligence."

As you may have guessed, the idea in question is the Anthropic Principle: a principle that seeks to explain the laws of physics, and the constants of nature, by saying, "If they (the laws of physics) were different, intelligent life would not exist to ask why laws of nature are what they are."

On the face of it, the Anthropic Principle is far too silly to be dangerous. It sounds no more sensible than explaining the evolution of the eye by saying that unless the eye evolved, there would be no one to read this page. But the A.P. is really shorthand for a rich set of ideas that are beginning to influence and even dominate the thinking of almost all serious theoretical physicists and cosmologists.

Let me strip the idea down to its essentials. Without all the philosophical baggage, what it says is straightforward: The universe is vastly bigger than the portion that we can see; and, on a very large scale, it is as varied as possible. In other words, rather than being a homogeneous, mono-colored blanket, it is a crazy-quilt patchwork of different environments. This is not an idle speculation. There is a growing body of empirical evidence confirming the inflationary theory of cosmology, which underlies the hugeness and hypothetical diversity of the universe. Meanwhile string theorists, much to the regret of many of them, are discovering that the number of possible environments described by their equations is far beyond millions or billions. This enormous space of possibilities, whose multiplicity may exceed ten to the 500th power, is called the Landscape. If these things prove to be true, then some features of the laws of physics (maybe most) will be local environmental facts rather than written-in-stone laws: laws that could not be otherwise. The explanation of some numerical coincidences will necessarily be that most of the multiverse is uninhabitable, but in some very tiny fraction conditions are fine-tuned enough for intelligent life to form.
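To give that multiplicity some scale (my comparison, not Susskind's), set it against the standard estimate of roughly ten to the 80th power atoms in the observable universe:

    N_{\text{vacua}} \gtrsim 10^{500}, \qquad
    N_{\text{atoms}} \sim 10^{80}, \qquad
    \frac{N_{\text{vacua}}}{N_{\text{atoms}}} \sim 10^{420}

Even granting one candidate vacuum per atom, the Landscape would still outnumber the atoms by some 420 orders of magnitude.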
That's the dangerous idea and it is spreading like a cancer. Why is it that so many physicists find these ideas alarming? Well, they do threaten physicists' fondest hope, the hope that some extraordinarily beautiful mathematical principle will be discovered: a principle that would completely and uniquely explain every detail of the laws of particle physics (and therefore nuclear, atomic, and chemical physics). The enormous Landscape of Possibilities inherent in our best theory seems to dash that hope.

What further worries many physicists is that the Landscape may be so rich that almost anything can be found: any combination of physical constants, particle masses, etc. This, they fear, would eliminate the predictive power of physics. Environmental facts are nothing more than environmental facts. They worry that if everything is possible, there will be no way to falsify the theory -- or, more to the point, no way to confirm it. Is the danger real? We shall see.

Another danger that some of my colleagues perceive is that if we "senior physicists" allow ourselves to be seduced by the Anthropic Principle, young physicists will give up looking for the "true" reason for things, the beautiful mathematical principle. My guess is that if the young generation of scientists is really that spineless, then science is doomed anyway. But as we know, the ambition of all young scientists is to make fools of their elders.

And why does the Cardinal Archbishop Schönborn find the Landscape and the Multiverse so dangerous? I will let him explain it himself:

Now, at the beginning of the 21st century, faced with scientific claims like neo-Darwinism and the multiverse hypothesis in cosmology invented to avoid the overwhelming evidence for purpose and design found in modern science, the Catholic Church will again defend human nature by proclaiming that the immanent design evident in nature is real. Scientific theories that try to explain away the appearance of design as the result of 'chance and necessity' are not scientific at all, but, as John Paul put it, an abdication of human intelligence.

Abdication of human intelligence? No, it's called science.

_________________________________________________________________

GERALD HOLTON
Mallinckrodt Research Professor of Physics and Research Professor of History of Science, Harvard University; Author, Thematic Origins of Scientific Thought

The medicalization of the ancient yearning for immortality

Since the major absorption of scientific method into the research and practice of medicine in the 1860s, the longevity curve, at least for the white population in industrial countries, took off and has continued to rise fairly steadily. That has been on the whole a benign result, and has begun to introduce the idea of tolerably good health as one of the basic Human Rights. But one now reads of projections to 200 years, and perhaps more. The economic, social and human costs of the increasing fraction of very elderly citizens have begun to be noticed already.

To glimpse one of the possible results of the continuing projection of the longevity curve in terms of a plausible scenario: The matriarch of the family, on her deathbed at age 200, is being visited by the surviving, grieving family members: a son and a daughter, each about 180 years old, plus /their/ three "children", around 150-160 years old each, plus all their offspring, in the range of 120 to 130, and so on... A touching picture. But what are all the "costs" involved?
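The scenario's arithmetic can be pushed a step further. A toy count of living generations and descendants, assuming 200-year lifespans, a new generation every 25 years, and two children per couple -- all assumptions mine, extrapolated from Holton's numbers:

    # Toy demography of extended longevity. All parameters are illustrative.
    LIFESPAN = 200
    GENERATION_GAP = 25
    CHILDREN_PER_COUPLE = 2

    generations_alive = LIFESPAN // GENERATION_GAP   # matriarch plus 7 below her
    descendants = sum(CHILDREN_PER_COUPLE ** g for g in range(1, generations_alive))
    print(generations_alive, descendants)            # 8 generations, 254 descendants

Eight concurrent generations and hundreds of direct descendants per matriarch would be a very different society from the three-or-four-generation families our institutions assume.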
_________________________________________________________________

CHARLES SEIFE
Professor of Journalism, New York University; formerly journalist, Science magazine; Author, Zero: The Biography of a Dangerous Idea

Nothing

Nothing can be more dangerous than nothing. Humanity's always been uncomfortable with zero and the void. The ancient Greeks declared them unnatural and unreal. Theologians argued that God's first act was to banish the void by the act of creating the universe ex nihilo, and thinkers of the Middle Ages tried to ban zero and the other Arabic "ciphers." But the emptiness is all around us -- most of the universe is void. Even as we huddle around our hearths and invent stories to convince ourselves that the cosmos is warm and full and inviting, nothingness stares back at us with empty eye sockets.

_________________________________________________________________

KARL SABBAGH
Writer and Television Producer; Author, The Riemann Hypothesis

The human brain and its products are incapable of understanding the truths about the universe

Our brains may never be well-enough equipped to understand the universe, and we are fooling ourselves if we think they will. Why should we expect to be able eventually to understand how the universe originated, evolved, and operates? While human brains are complex and capable of many amazing things, there is not necessarily any match between the complexity of the universe and the complexity of our brains, any more than a dog's brain is capable of understanding every detail of the world of cats and bones, or the dynamics of a thrown stick's trajectory. Dogs get by and so do we, but do we have a right to expect that the harder we puzzle over these things the nearer we will get to the truth?

Recently I stood in front of a three-metre-high model of the Ptolemaic universe in the Museum of the History of Science in Florence, and I remembered how well that worked as a representation of the motions of the planets until Copernicus and Kepler came along. Nowadays, no element of the theory of giant interlocking cogwheels at work is of any use in understanding the motions of the stars and planets (and indeed Ptolemy himself did not argue that the universe really was run by giant cogwheels). Occam's Razor is used to compare two theories and allow us to choose which is more likely to be 'true', but hasn't it become a comfort blanket, reached for whenever we are faced with aspects of the universe that seem unutterably complex -- string theory, for example? But is string theory just the Ptolemaic clockwork de nos jours? Can it be succeeded by some simplification, or might the truth be even more complex, and far beyond the neural networks of our brains to understand?

The history of science is littered with examples of two types of knowledge advancement. There is imperfect understanding that 'sort of' works, and is then modified and replaced by something that works better, without destroying the validity of the earlier theory: Newton's theory of gravitation replaced by Einstein's. Then there is imperfect understanding that is replaced by some new idea which owes nothing to older ones: phlogiston theory, the ether, and so on were replaced by ideas which save the phenomena, lead to predictions, and convince us that they are nearer the truth. Which of these categories really covers today's science? Could we be fooling ourselves by playing around with modern phlogiston?
And even if we are on the right lines in some areas, how much of what there is to be understood in the universe do we really understand? Fifty percent? Five percent? The dangerous idea is that perhaps we understand half a percent, and all the brain and computer power we can muster may take us up to one or two percent in the lifetime of the human race.

Paradoxically, we may find that the only justification for pursuing scientific knowledge is the practical applications it leads to -- a view that runs contrary to the traditional support of knowledge for knowledge's sake. And why is this paradoxical? Because the most important advances in technology have come out of research that was not seeking to develop those advances but to understand the universe. So if my dangerous idea is right -- that the human brain and its products are actually incapable of understanding the truths about the universe -- it will not -- and should not -- lead to any diminution at all in our attempts to do so. Which means, I suppose, that it's not really dangerous at all.

_________________________________________________________________

RUPERT SHELDRAKE
Biologist, London; Author, The Presence of the Past

A sense of direction involving new scientific principles

We don't understand animal navigation. No one knows how pigeons home, or how swallows migrate, or how green turtles find Ascension Island from thousands of miles away to lay their eggs. These kinds of navigation involve more than following familiar landmarks, or orientating in a particular compass direction; they involve an ability to move towards a goal.

Why is this idea dangerous? Don't we just need a bit more time to explain navigation in terms of standard physics, genes, nerve impulses and brain chemistry? Perhaps. But there is a dangerous possibility that animal navigation may not be explicable in terms of present-day physics. Over and above the known senses, some species of animals may have a sense of direction that depends on their being attracted towards their goals through direct field-like connections. These spatial attractors are places with which the animals themselves are already familiar, or with which their ancestors were familiar.

What are the facts? We know more about pigeons than any other species. Everyone agrees that within familiar territory, especially within a few miles of their home, pigeons can use landmarks; for example, they can follow roads. But using familiar landmarks near home cannot explain how racing pigeons return across unfamiliar terrain from six hundred miles away, even flying over the sea, as English pigeons do when they are raced from Spain.

Charles Darwin, himself a pigeon fancier, was one of the first to suggest a scientific hypothesis for pigeon homing. He proposed that they might use a kind of dead reckoning, registering all the twists and turns of the outward journey. This idea was tested in the twentieth century by taking pigeons away from their loft in closed vans by devious routes. They still homed normally. So did birds transported on rotating turntables, and so did birds that had been completely anaesthetized during the outward journey.

What about celestial navigation? One problem for hypothetical solar or stellar navigation systems is that many animals still navigate in cloudy weather. Another problem is that celestial navigation depends on a precise time sense.
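That time-sense dependence can be made concrete for a sun compass. The sun's azimuth sweeps roughly 15 degrees per hour (360 degrees in 24 hours), so an error in a bird's internal clock translates directly into a heading error; the function below is my illustration of that standard approximation, not anything from the essay:

    # Rough heading error for a sun-compass navigator with a shifted clock.
    DEGREES_PER_HOUR = 360.0 / 24.0   # the sun's azimuth moves ~15 degrees/hour

    def heading_error(clock_shift_hours):
        return clock_shift_hours * DEGREES_PER_HOUR

    for shift in (6, 12):
        print(shift, "hour shift ->", heading_error(shift), "degree error")
    # 6 hours -> 90 degrees; 12 hours -> 180 degrees, i.e. roughly reversed

The clock-shift experiments described next are exactly this prediction put to pigeons.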
To test the sun navigation theory, homing pigeons were clock-shifted by six or twelve hours and taken many miles from their lofts before being released. On sunny days, they set off in the wrong direction, as if a clock-dependent sun compass had been shifted. But in spite of their initial confusion, the pigeons soon corrected their courses and flew homewards normally.

Two main hypotheses remain: smell and magnetism. Smelling the home position from hundreds of miles away is generally agreed to be implausible. Even the most ardent defenders of the smell hypothesis (the Italian school of Floriano Papi and his colleagues) concede that smell navigation is unlikely to work at distances over 30 miles. That leaves a magnetic sense. A range of animal species can detect magnetic fields, including termites, bees and migrating birds. But even if pigeons have a compass sense, this cannot by itself explain homing. Imagine that you are taken to an unfamiliar place and given a compass. You will know from the compass where north is, but not where home is.

The obvious way of dealing with this problem is to postulate complex interactions between known sensory modalities, with multiple back-up systems. The complex interaction theory is safe, sounds sophisticated, and is vague enough to be irrefutable. The idea of a sense of direction involving new scientific principles is dangerous, but it may be inevitable.

_________________________________________________________________

TOR NØRRETRANDERS
Science Writer; Consultant; Lecturer, Copenhagen; Author, The User Illusion

Social Relativity

Relativity is my dangerous idea. Well, neither the special nor the general theory of relativity, but what could be called social relativity: the idea that the only thing that matters to human well-being is how one stands relative to others. That is, only the relative wealth of a person is important; the absolute level does not really matter, as soon as everyone is above the level of having their immediate survival needs fulfilled. There is now strong and consistent evidence (from fields such as microeconomics, experimental economics, psychology, sociology and primatology) that it doesn't really matter how much you earn, as long as you earn more than your wife's sister's husband. Pioneers in these discussions are the late British social thinker Fred Hirsch and the American economist Robert Frank.

Why is this idea dangerous? It seems to imply that equality will never become possible in human societies: the driving force is always to get ahead of the rest. Nobody will ever settle down and share. So it would seem that we are forever stuck with poverty, disease and unjust hierarchies. This idea could make the rich and the smart lean back and forget about the rest of the pack.

But it shouldn't. Inequality may subjectively seem nice to the rich, but objectively it is not in their interest. A huge body of epidemiological evidence points to the fact that inequality is in fact the prime cause of human disease. Rich people in poor countries are more healthy than poor people in rich countries, even though the latter group has more resources in absolute terms. Societies with strong gradients of wealth show higher death rates and more disease, also amongst the people at the top. Pioneers in these studies are the British epidemiologists Michael Marmot and Richard Wilkinson. Poverty means spreading of disease, degradation of ecosystems and social violence and crime -- which are also bad for the rich.
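The relative-position claim can be written down as a toy utility function, in the spirit of Frank's work on relative standing; the functional form and numbers are my illustration, not the essay's:

    # Toy "social relativity": above subsistence, well-being tracks income
    # relative to a reference group rather than absolute income.
    def well_being(income, reference_income, subsistence=1.0):
        if income <= subsistence:
            return float("-inf")          # unmet survival needs dominate
        return income / reference_income  # only relative standing matters

    print(well_being(50, 40))    # 1.25 -- modest income, but ahead of the peers
    print(well_being(100, 120))  # 0.83 -- double the income, yet behind

On this model a raise that lifts everyone equally leaves well-being unchanged -- the wife's-sister's-husband point in miniature.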
Inequality means stress to everyone. Social relativity then boils down to an illusion: it seems nice to me to be better off than the rest, but in terms of vitals -- survival, good health -- it is not. Believing in social relativity can be dangerous to your health.

_________________________________________________________________

JOHN HORGAN
Science Writer; Author, Rational Mysticism

We Have No Souls

The Depressing, Dangerous Hypothesis: We Have No Souls. This year's Edge question makes me wonder: Which ideas pose a greater potential danger? False ones or true ones? Illusions or the lack thereof? As a believer in and lover of science, I certainly hope that the truth will set us free, and save us, but sometimes I'm not so sure. The dangerous, probably true idea I'd like to dwell on in this holiday season is that we humans have no souls.

The soul is that core of us that supposedly transcends and even persists beyond our physicality, lending us a fundamental autonomy, privacy and dignity. In his 1994 book The Astonishing Hypothesis: The Scientific Search for the Soul, the late, great Francis Crick argued that the soul is an illusion perpetuated, like Tinkerbell, only by our belief in it. Crick opened his book with this manifesto: "'You,' your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules." Note the quotation marks around "You." The subtitle of Crick's book was almost comically ironic, since he was clearly trying not to find the soul but to crush it out of existence. I once told Crick that "The Depressing Hypothesis" would have been a more accurate title for his book, since he was, after all, just reiterating the basic, materialist assumption of modern neurobiology and, more broadly, all of science.

Until recently, it was easy to dismiss this assumption as moot, because brain researchers had made so little progress in tracing cognition to specific neural processes. Even self-proclaimed materialists -- who accept, intellectually, that we are just meat machines -- could harbor a secret, sentimental belief in a soul of the gaps. But recently the gaps have been closing, as neuroscientists -- egged on by Crick in the last two decades of his life -- have begun unraveling the so-called neural code, the software that transforms electrochemical pulses in the brain into perceptions, memories, decisions, emotions, and other constituents of consciousness.

I've argued elsewhere that the neural code may turn out to be so complex that it will never be fully deciphered. But 60 years ago, some biologists feared the genetic code was too complex to crack. Then in 1953 Crick and Watson unraveled the structure of DNA, and researchers quickly established that the double helix mediates an astonishingly simple genetic code governing the heredity of all organisms. Science's success in deciphering the genetic code, which has culminated in the Human Genome Project, has been widely acclaimed -- and with good reason, because knowledge of our genetic makeup could allow us to reshape our innate nature. A solution to the neural code could give us much greater, more direct control over ourselves than mere genetic manipulation. Will we be liberated or enslaved by this knowledge?
Officials in the Pentagon, the major funder of neural-code research, have openly broached the prospect of cyborg warriors who can be remotely controlled via brain implants, like the assassin in the recent remake of "The Manchurian Candidate." On the other hand, a cult-like group of self-described "wireheads" looks forward to the day when implants allow us to create our own realities and achieve ecstasy on demand. Either way, when our minds can be programmed like personal computers, then, perhaps, we will finally abandon the belief that we have immortal, inviolable souls -- unless, of course, we program ourselves to believe.

_________________________________________________________________

ERIC R. KANDEL
Biochemist and University Professor, Columbia University; Recipient, The Nobel Prize, 2000; Author, Cellular Basis of Behavior

Free will is exercised unconsciously, without awareness

It is clear that consciousness is central to understanding human mental processes, and therefore is the holy grail of modern neuroscience. What is less clear is that much of our mental processing is unconscious, and that these unconscious processes are as important as conscious mental processes for understanding the mind. Indeed, most cognitive processes never reach consciousness. As Sigmund Freud emphasized at the beginning of the 20th century, most of our perceptual and cognitive processes are unconscious, except those that are in the immediate focus of our attention. Based on these insights, Freud argued that unconscious mental processes guide much of human behavior.

Freud's idea was a natural extension of the notion of unconscious inference proposed in the 1860s by Hermann Helmholtz, the German physicist turned neural scientist. Helmholtz was the first to measure the conduction of electrical signals in nerves. He had expected it to be as fast as the speed of light -- as fast as the conduction of electricity in copper cables -- and found to his surprise that it was much slower, only about 90 meters per second. He then examined the reaction time, the time it takes a subject to respond to a consciously perceived stimulus, and found that it was much, much slower than even the combined conduction times required for sensory and motor activities. This caused Helmholtz to argue that a great deal of brain processing occurred unconsciously, prior to conscious perception of an object. Helmholtz went on to argue that much of what goes on in the brain is not represented in consciousness, and that the perception of objects depends upon "unconscious inferences" made by the brain, based on thinking and reasoning without awareness.

This view was not accepted by many brain scientists, who believed that consciousness is necessary for making inferences. However, in the 1970s a number of experiments began to accumulate in favor of the idea that most cognitive processes that occur in the brain never enter consciousness. Perhaps the most influential of these experiments were those carried out by Benjamin Libet in 1986.

Libet used as his starting point a discovery made by the German neurologist Hans Kornhuber. Kornhuber asked volunteers to move their right index finger. He then measured this voluntary movement with a strain gauge while at the same time recording the electrical activity of the brain by means of an electrode on the skull. After hundreds of trials, Kornhuber found that, invariably, each movement was preceded by a little blip in the electrical record from the brain, a spark of free will!
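Kornhuber's "blip" emerges only through averaging: on any single trial the potential is buried in EEG noise, and time-locking hundreds of trials to movement onset cancels the noise while the slow pre-movement shift survives. A minimal simulation sketch; the sampling rate, noise level, and drift size are my assumptions, chosen to echo the roughly one-second lead described next:

    # Kornhuber-style averaging: EEG epochs time-locked to movement onset.
    # All numbers (sampling rate, noise level, drift size) are illustrative.
    import random

    TRIALS = 800                 # "hundreds of trials"
    RATE = 100                   # samples per second
    SAMPLES = 2 * RATE           # 2-second epochs ending at movement onset

    def one_trial():
        epoch = []
        for t in range(SAMPLES):
            seconds_before_move = (SAMPLES - 1 - t) / RATE
            # Slow negative drift beginning about 1 second before movement...
            signal = -5.0 * max(0.0, 1.0 - seconds_before_move)
            # ...invisible on any single trial under much larger noise.
            epoch.append(signal + random.gauss(0.0, 20.0))
        return epoch

    trials = [one_trial() for _ in range(TRIALS)]
    average = [sum(tr[t] for tr in trials) / TRIALS for t in range(SAMPLES)]
    print(round(average[0], 1), round(average[-1], 1))  # ~0 early, ~-5 at movement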
He called this potential in the brain the "readiness potential" and found that it occurred one second before the voluntary movement. Libet followed up on Kornhuber's finding with an experiment in which he asked volunteers to lift a finger whenever they felt the urge to do so. He placed an electrode on a volunteer's skull and confirmed a readiness potential about one second before the person lifted his or her finger. He then compared the time it took for the person to will the movement with the time of the readiness potential. Amazingly, Libet found that the readiness potential appeared not after, but 200 milliseconds before, a person felt the urge to move his or her finger! Thus, by merely observing the electrical activity of the brain, Libet could predict what a person would do before the person was actually aware of having decided to do it.

These experiments led to the radical insight that by observing another person's brain activity, one can predict what someone is going to do before he is aware that he has made the decision to do it. This finding has caused philosophers of mind to ask: If the choice is determined in the brain unconsciously before we decide to act, where is free will? Are these choices predetermined? Is our experience of freely willing our actions only an illusion, a rationalization after the fact for what has happened?

Freud, Helmholtz and Libet would disagree and argue that the choice is freely made but that it happens without our awareness. According to their view, the unconscious inference of Helmholtz also applies to decision-making. They would argue that the choice is made freely, but not consciously. Libet, for example, proposes that the process of initiating a voluntary action occurs in an unconscious part of the brain, but that just before the action is initiated, consciousness is recruited to approve or veto the action. In the 200 milliseconds before a finger is lifted, consciousness determines whether it moves or not.

Whatever the reasons for the delay between decision and awareness, Libet's findings now raise the moral question: Is one to be held responsible for decisions that are made without conscious awareness?

_________________________________________________________________

DANIEL GOLEMAN
Psychologist; Author, Emotional Intelligence

Cyber-disinhibition

The Internet inadvertently undermines the quality of human interaction, allowing destructive emotional impulses freer rein under specific circumstances. The reason is a neural fluke that results in cyber-disinhibition of brain systems that keep our more unruly urges in check. The tech problem: a major disconnect between the ways our brains are wired to connect, and the interface offered in online interactions.

Communication via the Internet can mislead the brain's social systems. The key mechanisms are in the prefrontal cortex; these circuits instantaneously monitor ourselves and the other person during a live interaction, and automatically guide our responses so they are appropriate and smooth. A key mechanism for this involves circuits that ordinarily inhibit impulses for actions that would be rude or simply inappropriate -- or outright dangerous. In order for this regulatory mechanism to operate well, we depend on real-time, ongoing feedback from the other person. The Internet has no means to allow such real-time feedback (other than rarely used two-way audio/video streams). That puts our inhibitory circuitry at a loss -- there is no signal to monitor from the other person.
This results in disinhibition: impulse unleashed. Such disinhibition seems state-specific: it rarely occurs while people are in positive or neutral emotional states, which is why the Internet works admirably for the vast majority of communication. Disinhibition becomes far more likely when people feel strong, negative emotions. What fails to be inhibited are the impulses those emotions generate.

This phenomenon has been recognized since the earliest days of the Internet (then the Arpanet, used by a small circle of scientists) as "flaming," the tendency to send abrasive, angry or otherwise emotionally "off" cyber-messages. The hallmark of a flame is that the same person would never say the words in the email to the recipient were they face-to-face. His inhibitory circuits would not allow it -- and so the interaction would go more smoothly. He might still communicate the same core information face-to-face, but in a more skillful manner. Offline and in life, people who flame repeatedly tend to become friendless, or get fired (unless they already run the company).

The greatest danger from cyber-disinhibition may be to young people. The prefrontal inhibitory circuitry is among the last parts of the brain to become fully mature, doing so sometime in the twenties. During adolescence there is a developmental lag, with teenagers having fragile inhibitory capacities but fully ripe emotional impulsivity. Strengthening these inhibitory circuits can be seen as the singular task in neural development of the adolescent years.

One way this teenage neural gap manifests online is "cyber-bullying," which has emerged among girls in their early teens. Cliques of girls post or send cruel, harassing messages to a target girl, who typically is both reduced to tears and socially humiliated. The posts and messages are anonymous, though they become widely known among the target's peers. The anonymity and social distance of the Internet allow an escalation of such petty cruelty to levels that are rarely found in person: seeing someone cry face-to-face typically halts bullying among girls -- but that inhibitory signal cannot come via the Internet.

A more ominous manifestation of cyber-disinhibition can be seen in the susceptibility of teenagers to being induced to perform sexual acts in front of webcams for an anonymous adult audience who pay to watch and direct. Apparently hundreds of teenagers have been lured into this corner of child pornography, with an equally large audience of pedophiles. The Internet gives strangers access to children in their own homes, children who are tempted to do things online they would never consider in person.

Cyber-bullying was reported last week in my local paper. The webcam teenage sex circuit was a front-page story in The New York Times two days later. As with any new technology, the Internet is an experiment in progress. It's time we considered what other such downsides of cyber-disinhibition may be emerging -- and looked for a technological fix, if possible. The dangerous thought: the Internet may harbor social perils our inhibitory circuitry was not designed to handle in evolution.
_________________________________________________________________

BRIAN GREENE
Physicist & Mathematician, Columbia University; Author, The Fabric of the Cosmos; Presenter, three-part Nova program, The Elegant Universe

The Multiverse

The notion that there are universes beyond our own -- the idea that we are but one member of a vast collection of universes called the multiverse -- is highly speculative, but both exciting and humbling. It's also an idea that suggests a radically new, but inherently risky, approach to certain scientific problems.

An essential working assumption in the sciences is that with adequate ingenuity, technical facility, and hard work, we can explain what we observe. The impressive progress made over the past few hundred years is testament to the apparent validity of this assumption. But if we are part of a multiverse, then our universe may have properties that are beyond traditional scientific explanation. Here's why:

Theoretical studies of the multiverse (within inflationary cosmology and string theory, for example) suggest that the detailed properties of the other universes may be significantly different from our own. In some, the particles making up matter may have different masses or electric charges; in others, the fundamental forces may differ in strength and even number from those we experience; in others still, the very structure of space and time may be unlike anything we've ever seen.

In this context, the quest for fundamental explanations of particular properties of our universe -- for example, the observed strengths of the nuclear and electromagnetic forces -- takes on a very different character. The strengths of these forces may vary from universe to universe, and thus it may simply be a matter of chance that, in our universe, these forces have the particular strengths with which we're familiar. More intriguingly, we can even imagine that in the other universes where their strengths are different, conditions are not hospitable to our form of life. (With different force strengths, the processes giving rise to long-lived stars and stable planetary systems -- on which life can form and evolve -- can easily be disrupted.) In this setting, there would be no deep explanation for the observed force strengths. Instead, we would find ourselves living in a universe in which the forces have their familiar strengths simply because we couldn't survive in any of the others where the strengths were different.

If true, the idea of a multiverse would be a Copernican revolution realized on a cosmic scale. It would be a rich and astounding upheaval, but one with potentially hazardous consequences. Beyond the inherent difficulty in assessing its validity, when should we allow the multiverse framework to be invoked in lieu of a more traditional scientific explanation? Had this idea surfaced a hundred years ago, might researchers have chalked up various mysteries to how things just happen to be in our corner of the multiverse, and not pressed on to discover all the wondrous science of the last century? Thankfully that's not how the history of science played itself out, at least not in our universe. But the point is manifest. While some mysteries may indeed reflect nothing more than the particular universe, within the multiverse, we find ourselves inhabiting, other mysteries are worth struggling with because they are the result of deep, underlying physical laws.
The danger, if the multiverse idea takes root, is that researchers may too quickly give up the search for such underlying explanations. When faced with seemingly inexplicable observations, researchers may invoke the framework of the multiverse prematurely -- proclaiming some or other phenomenon to merely reflect conditions in our bubble universe -- thereby failing to discover the deeper understanding that awaits us.

_________________________________________________________________

DAVID GELERNTER
Computer Scientist, Yale University; Chief Scientist, Mirror Worlds Technologies; Author, Drawing Life

What are people well-informed about in the Information Age?

Let's date the Information Age to 1982, when the Internet went into operation & the PC had just been born. What if people have been growing less well-informed ever since? What if people have been growing steadily more ignorant ever since the so-called Information Age began? Suppose an average US voter, college teacher, 5th-grade teacher, and 5th-grade student are each less well-informed today than they were in '95, and were less well-informed then than in '85? Suppose, for that matter, they were less well-informed in '85 than in '65?

If this is indeed the "information age," what exactly are people well-informed about? Video games? Clearly history, literature, philosophy, scholarship in general are not our specialities. This is some sort of technology age -- are people better informed about science? Not that I can tell.

In previous technology ages, there was interest across the population in the era's leading technology. In the 1960s, for example, all sorts of people were interested in the space program and rocket technology. Lots of people learned a little about the basics -- what a "service module" or "trans-lunar injection" was, why a Redstone-Mercury vehicle was different from an Atlas-Mercury -- all sorts of grade-school students, lawyers, housewives, English profs were up on these topics. Today there is no comparable interest in computers & the internet, and no comparable knowledge. "TCP/IP," "routers," "Ethernet protocol," "cache hits" -- these are topics of no interest whatsoever outside the technical community. The contrast is striking.

_________________________________________________________________

MAHZARIN R. BANAJI
Professor of Psychology, Harvard University

We do not (and to a large extent, cannot) know who we are through introspection

Conscious awareness is a sliver of the machine that is human intelligence, but it's the only aspect we experience and hence the only aspect we come to believe exists. Thoughts, feelings, and behavior operate largely without deliberation or conscious recognition -- it's the routinized, automatic, classically conditioned, pre-compiled aspects of our thoughts and feelings that make up a large part of who we are. We don't know what motivates us even though we are certain we know just why we do the things we do. We have no idea that our perceptions and judgments are incorrect (as measured objectively) even when they are. Even more stunning, our behavior is often discrepant from our own conscious intentions and goals, not just objective standards or somebody else's standards. The same lack of introspective access that keeps us from seeing the truth in a visual illusion is the lack of introspective access that keeps us from seeing the truth of our own minds and behavior.
The "bounds" on our ethical sense rarely come to light because the input into those decisions is kept firmly outside our awareness. Or at least, they don't come to light until science brings them into the light in a way that no longer permits them to remain in the dark. It is the fact that human minds have a tendency to categorize and learn in particular ways, that the sorts of feelings for one's ingroup and fear of outgroups are part of our evolutionary history. That fearing things that are different from oneself, holding what's not part of the dominant culture (not American, not male, not White, not college-educated) to be "less good" whether one wants to or not, reflects a part of our history that made sense in a particular time and place - because without it we would not have survived. To know this is to understand the barriers to change honestly and with adequate preparation. As everybody's favorite biologist Richard Dawkins said thirty years ago: Let us understand what our own selfish genes are up to, because we may then at least have a chance to upset their designs, something that no other species has ever aspired to do. We cannot know ourselves without the methods of science. The mind sciences have made it possible to look into the universe between the ear drums in ways that were unimagined. Emily Dickinson wrote in a letter to a mentor asking him to tell her how good a poet she was: "The sailor cannot see the north, but knows the needle can" she said. We have the needle and it involves direct, concerted effort, using science to get to the next and perhaps last frontier, of understanding not just our place among other planets, our place among other species, but our very nature. _________________________________________________________________ RODNEY BROOKS Director, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); Chief Technical Officer of iRobot Corporation; author Flesh and Machines [brooks100.jpg] Being alone in the universe The thing that I worry about most that may or may not be true is that perhaps the spontaneous transformation from non-living matter to living matter is extraordinarily unlikely. We know that it has happened once. But what if we gain lots of evidence over the next few decades that it happens very rarely. In my lifetime we can expect to examine the surface of Mars, and the moons of the gas giants in some detail. We can also expect to be able to image extra-solar planets within a few tens of light years to resolutions where we would be able to detect evidence of large scale biological activity. What if none of these indicate any life whatsoever? What does that do to our scientific belief that life did arise spontaneously. It should not change it, but it will make it harder to defend against non-scientific attacks. And wouldn't it sadden us immensely if we were to discover that there is a vanishing small probability that life will arise even once in any given galaxy. Being alone in this solar system will not be such a such a shock, but alone in the galaxy, or worse alone in the universe would, I think, drive us to despair, and back towards religion as our salve. 
_________________________________________________________________

LEE SMOLIN
Physicist, Perimeter Institute; Author, Three Roads to Quantum Gravity

Seeing Darwin in the light of Einstein; seeing Einstein in the light of Darwin

The revolutionary moves made by Einstein and Darwin are closely related, and their combination will increasingly come to define how we see our worlds: physical, biological and social.

Before Einstein, the properties of elementary particles were understood as being defined against an absolute, eternally fixed background. This way of doing science had been introduced by Newton. His method was to posit the existence of an absolute and eternal background structure against which the properties of things were defined. For example, this is how Newton conceived of space and time. Particles have properties defined, not with respect to each other, but each with respect to only the absolute background of space and time. Einstein's great achievement was to realize successfully the contrary idea, called relationalism, according to which the world is a network of relationships which evolve in time. There is no absolute background, and the properties of anything are only defined in terms of its participation in this network of relations.

Before Darwin, species were thought of as eternal categories, defined a priori; after Darwin, species were understood to be relational categories -- that is, defined only in terms of their relationship with the network of interactions making up the biosphere. Darwin's great contribution was to understand that there is a process -- natural selection -- that can act on relational properties, leading to the birth of genuine novelty by creating complexes of relationships that are increasingly structured and complex.

Seeing Darwin in the light of Einstein, we understand that all the properties a species has in modern biology are relational. There is no absolute background in biology. Seeing Einstein in the light of Darwin opens up the possibility that the mechanism of natural selection could act not only on living things but on the properties that define the different species of elementary particles.

At first, physicists thought that the only relational properties an elementary particle might have were its position and motion in space and time. The other properties, like mass and charge, were thought of in the old framework: defined by a background of absolute law. The standard model of particle physics taught us that some of those properties, like mass, are only the consequence of a particle's interactions with other fields. As a result, the mass of a particle is determined environmentally, by the phase of the other fields it interacts with. I don't know which model of quantum gravity is right, but all the leading candidates -- string theory, loop quantum gravity and others -- teach us that it is possible that all properties of elementary particles are relational and environmental. In different possible universes there may be different combinations of elementary particles and forces. Indeed, all that used to be thought of as fundamental -- space and the elementary particles themselves -- are increasingly seen, in models of quantum gravity, as themselves emergent from a more elementary network of relations.
The basic method of science after Einstein seems to be: identify something in your theory that is playing the role of an absolute background, that is needed to define the laws that govern objects in your theory, and understand it more deeply as a contingent property, which itself evolves subject to law. For example, before Einstein the geometry of space was thought of as specified absolutely as part of the laws of nature. After Einstein we understand geometry is contingent and dynamical, which means it evolves subject to law. This means that Einstein's move can even be applied to aspects of what were thought to be the laws of nature: so that even aspects of the laws turn out to evolve in time. The basic method of science after Darwin seems to be to identify some property once thought to be absolute and defined a priori, and to recognize that it can be understood because it has evolved by a process of, or akin to, natural selection. This has revolutionized biology and is in the process of doing the same to the social sciences. As I have stated them, these two methods are clearly closely related. Einstein emphasizes the relational aspect of all properties described by science, while Darwin proposes that the law which ultimately governs the evolution of everything else -- including perhaps what were once seen to be laws -- is natural selection. Should Darwin's method be applied even to the laws of physics? Recent developments in elementary particle physics give us little alternative if we are to have a rational understanding of the laws that govern our universe. I am referring here to the realization that string theory gives us, not a unique set of particles and forces, but an infinite list out of which one came to be selected for our universe. We physicists have now to understand Darwin's lesson: the only way to understand how one out of a vast number of choices was made, in a way that favors improbable structure, is that it is the result of evolution by natural selection. Can this work? I showed it might, in 1992, in a theory of cosmological natural selection. This remains the only theory of how our laws came to be selected, so far proposed, that makes falsifiable predictions. The idea that laws of nature are themselves the result of evolution by natural selection is nothing new; it was anticipated by the philosopher Charles Sanders Peirce, who wrote in 1891: To suppose universal laws of nature capable of being apprehended by the mind and yet having no reason for their special forms, but standing inexplicable and irrational, is hardly a justifiable position. Uniformities are precisely the sort of facts that need to be accounted for. Law is par excellence the thing that wants a reason. Now the only possible way of accounting for the laws of nature, and for uniformity in general, is to suppose them results of evolution. This idea remains dangerous, not only for what it has achieved, but for what it implies for the future. For there are implications that have yet to be absorbed or understood, even by those who have come to believe it is the only way forward for science. For example, must there always be a deeper, or meta-law, which governs the physical mechanisms by which a law evolves? And what about the fact that laws of physics are expressed in mathematics, which is usually thought of as encoding eternal truths? Can mathematics itself come to be seen as time-bound rather than as transcendent and eternal Platonic truth?
I believe that we will achieve clarity on these and other scary implications of the idea that all the regularities we observe, including those we have gotten used to calling laws, are the result of evolution by natural selection. And I believe that once this is achieved, Einstein and Darwin will be understood as partners in the greatest revolution yet in science, a revolution that taught us that the world we are embedded in is nothing but an ever-evolving network of relationships. _________________________________________________________________ ALISON GOPNIK Psychologist, UC-Berkeley; Coauthor, The Scientist In the Crib [gopnik100.jpg] A cacophony of "controversy" It may not be good to encourage scientists to articulate dangerous ideas. Good scientists, almost by definition, tend towards the contrarian and ornery, and nothing gives them more pleasure than holding to an unconventional idea in the face of opposition. Indeed, orneriness and contrarianism are something of a currency for science -- nobody wants to have an idea that everyone else has too. Scientists are always constructing a straw-man "establishment" opponent whom they can then fearlessly demolish. If you combine that with defying the conventional wisdom of non-scientists you have a recipe for a very distinctive kind of scientific smugness and self-righteousness. We scientists see this contrarian habit grinning back at us in a particularly hideous and distorted form when global warming opponents or intelligent design advocates invoke the unpopularity of their ideas as evidence that they should be accepted, or at least discussed. The problem is exacerbated for public intellectuals, for the media would far rather hear about contrarian or unpopular or morally dubious or "controversial" ideas than ones that are congruent with everyday morality and wisdom. No one writes a newspaper article about a study that shows that girls are just as good at some task as boys, or that children are influenced by their parents. It is certainly true that there is no reason that scientifically valid results should have morally comforting consequences -- but there is no reason why they shouldn't either. Unpopularity or shock is no more a sign of truth than popularity is. More to the point, when scientists do have ideas that are potentially morally dangerous they should approach those ideas with hesitancy and humility. And they should do so in full recognition of the great human tragedy that, as Isaiah Berlin pointed out, there can be genuinely conflicting goods, and that humans are often in situations of conflict for which there is no simple or obvious answer. Truth and morality may indeed in some cases be competing values, but that is a tragedy, not a cause for self-congratulation. Humility and empathy come less easily to most scientists, most certainly including me, than pride and self-confidence, but perhaps for that very reason they are the virtues we should pursue. This is, of course, itself a dangerous idea. Orneriness and contrarianism are, in fact, genuine scientific virtues, too. And in the current profoundly anti-scientific political climate it is terribly dangerous to do anything that might give comfort to the enemies of science. But I think the peril to science actually doesn't lie in timidity or self-censorship. It is much more likely to lie in a cacophony of "controversy".
_________________________________________________________________ KEVIN KELLY Editor-At-Large, Wired; Author, New Rules for the New Economy [kelly100.jpg] More anonymity is good More anonymity is good: that's a dangerous idea. Fancy algorithms and cool technology make true anonymity in mediated environments more possible today than ever before. At the same time this techno-combo makes true anonymity in physical life much harder. For every step that masks us, we move two steps toward totally transparent unmasking. We have caller ID, but also caller ID Block, and then caller-ID-only filters. Coming up: biometric monitoring and little place to hide. A world where everything about a person can be found and archived is a world with no privacy, and therefore many technologists are eager to maintain the option of easy anonymity as a refuge for the private. However, in every system that I have seen where anonymity becomes common, the system fails. The recent taint in the honor of Wikipedia stems from the extreme ease with which anonymous declarations can be put into a very visible public record. Communities infected with anonymity will either collapse, or shift the anonymous to pseudo-anonymous, as in eBay, where you have a traceable identity behind an invented nickname. Or voting, where you can authenticate an identity without tagging it to a vote. Anonymity is like a rare earth metal. These elements are a necessary ingredient in keeping a cell alive, but the amount needed is a mere hard-to-measure trace. In larger doses such metals are some of the most toxic substances known to life. They kill. Anonymity is the same. As a trace element in vanishingly small doses, it's good for the system by enabling the occasional whistleblower or persecuted fringe. But if anonymity is present in any significant quantity, it will poison the system. There's a dangerous idea circulating that the option of anonymity should always be at hand, and that it is a noble antidote to technologies of control. This is like pumping up the levels of heavy metals in your body to make it stronger. Privacy can only be won by trust, and trust requires persistent identity, if only pseudo-anonymously. In the end, the more trust, the better. Like all toxins, anonymity should be kept as close to zero as possible. _________________________________________________________________ DENIS DUTTON Professor of the philosophy of art, University of Canterbury, New Zealand; Editor of Philosophy and Literature and Arts & Letters Daily [dutton100.jpg] A "grand narrative" The humanities have gone through the rise of Theory in the 1960s, its firm hold on English and literature departments through the 1970s and 80s, followed most recently by its much-touted decline and death. Of course, Theory (capitalization is an English department affectation) never operated as a proper research program in any scientific sense -- with hypotheses validated (or falsified) by experiment or accrued evidence. Theory was a series of intellectual fashion statements, clever slogans and postures, imported from France in the 60s, then developed out of Yale and other Theory hot spots. The academic work Theory spawned was noted more for its chosen jargons, which functioned like secret codes, than for any concern to establish truth or advance knowledge. It was all about careers and prestige. Truth and knowledge, in fact, were ruled out as quaint illusions. This cleared the way, naturally, for an "anything-goes" atmosphere of academic criticism.
In reality, it was anything but anything goes, since the political demands of the period included a long list of stereotyped villains (the West, the Enlightenment, dead white males, even clear writing) to be pitted against mandatory heroines and heroes (indigenous peoples, the working class, the oppressed, and so forth). Though the politics remains as strong as ever in academe, Theory has atrophied not because it was refuted, but because everyone got bored with it. Add to that the absurdly bad writing of academic humanists of the period and episodes like the Sokal Hoax, and the decline was inevitable. Theory academics could with high seriousness ignore rational counter-arguments, but for them ridicule and laughter were like water thrown at the Wicked Witch. Theory withered and died. But wait. Here is exactly where my most dangerous idea comes in. What if it turned out that the academic humanities -- art criticism, music and literary history, aesthetic theory, and the philosophy of art -- actually had available to them a true, and therefore permanently valuable, theory to organize their speculations and interpretations? What if there really existed a hitherto unrecognized "grand narrative" that could explain the entire history of creation and experience of the arts worldwide? Aesthetic experience, as well as the context of artistic creation, is a phenomenon both social and psychological. From the standpoint of inner experience, it can be addressed by evolutionary psychology: the idea that our thinking and values are conditioned by the 2.6 million years of natural and sexual selection in the Pleistocene. This Darwinian theory has much to say about the abiding, cross-culturally ascertainable values human beings find in art. The fascination, for example, that people worldwide find in the exercise of artistic virtuosity, from Praxiteles to Hokusai to Renee Fleming, is not a social construct, but a Pleistocene adaptation (which outside of the arts shows itself in sporting interests everywhere). That calendar landscapes worldwide feature alternating copses of trees and open spaces, often hilly land, water, and paths or river banks that wind into an inviting distance is a Pleistocene landscape preference (which shows up both in art history and in the design of public parks everywhere). That soap operas and Greek tragedy alike present themes of family breakdown ("She killed him because she loved him") is a reflection of ancient, innate content interests in story-telling. Darwinian theory offers substantial answers to perennial aesthetic questions. It has much to say about the origins of art. It's unlikely that the arts came about at one time or for one purpose; they evolved from overlapping interests based in survival and mate selection over the 80,000 generations of the Pleistocene. How we scan visually, how we hear, our sense of rhythm, the pleasures of artistic expression and of joining with others as an audience, and, not least, how the arts excite us using a repertoire of universal human emotions: all of this and more will be illuminated and explained by a Darwinian aesthetics. I've encountered stiff academic resistance to the notion that Darwinian theory might greatly improve the understanding of our aesthetic and imaginative lives. There's no reason to worry.
The most complete, evolutionarily based explanation of a great work of art, classic or recent, will address its form, its narrative content, its ideology, how it is taken in by the eye or mind, and indeed, how it can produce a deep, even life-transforming pleasure. But nothing in a valid aesthetic psychology will rob art of its appeal, any more than knowing how we evolved to enjoy fat and sweet makes a piece of cheesecake any less delicious. Nor will a Darwinian aesthetics reduce the complexity of art to simple formulae. It will only give us a better understanding of the greatest human achievements and their effects on us. In the sense that it would show innumerable careers in the humanities over the last forty years to have been wasted on banal politics and execrable criticism, Darwinian aesthetics is a very dangerous idea indeed. For people who really care about understanding art, it would be a combination of fresh air and strong coffee. _________________________________________________________________ SIMON BARON-COHEN Psychologist, Autism Research Centre, Cambridge University; Author, The Essential Difference [baroncohen100.jpg] A political system based on empathy Imagine a political system based not on legal rules (systemizing) but on empathy. Would this make the world a safer place? The UK Parliament, US Congress, Israeli Knesset, French National Assembly, Italian Senato della Repubblica, Spanish Congreso de los Diputados -- what do such political chambers have in common? Existing political systems are based on two principles: getting power through combat, and then creating/revising laws and rules through combat. Combat is sometimes physical (toppling your opponent militarily), sometimes economic (establishing a trade embargo, to starve your opponent of resources), sometimes propaganda-based (waging a media campaign to discredit your opponent's reputation), and sometimes conducted through voting-related activity (lobbying, forming alliances, fighting to win votes in key seats), with the aim of 'defeating' the opposition. Creating/revising laws and rules is what you do once you are in power. These might be constitutional rules, rules of precedence, judicial rulings, statutes, or other laws or codes of practice. Politicians battle for their rule-based proposal (which they hold to be best) to win, and battle to defeat the opposition's rival proposal. This way of doing politics is based on "systemizing". First you analyse the most effective form of combat (itself a system) to win. If we do x, then we will obtain outcome y. Then you adjust the legal code (another system). If we pass law A, we will obtain outcome B. My colleagues and I have studied the essential difference between how men and women think. Our studies suggest that (on average) more men are systemizers, and more women are empathizers. Since most political systems were set up by men, it may be no coincidence that we have ended up with political chambers that are built on the principles of systemizing. So here's the dangerous new idea. What would it be like if our political chambers were based on the principles of empathizing? It is dangerous because it would mean a revolution in how we choose our politicians, how our political chambers govern, and how our politicians think and behave. We have never given such an alternative political process a chance. Might it be better and safer than what we currently have?
Since empathy is about keeping in mind the thoughts and feelings of other people (not just your own), and being sensitive to another person's thoughts and feelings (not just riding roughshod over them), it is clearly incompatible with notions of "doing battle with the opposition" and "defeating the opposition" in order to win and hold on to power. Currently, we select a party (and ultimately a national) leader based on their "leadership" qualities. Can he or she make decisions decisively? Can they do what is in the best interests of the party, or the country, even if it means sacrificing others to follow through on a decision? Can they ruthlessly reshuffle their Cabinet and "cut people loose" if they are no longer serving their interests? These are the qualities of a strong systemizer. Note we are not talking about whether that politician is male or female. We are talking about how a politician (irrespective of their sex) thinks and behaves. We have had endless examples of systemizing politicians unable to resolve conflict. Empathizing politicians would perhaps follow the example of Mandela and De Klerk, who sat down to try to understand the other, to empathize with the other, even if the other was defined as a terrorist. To do this involves the empathic act of stepping into the other's shoes, and identifying with their feelings. The details of a political system based on empathizing would need a lot of working out, but we can imagine certain qualities that would have no place. Gone would be politicians who are skilled orators but who simply deliver monologues, standing on a platform, pointing forcefully into the air to underline their insistence -- even the body language containing an implied threat of poking their listener in the chest or the face -- to win over an audience. Gone too would be politicians who are so principled that they are rigid and uncompromising. Instead, we would elect politicians based on different qualities: politicians who are good listeners, who ask questions of others instead of assuming they know the right course of action. We would instead have politicians who respond sensitively to another, different point of view, and who can be flexible over where the dialogue might lead. Instead of seeking to control and dominate, our politicians would be seeking to support, enable, and care. _________________________________________________________________ FREEMAN DYSON Physicist, Institute for Advanced Study; Author, Disturbing the Universe [dysonf100.jpg] Biotechnology will be thoroughly domesticated in the next fifty years Biotechnology will be domesticated in the next fifty years as thoroughly as computer technology was in the last fifty years. This means cheap and user-friendly tools and do-it-yourself kits, for gardeners to design their own roses and orchids, and for animal-breeders to design their own lizards and snakes. A new art form as creative as painting or cinema. It means biotech games for children down to kindergarten age, like computer games but played with real eggs and seeds instead of with images on a screen. Kids will grow up with an intimate feeling for the organisms that they create. It means an explosion of biodiversity as new ecologies are designed to fit into millions of local niches all over the world. Urban and rural landscapes will become more varied and more fertile. There are two severe and obvious dangers. First, smart kids and malicious grown-ups will find ways to convert biotech tools to the manufacture of lethal microbes.
Second, ambitious parents will find ways to apply biotech tools to the genetic modification of their own babies. The great unanswered question is whether we can regulate domesticated biotechnology so that it can be applied freely to animals and vegetables but not to microbes and humans. _________________________________________________________________ GREGORY COCHRAN Consultant in adaptive optics and an adjunct professor of anthropology at the University of Utah [cochran100.jpg] There is something new under the sun -- us Thucydides said that human nature was unchanging and thus predictable -- but he was probably wrong. If you consider natural selection operating in fast-changing human environments, such stasis is most unlikely. We know of a number of cases in which there has been rapid adaptive change in humans; for example, most of the malaria-defense mutations such as sickle cell are recent, just a few thousand years old. The lactase mutation that lets most adult Europeans digest ice cream is not much older. There is no magic principle that restricts human evolutionary change to disease defenses and dietary adaptations: everything is up for grabs. Genes affecting personality, reproductive strategies, and cognition are all able to change significantly over few-millennia time scales if the environment favors such change -- and this includes the new environments we have made for ourselves, things like new ways of making a living and new social structures. I would be astonished if the mix of personality types favored among hunter-gatherers is "exactly" the same as that favored among peasant farmers ruled by a Pharaoh. In fact they might be fairly different. There is evidence that such change has occurred. Henry Harpending and I have, we think, made a strong case that natural selection changed the Ashkenazi Jews over a thousand years or so, favoring certain kinds of cognitive abilities and generating genetic diseases as a side effect. Bruce Lahn's team has found new variants of brain-development genes: one, ASPM, appears to have risen to high frequency in Europe and the Middle East in about six thousand years. We don't yet know what this new variant does, but it certainly could affect the human psyche -- and if it does, Thucydides was wrong. We may not be doomed to repeat the Sicilian expedition: on the other hand, since we don't understand much yet about the changes that have occurred, we might be even more doomed. But at any rate, we have almost certainly changed. There is something new under the sun -- us. This concept opens strange doors. If true, it means that the people of Sumeria and Egypt's Old Kingdom were probably fundamentally different from us: human nature has changed -- some, anyhow -- over recorded history. Julian Jaynes, in The Origin of Consciousness in the Breakdown of the Bicameral Mind, argued that there was something qualitatively different about the human mind in ancient civilization. On first reading, Breakdown seemed one of the craziest books ever written, but Jaynes may have been on to something. If people a few thousand years ago thought and acted differently because of biological differences, history is never going to be the same.
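For a sense of how little selection is needed to match Cochran's six-thousand-year figure, here is a back-of-envelope sketch (mine, not his published calculation; the selective advantage and generation time are assumed). Under the standard haploid selection recursion, the odds p/(1-p) of an advantageous allele grow by a factor of (1+s) each generation:

    # Logistic rise of an advantageous allele; s and generation time assumed.
    # A 5% advantage takes a variant from 1-in-10,000 to >90% in ~6,000 years.
    s = 0.05                 # assumed selective advantage per generation
    gens_per_kyr = 40        # assuming roughly 25-year generations
    p = 1e-4                 # starting frequency of the new variant
    for kyr in range(1, 7):
        for _ in range(gens_per_kyr):
            p = p * (1 + s) / (1 + s * p)   # standard selection recursion
        print(f"after {kyr},000 years: p = {p:.3f}")

Nothing exotic is required; modest, sustained selection is enough to take a new variant to high frequency on the timescale the ASPM data suggest.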
_________________________________________________________________ GEORGE B. DYSON Science Historian; Author, Project Orion [dysong100.jpg] Understanding molecular biology without discovering the origins of life I predict we will reach a complete understanding of molecular biology and molecular evolution, without ever discovering the origins of life. This idea is dangerous, because it suggests a mystery that science cannot explain. Or, it may be interpreted as confirmation that life is merely the collective result of a long series of incremental steps, and that it is impossible to draw a precise distinction between life and non-life. "The only thing of which I am sure," argued Samuel Butler in 1880, "is that the distinction between the organic and inorganic is arbitrary; that it is more coherent with our other ideas, and therefore more acceptable, to start with every molecule as a living thing, and then deduce death as the breaking up of an association or corporation, than to start with inanimate molecules and smuggle life into them." Every molecule a living thing? That's not even dangerous, it's wrong! But where else can you draw the line? _________________________________________________________________ KEITH DEVLIN Mathematician; Executive Director, Center for the Study of Language and Information, Stanford; Author, The Millennium Problems [devlin100.jpg] We are entirely alone Living creatures capable of reflecting on their own existence are a one-off, freak accident, existing for one brief moment in the history of the universe. There may be life elsewhere in the universe, but it does not have self-reflective consciousness. There is no God; no Intelligent Designer; no higher purpose to our lives. Personally, I have never found this possibility particularly troubling, but my experience has been that most people go to considerable lengths to convince themselves that it is otherwise. I think that many people find the suggestion dangerous because they see it as leading to a life devoid of meaning or moral values. They see it as a suggestion full of despair, an idea that makes our lives seem pointless. I believe that the opposite is the case. As the product of that unique, freak accident, finding ourselves able to reflect on and enjoy our conscious existence, the very unlikeliness and uniqueness of our situation surely makes us highly appreciative of what we have. Life is not just important to us; it is literally everything we have. That makes it, in human terms, the most precious thing there is. That not only gives life meaning for us, something to be respected and revered, but a strong moral code follows automatically. The fact that our existence has no purpose outside that existence is completely irrelevant to the way we live our lives, since we are inside our existence. The fact that our existence has no purpose for the universe -- whatever that means -- in no way means it has no purpose for us. We must ask and answer questions about ourselves within the framework of our existence as what we are. _________________________________________________________________ FRANK TIPLER Professor of Mathematical Physics, Tulane University; Author, The Physics of Immortality [tipler100.jpg] Why I Hope the Standard Model is Wrong about Why There is More Matter Than Antimatter The Standard Model of particle physics -- a theory of all forces and particles except gravity, and a theory that has survived all tests over the past thirty years -- says it is possible to convert matter entirely into energy.
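For scale, a back-of-envelope check (mine, not Tipler's): complete conversion of a single kilogram of matter releases

    E = mc^2 = (1\,\mathrm{kg})(3\times 10^{8}\,\mathrm{m/s})^2
             = 9\times 10^{16}\,\mathrm{J} \approx 21\ \text{megatons of TNT},

taking one megaton as $4.184\times 10^{15}$ J. A hundred kilograms is then roughly two thousand megatons, agreeing in order of magnitude with the 1,000-megaton figure Tipler cites below.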
Old-fashioned nuclear physics allows some matter to be converted into energy, but because nuclear physics requires the number of heavy particles like neutrons and protons, and light particles like electrons, to be separately conserved in nuclear reactions, only a small fraction (less than 1%) of the mass of the uranium or plutonium in an atomic bomb can be converted into energy. The Standard Model says that there is a way to convert all the mass of ordinary matter into energy; for example, it is in principle possible to convert the proton and electron making up a hydrogen atom entirely into energy. Particle physicists have long known about this possibility, but have considered it forever irrelevant to human technology because the energy scale required for this conversion process is at the very limit of our most powerful accelerators (a trillion electron volts, or one TeV). I am very much afraid that the particle physicists are wrong about this Standard Model pure energy conversion process being forever irrelevant to human affairs. I have recently come to believe that the consistency of quantum field theory requires that it should be possible to convert up to 100 kilograms of ordinary matter into pure energy via this process using a device that could fit inside the trunk of a car, a device that could be manufactured in a small factory. Such a device would solve all our energy problems -- we would not need fossil fuels -- but the energy in 100 kilograms of matter is the energy released by a 1,000-megaton nuclear bomb. If such a bomb can be manufactured in a small factory, then terrorists everywhere will eventually have such weapons. I fear for the human race if this comes to pass. I very much hope I am wrong about the technological feasibility of such a bomb. _________________________________________________________________ SCOTT SAMPSON Chief Curator, Utah Museum of Natural History; Associate Professor, Department of Geology and Geophysics, University of Utah; Host, Dinosaur Planet TV series [sampson100.jpg] The purpose of life is to disperse energy The truly dangerous ideas in science tend to be those that threaten the collective ego of humanity and knock us further off our pedestal of centrality. The Copernican Revolution abruptly dislodged humans from the center of the universe. The Darwinian Revolution yanked Homo sapiens from the pinnacle of life. Today another menacing revolution sits at the horizon of knowledge, patiently awaiting broad realization by the same egotistical species. The dangerous idea is this: the purpose of life is to disperse energy. Many of us are at least somewhat familiar with the second law of thermodynamics, the unwavering propensity of energy to disperse and, in doing so, transition from high-quality to low-quality forms. More generally, as stated by ecologist Eric Schneider, "nature abhors a gradient," where a gradient is simply a difference over a distance -- for example, in temperature or pressure. Open physical systems -- including those of the atmosphere, hydrosphere, and geosphere -- all embody this law, being driven by the dispersal of energy, particularly the flow of heat, continually attempting to achieve equilibrium. Phenomena as diverse as lithospheric plate motions, the northward flow of the Gulf Stream, and the occurrence of deadly hurricanes are all manifestations of the second law. There is growing evidence that life, the biosphere, is no different.
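That claim about the biosphere rests on a formalization worth pausing over (my gloss, not Sampson's): for heat, "nature abhors a gradient" is just Fourier's law, which says heat flux runs down the temperature gradient,

    \mathbf{q} = -k\,\nabla T,

and, combined with energy conservation, yields the heat equation $\partial T/\partial t = \alpha\,\nabla^{2} T$, whose solutions do nothing but erase temperature differences over time. What follows argues that living systems serve the same end by subtler and slower means.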
It has often been said that life's complexity contravenes the second law, indicating the work either of a deity or of some unknown natural process, depending on one's bias. Yet the evolution of life and the dynamics of ecosystems obey the second law's mandate, functioning in large part to dissipate energy. They do so not by burning brightly and disappearing, like a fire torching a forest, but through stable metabolic cycles that store chemical energy and continually reduce the solar gradient. Photosynthetic plants, bacteria, and algae capture energy from the sun and form the core of all food webs. Virtually all organisms, including humans, are, in a real sense, sunlight transmogrified, temporary waypoints in the flow of energy. Ecological succession, viewed from a thermodynamic perspective, is a process that maximizes the capture and degradation of energy. Similarly, the tendency for life to become more complex over the past 3.5 billion years (as well as the overall increase in biomass and organismal diversity through time) is not due simply to natural selection, as most evolutionists still argue, but also to nature's "efforts" to grab more and more of the sun's flow. The slow burn that characterizes life enables ecological systems to persist over deep time, changing in response to external and internal perturbations. Ecology has been summarized by the pithy statement, "energy flows, matter cycles." Yet this maxim applies equally to complex systems in the non-living world; indeed it literally unites the biosphere with the physical realm. More and more, it appears that complex, cycling, swirling systems of matter have a natural tendency to emerge in the face of energy gradients. This recurrent phenomenon may even have been the driving force behind life's origins. This idea is not new, and is certainly not mine. Nobel laureate Erwin Schrödinger was one of the first to articulate the hypothesis, as part of his famous "What is Life" lectures in Dublin in 1943. More recently, Jeffrey Wicken, Harold Morowitz, Eric Schneider and others have taken this concept considerably further, buoyed by results from a range of studies, particularly within ecology. Schneider and Dorion Sagan provide an excellent summary of this hypothesis in their recent book, "Into the Cool". The concept of life as energy flow, once fully digested, is profound. Just as Darwin fundamentally connected humans to the non-human world, a thermodynamic perspective connects life inextricably to the non-living world. This dangerous idea, once broadly distributed and understood, is likely to provoke reaction from many sectors, including religion and science. The wondrous diversity and complexity of life through time, far from being the product of intelligent design, is a natural phenomenon intimately linked to the physical realm of energy flow. Moreover, evolution is not driven by the machinations of selfish genes propagating themselves through countless millennia. Rather, ecology and evolution together operate as a highly successful, extremely persistent means of reducing the gradient generated by our nearest star. In my view, evolutionary theory (the process, not the fact of evolution!) and biology generally are headed for a major overhaul once investigators fully comprehend the notion that the complex systems of earth, air, water, and life are not only interconnected, but interdependent, cycling matter in order to maintain the flow of energy.
Although this statement addresses only naturalistic function and is silent with regard to spiritual meaning, it is likely to have deep effects outside of science. In particular, broad understanding of life's role in dispersing energy has great potential to help humans reconnect both to nature and to the planet's physical systems at a key moment in our species' history. _________________________________________________________________ JEREMY BERNSTEIN Professor of Physics, Stevens Institute of Technology; Author, Hitler's Uranium Club The idea that we understand plutonium The most dangerous idea I have come across recently is the idea that we understand plutonium. Plutonium is the most complex element in the periodic table. It has six different crystal phases between room temperature and its melting point. It can catch fire spontaneously in the presence of water vapor, and if you inhale minuscule amounts you will die of lung cancer. It is the principal element in the "pits" that are the explosive cores of nuclear weapons. In these pits it is alloyed with gallium. No one knows why this works, and no one can be sure how stable this alloy is. These pits, in the thousands, are now decades old. What is dangerous is the idea that they have retained their integrity and can be safely stored into the indefinite future. _________________________________________________________________ MIHALY CSIKSZENTMIHALYI Psychologist; Director, Quality of Life Research Center, Claremont Graduate University; Author, Flow [csik100.jpg] The free market Generally ideas are thought to be dangerous when they threaten an entrenched authority. Galileo was tried not because he claimed that the earth revolved around the sun -- a "hypothesis" his chief prosecutor, Cardinal Bellarmine, apparently was quite willing to entertain in private -- but because the Church could not afford to have a fact it claimed to know reversed by another epistemology, in this case by the scientific method. Similar conflicts arose when Darwin's view of how humans first appeared on the planet challenged religious accounts of creation, or when Mendelian genetics applied to the growth of hardier strains of wheat challenged Leninist doctrine as interpreted by Lysenko. One of the most dangerous ideas at large in the current culture is that the "free market" is the ultimate arbiter of political decisions, and that there is an "invisible hand" that will direct us to the most desirable future provided the free market is allowed to actualize itself. This mystical faith is based on some reasonable empirical foundations, but when embraced as a final solution to the ills of humankind, it risks destroying both the material resources and the cultural achievements that our species has so painstakingly developed. So the dangerous idea on which our culture is based is that the political economy has a silver bullet -- the free market -- that must take precedence over any other value, and thereby lead to peace and prosperity. It is dangerous because like all silver bullets it is an intellectual and political scam that might benefit some, but ultimately requires the majority to pay for the destruction it causes. My dangerous idea is dangerous only to those who support the hegemony of the market. It consists in pointing out that the imperial free market wears no clothes -- it does not exist in the first place, and what passes for it is dangerous to the future well-being of our species.
Scientists need to turn their attention to what the complex system that is human life will require in the future. Beginnings like the Calvert-Henderson Quality of Life Indicators, which focus on such central requirements as health, education, infrastructure, environment, human rights, and public safety, need to become part of our social and political agenda. And when their findings come into conflict with the agenda of the prophets of the free market, the conflict should be examined -- who is it that benefits from the erosion of the quality of life? _________________________________________________________________ IRENE PEPPERBERG Research Associate, Psychology, Harvard University; Author, The Alex Studies [pepperberg100.jpg] The differences between humans and nonhumans are quantitative, not qualitative I believe that the differences between humans and nonhumans are quantitative, not qualitative. Why is this idea dangerous? It is hardly surprising, coming from someone who has spent her scientific career studying the abilities of (supposedly) small-brained nonhumans; moreover, the idea is not exactly new. It may be a bit controversial, given that many of my colleagues spend much of their time searching for the defining difference that separates humans and nonhumans (and they may be correct), and also given a current social and political climate that challenges evolution on what seems to be a daily basis. But why dangerous? Because, if we take this idea to its logical conclusion, it challenges almost every aspect of our lives -- scientific and nonscientific alike. Scientifically, the idea challenges the views of many researchers who continue to hypothesize about the next human-nonhuman 'great divide'... Interestingly, however, detailed observation and careful experimentation have repeatedly demonstrated that nonhumans often possess capacities once thought to separate them from humans. Humans, for example, are not the only tool-using species, nor the only tool-making species, nor the only species to act cooperatively. So one has to wonder to what degree nonhumans share other capacities still thought to be exclusively human. And, of course, the critical words here are "to what degree" -- do we count lack of a particular behavior as a defining criterion, or do we accept the existence of less complex versions of that behavior as evidence for a continuum? If one wishes to argue that I'm just blurring the difference between "qualitative" and "quantitative", so be it... such blurring will not affect the dangerousness of my idea. My idea is dangerous because it challenges scientists at a more basic level, that of how we perform research. Now, let me state clearly that I'm not against animal research -- I wouldn't be alive today without it, and I work daily with captive animals that, although domestically bred (and, by any standard, provided with a fairly cushy existence), are still essentially wild creatures denied their freedom. But if we believe in a continuum, then we must at least question our right to perform experiments on our fellow creatures; we need to think about how to limit animal experiments and testing to what is essential, and to insist on humane (note the term!) housing and treatment. And, importantly, we must accept the significant cost in time, effort, and money thereby incurred -- increases that must come at the expense of something else in our society.
The idea, taken to its logical conclusion, is dangerous because it should also affect our choices as to the origins of the clothes we wear and the foods we eat. Again, I'm not campaigning against leather shoes and T-bone steaks; I find that I personally cannot remain healthy on a totally vegetarian diet and sheepskin boots definitely ease the rigors of a Massachusetts winter. But if we believe in a continuum, we must at least question our right to use fellow creatures for our sustenance: We need to become aware of, for example, the conditions under which creatures destined for the slaughterhouse live their lives, and learn about and ameliorate the conditions in which their lives are ended. And, again, we must accept the costs involved in such decisions. If we do not believe in a clear boundary between humans and nonhumans, if we do not accept a clear "them" versus "us", we need to rethink other aspects of our lives. Do we have the right to clear-cut forests in which our fellow creatures live? To pollute the air, soil and water that we share with them, solely for our own benefit? Where do we draw the line? Life may be much simpler if we do firmly draw a line, but is simplicity a valid rationale? And, in case anyone wonders at my own personal view: I believe that humans are the ultimate generalists, creatures that may lack specific talents or physical adaptations that have been finely honed in other species, but whose additional brain power enables them -- in an exquisite manner -- to, for example, integrate information, improvise with what is present, and alter or adapt to a wide range of environments...but that this additional brain power is (and provides) a quantitative, not qualitative difference. _________________________________________________________________ BRIAN GOODWIN Biologist, Schumacher College, Devon, UK; Author, How The Leopard Changed Its Spots [goodwin100.jpg] Fields of Danger In science, the concept of a field is used to describe patterns of order in systems that are extended in space and show regularities of behaviour in time. They have always expressed ideas that are rather mysterious, but work in describing natural processes. The first example of a field principle in physics was Newton's celebrated gravitational law, which described mathematically the universal attraction between bodies with mass. This mysterious action at a distance without any wires or mechanical attachments between the bodies was regarded as a mystical, occult concept by the mechanical philosophers of the 17th and 18th centuries. They condemned Newton's idea as a violation of the principles of explanation in the new science. However, there is a healthy pragmatic element to scientific investigation, and Newton's equations worked too well to be discarded on philosophical grounds. Another celebrated example of a physical field came from the experimental work of Michael Faraday on electricity and magnetism in the 19th century. He talked about fields of force that extend out in space from electrically charged bodies, or from magnets. Faraday's painstaking and ingenious work described how these fields change with distance from the body in precise ways, as does the gravitational force. Again these forces were regarded as mysterious since they travel through apparently empty space, exerting interaction at a distance that cannot be understood mechanically. 
However, so precise were Faraday's measurements of the properties of electric and magnetic fields, and so vivid his description of the fields of force associated with them, that James Clerk Maxwell could take his observations and put them directly into mathematical form. These are the famous wave equations of electromagnetism on which our technology for electric motors, lighting, TV, communications and innumerable other applications is based. In the 20th century, Einstein transformed Newton's mysterious gravitational force into an even more mysterious property of space itself: it bends or curves under the influence of bodies with mass. Einstein's relativity theory did away with a force of attraction between bodies and substituted a mathematical relationship between mass and curvature of space-time. The result was a whole new way of understanding motion as natural, curved paths followed by bodies that not only cause the curvature but follow it. The universe was becoming intrinsically self-organising, and subjects as observers made an entry into physics. As if Einstein's relativity wasn't enough to shake up the world known to science, the next revolution was even more disturbing. Quantum mechanics, emerging in the 1920s, did away with the classical notions of fields as smooth distributions of forces through space-time and described interactions at a distance in terms of discrete little packets of energy that travel through the void in oscillating patterns described by wave functions, of which the solutions to Schrödinger's wave equation are the best known. Now we have not only action at a distance but something infinitely more disturbing: these interactions violate conventional notions of causality because they are non-local. Two particles that have been joined in an intimate relationship within an atom remain coherently correlated with one another in their properties no matter how far apart they may be after emission from the atom. Einstein could not bring himself to believe that this 'spooky' implication of quantum mechanics could possibly be real. The implied entanglement means that there is a holistic principle of connectedness in operation at the most elementary level of physical reality. Quantum fields have subverted our basic notions of causality and substituted a principle of wholeness in relationship for elementary particles. The idea that I have pursued in biology for much of my career is the concept that goes under the name of a morphogenetic field. This term is used to describe the processes in space and time that organise and coordinate the various activities involved in the emergence of a whole complex organism from a single cell, or from a group of cells in interaction with each other. A human embryo developing in the mother's womb from a single fertilised egg, emerging at birth as a baby with all its organs coherently arranged in a functioning body, is one of the most breathtaking phenomena in nature. However, all species share the same ability to produce new individuals of the same kind in their processes of reproduction. The remarkable organising principles that underlie such basic properties of life have been known as morphogenetic fields (fields that generate form) throughout the 20th century, though this concept produces unease and discomfort among many biologists. This unease arises for good reason.
As in physics, the field concept is subversive of mechanical explanations in science, and biology holds firmly to understanding life in terms of mechanisms organised by genes. However, the complete reading of the book of life in DNA, the major project in biology during the last two decades of the 20th century, did not reveal the secrets of the organism. It was a remarkable achievement to work out the sequence of letters in the genomes of different species, human, other animals, plants, and microbes, so that many of the words of the genetic text of different species could be deciphered. Unfortunately, we were unable to make coherent sense of these words, to put them together in the way that organisms do in creating themselves during their reproduction as they develop into beings with specific morphologies and behaviours, the process of morphogenesis. What had been forgotten, or ignored, was that information only makes sense to an agent, someone or something with the know-how to interpret it. The meaning was missing because the genome researchers ignored the context of the genomes: the living cell within which genes are read and their products are organised. The organisation that is responsible for making sense of the information in the genes, an essential and basic aspect of the living state, was taken for granted. What is the nature of this complex dynamic process that knows how to make an organism, using specific information from the genes? Biology is returning to notions of space-time organisation as an intrinsic aspect of the living condition, our old friends, morphogenetic fields. They are now described as complex networks of molecules that somehow read and make sense of genes. These molecular networks have intriguing properties, giving them some of the same characteristics as words in a language. Could it be that biology and culture are not so different after all; that both are based on historical traditions and languages that are used to construct patterns of relationship embodied in communities, either of cells or of individuals? These self-organising activities are certainly mysterious, but not unintelligible. My own work, with many colleagues, cast morphogenetic fields in mathematical form that revealed how space (morphology) and time (behaviour) get organised in subtle but robust ways in developing organisms and communities. Such coordinating patterns in living beings seem to be at the heart of the creativity that drives both biological and cultural evolution. Despite many differences between these fields, which need to be clarified and distinguished rather than blurred, there may be underlying commonalities that can unify biological and cultural evolution rather than separating them. This could even lead us to value other species of organism for their wisdom in achieving coherent, sustainable relationships with other species while remaining creative and innovative throughout evolution, something we are signally failing to do in our culture with its ecologically damaging style of living. _________________________________________________________________ RUDY RUCKER Mathematician, Computer Scientist; Cyberpunk Pioneer; Novelist; Author, The Lifebox, the Seashell, and the Soul [rucker100.jpg] Mind is a universally distributed quality Panpsychism. Each object has a mind. Stars, hills, chairs, rocks, scraps of paper, flakes of skin, molecules -- each of them possesses the same inner glow as a human, each of them has singular inner experiences and sensations.
I'm quite comfortable with the notion that everything is a computation. But what to do about my sense that there's something numinous about my inner experience? Panpsychism represents a non-anthropocentric way out: mind is a universally distributed quality. Yes, the workings of a human brain are a deterministic computation that could be emulated by any universal computer. And, yes, I sense more to my mental phenomena than the rule-bound exfoliation of reactions to inputs: this residue is the inner light, the raw sensation of existence. But, no, that inner glow is not the exclusive birthright of humans, nor is it solely limited to biological organisms. Note that panpsychism needn't say that the universe is just one mind. We can also say that each object has an individual mind. One way to visualize the distinction between the many minds and the one mind is to think of the world as a stained glass window with light shining through each pane. The world's physical structures break the undivided cosmic mind into a myriad of small minds, one in each object. The minds of panpsychism can exist at various levels. As well as having its own individuality, a person's mind would also be, for instance, a hive mind based upon the minds of the body's cells and the minds of the body's elementary particles. Do the panpsychic minds have any physical correlates? On the one hand, it could be that the mind is some substance that accumulates near ordinary matter -- dark matter or dark energy are good candidates. On the other hand, mind might simply be matter viewed in a special fashion: matter experienced from the inside. Let me mention three specific physical correlates that have been proposed for the mind. Some have argued that the experience of mind results when a superposed quantum state collapses into a pure state. It's an alluring metaphor, but as a universal automatist, I'm of the opinion that quantum mechanics is a stop-gap theory, destined to give way to a fully deterministic theory based upon some digital precursor of spacetime. David Skrbina, author of the clear and comprehensive book Panpsychism in the West, suggests that we might think of a physical system as determining a moving point in a multi-dimensional phase space that has an axis for each of the system's measurable properties. He feels this dynamic point represents the sense of unity characteristic of a mind. As a variation on this theme, let me point out that, from the universal automatist standpoint, every physical system can be thought of as embodying a computation. And the majority of non-simple systems embody universal computations, capable of emulating any other system at all. It could be that having a mind is in some sense equivalent to being capable of universal computation. A side remark: even such very simple systems as a single electron may in fact be capable of universal computation, if supplied with a steady stream of structured input. Think of an electron in an oscillating field; and by analogy think of a person listening to music or reading an essay. Might panpsychism be a distinction without a difference? Suppose we identify the numinous mind with quantum collapse, with chaotic dynamics, or with universal computation. What is added by claiming that these aspects of reality are like minds? I think empathy can supply an experiential confirmation of panpsychism's reality.
Just as I'm sure that I myself have a mind, I can come to believe the same of another human with whom I'm in contact -- whether face to face or via their creative work. And with a bit of effort, I can identify with objects as well; I can see the objects in the room around me as glowing with inner light. This is a pleasant sensation; one feels less alone. Could there ever be a critical experiment to test if panpsychism is really true? Suppose that telepathy were to become possible, perhaps by entangling a person's mental states with another system's states. And then suppose that instead of telepathically contacting another person, I were to contact a rock. At this point panpsychism would be proved. I still haven't said anything about why panpsychism is a dangerous idea. Panpsychism, like other forms of higher consciousness, is dangerous to business as usual. If my old car has the same kind of mind as a new one, I'm less impelled to help the economy by buying a new vehicle. If the rocks and plants on my property have minds, I feel more respect for them in their natural state. If I feel myself among friends in the universe, I'm less likely to overwork myself to earn more cash. If my body will have a mind even after I'm dead, then death matters less to me, and it's harder for the government to cow me into submission. _________________________________________________________________ STEVEN PINKER Psychologist, Harvard University; Author, The Blank Slate [pinker.100.jpg] Groups of people may differ genetically in their average talents and temperaments The year 2005 saw several public appearances of what I predict will become the dangerous idea of the next decade: that groups of people may differ genetically in their average talents and temperaments.

* In January, Harvard president Larry Summers caused a firestorm when he cited research showing that women and men have non-identical statistical distributions of cognitive abilities and life priorities.

* In March, developmental biologist Armand Leroi published an op-ed in the New York Times rebutting the conventional wisdom that race does not exist. (The conventional wisdom is coming to be known as Lewontin's Fallacy: that because most genes may be found in all human groups, the groups don't differ at all. But patterns of correlation among genes do differ between groups, and different clusters of correlated genes correspond well to the major races labeled by common sense.)

* In June, the Times reported a forthcoming study by physicist Greg Cochran, anthropologist Jason Hardy, and population geneticist Henry Harpending proposing that Ashkenazi Jews have been biologically selected for high intelligence, and that their well-documented genetic diseases are a by-product of this evolutionary history.

* In September, political scientist Charles Murray published an article in Commentary reiterating his argument from The Bell Curve that average racial differences in intelligence are intractable and partly genetic.

Whether or not these hypotheses hold up (the evidence for gender differences is reasonably good, for ethnic and racial differences much less so), they are widely perceived to be dangerous. Summers was subjected to months of vilification, and proponents of ethnic and racial differences in the past have been targets of censorship, violence, and comparisons to Nazis.
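Pinker's statistical point below -- that group differences concern distributions, not individuals -- is easy to make concrete (my illustration, not his; the effect size is an assumption chosen to be small):

    # Two groups whose means differ by 0.2 standard deviations still overlap
    # almost completely; group membership barely predicts an individual.
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 1.0, 1_000_000)   # group A scores
    b = rng.normal(0.2, 1.0, 1_000_000)   # group B, slightly higher mean
    print("P(random B beats random A):", (b > a).mean())   # about 0.56

A statistically real difference in averages can coexist with near-uselessness for judging any particular person; the two claims live at different levels of description.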
Large swaths of the intellectual landscape have been reengineered to try to rule these hypotheses out a priori (race does not exist, intelligence does not exist, the mind is a blank slate inscribed by parents). The underlying fear, that reports of group differences will fuel bigotry, is not, of course, groundless.

The intellectual tools to defuse the danger are available. "Is" does not imply "ought." Group differences, when they exist, pertain to the average or variance of a statistical distribution, rather than to individual men and women. Political equality is a commitment to universal human rights, and to policies that treat people as individuals rather than representatives of groups; it is not an empirical claim that all groups are indistinguishable. Yet many commentators seem unwilling to grasp these points, to say nothing of the wider world community.

Advances in genetics and genomics will soon provide the ability to test hypotheses about group differences rigorously. Perhaps geneticists will forbear performing these tests, but one shouldn't count on it. The tests could very well emerge as by-products of research in biomedicine, genealogy, and deep history, which no one wants to stop.

The human genomic revolution has spawned an enormous amount of commentary about the possible perils of cloning and human genetic enhancement. I suspect that these are red herrings. When people realize that cloning is just forgoing a genetically mixed child for a twin of one parent, and is not the resurrection of the soul or a source of replacement organs, no one will want to do it. Likewise, when they realize that most genes have costs as well as benefits (they may raise a child's IQ but also predispose him to genetic disease), "designer babies" will lose whatever appeal they have. But the prospect of genetic tests of group differences in psychological traits is both more likely and more incendiary, and is one that the current intellectual community is ill-equipped to deal with.

_________________________________________________________________

RICHARD E. NISBETT
Professor of Psychology, Co-Director of the Culture and Cognition Program, University of Michigan; Author, The Geography of Thought: How Asians and Westerners Think Differently...and Why

Telling More Than We Can Know

Do you know why you hired your most recent employee over the runner-up? Do you know why you bought your last pair of pajamas? Do you know what makes you happy and unhappy? Don't be too sure. The most important thing that social psychologists have discovered over the last 50 years is that people are very unreliable informants about why they behaved as they did, made the judgment they did, or liked or disliked something. In short, we don't know nearly as much about what goes on in our heads as we think. In fact, for a shocking range of things, we don't know the answer to "Why did I?" any better than an observer.

The first inkling that social psychologists had about just how ignorant we are about our thinking processes came from the study of cognitive dissonance beginning in the late 1950s. When our behavior is insufficiently justified, we move our beliefs into line with the behavior so as to avoid the cognitive dissonance we would otherwise experience. But we are usually quite unaware that we have done that, and when it is pointed out to us we recruit phantom reasons for the change in attitude.
Beginning in the mid-1960s, social psychologists started doing experiments about the causal attributions people make for their own behavior. If you give people electric shocks, but tell them that you have given them a pill that will produce the arousal symptoms that are actually created by the shock, they will take much more shock than subjects without the pill. They have attributed their arousal to the pill and are therefore willing to take more shock. But if you ask them why they took so much shock they are likely to say something like "I used to work with electrical gadgets and I got a lot of shocks, so I guess I got used to it."

In the 1970s social psychologists began asking whether people could be accurate about why they make truly simple judgments and decisions -- such as why they like a person or an article of clothing. For example, in one study experimenters videotaped a Belgian responding in one of two modes to questions about his philosophy as a teacher: he either came across as an ogre or a saint. They then showed subjects one of the two tapes and asked them how much they liked the teacher. Furthermore, they asked some of them whether the teacher's accent had affected how much they liked him and asked others whether how much they liked the teacher influenced how much they liked his accent. Subjects who saw the ogre naturally disliked him a great deal, and they were quite sure that his grating accent was one of the reasons. Subjects who saw the saint realized that one of the reasons they were so fond of him was his charming accent. Subjects who were asked if their liking for the teacher could have influenced their judgment of his accent were insulted by the question.

Does familiarity breed contempt? On the contrary, it breeds liking. In the 1980s, social psychologists began showing people such stimuli as Turkish words and Chinese ideographs and asking them how much they liked them. They would show a given stimulus somewhere between one and twenty-five times. The more the subjects saw the stimulus the more they liked it. Needless to say, their subjects did not find it plausible that the mere number of times they had seen a stimulus could have affected their liking for it. (You're probably wondering if white rats are susceptible to the mere familiarity effect. The study has been done. Rats brought up listening to music by Mozart prefer to move to the side of the cage that trips a switch allowing them to listen to Mozart rather than Schoenberg. Rats raised on Schoenberg prefer to be on the Schoenberg side. The rats were not asked the reasons for their musical preferences.)

Does it matter that we often don't know what goes on in our heads and yet believe that we do? Well, for starters, it means that we often can't accurately answer crucial questions about what makes us happy and what makes us unhappy. A social psychologist asked Harvard women to keep a daily record for two months of their mood states and also to record a number of potentially relevant factors in their lives including amount of sleep the night before, the weather, general state of health, sexual activity, and day of the week (Monday blues? TGIF?). At the end of the period, subjects were asked to tell the experimenters how much each of these factors tended to influence their mood over the two month period. The results? Women's reports of what influenced their moods were uncorrelated with what they had reported on a daily basis.
If a woman thought that her sexual activity had a big effect, a check of her daily reports was just as likely to show that it had no effect as that it did. To really rub it in, the psychologist asked her subjects to report what influenced the moods of someone they didn't know: she found that accuracy was just as great when a woman was rated by a stranger as when rated by the woman herself!

But if we were to just think really hard about reasons for behavior and preferences might we be likely to come to the right conclusions? Actually, just the opposite may often be the case. A social psychologist asked people to choose which of several art posters they liked best. Some people were asked to analyze why they liked or disliked the various posters and some were not asked, and everyone was given their favorite poster to take home. Two weeks later the psychologist called people up and asked them how much they liked the art poster they had chosen. Those who did not analyze their reasons liked their posters better than those who did.

It's certainly scary to think that we're ignorant of so much of what goes on in our heads, though we're almost surely better off taking what we and others say about motives and reasons with a large quantity of salt. Skepticism about our ability to read our minds is safer than certainty that we can. Still, the idea that we have little access to the workings of our minds is a dangerous one. The theories of Copernicus and Darwin were dangerous because they threatened, respectively, religious conceptions of the centrality of humans in the cosmos and the divinity of humans. Social psychologists are threatening a core conviction of the Enlightenment -- that humans are perfectible through the exercise of reason. If reason cannot be counted on to reveal the causes of our beliefs, behavior and preferences, then the idea of human perfectibility is to that degree diminished.

_________________________________________________________________

ROBERT R. PROVINE
Psychologist and Neuroscientist, University of Maryland; Author, Laughter

This is all there is

The empirically testable idea that the here and now is all there is and that life begins at birth and ends at death is so dangerous that it has cost the lives of millions and threatens the future of civilization. The danger comes not from the idea itself, but from its opponents, those religious leaders and followers who ruthlessly advocate and defend their empirically improbable afterlife and man-in-the-sky cosmological perspectives.

Their vigor is understandable. What better theological franchise is there than the promise of everlasting life, with deluxe trimmings? Religious followers must invest now with their blood and sweat, with their big payoff not due until the afterlife. Postmortal rewards cost theologians nothing -- I'll match your heavenly choir and raise you 72 virgins. Some franchise! This is even better than the medical profession, a calling with higher overhead that has gained control of birth, death and pain.

Whether the religious brand is Christianity or Islam, the warring continues, with a terrible fate reserved for heretics who threaten the franchise from within. Worse may be in store for those who totally reject the man-in-the-sky premise and its afterlife trappings. All of this trouble over accepting what our senses tell us -- that this is all there is.
Resolution of religious conflict is impossible because there is no empirical test of the ghostly, and many theologians prey, intentionally or not, upon the fears, superstitions, irrationality, and herd tendencies that are our species' neurobehavioral endowment. Religious fundamentalism inflames conflict and prevents solution -- the more extreme and irrational one's position, the stronger one's faith, and, when possessing absolute truth, compromise is not an option.

Resolution of conflicts between religions and associated cultures is less likely to come from compromise than from the pursuit of superordinate goals: common, overarching objectives that extend across nations and cultures, and direct our competitive spirit to further the health, well-being, and nobility of everyone. Public health and science provide such unifying goals. I offer two examples.

Health Initiative. A program that improves the health of all people, especially those in developing nations, may find broad support, especially with the growing awareness of global culture and the looming specter of a pandemic. Public health programs bridge religious, political, and cultural divides. No one wants to see their children die. Conflicts fall away when cooperation offers a better life for all concerned. This is also the most effective anti-terrorism strategy, although one probably unpopular with the military-industrial complex on one side, and terrorist agitators on the other.

Space Initiative. Space exploration expands our cosmos and increases our appreciation of life on Earth and its finite resources. Space exploration is one of our species' greatest achievements. Its pursuit is a goal of sufficient grandeur to unite people of all nations.

This is all there is. The sooner we accept this dangerous idea, the sooner we can get on with the essential task of making the most of our lives on this planet.

_________________________________________________________________

DONALD HOFFMAN
Cognitive Scientist, UC Irvine; Author, Visual Intelligence

A spoon is like a headache

A spoon is like a headache. This is a dangerous idea in sheep's clothing. It consumes decrepit ontology, preserves methodological naturalism, and inspires exploration for a new ontology, a vehicle sufficiently robust to sustain the next leg of our search for a theory of everything. How could a spoon and a headache do all this?

Suppose I have a headache, and I tell you about it. It is, say, a pounding headache that started at the back of the neck and migrated to encompass my forehead and eyes. You respond empathetically, recalling a similar headache you had, and suggest a couple of remedies. We discuss our headaches and remedies a bit, then move on to other topics. Of course no one but me can experience my headaches, and no one but you can experience yours. But this posed no obstacle to our meaningful conversation. You simply assumed that my headaches are relevantly similar to yours, and I assumed the same about your headaches. The fact that there is no "public headache," no single headache that we both experience, is simply no problem.

A spoon is like a headache. Suppose I hand you a spoon. It is common to assume that the spoon I experience during this transfer is numerically identical to the spoon you experience. But this assumption is false. No one but me can experience my spoon, and no one but you can experience your spoon. But this is no problem. It is enough for me to assume that your spoon experience is relevantly similar to mine.
For effective communication, no public spoon is necessary, just as no public headache is necessary. Is there a "real spoon," a mind-independent physical object that causes our spoon experiences and resembles our spoon experiences? This is not only unnecessary but unlikely. It is unlikely that the visual experiences of Homo sapiens, shaped to permit survival in a particular range of niches, should miraculously also happen to resemble the true nature of a mind-independent realm. Selective pressures for survival do not, except by accident, lead to truth.

One can have a kind of objectivity without requiring public objects. In special relativity, the measurements, and thus the experiences, of mass, length and time differ from observer to observer, depending on their relative velocities. But these differing experiences can be related by the Lorentz transformation. This is all the objectivity one can have, and all one needs to do science.

Once one abandons public physical objects, one must reformulate many current open problems in science. One example is the mind-brain relation. There are no public brains, only my brain experiences and your brain experiences. These brain experiences are just the simplified visual experiences of Homo sapiens, shaped for survival in certain niches. The chances that our brain experiences resemble some mind-independent truth are remote at best, and those who would claim otherwise must surely explain the miracle. Failing a clever explanation of this miracle, there is no reason to believe brains cause anything, including minds. And here the wolf unzips the sheepskin, and darts out into the open. The danger becomes apparent the moment we switch from boons to sprains. Oh, pardon the spoonerism.

_________________________________________________________________

MARC D. HAUSER
Psychologist and Biologist, Harvard University; Author, Wild Minds

A universal grammar of [mental] life

The recent explosion of work in molecular evolution and developmental biology has, for the first time, made it possible to propose a radical new theory of mental life that, if true, will forever rewrite the textbooks and our way of thinking about our past and future. It explains both the universality of our thoughts and the unique signatures that demarcate each human culture, past, present and future.

The theory I propose is that human mental life is based on a few simple, abstract, yet expressively powerful rules or computations together with an instructive learning mechanism that prunes the range of possible systems of language, music, mathematics, art, and morality to a limited set of culturally expressed variants. In many ways, this view isn't new or radical. It stems from thinking about the seemingly constrained ways in which relatively open ended or generative systems of expression create both universal structure and limited variation.

Unfortunately, what appears to be a rather modest proposal on some counts is dangerous on another. It is dangerous to those who abhor biologically grounded theories on the often misinterpreted perspective that biology determines our fate, derails free will, and erases the soul. But a look at systems other than the human mind makes it transparently clear that the argument from biological endowment does not entail any of these false inferences. For example, we now understand that our immune systems don't learn from the environment how to tune up to the relevant problems.
Rather, we are equipped with a full repertoire of antibodies to deal with a virtually limitless variety of problems, including some that have not yet even emerged in the history of life on earth. This initially seems counter-intuitive: how could the immune system have evolved to predict the kinds of problems we might face? The answer is that it couldn't. What it evolved instead was a set of molecular computations that, in combination with each other, can handle an infinitely larger set of conditions than any single combination on its own. The role of the environment is as instructor, functionally telling the immune system about the current conditions, resulting in a process of paring down of initial starting options.

The pattern of change observed in the immune system, characterized by an initial set of universal computations or options followed by an instructive process of pruning, is seen in systems as disparate as the genetic mechanisms underlying segmented body parts in vertebrates, the basic body plan of land plants involving the shoot system of stem and leaves, and song development in birds. Songbirds are particularly interesting as the system for generating a song seems to be analogous in important ways to our capacity to generate a specific language. Humans and songbirds start with a species-specific capacity to build language and song respectively, and this capacity has limitless expressive power. Upon delivery and hatching, and possibly a bit before, the local acoustic environment begins the process of instruction, pruning the possible languages and songs down to one or possibly two. The common thread here is a starting state of universal computations or options followed by an instructive process of pruning, ending up with distinctive systems that share an underlying common core. Hard to see how anyone could find this proposal dangerous or off-putting, or even wrong!

Now jump laterally, and make the move to aesthetics and ethics. Our minds are endowed with universal computations for creating and judging art, music, and morally relevant actions. Depending upon where we are born, we will find atonal music pleasing or disgusting, and infanticide obligatory or abhorrent. The common or universal core is, for music, a set of rules for combining together notes to alter our emotions, and for morality, a different set of rules for combining the causes and consequences of action to alter our permissibility judgments.

To say that we are endowed with a universal moral sense is not to say that we will do the right or wrong thing, with any consistency. The idea that there is a moral faculty, grounded in our biology, says nothing at all about the good, the bad or the ugly. What it says is that we have evolved particular biases, designed as a function of selection for particular kinds of fit to the environment, under particular constraints. But nothing about this claim leads to the good or the right or the permissible. The reason this has to be the case is twofold: there is not only cultural variation but environmental variation over evolutionary time. What is good for us today may not be good for us tomorrow. But the key insight delivered by the nativist perspective is that we must understand the nature of our biases in order to work toward some good or better world, realizing all along that we are constrained. Appreciating the choreography between universal options and instructive pruning is only dangerous if misused to argue that our evolved nature is good, and what is good is right.
That's bad.

_________________________________________________________________

RAY KURZWEIL
Inventor and Technologist; Author, The Singularity Is Near: When Humans Transcend Biology

The near-term inevitability of radical life extension and expansion

My dangerous idea is the near-term inevitability of radical life extension and expansion. The idea is dangerous, however, only when contemplated from current linear perspectives.

First the inevitability: the power of information technologies is doubling each year, and moreover comprises areas beyond computation, most notably our knowledge of biology and of our own intelligence. It took 15 years to sequence HIV, and from that perspective the genome project seemed impossible in 1990. But the amount of genetic data we were able to sequence doubled every year while the cost came down by half each year. We finished the genome project on schedule and were able to sequence SARS in only 31 days.

We are also gaining the means to reprogram the ancient information processes underlying biology. RNA interference can turn genes off by blocking the messenger RNA that expresses them. New forms of gene therapy are now able to place new genetic information in the right place on the right chromosome. We can create or block enzymes, the workhorses of biology. We are reverse-engineering -- and gaining the means to reprogram -- the information processes underlying disease and aging, and this process is accelerating, doubling every year. If we think linearly, then the idea of turning off all disease and aging processes appears far off into the future, just as the genome project did in 1990. On the other hand, if we factor in the doubling of the power of these technologies each year, the prospect of radical life extension is only a couple of decades away.

In addition to reprogramming biology, we will be able to go substantially beyond biology with nanotechnology in the form of computerized nanobots in the bloodstream. If the idea of programmable devices the size of blood cells performing therapeutic functions in the bloodstream sounds like far-off science fiction, I would point out that we are doing this already in animals. One scientist cured type 1 diabetes in rats with blood-cell-sized devices containing 7 nanometer pores that let insulin out in a controlled fashion and that block antibodies. If we factor in the exponential advance of computation and communication (price-performance multiplying by a factor of a billion in 25 years while at the same time shrinking in size by a factor of thousands), these scenarios are highly realistic.

The apparent dangers are not real, while unapparent dangers are real. The apparent dangers are that a dramatic reduction in the death rate will create overpopulation and thereby strain energy and other resources while exacerbating environmental degradation. However, we only need to capture 1 percent of 1 percent of the sunlight to meet all of our energy needs (3 percent of 1 percent by 2025), and nanoengineered solar panels and fuel cells will be able to do this, thereby meeting all of our energy needs in the late 2020s with clean and renewable methods. Molecular nanoassembly devices will be able to manufacture a wide range of products, just about everything we need, with inexpensive tabletop devices. The power and price-performance of these systems will double each year, much faster than the doubling rate of the biological population.
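The doubling arithmetic behind these claims is easy to check for yourself; a back-of-the-envelope sketch in Python (my arithmetic, not Kurzweil's):

    # A billion-fold price-performance gain in 25 years implies roughly
    # 30 doublings, i.e. a bit more than one doubling per year.
    import math

    doublings = math.log2(1e9)
    print(f"doublings needed for a billion-fold gain: {doublings:.1f}")  # ~29.9
    print(f"doublings per year over 25 years: {doublings / 25:.2f}")     # ~1.20

    # The linear-versus-exponential contrast the essay turns on:
    for year in (5, 10, 15, 20):
        print(f"year {year:2d}: exponential {2 ** year:>9,d}x, "
              f"linear (say, +50%/yr) {1 + 0.5 * year:.1f}x")

Whether the doubling itself continues is of course the contested premise; the arithmetic only shows how far apart the two extrapolations land.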
As a result, poverty and pollution will decline and ultimately vanish despite growth of the biological population.

There are real downsides, however, and this is not a utopian vision. We have a new existential threat today in the potential of a bioterrorist to engineer a new biological virus. We actually do have the knowledge to combat this problem (for example, new vaccine technologies and RNA interference, which has been shown capable of destroying arbitrary biological viruses), but it will be a race. We will have similar issues with the feasibility of self-replicating nanotechnology in the late 2020s. Containing these perils while we harvest the promise is arguably the most important issue we face.

Some people see these prospects as dangerous because they threaten their view of what it means to be human. There is a fundamental philosophical divide here. In my view, it is not our limitations that define our humanity. Rather, we are the species that seeks and succeeds in going beyond our limitations.

_________________________________________________________________

HAIM HARARI
Physicist, former President, Weizmann Institute of Science

Democracy may be on its way out

Democracy may be on its way out. Future historians may determine that Democracy will have been a one-century episode. It will disappear. This is a sad, truly dangerous, but very realistic idea (or, rather, prediction).

Falling boundaries between countries, cross-border commerce, merging economies, instant global flow of information and numerous other features of our modern society all lead to multinational structures. If you extrapolate this irreversible trend, you get the entire planet becoming one political unit. But in this unit, anti-democracy forces are now a clear majority. This majority increases by the day, due to demographic patterns. All democratic nations have slow, vanishing or negative population growth, while all anti-democratic and uneducated societies multiply fast. Within democratic countries, most well-educated families remain small while the least educated families are growing fast. This means that, both at the individual level and at the national level, the more people you represent, the less economic power you have. In a knowledge-based economy, in which the number of working hands is less important, this situation is much less democratic than in the industrial age. As long as upward mobility of individuals and nations could neutralize this phenomenon, democracy was tenable. But when we apply this analysis to the entire planet, as it evolves now, we see that democracy may be doomed.

To these we must add the regrettable fact that authoritarian multinational corporations, by and large, are better managed than democratic nation states. Religious preaching, TV sound bites, cross-boundary TV incitement and the freedom of spreading rumors and lies through the internet encourage brainwashing and lack of rational thinking. Proportionately, more young women are growing into societies which discriminate against them than into more egalitarian societies, increasing the worldwide percentage of women treated as second-class citizens. Educational systems in most advanced countries are in a deep crisis while modern education in many developing countries is almost non-existent. A small well-educated technological elite is becoming the main owner of intellectual property, which is, by far, the most valuable economic asset, while the rest of the world drifts towards fanaticism of one kind or another.
Add all of the above and the unavoidable conclusion is that Democracy, our least bad system of government, is on its way out. Can we invent a better new system? Perhaps. But this cannot happen if we are not allowed to utter the sentence: "There may be a political system which is better than Democracy." Today's political correctness does not allow one to say such things. The result of this prohibition will be an inevitable return to some kind of totalitarian rule, different from that of the emperors, the colonialists or the landlords of the past, but not more just. On the other hand, open and honest thinking about this issue may lead either to a gigantic worldwide revolution in educating the poor masses, thus saving democracy, or to a careful search for a just (repeat, just) and better system.

I cannot resist a cheap parting shot: when, in the past two years, Edge asked for brilliant ideas you believe in but cannot prove, or for proposals for new exciting laws, most answers related to science and technology. When the question is now about dangerous ideas, almost all answers touch on issues of politics and society and not on the "hard sciences." Perhaps science is not so dangerous, after all.

_________________________________________________________________

DAVID G. MYERS
Social Psychologist; Co-author (with Letha Scanzoni), What God Has Joined Together: A Christian Case for Gay Marriage

A marriage option for all

Much as others have felt compelled by evidence to believe in human evolution or the warming of the planet, I feel compelled by evidence to believe a) that sexual orientation is a natural, enduring disposition and b) that the world would be a happier and healthier place if, for all people, romantic love, sex, and marriage were a package.

In my Midwestern social and religious culture, the words "for all people" transform a conservative platitude into a dangerous idea, over which we are fighting a culture war. On one side are traditionalists, who feel passionately about the need to support and renew marriage. On the other side are progressives, who assume that our sexual orientation is something we did not choose and cannot change, and that we all deserve the option of life within a covenant partnership.

I foresee a bridge across this divide as folks on both the left and the right engage the growing evidence of our panhuman longing for belonging, of the benefits of marriage, and of the biology and persistence of sexual orientation. We now have lots of data showing that marriage is conducive to healthy adults, thriving children, and flourishing communities. We also have a dozen discoveries of gay-straight differences in everything from brain physiology to skill at mentally rotating geometric figures. And we have an emerging professional consensus that sexual reorientation therapies seldom work. More and more young adults -- tomorrow's likely majority, given generational succession -- are coming to understand this evidence, and to support what in the future will not seem so dangerous: a marriage option for all.

_________________________________________________________________

CLAY SHIRKY
Social & Technology Network Topology Researcher; Adjunct Professor, NYU Graduate School of Interactive Telecommunications Program (ITP)

Free will is going away. Time to redesign society to take that into account.
In 2002, a group of teenagers sued McDonald's for making them fat, charging, among other things, that McDonald's used promotional techniques to get them to eat more than they should. The suit was roundly condemned as an erosion of the sense of free will and personal responsibility in our society. Less widely remarked upon was that the teenagers were offering an accurate account of human behavior.

Consider the phenomenon of 'super-sizing', where a restaurant patron is offered the chance to increase the portion size of their meal for some small amount of money. This presents a curious problem for the concept of free will -- the patron has already made a calculation about the amount of money they are willing to pay in return for a particular amount of food. However, when the question is re-asked -- not "Would you pay $5.79 for this total amount of food?" but "Would you pay an additional 30 cents for more french fries?" -- patrons often say yes, despite having answered "No" moments before to an economically identical question. Super-sizing is expressly designed to subvert conscious judgment, and it works. By re-framing the question, fast food companies have found ways to take advantage of weaknesses in our analytical apparatus, weaknesses that are being documented daily in behavioral economics and evolutionary psychology.

This matters for more than just fat teenagers. Our legal, political, and economic systems, the mechanisms that run modern society, all assume that people are uniformly capable of consciously modulating their behaviors. As a result, we regard decisions they make as being valid, as with elections, and hold them responsible for actions they take, as in contract law or criminal trials. Then, in order to get around the fact that some people obviously aren't capable of consciously modulating their behavior, we carve out ad hoc exemptions. In U.S. criminal law, a 15-year-old who commits a crime is treated differently than a 16-year-old. A crime committed in the heat of the moment is treated specially. Some actions are not crimes because their perpetrator is judged mentally incapable, whether through developmental disabilities or other forms of legally defined insanity.

This theoretical divide, between the mass of people with a uniform amount of free will and a small set of exceptional individuals, has been broadly stable for centuries, in part because it was based on ignorance. As long as we were unable to locate any biological source of free will, treating the mass of people as if each of them had the same degree of control over their lives made perfect sense; no more refined judgments were possible. However, that binary notion of free will is being eroded as our understanding of the biological antecedents of behavior improves.

Consider laws concerning convicted pedophiles. Concern about their recidivism rate has led to the enactment of laws that restrict their freedom based on things they might do in the future, even though this expressly subverts the notion of free will in the judicial system. The formula here -- heinousness of crime x likelihood of repeat offense -- creates a new, non-insane class of criminals whose penalty is indexed to a perceived lack of control over themselves. But pedophilia is not unique in its measurably high recidivism rate. All rapists have higher than average recidivism rates. Thieves of all varieties are likelier to become repeat offenders if they have short time horizons or poor impulse control.
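The formula Shirky names is simple enough to write down. A toy sketch (the scales, numbers, and cutoff are invented for illustration; no real statute works off a two-line script):

    # Toy version of the "heinousness x likelihood of repeat offense" index.
    def risk_index(heinousness, p_reoffend):
        """Severity on a 0-10 scale times recidivism probability (0-1)."""
        return heinousness * p_reoffend

    THRESHOLD = 4.0  # hypothetical cutoff for post-sentence constraint

    cases = [
        ("same offense, poor impulse control", 5.0, 0.9),
        ("same offense, long time horizon",    5.0, 0.2),
        ("graver offense, high recidivism",    9.0, 0.7),
    ]
    for label, h, p in cases:
        score = risk_index(h, p)
        verdict = "constrain" if score >= THRESHOLD else "release"
        print(f"{label}: {score:.1f} -> {verdict}")

Identical crimes, different predicted futures, different penalties: that is the break with the old free-will framework.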
Will we keep more kinds of criminals constrained after their formal sentence is served, as we become better able to measure the likely degree of control they have over their own future actions? How can we, if we are to preserve the idea of personal responsibility? How can we not, once we are able to quantify the risk?

Criminal law is just one area where our concept of free will is eroding. We know that men make more aggressive decisions after they have been shown pictures of attractive female faces. We know women are more likely to commit infidelity on days they are fertile. We know that patients committing involuntary physical actions routinely (and incorrectly) report that they decided to undertake those actions, in order to preserve their sense that they are in control. We know that people will drive across town to save $10 on a $50 appliance, but not on a $25,000 car. We know that the design of the ballot affects a voter's choices. And we are still in the early days of even understanding these effects, much less designing everything from sales strategies to drug compounds to target them.

Conscious self-modulation of behavior is a spectrum. We have treated it as a single property -- you are either capable of free will, or you fall into an exceptional category -- because we could not identify, measure, or manipulate the various components that go into such self-modulation. Those days are now ending, and everyone from advertisers to political consultants increasingly understands, in voluminous biological detail, how to manipulate consciousness in ways that weaken our notion of free will.

In the coming decades, our concept of free will, based as it is on ignorance of its actual mechanisms, will be destroyed by what we learn about the actual workings of the brain. We can wait for that collision, and decide what to do then, or we can begin thinking through what sort of legal, political, and economic systems we need in a world where our old conception of free will is rendered inoperable.

_________________________________________________________________

MICHAEL SHERMER
Publisher of Skeptic magazine, monthly columnist for Scientific American; Author, Science Friction

Where goods cross frontiers, armies won't

Where goods cross frontiers, armies won't. Restated: where economic borders are porous between two nations, political borders become impervious to armies. Data from the new sciences of evolutionary economics, behavioral economics, and neuroeconomics reveal that when people are free to cooperate and trade (such as in game theory protocols) they establish trust that is reinforced through neural pathways that release such bonding hormones as oxytocin. Thus, modern biology reveals that where people are free to cooperate and trade they are less likely to fight and kill those with whom they are cooperating and trading.

My dangerous idea is a solution to what I call the "really hard problem": how best should we live? My answer: a free society, defined as free-market economics and democratic politics -- fiscal conservatism and social liberalism -- which leads to the greatest liberty for the greatest number. Since humans are, by nature, tribal, the overall goal is to expand the concept of the tribe to include all members of the species into a global free society. Free trade between all peoples is the surest way to reach this goal.

People have a hard time accepting free market economics for the same reason they have a hard time accepting evolution: it is counterintuitive.
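An aside: the "game theory protocols" Shermer alludes to are typified by the iterated prisoner's dilemma, in which reciprocity sustains cooperation. A minimal sketch with the standard payoffs (the setup is mine, not Shermer's):

    # Iterated prisoner's dilemma: mutual cooperation (3,3) beats mutual
    # defection (1,1); tit-for-tat cooperates first, then mirrors its partner.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(my_hist, their_hist):
        return their_hist[-1] if their_hist else "C"

    def always_defect(my_hist, their_hist):
        return "D"

    def play(p1, p2, rounds=100):
        h1, h2, s1, s2 = [], [], 0, 0
        for _ in range(rounds):
            m1, m2 = p1(h1, h2), p2(h2, h1)
            a, b = PAYOFF[(m1, m2)]
            s1, s2 = s1 + a, s2 + b
            h1.append(m1)
            h2.append(m2)
        return s1, s2

    print("tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat))    # (300, 300)
    print("tit-for-tat vs defector:   ", play(tit_for_tat, always_defect))  # (99, 104)

Two reciprocators end up far richer than two defectors would (300 points each against 100 each), which is the trade-builds-trust result in miniature.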
Life looks intelligently designed, so our natural inclination is to infer that there must be an intelligent designer -- a God. Similarly, the economy looks designed, so our natural inclination is to infer that we need a designer -- a Government. In fact, emergence and complexity theory explains how the principles of self-organization and emergence cause complex systems to arise from simple systems without a top-down designer. Charles Darwin's natural selection is Adam Smith's invisible hand. Darwin showed how complex design and ecological balance were unintended consequences of individual competition among organisms. Smith showed how national wealth and social harmony were unintended consequences of individual competition among people. Nature's economy mirrors society's economy. Thus, integrating evolution and economics -- what I call evonomics -- reveals that an old economic doctrine is supported by modern biology.

_________________________________________________________________

ARNOLD TREHUB
Psychologist, University of Massachusetts, Amherst; Author, The Cognitive Brain

Modern science is a product of biology

The entire conceptual edifice of modern science is a product of biology. Even the most basic and profound ideas of science -- think relativity, quantum theory, the theory of evolution -- are generated and necessarily limited by the particular capacities of our human biology. This implies that the content and scope of scientific knowledge is not open-ended.

_________________________________________________________________

ROGER C. SCHANK
Psychologist & Computer Scientist; Chief Learning Officer, Trump University; Author, Making Minds Less Well Educated than Our Own

No More Teacher's Dirty Looks

After a natural disaster, the newscasters eventually announce excitedly that school is finally open, so no matter what else is terrible where they live, the kids are going to school. I always feel sorry for the poor kids.

My dangerous idea is one that most people immediately reject without giving it serious thought: school is bad for kids -- it makes them unhappy and, as tests show, they don't learn much. When you listen to children talk about school you easily discover what they are thinking about in school: who likes them, who is being mean to them, how to improve their social ranking, how to get the teacher to treat them well and give them good grades.

Schools are structured today in much the same way as they have been for hundreds of years. And for hundreds of years philosophers and others have pointed out that school is really a bad idea:

We are shut up in schools and college recitation rooms for ten or fifteen years, and come out at last with a belly full of words and do not know a thing. -- Ralph Waldo Emerson

Education is an admirable thing, but it is well to remember from time to time that nothing that is worth knowing can be taught. -- Oscar Wilde

Schools should simply cease to exist as we know them. The Government needs to get out of the education business and stop thinking it knows what children should know and then testing them constantly to see if they regurgitate whatever they have just been spoon fed. The Government is and always has been the problem in education:

If the government would make up its mind to require for every child a good education, it might save itself the trouble of providing one.
It might leave to parents to obtain the education where and how they pleased, and content itself with helping to pay the school fees of the poorer classes of children, and defraying the entire school expenses of those who have no one else to pay for them. -- J.S. Mill

First, God created idiots. That was just for practice. Then He created school boards. -- Mark Twain

Schools need to be replaced by safe places where children can go to learn how to do things that they are interested in learning how to do. Their interests should guide their learning. The government's role should be to create places that are attractive to children and would cause them to want to go there.

Whence it comes to pass, that for not having chosen the right course, we often take very great pains, and consume a good part of our time in training up children to things, for which, by their natural constitution, they are totally unfit. -- Montaigne

We had a President many years ago who understood what education is really for. Nowadays we have ones that make speeches about the Pythagorean Theorem when we are quite sure they don't know anything about any theorem.

There are two types of education... One should teach us how to make a living, and the other how to live. -- John Adams

Over a million students have opted out of the existing school system and are now being home schooled. The problem is that the states regulate home schooling and home schooling still looks an awful lot like school. We need to stop producing a nation of stressed-out students who learn how to please the teacher instead of pleasing themselves. We need to produce adults who love learning, not adults who avoid all learning because it reminds them of the horrors of school. We need to stop thinking that all children need to learn the same stuff. We need to create adults who can think for themselves and are not conditioned to understand complex situations in simplistic terms that can be rendered in a sound bite.

Just call school off. Turn them all into apartment houses.

_________________________________________________________________

SUSAN BLACKMORE
Psychologist and Skeptic; Author, Consciousness: An Introduction

Everything is pointless

We humans can, and do, make up our own purposes, but ultimately the universe has none. All the wonderfully complex, and beautifully designed things we see around us were built by the same purposeless process -- evolution by natural selection. This includes everything from microbes and elephants to skyscrapers and computers, and even our own inner selves.

People have (mostly) got used to the idea that living things were designed by natural selection, but they have more trouble accepting that human creativity is just the same process operating on memes instead of genes. It seems, they think, to take away uniqueness, individuality and "true creativity". Of course it does nothing of the kind; each person is unique even if that uniqueness is explained by their particular combination of genes, memes and environment, rather than by an inner conscious self who is the fount of creativity.

_________________________________________________________________

DAVID LYKKEN
Behavioral geneticist and Emeritus Professor of Psychology, University of Minnesota; Author, Happiness

Laws requiring parental licensure

I believe that, during my grandchildren's lifetimes, the U.S. Supreme Court will find a way to approve laws requiring parental licensure.
Traditional societies in which children are socialized collectively, the method to which our species is evolutionarily adapted, have very little crime. In the modern U.S., the proportion of fatherless children, living with unmarried mothers, currently some 10 million in all, has increased more than 400% since 1960, while the violent crime rate rose 500% by 1994, before dipping slightly due to a delayed but equal increase in the number of prison inmates (from 240,000 to 1.4 million). In 1990, across the 50 States, the correlation between the violent crime rate and the proportion of illegitimate births was 0.70. About 70% of incarcerated delinquents, of teen-age pregnancies, of adolescent runaways, involve (I think result from) fatherless rearing. Because these frightening curves continue to accelerate, I believe we must eventually confront the need for parental licensure -- you can't keep that newborn unless you are 21, married and self-supporting -- not just for society's safety but so those babies will have a chance for life, liberty, and the pursuit of happiness.

_________________________________________________________________

CLIFFORD PICKOVER
Author, Sex, Drugs, Einstein, and Elves

We are all virtual

Our desire for entertaining virtual realities is increasing. As our understanding of the human brain also accelerates, we will create both imagined realities and a set of memories to support these simulacrums. For example, someday it will be possible to simulate your visit to the Middle Ages and, to make the experience realistic, we may wish to ensure that you believe yourself to actually be in the Middle Ages. False memories may be implanted, temporarily overriding your real memories. This should be easy to do in the future -- given that we can already coax the mind to create richly detailed virtual worlds filled with ornate palaces and strange beings through the use of the drug DMT (dimethyltryptamine). In other words, the brains of people who take DMT appear to access a treasure chest of images and experience that typically include jeweled cities and temples, angelic beings, feline shapes, serpents, and shiny metals. When we understand the brain better, we will be able to safely generate more controlled visions.

Our brains are also capable of simulating complex worlds when we dream. For example, after I watched a movie about people in a coastal town during the time of the Renaissance, I was "transported" there later that night while in a dream. The mental simulation of the Renaissance did not have to be perfect, and I'm sure that there were myriad flaws. However, during that dream I believed I was in the Renaissance. If we understood the nature of how the mind induces the conviction of reality, even when strange, nonphysical events happen in the dreams, we could use this knowledge to ensure that your simulated trip to the Middle Ages seemed utterly real, even if the simulation was imperfect. It will be easy to create seemingly realistic virtual realities because we don't have to be perfect or even good with respect to the accuracy of our simulations in order to make them seem real. After all, our nightly dreams usually seem quite real even if upon awakening we realize that logical or structural inconsistencies existed in the dream.

In the future, for each of your own real lives, you will personally create ten simulated lives. Your day job is a computer programmer for IBM.
However, after work, you'll be a knight in shining armor in the Middle Ages, attending lavish banquets, and smiling at wandering minstrels and beautiful princesses. The next night, you'll be in the Renaissance, living in your home on the Amalfi coast of Italy, enjoying a dinner of plover, pigeon, and heron. If this ratio of one real life to ten simulated lives turned out to be representative of human experience, this means that right now you have only a one-in-eleven chance (one real life out of the eleven in all) of being alive on the actual date of today.

_________________________________________________________________

JOHN ALLEN PAULOS
Professor of Mathematics, Temple University, Philadelphia; Author, A Mathematician Plays the Stock Market

The self is a conceptual chimera

Doubt that a supernatural being exists is banal, but the more radical doubt that we exist, at least as anything more than nominal, marginally integrated entities having convenient labels like "Myrtle" and "Oscar," is my candidate for Dangerous Idea. This is, of course, Hume's idea -- and Buddha's as well -- that the self is an ever-changing collection of beliefs, perceptions, and attitudes, that it is not an essential and persistent entity, but rather a conceptual chimera. If this belief ever became widely and viscerally felt throughout a society -- whether because of advances in neurobiology, cognitive science, philosophical insights, or whatever -- its effects on that society would be incalculable. (Or so this assemblage of beliefs, perceptions, and attitudes sometimes thinks.)

_________________________________________________________________

JAMES O'DONNELL
Classicist; Cultural Historian; Provost, Georgetown University; Author, Avatars of the Word

Marx was right: the "state" will evaporate and cease to have useful meaning as a form of human organization

From the earliest Babylonian and Chinese moments of "civilization", we have agreed that human affairs depend on an organizing power in the hands of a few people (usually with religious charisma to undergird their authority) who reside in a functionally central location. "Political science" assumes in its etymology the "polis" or city-state of Greece as the model for community and government. But it is remarkable how little of human excellence and achievement has ever taken place in capital cities and around those elites, whose cultural history is one of self-mockery and implicit acceptance of the marginalization of the powerful. Borderlands and frontiers (and even suburbs) are where the action is.

But as long as technologies of transportation and military force emphasized geographic centralization and concentration of forces, the general or emperor or president in his capital with armies at his beck and call was the most obvious focus of power. Enlightened government constructed mechanisms to restrain and channel such centralized authority, but did not effectively challenge it.

So what advantage is there today to the nation state? Boundaries between states enshrine and exacerbate inequalities and prevent the free movement of peoples. Large and prosperous state and state-related organizations and locations attract the envy and hostility of others and are sitting-duck targets for terrorist action. Technologies of communication and transportation now make geographically defined communities increasingly irrelevant and provide the new elites and new entrepreneurs with ample opportunity to stand outside them.
Economies construct themselves in spite of state management, and money flees taxation as relentlessly as water follows gravity. Who will undergo the greatest destabilization as the state evaporates and its artificial protections and obstacles disappear? The sooner it happens, the more likely it is to be the United States. The longer it takes ... well, perhaps the new Chinese empire isn't quite the landscape-dominating leviathan of the future that it wants to be. Perhaps in the end it will be Mao who was right, and a hundred flowers will bloom there.

_________________________________________________________________

PHILIP ZIMBARDO
Professor Emeritus of Psychology at Stanford University; Author, Shyness

The banality of evil is matched by the banality of heroism

Those people who become perpetrators of evil deeds and those who become perpetrators of heroic deeds are basically alike in being just ordinary, average people. The banality of evil is matched by the banality of heroism. Neither is the consequence of dispositional tendencies, of special inner attributes of pathology or goodness residing within the human psyche or the human genome. Both emerge in particular situations at particular times, when situational forces play a compelling role in moving individuals across the decisional line from inaction to action.

There is a decisive moment when the individual is caught up in a vector of forces emanating from the behavioral context. Those forces combine to increase the probability of acting to harm others or acting to help others. That decision may not be consciously planned or taken mindfully, but impulsively driven by strong situational forces external to the person. Among those action vectors are group pressures and group identity, diffusion of responsibility, and temporal focus on the immediate moment without entertaining costs and benefits in the future, among others.

The military police guards who abused prisoners at Abu Ghraib and the prison guards in my Stanford Prison Experiment who abused their prisoners illustrate the "Lord of the Flies" temporary transition of ordinary individuals into perpetrators of evil. We set aside those whose evil behavior is enduring and extensive, such as tyrants like Idi Amin, Stalin and Hitler. Heroes of the moment are also contrasted with lifetime heroes.

The heroic action of Rosa Parks on a Southern bus, of Joe Darby in exposing the Abu Ghraib tortures, of NYC firefighters at the World Trade Center's disaster are acts of bravery at that time and place. The heroism of Mother Teresa, Nelson Mandela, and Gandhi is replete with valorous acts repeated over a lifetime. That chronic heroism is to acute heroism as valour is to bravery.

This view implies that any of us could as easily become heroes as perpetrators of evil depending on how we are impacted by situational forces. We then want to discover how to limit, constrain, and prevent those situational and systemic forces that propel some of us toward social pathology. It is equally important for our society to foster the heroic imagination in our citizens by conveying the message that anyone is a hero-in-waiting who will be counted upon to do the right thing when the time comes to make the heroic decision to act to help or to act to prevent harm.
_________________________________________________________________

RICHARD FOREMAN
Founder & Director, Ontological-Hysteric Theater

Radicalized relativity

In my area of the arts and humanities, the most dangerous idea (and the one under whose influence I have operated throughout my artistic life) is the complete relativity of all positions and styles of procedure: the notion that there are no "absolutes" in art -- and that in the modern era, each valuable effort has been, in one way or another, the highlighting and glorification of elements previously "off limits" and rejected by the previous "classical" style. Such a continual "reversal of values" has of course delivered us into the current post-post-modern era, in which fragmentation, surface value and the complex weave of "sampling procedure" dominate, and "the center does not hold."

I realize that my own artistic efforts have, in a small way, contributed to the current aesthetic/emotional environment in which the potential spiritual depth and complexity of evolved human consciousness is trumped by the bedazzling shuffle of the shards of inherited elements -- never before as available to the collective consciousness. The resultant orientation towards "cultural relativity" in the arts certainly comes in part from the psychic re-orientation resulting from Einstein's bombshell dropped at the beginning of the last century.

This current "relativity" of all artistic, philosophical, and psychological values leaves the culture adrift, and yet there is no "going back," in spite of what conservative thinkers often recommend. At the very moment of our cultural origin, we were warned against "eating from the tree of knowledge." Down through subsequent history, one thing has led to another, until now -- here we are, sinking into the quicksand of the ever-accelerating reversal of each latest value (or artistic style). And yet there are many artists, like myself, committed to the belief that -- having been "thrown by history" into the dangerous trajectory initiated by the inaugural "eating from the tree of knowledge" (a perhaps "fatal curiosity" programmed into our genes) -- the only escape possible is to treat the quicksand of the present as a metaphorical "black hole" through which we must pass -- indeed risking psychic destruction (or "banalization") -- for the promise of emerging re-made, in new, still unimaginable form, on the other side. This is the "heroic wager" the serious "experimental" artist makes in living through the dangerous idea of radicalized relativity.

It is ironic, of course, that many of our greatest scientists (not all, of course) have little patience for the adventurous art of our times (post-Stockhausen/Boulez music, post-Joyce/Mallarme literature) and seem to believe that a return to a safer "audience friendly" classical style is the only responsible method for today's artists. Do they perhaps feel psychologically threatened by advanced styles that supersede previous principles of coherence? They are right to feel threatened by such dangerous advances into territory for which conscious sensibility is not yet fully prepared. Yet it is time for all serious minds to "bite the bullet" of such forays into the unknown world in which the dangerous quest for deeper knowledge leads scientist and artist alike.
_________________________________________________________________

JOHN GOTTMAN
Psychologist; Founder of Gottman Institute; Author, The Mathematics of Marriage
[gottman100.jpg]

Emotional intelligence

The most dangerous idea I know of is emotional intelligence. Within the context of the cognitive neuroscience revolution in psychology, the focus on emotions is extraordinary. The over-arching ideas -- that there is such a thing as emotional intelligence, that it has a neuroscience, and that it is inter-personal, i.e., between two brains rather than within one brain -- are all quite revolutionary concepts about human psychology. I could go on. It is also a revolution in thinking about infancy, couples, family, adult development, aging, etc.

_________________________________________________________________

PIET HUT
Professor of Astrophysics, Institute for Advanced Study, Princeton
[hut100.jpg]

A radical reevaluation of the character of time

Copernicus and Darwin took away our traditional place in the world and our traditional identity in the world. What traditional trait will be taken away from us next? My guess is that it will be the world itself. We see the first few steps in that direction in the physics, mathematics and computer science of the twentieth century, from quantum mechanics to the results obtained by Gödel, Turing and others. The ontologies of our worlds, concrete as well as abstract, have already started to melt away. The problem is that quantum entanglement and logical incompleteness lack the in-your-face quality of a spinning earth and our kinship with apes. We will have to wait for the ontology of the traditional world to unravel further before the avant-garde insights turn into a real revolution.

Copernicus upset the moral order by dissolving the strict distinction between heaven and earth. Darwin did the same by dissolving the strict distinction between humans and other animals. Could the next step be the dissolution of the strict distinction between reality and fiction? For this to be shocking, it has to come in a scientifically respectable way, as a very precise and inescapable conclusion -- it should have the technical strength of a body of knowledge like quantum mechanics, as opposed to collections of opinions on the level of cultural relativism.

Perhaps a radical reevaluation of the character of time will do it. In everyday experience, time flows, and we flow with it. In classical physics, time is frozen as part of a frozen spacetime picture. And there is, as yet, no agreed-upon interpretation of time in quantum mechanics. What if a future scientific understanding of time were to show all previous pictures to be wrong, and demonstrate that past and future and even the present do not exist? That stories woven around our individual personal history and future are all just wrong? Now that would be a dangerous idea.

_________________________________________________________________

DAN SPERBER
Social and cognitive scientist, CNRS, Paris; author, Explaining Culture
[sperber100.jpg]

Culture is natural

A number of us -- biologists, cognitive scientists, anthropologists or philosophers -- have been trying to lay down the foundations for a truly naturalistic approach to culture. Sociobiologists and cultural ecologists have explored the idea that cultural behaviors are biological adaptations to be explained in terms of natural selection.
Memeticists inspired by Richard Dawkins argue that cultural evolution is an autonomous Darwinian selection process merely enabled but not governed by biological evolution. Evolutionary psychologists, Cavalli-Sforza, Feldman, Boyd and Richerson, and I are among those who, in different ways, argue for more complex interactions between biology and culture. These naturalistic approaches have been received not just with intellectual objections, but also with moral and political outrage: this is a dangerous idea, to be strenuously resisted, for it threatens humanistic values and sound social sciences.

When I am called a "reductionist", I take it as a misplaced compliment: a genuine reduction is a great scientific achievement, but, too bad, the naturalistic study of culture I advocate does not reduce to that of biology or of psychology. When I am called a "positivist" (an insult among postmodernists), I acknowledge without any sense of guilt or inadequacy that indeed I don't believe that all facts are socially constructed. On the whole, having one's ideas described as "dangerous" is flattering. Dangerous ideas are potentially important. Braving insults and misrepresentations in defending these ideas is noble. Many advocates of naturalistic approaches to culture see themselves as a group of free-thinking, deep-probing scholars besieged by bigots.

But wait a minute! Naturalistic approaches can be dangerous: after all, they have been. The use of biological evidence and arguments purported to show that there are profound natural inequalities among human "races", ethnic groups, or between women and men is only too well represented in the history of our disciplines. It is not good enough for us to point out (rightly) that 1) the science involved is bad science, 2) even if some natural inequality were established, it would not come near justifying any inequality in rights, and 3) postmodernists criticizing naturalism on political grounds should begin by rejecting Heidegger and other reactionaries in their pantheon who also have been accomplices of policies of discrimination. This is not enough because the racist and sexist uses of naturalism are not exactly unfortunate accidents. Species evolve because of genetic differences among their members; therefore you cannot leave biological difference out of a biological approach. Luckily, it so happens that biological differences among humans are minor and don't produce sub-species or "races," and that human sexual dimorphism is relatively limited. In particular, all humans have mind/brains made up of the same mechanisms, with just fine-tuning differences. (Think how very different all this would be if -- however improbably -- Neanderthals had survived and developed culturally as we did, so that there really were different human "races".)

Given what anthropologists have long called "the psychic unity of the human kind", the fundamental goal for a naturalistic approach is to explain how a common human nature -- and not biological differences among humans -- gives rise to such a diversity of languages, cultures and social organizations. Given the real and present danger of distortion and exploitation, it must be part of our agenda to take responsibility for the way this approach is understood by a wider public. This, happily, has been done by a number of outstanding authors capable of explaining serious science to lay audiences, who typically have made the effort of warning their readers against misuses of biology.
So the danger is being averted, and let's just move on? No, we are not there yet, because the very necessity of popularizing the naturalistic approach, and the very talent with which this is being done, creates a new danger: that of arrogance. We naturalists do have radical objections to what Leda Cosmides and John Tooby have called the "Standard Social Science Model." We have many insightful hypotheses and even some relevant data. The truth of the matter, however, is that naturalistic approaches to culture have so far remained speculative, hardly beginning to throw light on just fragments of the extraordinarily wide range of detailed evidence accumulated by historians, anthropologists, sociologists and others. Many of those who find our ideas dangerous fear what they see as an imperialistic bid to take over their domain. The bid would be unrealistic, and so is the fear. The real risk is different. The social sciences host a variety of approaches, which, with a few high-profile exceptions, all contribute to our understanding of the domain. Even if it involves some reshuffling, a naturalistic approach should be seen as a particularly welcome and important addition. But naturalists full of grand claims and promises but with little interest in the competence accumulated by others are, if not exactly dangerous, at least much less useful than they should be, and the deeper challenge they present to social scientists' mental habits is less likely to be properly met.

_________________________________________________________________

MARTIN E.P. SELIGMAN
Psychologist, University of Pennsylvania; Author, Authentic Happiness
[seligman100.jpg]

Relativism

Looking back over the scientific and artistic breakthroughs of the 20th century, there is a view that the great minds relativized the absolute. Did this go too far? Has relativism gotten to a point that it is dangerous to the scientific enterprise and to human well-being? The most visible person to say this is none other than Pope Benedict XVI in his denunciations of the "dictatorship of the relative." But worries about relativism are not only a matter of dispute in theology; there are parallel dissenters from the relative in science, in philosophy, in ethics, in mathematics, in anthropology, in sociology, in the humanities, in childrearing, and in evolutionary biology. Here are some of the domains in which serious thinkers have worried about the overdoing of relativism:

o In philosophy of science, there is ongoing tension between the Kuhnians (science is about "paradigms," the fashions of the current discipline) and the realists (science is about finding the truth).

o In epistemology there is the dispute between the Tarskian correspondence theorists ("p" is true if p) and two relativistic camps, the coherence theorists ("p" is true to the extent it coheres with what you already believe is true) and the pragmatic theorists of truth ("p" is true if it gets you where you want to go).

o At the ethics/science interface, there is the fact/value dispute: the contention that science must and should incorporate the values of the culture in which it arises versus the contention that science is and should be value-free.

o In mathematics, Gödel's incompleteness proof was widely interpreted as showing that mathematics is relative; but Gödel, a Platonist, intended the proof to support the view that there are statements that could not be proved within the system but are true nevertheless.
Einstein, similarly, believed that the theory of relativity was misconstrued in just the same way by the "man is the measure of all things" relativists.

o In the sociology of high accomplishment, Charles Murray (Human Accomplishment) documents that the highest accomplishments occur in cultures that believe in absolute truth, beauty, and goodness. The accomplishments of cultures that do not believe in absolute beauty, he contends, tend to be ugly; of cultures that do not believe in absolute goodness, immoral; and of cultures that do not believe in absolute truth, false.

o In anthropology, pre-Boasians believed that cultures were hierarchically ordered into savage, barbarian, and civilized, whereas much of modern anthropology holds that all social forms are equal. This is the intellectual basis of the sweeping cultural relativism that dominates the humanities in academia.

o In evolution, Robert Wright (like Aristotle) argues for a scala naturae, with the direction of evolution favoring complexity by its invisible hand, whereas Stephen Jay Gould argued that the fern is just as highly evolved as Homo sapiens. Does evolution have an absolute direction, and are humans further along that trajectory than ferns?

o In child-rearing, much of twentieth-century education was profoundly influenced by the "Summerhillians", who argued that complete freedom produced the best children, whereas other schools of parenting, education, and therapy argue for disciplined, authoritative guidance.

o Even in literature, arguments over what should go into the canon revolve around the absolute-relative controversy.

o Ethical relativism and its opponents are all too obvious instances of this issue.

I do not know if the dilemmas in these domains are only metaphorically parallel to one another. I do not know whether illumination in one domain will illuminate the others. But it might, and it is just possible that the great minds of the twenty-first century will absolutize the relative.

_________________________________________________________________

HOWARD GARDNER
Psychologist, Harvard University; Author, Changing Minds
[gardner100.jpg]

Following Sisyphus, not Pandora

According to myth, Pandora unleashed all evils upon the world; only hope remained inside the box. Hope for human survival and progress rests on two assumptions: (1) Human constructive tendencies can counter human destructive tendencies, and (2) Human beings can act on the basis of long-term considerations, rather than merely short-term needs and desires. My personal optimism, and my years of research on "good work", could not be sustained without these assumptions. Yet I lie awake at night with the dangerous thought that the pessimists may be right.

For the first time in history -- as far as we know! -- we humans live in a world that we could completely destroy. The human destructive tendencies described in the past by Thomas Hobbes and Sigmund Freud, the "realist" picture of human beings embraced more recently by many sociobiologists, evolutionary psychologists, and game theorists, might be correct; these tendencies could overwhelm any proclivities toward altruism, protection of the environment, control of weapons of destruction, progress in human relations, or seeking to become good ancestors. As one vivid data point: there are few signs that the unprecedented power possessed by the United States is being harnessed to positive ends.

Strictly speaking, what will happen to the species or the planet is not a question for scientific study or prediction.
It is a question of probabilities, based on historical and cultural considerations, as well as our most accurate description of human nature(s). Yet science (as reflected, for example, in contributions to Edge discussions) has recently invaded this territory with its assertions of a biologically based human moral sense. Those who assert a human moral sense are wagering that, in the end, human beings will do the right thing. Of course, human beings have the capacity to make moral judgments -- that is a mere truism. But my dangerous thought is that this moral sense is up for grabs -- that it can be mobilized for destructive ends (one society's terrorist is another society's freedom fighter) or overwhelmed by other senses and other motivations, such as the quest for power, instant gratification, or annihilation of one's enemies. I will continue to do what I can to encourage good work -- in that sense, Pandoran hope remains. But I will not look to science, technology, or religion to preserve life. Instead, I will follow Albert Camus' injunction, in his portrayal of another mythic figure endlessly attempting to push a rock up a hill: one should imagine Sisyphus happy.

From mario7k at gmail.com Fri Jan 6 02:02:56 2006
From: mario7k at gmail.com (Mario Ribeiro)
Date: Thu, 5 Jan 2006 22:02:56 -0400
Subject: [Paleopsych] Independent: 'Chronic happiness' the key to success
In-Reply-To: References: Message-ID: <7f0d59090601051802m48e87f6et7db65f4885553584@mail.gmail.com>

Yeah, definitely, send it to me. It's interesting, the idea that happiness and cheerfulness may, and should, come before you achieve whatever you're looking for... not after...

Hey dad, I bought 4 books on amazon.com for university, two of them digital... I'm studying here... remotely... not bad: The Growth of the International Economy; Globalization; Sociology; The Problem of Sociology.

Kisses, M

On 1/2/06, Premise Checker wrote:
>
> http://news.independent.co.uk/world/science_technology/article333972.ece
> 19 December 2005 10:27
>
> By Lyndsay Moss
> Published: 19 December 2005
>
> The key to success may be "chronic happiness" rather than simply hard
> work and the right contacts, psychologists have found.
>
> Many assume a successful career and personal life leads to happiness.
> But psychologists in the US say happiness can bring success.
>
> Researchers from the universities of California, Missouri and Illinois
> examined connections between desirable characteristics, life success and
> well-being in more than 275,000 people.
> They found that happy individuals were predisposed to seek out new goals
> in life, leading to success, which also reinforced their already
> positive emotions.
>
> The psychologists addressed questions such as whether happy people were
> more successful than unhappy people, and whether happiness came before
> or after a perceived success.
>
> Writing in Psychological Bulletin, published by the American
> Psychological Association, they concluded that "chronically happy
> people" were generally more successful in many areas of life than less
> happy people.

From checker at panix.com Fri Jan 6 03:44:09 2006
From: checker at panix.com (Premise Checker)
Date: Thu, 5 Jan 2006 22:44:09 -0500 (EST)
Subject: [Paleopsych] Discovery: Robot Demonstrates Self Awareness
Message-ID:

Robot Demonstrates Self Awareness
http://dsc.discovery.com/news/briefs/20051219/awarerobot_tec.html

[Uh oh... too late for AI friendliness, time to buy How to Survive a Robot Uprising.]

By Tracy Staedter, Discovery News

Dec. 21, 2005 -- A new robot can recognize the difference between a mirror image of itself and another robot that looks just like it. This so-called mirror image cognition is based on artificial nerve cell groups built into the robot's computer brain that give it the ability to recognize itself and acknowledge others. The ground-breaking technology could eventually lead to robots able to express emotions.

Under development by Junichi Takeno and a team of researchers at Meiji University in Japan, the robot represents a big step toward developing self-aware robots and toward understanding and modeling human self-consciousness. "In humans, consciousness is basically a state in which the behavior of the self and another is understood," said Takeno.

Humans learn behavior during cognition and, conversely, learn to think while behaving, said Takeno. To mimic this dynamic, a robot needs a common area in its neural network that is able to process information on both cognition and behavior. Takeno and his colleagues built the robot with blue, red and green LEDs connected to artificial neurons in this region, which light up when different information is being processed, based on the robot's behavior. "The innovative part is the independent nodes in the hierarchical levels that can be linked and activated," said Thomas Bock of the Technical University of Munich in Germany.

For example, two red diodes illuminate when the robot is performing behavior it considers its own, and two green bulbs light up when the robot acknowledges behavior being performed by the other. One blue LED flashes when the robot is both recognizing behavior in another robot and imitating it. Imitation, said Takeno, is an act that requires both seeing a behavior in another and instantly transferring it to oneself, and is the best evidence of consciousness.

In one experiment, a robot representing the "self" was paired with an identical robot representing the "other." When the self robot moved forward, stopped or backed up, the other robot did the same. The pattern of neurons firing and the subsequent flashes of blue light indicated that the self robot understood that the other robot was imitating its behavior.
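The article gives few details of the underlying architecture, but the judgement that the LEDs externalise can be sketched in a few lines of Python. Everything below -- the action names, the thresholds, the one-step-lag rule -- is invented for illustration rather than taken from Takeno's system: the idea is simply that a mirror image matches your own commands with no delay, while an imitating robot matches them one step late.

    # Toy sketch (not Takeno's actual system): compare the motor commands
    # a robot issued with the motion it observes in another body. A mirror
    # image matches with no delay; an imitating robot matches with a lag.

    def classify(own_cmds, observed):
        n = len(own_cmds)
        # fraction of steps where the observed motion equals our command
        instant = sum(a == b for a, b in zip(own_cmds, observed)) / n
        # fraction where it equals our command from the previous step
        lagged = sum(a == b for a, b in zip(own_cmds, observed[1:])) / (n - 1)
        if instant > 0.9:
            return "self (mirror image)"
        if lagged > 0.9:
            return "other robot imitating me"
        return "other robot"

    self_moves = ["fwd", "stop", "back", "fwd", "stop", "back"]
    mirror = self_moves                      # a reflection moves instantly
    copier = ["idle"] + self_moves[:-1]      # a copier moves one step late

    print(classify(self_moves, mirror))      # -> self (mirror image)
    print(classify(self_moves, copier))      # -> other robot imitating me

On this toy reading, the reported 70 percent success rate would correspond to noisy matching around such a threshold; the real robot presumably operates on patterns of firing neurons rather than symbolic commands.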
In another experiment, the researchers placed the self robot in front of a mirror. In this case, the self robot and the reflection (something it could interpret as another robot) moved forward and back at the same time. Although the blue lights fired, they did so less frequently than in other experiments. In fact, 70 percent of the time, the robot understood that the mirror image was itself. Takeno's goal is to reach 100 percent in the coming year.

From checker at panix.com Fri Jan 6 03:44:19 2006
From: checker at panix.com (Premise Checker)
Date: Thu, 5 Jan 2006 22:44:19 -0500 (EST)
Subject: [Paleopsych] Economist: The robots are coming... from Japan...
Message-ID:

The robots are coming... from Japan...
http://www.economist.com/World/asia/displayStory.cfm?story_id=5323427

Special Report: Japan's humanoid robots
Better than people
Dec 20th 2005 | TOKYO
Why the Japanese want their robots to act more like humans

HER name is MARIE, and her impressive set of skills comes in handy in a nursing home. MARIE can walk around under her own power. She can distinguish among similar-looking objects, such as different bottles of medicine, and has a delicate enough touch to work with frail patients. MARIE can interpret a range of facial expressions and gestures, and respond in ways that suggest compassion. Although her language skills are not ideal, she can recognise speech and respond clearly. Above all, she is inexpensive. Unfortunately for MARIE, however, she has one glaring trait that makes it hard for Japanese patients to accept her: she is a flesh-and-blood human being from the Philippines. If only she were a robot instead.

Robots, you see, are wonderful creatures, as many a Japanese will tell you. They are getting more adept all the time, and before too long will be able to do cheaply and easily many tasks that human workers do now. They will care for the sick, collect the rubbish, guard homes and offices, and give directions on the street. This is great news in Japan, where the population has peaked, and may have begun shrinking in 2005. With too few young workers supporting an ageing population, somebody -- or something -- needs to fill the gap, especially since many of Japan's young people will be needed in science, business and other creative or knowledge-intensive jobs.

Many workers from low-wage countries are eager to work in Japan. The Philippines, for example, has over 350,000 trained nurses, and has been pleading with Japan -- which accepts only a token few -- to let more in. Foreign pundits keep telling Japan to do itself a favour and make better use of cheap imported labour. But the consensus among Japanese is that visions of a future in which immigrant workers live harmoniously and unobtrusively in Japan are pure fancy. Making humanoid robots is clearly the simple and practical way to go.

Japan certainly has the technology. It is already the world leader in making industrial robots, which look nothing like pets or people but increasingly do much of the work in its factories. Japan is also racing far ahead of other countries in developing robots with more human features, or that can interact more easily with people. A government report released this May estimated that the market for "service robots" will reach ¥1.1 trillion ($10 billion) within a decade.

The country showed off its newest robots at a world exposition this summer in Aichi prefecture. More than 22m visitors came, 95% of them Japanese. The robots stole the show, from the nanny robot that babysits to a Toyota that plays a trumpet. And Japan's robots do not confine their talents to controlled environments.
As they gain skills and confidence, robots such as Sony's QRIO (pronounced "curio") and Honda's ASIMO are venturing to unlikely places. They have attended factory openings, greeted foreign leaders, and rung the opening bell on the New York Stock Exchange. ASIMO can even take the stage to accept awards.

The friendly face of technology

So Japan will need workers, and it is learning how to make robots that can do many of their jobs. But the country's keen interest in robots may also reflect something else: it seems that plenty of Japanese really like dealing with robots. Few Japanese have the fear of robots that seems to haunt westerners in seminars and Hollywood films. In western popular culture, robots are often a threat, either because they are manipulated by sinister forces or because something goes horribly wrong with them. By contrast, most Japanese view robots as friendly and benign. Robots like people, and can do good.

The Japanese are well aware of this cultural divide, and commentators devote lots of attention to explaining it. The two most favoured theories, which are assumed to reinforce each other, involve religion and popular culture. Most Japanese take an eclectic approach to religious beliefs, and the native religion, Shintoism, is infused with animism: it does not make clear distinctions between inanimate things and organic beings. A popular Japanese theory about robots, therefore, is that there is no need to explain why Japanese are fond of them: what needs explaining, rather, is why westerners allow their Christian hang-ups to get in the way of a good technology. When Honda started making real progress with its humanoid-robot project, it consulted the Vatican on whether westerners would object to a robot made in man's image.

Japanese popular culture has also consistently portrayed robots in a positive light, ever since Japan created its first famous cartoon robot, Tetsuwan Atomu, in 1951. Its name in Japanese refers to its atomic heart. Putting a nuclear core into a cartoon robot less than a decade after Hiroshima and Nagasaki might seem an odd way to endear people to the new character. But Tetsuwan Atom -- being a robot, rather than a human -- was able to use the technology for good. Over the past half century, scores of other Japanese cartoons and films have featured benign robots that work with humans, in some cases even blending with them. One of the latest is a film called "Hinokio", in which a reclusive boy sends a robot to school on his behalf and uses virtual-reality technology to interact with classmates. Among the broad Japanese public, it is a short leap to hope that real-world robots will soon be able to pursue good causes, whether helping to detect landmines in war-zones or finding and rescuing victims of disasters.

The prevailing view in Japan is that the country is lucky to be uninhibited by robophobia. With fewer of the complexes that trouble many westerners, so the theory goes, Japan is free to make use of a great new tool, just when its needs and abilities are happily about to converge. "Of all the nations involved in such research," the Japan Times wrote in a 2004 editorial, "Japan is the most inclined to approach it in a spirit of fun."

These sanguine explanations, however, may capture only part of the story. Although they are at ease with robots, many Japanese are not as comfortable around other people. That is especially true of foreigners. Immigrants cannot be programmed as robots can.
You never know when they will do something spontaneous, ask an awkward question, or use the wrong honorific in conversation. But, even leaving foreigners out of it, being Japanese, and having always to watch what you say and do around others, is no picnic.

It is no surprise, therefore, that Japanese researchers are forging ahead with research on human interfaces. For many jobs, after all, lifelike features are superfluous. A robotic arm can gently help to lift and reposition hospital patients without being attached to a humanoid form. The same goes for robotic spoons that make it easier for the infirm to feed themselves, power suits that help lift heavy grocery bags, and a variety of machines that watch the house, vacuum the carpet and so on. Yet the demand for better robots in Japan goes far beyond such functionality. Many Japanese seem to like robot versions of living creatures precisely because they are different from the real thing.

An obvious example is AIBO, the robotic dog that Sony began selling in 1999. The bulk of its sales have been in Japan, and the company says there is a big difference between Japanese and American consumers. American AIBO buyers tend to be computer geeks who want to hack the robotic dog's programming and delve in its innards. Most Japanese consumers, by contrast, like AIBO because it is a clean, safe and predictable pet. AIBO is just a fake dog.

As the country gets better at building interactive robots, their advantages for Japanese users will multiply. Hiroshi Ishiguro, a roboticist at Osaka University, cites the example of asking directions. In Japan, says Mr Ishiguro, people are even more reluctant than in other places to approach a stranger. Building robotic traffic police and guides will make it easier for people to overcome their diffidence. Karl MacDorman, another researcher at Osaka, sees similar social forces at work. Interacting with other people can be difficult for the Japanese, he says, "because they always have to think about what the other person is feeling, and how what they say will affect the other person." But it is impossible to embarrass a robot, or be embarrassed, by saying the wrong thing.

To understand how Japanese might find robots less intimidating than people, Mr MacDorman has been investigating eye movements, using headsets that monitor where subjects are looking. One oft-cited myth about Japanese, that they rarely make eye contact, is not really true. When answering questions put by another Japanese, Mr MacDorman's subjects made eye contact around 30% of the time. But Japanese subjects behave intriguingly when they talk to Mr Ishiguro's android, ReplieeQ1. The android's face has been modelled on that of a famous newsreader, and sophisticated actuators allow it to mimic her facial movements. When answering the android's questions, Mr MacDorman's Japanese subjects were much more likely to look it in the eye than they were a real person. Mr MacDorman wants to do more tests, but he surmises that the discomfort many Japanese feel when dealing with other people has something to do with his results, and that they are much more at ease when talking to an android.

Eventually, interactive robots are going to become more common, not just in Japan but in other rich countries as well. As children and the elderly begin spending time with them, they are likely to develop emotional reactions to such lifelike machines. That is human nature. Upon meeting Sony's QRIO, your correspondent promptly referred to it as "him"
three times, despite trying to remember that it is just a battery-operated device. What seems to set Japan apart from other countries is that few Japanese are all that worried about the effects that hordes of robots might have on its citizens. Nobody seems prepared to ask awkward questions about how it might turn out. If this bold social experiment produces lots of isolated people, there will of course be an outlet for their loneliness: they can confide in their robot pets and partners. Only in Japan could this be thought less risky than having a compassionate Filipina drop by for a chat.

From checker at panix.com Fri Jan 6 03:44:28 2006
From: checker at panix.com (Premise Checker)
Date: Thu, 5 Jan 2006 22:44:28 -0500 (EST)
Subject: [Paleopsych] New Scientist: Civilisation has left its mark on our genes
Message-ID:

Civilisation has left its mark on our genes
http://www.newscientist.com/article.ns?id=dn8483&print=true
22:00 19 December 2005
Bob Holmes

Darwin's fingerprints can be found all over the human genome. A detailed look at human DNA has shown that a significant percentage of our genes have been shaped by natural selection in the past 50,000 years, probably in response to aspects of modern human culture such as the emergence of agriculture and the shift towards living in densely populated settlements.

One way to look for genes that have recently been changed by natural selection is to study mutations called single-nucleotide polymorphisms (SNPs): single-letter differences in the genetic code. The trick is to look for pairs of SNPs that occur together more often than would be expected from the chance genetic reshuffling that inevitably happens down the generations. Such correlations are known as linkage disequilibrium, and can occur when natural selection favours a particular variant of a gene, causing the SNPs nearby to be selected as well.

Robert Moyzis and his colleagues at the University of California, Irvine, US, searched for instances of linkage disequilibrium in a collection of 1.6 million SNPs scattered across all the human chromosomes. They then looked carefully at the instances they found to distinguish the consequences of natural selection from other phenomena, such as random inversions of chunks of DNA, which can disrupt normal genetic reshuffling.

This analysis suggested that around 1800 genes, or roughly 7% of the total in the human genome, have changed under the influence of natural selection within the past 50,000 years. A second analysis using a second SNP database gave similar results. That is roughly the same proportion of genes that were altered in maize when humans domesticated it from its wild ancestors.

Domesticated humans

Moyzis speculates that we may have similarly domesticated ourselves with the emergence of modern civilisation. "One of the major things that has happened in the last 50,000 years is the development of culture," he says. "By so radically and rapidly changing our environment through our culture, we've put new kinds of selection [pressures] on ourselves."

Genes that aid protein metabolism -- perhaps related to a change in diet with the dawn of agriculture -- turn up unusually often in Moyzis's list of recently selected genes. So do genes involved in resisting infections, which would be important in a species settling into more densely populated villages where diseases would spread more easily. Other selected genes include those involved in brain function, which could be important in the development of culture.
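The statistic at the heart of such scans is simple to state, even though the actual analysis pipeline is far more elaborate. Below is a minimal Python sketch, with invented haplotype counts, of linkage disequilibrium for a single pair of SNPs: the coefficient D, and its normalised cousin r-squared, are zero when the two sites assort independently and grow as one combination becomes over-represented.

    # Minimal sketch of linkage disequilibrium for one SNP pair.
    # Haplotype counts are invented for illustration; a real scan (like
    # the Moyzis study) applies this idea across millions of SNP pairs.

    def linkage_disequilibrium(haplotypes):
        """haplotypes: strings like 'AG', giving the alleles one
        chromosome carries at SNP 1 and at SNP 2."""
        n = len(haplotypes)
        p1 = sum(h[0] == 'A' for h in haplotypes) / n   # freq. of A at SNP 1
        p2 = sum(h[1] == 'G' for h in haplotypes) / n   # freq. of G at SNP 2
        p12 = sum(h == 'AG' for h in haplotypes) / n    # freq. of AG together
        D = p12 - p1 * p2          # 0 if the SNPs assort independently
        r2 = D * D / (p1 * (1 - p1) * p2 * (1 - p2))    # normalised, 0..1
        return D, r2

    # A variant recently swept to high frequency drags nearby alleles
    # along, so the AG haplotype is over-represented relative to chance:
    sample = ['AG'] * 70 + ['aG'] * 5 + ['Ag'] * 5 + ['ag'] * 20
    print(linkage_disequilibrium(sample))   # D ~ 0.14, r2 ~ 0.54

The hard part, as the caveats below make clear, is not computing such a statistic but ruling out other causes of elevated disequilibrium, such as the chromosomal inversions mentioned above.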
But the details of any such sweeping survey of the genome should be treated with caution, geneticists warn. Now that Moyzis has made a start on studying how the influence of modern human culture is written in our genes, other teams can see if similar results are produced by other analytical techniques, such as comparing human and chimp genomes.

Journal reference: Proceedings of the National Academy of Sciences (DOI: 10.1073/pnas.0509691102)

From checker at panix.com Fri Jan 6 03:44:41 2006
From: checker at panix.com (Premise Checker)
Date: Thu, 5 Jan 2006 22:44:41 -0500 (EST)
Subject: [Paleopsych] spiked: Why humans are superior to apes
Message-ID:

Why humans are superior to apes
http://www.spiked-online.com/Printable/0000000CA40E.htm
4.2.24
by Helene Guldberg

Humanism, in the sense of a faith in humanity's potential to solve problems through the application of science and reason, is taking quite a battering today. As the UK medical scientist Raymond Tallis warns, the role of mind and of self-conscious agency in human affairs is denied 'by anthropomorphising or "Disneyfying" what animals do and "animalomorphising" what human beings get up to' (1).

One of the most extreme cases of 'animalomorphism' in recent years has come from the philosopher John Gray, professor of European thought at the London School of Economics. In his book Straw Dogs: Thoughts on Humans and Other Animals, Gray argues that humanity's belief in our ability to control our destiny and free ourselves from the constraints of the natural environment is as illusory as the Christian promise of salvation (2).

Gray presents humanity as no better than any other living organism - even bacteria. We should therefore not be too concerned about whether humans have a future on this planet, he claims. Rather, it is the balance of the world's ecosystem that we should really worry about: 'Homo rapiens is only one of very many species, and not obviously worth preserving.
Later or sooner, it will become extinct. When it is gone the Earth will recover.'

Thankfully, not many will go along with John Gray's image of humans as a plague upon the planet. For our own sanity, if nothing else, we cannot really subscribe to such a misanthropic and nihilistic worldview. If we did, surely we would have no option other than to kill ourselves - for the good of the planet - and try to take as many people with us as possible? However, even if many reject Gray's extreme form of anti-humanism, many more will go along with the notion that animals are ultimately not that different from us. The effect is the same: to denigrate human abilities.

Today, a belief in human exceptionalism is distinctly out of fashion. Almost every day we are presented with new revelations about how animals are more like us than we ever imagined. A selection of news headlines includes: 'How animals kiss and make up'; 'Male birds punish unfaithful females'; 'Dogs experience stress at Christmas'; 'Capuchin monkeys demand equal rights'; 'Scientists prove fish intelligence'; 'Birds going through divorce proceedings'; 'Bees can think say scientists'; 'Chimpanzees are cultured creatures' (3).

The argument is at its most powerful when it comes to the great apes - chimpanzees, gorillas and orangutans. One of the most influential opponents of the 'sanctification of human life', as he describes human exceptionalism, is Peter Singer, author of Animal Liberation and co-founder of the Great Ape Project (4). Singer argues that we need to 'break the species barrier' and extend rights to the great apes in the first instance, followed by all other animal species. The great apes are not only our closest living relatives, argues Singer, but they are also beings who possess many of the characteristics that we have long considered distinctive to humans.

Is it the case that apes are just like us? Primatology has indeed shown that apes, and even monkeys, communicate in the wild. Jane Goodall's observations of chimpanzees show that not only do they use tools, but they also make them - using sticks to fish for termites, stones as anvils or hammers, and leaves as cups or sponges. Anybody watching juvenile chimps playfighting, tickling each other and giggling will be struck by their human-like mannerisms and their apparent expressions of glee. But one has to go beyond first impressions in order to establish to what extent great ape abilities can be compared to those of humans.

Is it the case that ape behaviour is the result of a capacity for some rudimentary form of human-like insight? Or can it be explained through Darwinian evolution and associative learning? Associative learning, or contingent learning, is a concept developed in the early twentieth century by BF Skinner, one of the most influential psychologists, to describe a type of learning that results from an association between an action and a reinforcer - in the absence of any insight. Skinner became famous for his work with rats, pigeons and chickens using his 'Skinner Box'. In one experiment he rewarded chickens with a small amount of food (the reinforcer) when they pecked a blue button (the action). If the chicken pecked a yellow, green, or red button, it would get nothing.
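The logic of that experiment is easy to make concrete. The following minimal Python sketch is the textbook idea rather than Skinner's actual apparatus, and every number in it is invented: the bird's only 'knowledge' is a response strength for each button, nudged upward whenever a peck happens to be followed by food.

    import random

    # Toy contingent learning: response strengths are bumped by reward,
    # and nothing in the agent represents *why* blue pays off.
    # All numbers are invented for illustration.

    ACTIONS = ['blue', 'yellow', 'green', 'red']
    strength = {a: 1.0 for a in ACTIONS}

    def peck():
        """Choose a button in proportion to learned response strength."""
        r = random.uniform(0, sum(strength.values()))
        for a in ACTIONS:
            r -= strength[a]
            if r <= 0:
                return a
        return ACTIONS[-1]

    for trial in range(500):
        action = peck()
        reward = 1.0 if action == 'blue' else 0.0  # only blue is reinforced
        strength[action] += 0.2 * reward           # strengthen what just paid off

    print(strength)  # 'blue' comes to dominate, with no insight anywhere

After a few hundred trials the blue button dominates the bird's behaviour, which looks purposeful from the outside; the point of the behaviourist analysis is that nothing in this loop represents why blue pays off.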
Associative or contingent learning, a concept developed by the school of behaviourism, is based on the idea that animals behave in the way that they do because this kind of behaviour has had certain consequences in the past, not because they have any insight into why they are doing what they do.

In Intelligence of Apes and Other Rational Beings (2003), primatologist Duane Rumbaugh and comparative psychologist David Washburn argue that ape behaviour cannot be explained on the basis of contingent learning alone (5). Apes are rational, they claim, and do make decisions using higher order reasoning skills. But the evidence for this is weak, and getting weaker, as more rigorous methodologies are developed for investigating the capabilities of primates. As a result, many of the past claims about apes' capacity for insight into their own actions and those of their fellow apes are now being questioned.

Cultural transmission and social learning

The cultural transmission of behaviour - where actions are passed on through some kind of teaching, learning or observation rather than through genetics - is used as evidence of apes' higher order reasoning abilities. This view is now being revised. The generation-upon-generation growth in human abilities has historically been seen as our defining characteristic. Human progress has been made possible through our ability to reflect on what we, and our fellow humans, are doing - thereby teaching, and learning from, each other.

The first evidence of cultural transmission among primates was found in the 1950s in Japan, with observations of the spread of potato washing among macaque monkeys (6). One juvenile female pioneered the habit, followed by her mother and closest peers. Within a decade, the whole of the population under middle age was washing potatoes. A review by Andrew Whiten and his colleagues of a number of field studies reveals evidence of at least 39 local variations in behavioural patterns, including tool-use, communication and grooming rituals, among chimpanzees - behaviours that are common in some communities and absent in others (7). So it seems that these animals are capable of learning new skills and of passing them on to their fellows. The question remains: what does this tell us about their mental capacities?

The existence of cultural transmission is often taken as evidence that the animals are capable of some form of social learning (such as imitation) and possibly even teaching. But there is in fact no evidence of apes being able to teach their young. Michael Tomasello, co-director of the Wolfgang Köhler Primate Research Center in Germany, points out that 'nonhuman primates do not point to distal entities in the environment, they do not hold up objects for others to see and share, and they do not actively give or offer objects to other individuals. They also do not actively teach one another' (8).

Yet even if apes cannot actively teach each other, a capacity for social learning through imitation (which they have long been assumed to possess) would still imply quite complex cognitive processes. Imitation involves being able to appreciate not just what an act looks like when performed by another individual, but also what it is like to do that act oneself. The imitator must be able to put itself in another's shoes, so to speak.
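One way to probe this is the shape of the adoption curve, which is the logic behind Galef's objection below: if monkeys copy monkeys, the habit should snowball as adopters multiply, whereas if each animal mostly learns for itself, adoption should creep up at a steady rate. Here is a minimal Python sketch of the two regimes, with an invented troop size and invented rates rather than the actual Japanese data:

    # Toy diffusion curves: imitation vs individual learning.
    # Troop size and rates are invented for illustration only.

    def spread(years=10, n=60, social_rate=0.8, solo_rate=0.03):
        imitators, loners = 1.0, 1.0   # one innovator in each scenario
        history = []
        for _ in range(years):
            # imitation: naive animals adopt in proportion to adopters seen
            imitators += social_rate * imitators * (n - imitators) / n
            # individual learning: each naive animal has a fixed yearly chance
            loners += solo_rate * (n - loners)
            history.append((round(imitators), round(loners)))
        return history

    for year, (i, s) in enumerate(spread(), start=1):
        print(f'year {year:2d}: imitation {i:2d}, individual learning {s:2d}')

    # The imitation curve accelerates into an S-shape; the individual-
    # learning curve rises slowly and steadily.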
However, comparative psychologist Bennett Galef points out, after scrutinising the data from Japan, that the rate at which the behaviour spread among the macaque monkeys was very slow and steady, not accelerated as one might expect in the case of imitation (9). It took up to a decade for what, in human terms, would be described as a tiny group of individuals to acquire the habit of the 'innovator'. Compare this to the human ability to teach new skills and ways of thinking and to learn from each other's insights, which laid the foundation for the agricultural and industrial revolutions, the development of science and technology, and the transformations of our ways of living that flow from these.

A review of the literature on primate behaviour shows that there is in fact no consensus among scientists as to whether apes are capable of the simplest form of social learning - imitation (10). Instead it could be the case that the differences in their behavioural repertoires are the result of what has been termed stimulus enhancement. It has been shown in birds, for instance, that the stimulus enhancement of a feeding site may occur if bird A sees bird B gaining food there. In other words, their attention has been drawn to a stimulus, without any knowledge or appreciation of the significance of the stimulus. Others argue that local variations may be due to observational conditioning, where an animal may learn about the positive or negative consequences of actions, not on the basis of experiencing the outcomes themselves, but on the basis of seeing the responses of other animals. This involves a form of associative learning (learning from the association between an action and the reinforcer), rather than any insight.

Michael Tomasello emphasises the special nature of human learning. Unlike animals, he argues, humans understand that in the social domain relations between people involve intentionality, and in the physical domain relations between objects involve causality (11). We do not tend to respond blindly to what others do or say, but, to some degree, analyse their motives. Similarly, we have some understanding of how physical processes work, which means we can manipulate the physical world to our advantage and continually develop and perfect the tools we use to do so. Social learning and teaching depend on these abilities, and human children begin on this task at the end of their first year. Because other primates do not understand intentionality or causality, they do not engage in cultural learning of this type.

The fact that it takes chimps up to four years to acquire the necessary skills to select and adequately use tools to crack nuts implies that they are not capable of true imitation, never mind any form of teaching. Young chimps invest a lot of time and effort in attempts to crack nuts that are, after all, an important part of their diet. The slow rate of their development raises serious questions about their ability to reflect on what they and their fellow apes are doing.

Language

But can apes use language? Groundbreaking research by Robert Seyfarth and Dorothy Cheney in the 1980s on vervet monkeys in the wild showed that their vocalisations went beyond merely expressing emotions such as anger or fear. Their vocalisations could instead be described as 'referential' - in that they refer to objects or events (12).
But it could not be established from these studies whether the callers vocalised with the explicit intent of referring to a particular object or event, for instance the proximity of a predator. And Seyfarth and Cheney were careful to point out that there was no evidence that the monkeys had any insight into what they were doing. Their vocalisations could merely be the result of a form of associative learning.

Later experiments have attempted to refine the analyses in order to establish whether there is an intention to communicate: that is, an understanding that someone else may have a different perspective on, or understanding of, a situation from one's own, and the use of communication to change the other's understanding. It is too early to draw any firm conclusions on this question from research carried out to date. There is no evidence that primates have any, even rudimentary, human-like insight into the effect of their communications. But neither is there clear evidence that they do not. What is clear, however, is that primates, like all non-human animals, only ever communicate about events in the here and now. They do not communicate about possible future events or previously encountered ones. Ape communications cannot therefore be elevated to the status of human language. Human beings debate and discuss ideas, constructing arguments, drawing on past experiences and imagining future possibilities, in order to change the opinions of others. This goes way beyond warning fellow humans about a clear and present danger.

Deception and Theory of Mind

What about the fact that apes have been seen to deceive their fellows? Does this not point towards what some have described as a Machiavellian Intelligence (13)? Primatologists have observed apes in the wild giving alarm calls when no danger is present, with the effect of distracting another animal from food or a mate. But again the question remains whether they are aware of what they are doing. To be able to deceive intentionally, they would have to have some form of 'theory of mind' - that is, the recognition that one's own perspectives and beliefs are sometimes different from somebody else's.

Although psychologist Richard Byrne argues that the abilities of the great apes are limited compared with even very young humans, he claims that 'some "theory of mind" in great apes but not monkeys now seems clear' (14). However, as the cognitive neuroscientist Marc Hauser points out, most studies of deception have left the question of intentionality unanswered (15). Studies that do attribute beliefs-about-beliefs to apes tend to rely heavily on fascinating, but largely unsubstantiated, anecdotes. As professor of archaeology Steven Mithen points out, 'even the most compelling examples can be explained in terms of learned behavioural contingencies [associative learning], without recourse to higher order intentionality' (16).

So even if apes are found to deceive, that does not necessarily imply that the apes know that they are deceiving. The apes may just be highly adaptive and adept at picking up useful routines that bring them food, sex or safety, without necessarily having any understanding or insight into what they are doing.

Self-awareness

Although there is no substantive evidence of apes having a theory of mind, they may possess its precursor - a rudimentary self-awareness. This is backed up by the fact that, apart from human beings, apes are the only species able to recognise themselves in the mirror.
In the developmental literature, the moment when human infants first recognise themselves in the mirror (between 15 and 21 months of age) is seen as an important milestone in the emergence of the notion of 'self'. How important is it, then, that apes can manage the same sort of mirror recognition?

The development of self-awareness is a complex process, with different elements emerging at different times. In humans, mirror recognition is only the precursor to a continually developing capacity for self-awareness and self-evaluation. Younger children's initial self-awareness is focused on physical characteristics. With maturity comes a greater appreciation of psychological characteristics. When asking 'who am I?', younger children use outer visible characteristics - such as gender and hair colour - while older children tend to use inner attributes - such as feelings and abilities.

The ability of apes to recognise themselves in the mirror does not necessarily imply a human-like self-awareness or the existence of mental experiences. They seem able to represent their own bodies visually, but they never move beyond the stage reached by human children in their second year of life.

Children

Research to date presents a rather murky picture of what primates are and are not capable of. Field studies may not have demonstrated conclusively that apes are incapable of understanding intentionality in the social domain or causality in the physical domain, but logic points that way: understanding of this sort would lead to a much more flexible kind of learning than apes display. It may be the case that the great apes do possess some rudimentary form of human-like insight. But the limitations of this rudimentary insight (if it exists at all) become clear when exploring the emergence, and transformative nature, of insight in young children.

We are not born with the creative, flexible and imaginative thinking that characterises humans. It emerges in the course of development: humans develop from helpless biological beings into conscious beings with a sense of self and an independence of thought. The study of children can therefore give us great insights into the human mind. As Peter Hobson, professor of developmental psychopathology and author of The Cradle of Thought: Exploring the Origins of Thinking, states: 'It is always difficult to consider things in the abstract, and this is especially the case when what we are considering is something as elusive as the development of thought. It is one of the great benefits of studying very young children that one can see thinking taking place as it is lived out in a child's observable behaviour' (17).

Thinking is more internalised, and therefore hidden, in older children and adults, but it is more externalised and nearer to the surface in children who are just beginning to talk. Hobson puts a persuasive case for human thought, language, and self-awareness developing 'in the cradle of emotional engagement between the infant and caregiver'. Emotional engagement and communication, he argues, are the foundation on which creative symbolic thought develops. Through reviewing an array of clinical and experimental studies, Hobson captures aspects of human exchanges that happen before thought. He shows that even in early infancy children have a capacity to react to the emotions of others. This points to an innate desire to engage with fellow human beings, he argues. However, with development, that innate desire is transformed into something qualitatively different.
So, for instance, at around nine months of age, infants begin to share their experiences of objects or actions with others. They begin to monitor the emotional responses of adults, such as responding to facial expression or the tone of voice. When faced with novel situations or objects, infants look at their carers' faces and, by picking up emotional signals, they decide on their actions. When they receive positive/encouraging signals, they engage; when the signals are anxious/negative, they retreat. Towards the middle of the second year these mutually sensitive interpersonal engagements are transformed into more conscious exchanges of feelings, views and beliefs. Hobson is able to show that the ability to symbolise emerges out of the cradle of early emotional engagements. With the insight that people-with-minds have their own subjective experiences and can give things meanings comes the insight that these meanings can be anchored in symbols. This, according to Hobson, is the dawn of thought and the dawn of language: 'At this point, [the child] leaves infancy behind. Empowered by language and other forms of symbolic functioning, she takes off into the realms of culture. The infant has been lifted out of the cradle of thought. Engagement with others has taught this soul to fly.' (p274) The Russian psychologist Lev Vygotsky showed that a significant moment in the development of the human individual occurs when language and practical intelligence converge (18). It is when thought and speech come together that children's thinking is raised to new heights and they start acquiring truly human characteristics. Language becomes a tool of thought allowing children increasingly to master their own behaviour. As Vygotsky pointed out, young children will often talk out loud - to themselves it seems - when carrying out particular tasks. This 'egocentric speech' does not disappear, but gradually becomes internalised into private inner speech - also known as thought. Vygotsky and Luria concluded that 'the meeting between speech and thinking is a major event in the development of the individual; in fact, it is this connection that raises human thinking to extraordinary heights' (19). Apes never develop the ability to use language to regulate their own actions in the way that even toddlers are able to do. With the development of language, children's understanding of their own and other people's minds is transformed. So by three or four years of age, most children have developed a theory of mind. This involves an understanding of their own and others' mental life, including the understanding that others may have false beliefs and that they themselves may have had false beliefs. When my nephew Stefan was three years of age, he excitedly told me that 'this is my right hand [lifting his right hand] and this is my left hand [lifting his left hand]. But this morning [which is the phrase he used for anything that has happened in the past] I told daddy that this was my left hand [lifting his right hand] and this is my right hand [lifting his left hand]'. He was amused by the fact that he had been mistaken in his knowledge of what is right and what is left. He clearly had developed an understanding that people, including himself, have beliefs about things and that those beliefs can be wrong as well as right. Once children are able to think about thoughts in this way, their thinking has been lifted to a different height. 
The formal education system requires children to go much further in turning language and thought in upon themselves. Children must learn to direct their thought processes in a conscious manner. Above all, they need to become capable of consciously manipulating symbols. Literacy and numeracy serve important functions in aiding communication and manipulating numbers. But, above all, they have transformative effects on children's thinking, in particular on the development of abstract thought and reflective processes. In the influential book Children's Minds, child psychologist Margaret Donaldson shows that 'those very features of the written word which encourage awareness of language may also encourage awareness of one's own thinking and be relevant to the development of intellectual self-control, with incalculable consequences for the kinds of thinking which are characteristic of logic, mathematics and the sciences' (20).

The differences in language, tool-use, self-awareness and insight between apes and humans are vast. A human child, even as young as two years of age, is intellectually head and shoulders above any ape.

Denigrating humans

As American biological anthropologist Kathleen R Gibson states: 'Other animals possess elements that are common to human behaviours, but none reaches the human level of accomplishment in any domain - vocal, gestural, imitative, technical or social. Nor do other species combine social, technical and linguistic behaviours into a rich, interactive and self-propelling cognitive complex.' (21)

In the six million years since the human and ape lines first diverged, the behaviour and lifestyles of apes have hardly changed. Human behaviour, relationships, lifestyles and culture clearly have. We have been able to build upon the achievements of previous generations. In just the past century we have brought, through constant innovation, vast improvements to our lives: better health, longer life expectancy, higher living standards and more sophisticated means of communication and transport. Six million years of ape evolution may have resulted in the emergence of 39 local behavioural patterns - in tool-use, communication and grooming rituals. However, this has not moved apes beyond their hand-to-mouth existence, nor has it led to any significant changes in the way they live. Our own lives have changed much more in the past decade - in terms of the technology we use, how we communicate with each other, and how we form and sustain personal relationships. Considering the vast differences in the way we live, it is very difficult to sustain the argument that apes are 'just like us'.

What appears to be behind today's fashionable view of ape and human equivalence is a denigration of human capacities and human ingenuity. The richness of human experience is trivialised because human experiences are lowered to, and equated with, those of animals. Dr Roger Fouts of the Chimpanzee and Human Communication Institute expresses this anti-human view well: '[Human] intelligence has not only moved us away from our bodies, but from our families, communities, and even Earth itself. This may be a big mistake for the survival of our species in the long run.' (22)

Investigations into apes' behaviour could shed some useful light on how they resemble us - and give us some insight into our evolutionary past, several million years back. Developing a science true to its subject matter could give us real insights into what shapes ape behaviour.
Stephen Budiansky's fascinating book If a Lion Could Talk shows how evolutionary ecology (the study of how natural selection has equipped animals to lead the lives they do) reveals that animals process information in ways that are uniquely their own, much of which we can only marvel at (23). But as Karl Marx pointed out in the nineteenth century: 'What distinguishes the worst architect from the best of bees is this, that the architect raises his structure in imagination before he erects it in reality. At the end of every labour process, we get a result that already existed in the imagination of the labourer at its commencement.' (24)

Much animal behaviour is fascinating. But, as Budiansky shows, it is also the case that animals do remarkably stupid things in situations very similar to those where they previously seemed to show a degree of intelligence. This is partly because they learn many of their clever feats by pure accident. But it is also because animal learning is highly specialised. Their ability to learn is not a result of general cognitive processes but of 'specialised channels attuned to an animal's basic hard-wired behaviours' (23).

It is sloppy simply to apply human characteristics and motives to animals. It blocks our understanding of what is specific about animal behaviour, and degrades what is unique about our humanity. It is ironic that we, who have something that no other organism has - the ability to evaluate who we are, where we come from and where we are going, and, with that, our place in nature - increasingly seem to use this unique ability to downplay the exceptional nature of our own capacities and achievements.

Read on: [2]spiked-issue: Animals

(1) New Humanist, November 2003
(2) Straw Dogs: Thoughts on Humans and Other Animals, John Gray, Granta, August 2002
(3) [3]'How animals kiss and make up', BBC News, 13 October 2003; [4]'Male birds punish unfaithful females', Animal Sentience, 31 October 2003; [5]'Dogs experience stress at Christmas', Animal Sentience, 10 December 2003; [6]'Capuchin monkeys demand equal rights', Animal Sentience, 20 September 2003; [7]'Scientists prove fish intelligence', Animal Sentience, 31 August 2003; [8]'Birds going through divorce proceedings', Animal Sentience, 18 August 2003; [9]'Bees can think say scientists', Guardian, 19 April 2001; [10]'Chimpanzees are cultured creatures', Guardian, 24 September 2002
(4) See the [11]Great Ape Project website
(5) Intelligence of Apes and Other Rational Beings, Duane M Rumbaugh and David A Washburn. Buy this book from [12]Amazon (UK) or [13]Amazon (USA)
(6) Frans de Waal, Nature, Vol 399, 17 June 1999
(7) Nature, Vol 399, 17 June 1999
(8) Michael Tomasello, 'Primate Cognition: Introduction to the Issue', Cognitive Science, Vol 24 (3), 2000, p358
(9) BG Galef, Human Nature 3, 157-178, 1992
(10) See a detailed review by Andrew Whiten, 'Primate Culture and Social Learning', Cognitive Science, Vol 24 (3), 2000
(11) Tomasello and Call, Primate Cognition, Oxford University Press, 1997
(12) [14]Peter Singer: Curriculum Vitae
(13) Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans, (eds) Andrew Whiten and Richard Byrne, Oxford, 1990. Buy this book from [15]Amazon (USA) or [16]Amazon (UK)
(14) [17]'How primates learn novel complex skills: the evolutionary origins of generative planning?', Richard W Byrne
(15) M Hauser, 'A primate dictionary?', Cognitive Science, Vol 24 (3), 2000
(16) The Prehistory of the Mind: A Search for the Origins of Art, Religion and Science, Steven Mithen, Phoenix, 1998. Buy this book from [18]Amazon (UK) or [19]Amazon (USA)
(17) The Cradle of Thought: Exploring the Origins of Thinking, Peter Hobson, Macmillan, 22 February 2002, p76. Buy this book from [20]Amazon (UK) or [21]Amazon (USA)
(18) Thought and Language, Lev Vygotsky, MIT, 1986
(19) Ape, Primitive Man and Child, Lev Vygotsky, 1991, p140
(20) Children's Minds, Margaret Donaldson, HarperCollins, 1978, p95
(21) Tools, Language and Cognition in Human Evolution, Kathleen R Gibson, 1993, p7-8
(22) [22]CHCI Frequently Asked Questions: Chimpanzee Facts
(23) If a Lion Could Talk: Animal Intelligence and the Evolution of Consciousness, Stephen Budiansky. Buy this book from [23]Amazon (UK) or [24]Amazon (USA)
(24) Capital, Karl Marx, Vol 1, p198

Reprinted from: [25]http://www.spiked-online.com/Articles/0000000CA40E.htm

References

2. http://www.spiked-online.com/Sections/Science/OnAnimals/Index.htm
3. http://news.bbc.co.uk/1/hi/scotland/3183516.stm
4. http://www.animalsentience.com/news/2003-10-31a.htm
5. http://www.animalsentience.com/news/2003-12-10.htm
6. http://www.animalsentience.com/news/2003-09-20.htm
7. http://www.animalsentience.com/news/2003-08-31.htm
8. http://www.animalsentience.com/news/2003-08-18.htm
9. http://www.guardian.co.uk/uk_news/story/0,3604,474807,00.html
10. http://education.guardian.co.uk/higher/artsandhumanities/story/0,12241,798331,00.html
11. http://www.greatapeproject.org/
12. http://www.amazon.co.uk/exec/obidos/ASIN/0300099835/spiked
13. http://www.amazon.com/exec/obidos/tg/detail/-/0300099835/spiked-20
14. http://www.princeton.edu/%7Euchv/faculty/singercv.html
15. http://www.amazon.com/exec/obidos/tg/detail/-/0198521758/spiked-20
16. http://www.amazon.co.uk/exec/obidos/ASIN/0521559499/spiked
17. http://www.saga-jp.org/coe_abst/byrne.htm
18. http://www.amazon.co.uk/exec/obidos/ASIN/0500281009/spiked
19. http://www.amazon.com/exec/obidos/tg/detail/-/0500281009/spiked-20
20. http://www.amazon.co.uk/exec/obidos/ASIN/0333766334/spiked
21. http://www.amazon.com/exec/obidos/tg/detail/-/0195219546/qid=1077209516/spiked-20
22. http://www.cwu.edu/~cwuchci/chimpanzee_info/faq_info.htm
23. http://www.amazon.com/exec/obidos/tg/detail/-/0684837102/spiked
24. http://www.amazon.com/exec/obidos/tg/detail/-/0684837102/spiked-20
25. http://www.spiked-online.com/Articles/0000000CA40E.htm
From checker at panix.com Fri Jan 6 03:44:53 2006
From: checker at panix.com (Premise Checker)
Date: Thu, 5 Jan 2006 22:44:53 -0500 (EST)
Subject: [Paleopsych] WP: Dr. Gridlock: Balanced Views on a Roadside Sobriety Test
Message-ID:

Balanced Views on a Roadside Sobriety Test
http://www.washingtonpost.com/wp-dyn/content/article/2005/12/21/AR2005122101158_pf.html

[I practiced the art of standing on the Metrorail without holding onto anything. It took me several weeks, but now I can do so even when the train sways from side to side, as it does from Farragut North to Metro Center and from Van Ness to Tenleytown. But the worst, at least on the trains I have taken, is the trip between Union Station and Rhode Island Avenue. Not only does the train sway from side to side, it goes up and down and the speed is quite variable.

[Learning to balance myself on the Metrorail helps me enormously when I am out running over ice. When I stumble or slip, my brain has been trained enough that I rarely fall down. This is especially important since, as I go down the stately march to senility at age 61, I don't heal nearly so rapidly as I used to.

[I also practice standing on one foot with my hands over my head whenever I get a chance, like when waiting for or riding elevators. My co-workers often give me a puzzled look, but when I explain why, they are all smiles. They get puzzled, too, when I greet them brightly first thing on Monday mornings, saying, "Thank God it's Monday!" The puzzlement turns to smiles when I follow this up by saying, "I'm a workaholic."

[People vary enormously in their ability to balance. I well remember the day when I was living in Little Rock--I was six or seven at the time--when Dad huffed and puffed up and down hills with me on my bike, teaching me how to ride. Yet my brother, Dick (not then, for he was 4 1/2 years younger than I, but when he was about five), needed Dad not at all. He just got on his bike and rode away! He was later to become an excellent hockey player and still coaches the sport. But my experience with ice skating was abysmal. I was so cautious that it took me ten minutes to go once around the rink. I did not persist in learning to ice skate. The same was true of skiing, though I did learn to roller skate, but never to roller blade, which activity came after my time.

[I hadn't ridden a bike since around 1967. Then, around 2000, I borrowed my older daughter Alice's bike and went down the Capital Crescent Trail. I was extremely cautious for a few miles, but after that riding came back to me, and I was riding comfortably, though not as well as I did when I was a child.

[I learned the skill of double-clutching when my Austin-Healey 3000 had no synchromesh going down from second to first gear. This is twice as complicated as simple shifting. We haven't had a stick shift since our Volkswagen Beetle gave up the ghost in 1983. But I know I could at least single-clutch pretty well at once. Double-clutching would have to come back to me.]

By Ron Shaffer
Thursday, December 22, 2005; GZ13

In a past column, an Annapolis man expressed concern that motorists suspected of drunken driving were sometimes asked by police to stand on one foot for 30 seconds [Dr. Gridlock, Dec. 8]. He noted that people in his morning health club class couldn't hold that position for 30 seconds, and they were all sober. I replied that I can't do it either -- not even close.
That prompted the following responses.

Dear Dr. Gridlock:
I am going on 71 years, do not engage in regular exercise and was able to stand on one foot for 30 seconds. No problem at all -- on either leg. The gentleman may need to find a new exercise class!
Mary Lucas
Annapolis

Dear Dr. Gridlock:
I will never forget the evening I was humiliated on the side of the road after being stopped for speeding. I was returning home from dinner, driving my sports car faster than I should have been. I noticed a car coming from behind at even greater speed and immediately pulled over into the slow lane to let it go by. To my surprise, the other driver was a state police officer, and he was pulling me over.
He asked for my ID, and when I opened my purse, an empty beer bottle from the only drink I had had that evening was prominently visible. I had saved it for the foreign label. He asked me to step out of the vehicle and undergo a number of tests. I passed the "touch your nose" test and the "walk the line" test but was then asked to stand on one foot for a period of time. I pointed out that I was standing on gravel and wearing high heels, but that made no difference. Of course I failed.
I was then forced to take the roadside breathalyzer test and passed immediately. I went on my way with my speeding ticket but was completely rattled by the late-night roadside shenanigans. Although he was polite, the storm-trooper attitude the police officer assumed still rankles.
Susan Guyaux
Crownsville

Makes me wonder: Why not administer the breathalyzer test before the other exercises?

Dear Dr. Gridlock:
I can stand on one leg, either one, and count to 30 with no trouble. I'm 79, so maybe I've learned how to balance by now. Those people in the exercise class must all have a balance problem, or maybe they all stopped by the local tavern before going to class.
Ed MacArthur
Greenbelt

I can't even come close to a 30 count. Maybe too many years of inhaling exhaust fumes.

Dear Dr. Gridlock:
In response to your request for information about the police roadside sobriety test, my understanding is that it's not the ability to stand on one foot for a count of 30 that helps detect inebriation, but the manner in which you go about your attempt. Also, at least in Maryland, the field sobriety tests alone are not enough to convict: you must also fail a breathalyzer test. So even if you are miserable at the physical tests but pass the breathalyzer, they will let you go. Similarly, if you pass the physical tests but the breathalyzer shows your blood alcohol content to be above the legal limit, they will have a case.
Sadly, I have been given the roadside sobriety and breathalyzer tests numerous times because I play in bands and work as a DJ and thus am (soberly) leaving bars and nightclubs around closing time. Being on the road late at night apparently is enough probable cause to detain me and subject me to testing, irrespective of my driving performance.
Eric Myers
Germantown

Well, I'm glad the ability to stand on one foot is not the sole indicator of one's sobriety.

A Changing Metro

Dear Dr. Gridlock:
Oh, great. Not only is Metro going to get rid of seats in subway cars, which will leave standees who aren't tall enough to reach the overhead poles with nothing to hang onto, but they're also going to get rid of the center vertical poles between the exits? Wonderful! I'm 5 feet tall, and I cannot reach -- or at least can't grip -- the overhead poles.
I'm also not built like a linebacker, which means I can't force my way into the middle of a crowded car to grab hold of a seat rail. I'm not quite elderly, but getting there, and I have a bad back. The "elderly and handicapped" seats, when I can get one or find someone kind enough to relinquish theirs, have been my salvation when using Metro. And, when I couldn't sit, I would hang on for dear life to that center pole. Now the seats are going, and the pole, too. How is someone like me supposed to use Metro? And why do they persist in making a subway ride more of an ordeal all the time?
And where on Earth are those additional subway cars that have been on order for years now, which would make it possible to run six-car trains on all lines during busy hours? Why do fares keep going up while we get less service that is more of a burden to use? Me, I have the option to drive. I can add my bit to the city's pollution and gridlock. Thanks, Metro.
Lynda Meyers
Arlington

Metro is not getting rid of any of the seats designated for seniors and the disabled. As part of tests of 24 reconfigured cars, all the vertical ceiling-to-floor poles are being eliminated, but many more vertical poles are being installed from the backs of seats to the ceiling. Further, spring-loaded strap handles are being suspended from the overhead bars. That is being done to see if cars can be loaded, and unloaded, with more efficiency than the free-for-all that exists now.
As for more cars, they are coming. Metro expects to have eight-car trains on 20 percent of its fleet by the end of 2006, 30 percent by the end of 2007 and 50 percent by the end of 2008.

Dear Dr. Gridlock:
I have been reading the suggested solutions to the difficulty of getting on and off Metro trains. If Metro can manage to fine-tune the brakes to allow people to line up on station platforms and have the train stop right in front of them, it could have riders entering in the middle and exiting on the sides. That would not require the removal of any seats.
Liliana Ward
Alexandria

Maybe. Removal of the vertical poles and seats around the center doors would be intended to spread standees throughout the cars, rather than block those trying to board through the middle doors. I do hope Metro will try one-way boarding and exiting.

From checker at panix.com Fri Jan 6 03:45:04 2006
From: checker at panix.com (Premise Checker)
Date: Thu, 5 Jan 2006 22:45:04 -0500 (EST)
Subject: [Paleopsych] Norman Lebrecht: Too much Mozart makes you sick
Message-ID:

Too much Mozart makes you sick
http://www.scena.org/columns/lebrecht/051214-NL-250mozart.html
5.12.14

[I think I'll pass on the 172-CD set of the "complete" Mozart. It costs $300, but most of the performers are unknown to me. I didn't get the Philips "complete" Mozart in 1991 on 180 CDs. The "complete" Bach on Hänssler took up 171 CDs. I got that for $200 and had wanted it ever since I bought a copy of the big Schmieder BWV in 1964 directly from the publisher. I slogged through them all, but I must report that none of these pushed aside any of my favorites. But the Robert Levin performance of the first klavier concerto was quite exciting. And the four discs of organ music played by Bine Katrine Bryndorf were almost as good as Walcha.

[Why are the best performers today all women? Bryndorf on the organ, Lara St. John on the violin, Hélène Grimaud and Mitsuko Uchida on the piano, and Marin Alsop holding the baton?

[Mozart mostly cranked out music for others, as was common in those days, Haydn and Telemann being other examples.
He composed great masterpieces, esp. the piano concerti, of course, but he wrote little just for himself. Beethoven did crank out stuff, but he wrote more for himself than any composer before or since. And Brahms is right up there. Bach was a big cranker-outer, esp. the cantatas, but he also wrote a great deal of music for himself, much more than Mozart did.

[Lebrecht, however, ignores the timeless masterpieces that Mozart wrote. And recall Mr. Mencken's remark that Mozart (and Wagner) could not avoid genuine music creeping into their operas. But it's good to have some balance in the discussion. Charles Murray, an obvious Beethoven lover, had to decide, from the sources he used to rank human achievement in his book by that title, whether Beethoven or Mozart was the greater. It was a toss-up, but he came down on Beethoven's side.

[Norm's a little overheated. I'd rank Mozart in the top ten and put him ahead of Shostakovich. But his corrective is badly needed!]

-------------------

They are steam cleaning the streets of Vienna ahead of next month's birthday weekend, when pilgrim walks are planned around the composer's shrines. Salzburg is rolling out brochures for its 2006 summer festival, which will stage every opera in the Köchel canon from infantile fragments to The Magic Flute, 22 in all. Pierre Boulez, the pope of musical modernism, will break 80 years of principled abstinence to conduct a mostly-Mozart concert, a celebrity virgin on the altar of musical commerce.

Wherever you go in the coming year, you won't escape Mozart. The 250th anniversary of his birth on January 27, 1756, is being celebrated with joyless efficiency as a tourist magnet to the land of his birth and a universal sales pitch for his over-worked output. The complete 626 works are being marketed on record in two special-offer super coffers. All the world's orchestras will be playing Mozart, wall to wall, starting with the Vienna Philharmonic on tour this weekend.

Mozart is the superstore wallpaper of classical music, the composer who pleases most and offends least. Lively, melodic, dissonance-free: what's not to like? The music is not just charming, it's full of good vibes. The Mozart Effect, an American resource centre which ascribes 'transformational powers' to Austria's little wonderlad, collects empirical evidence to show that Mozart, but no other music, improves learning, memory, winegrowing and toilet training, and should be drummed into classes of pregnant mothers like breathing exercises. A 'molecular basis' identified in Mozart's sonata for two pianos is supposed to have stimulated exceptional brain activity in laboratory rats.

How can one argue with such 'proof'? Science, after all, confirms what we want to believe - that art is good for us and that Mozart, in his short-lived naivety, represents a prelapsarian ideal of organic beauty, unpolluted by industrial filth and loss of faith. Nice, if only it were true.

The chocolate-box image of Mozart as a little miracle can be promptly banged on the head. The hard-knocks son of a cynical court musician, Mozart was taught from first principles to ingratiate himself musically with people of wealth and power. The boy, on tour from age five, hopped into the laps of queens and played limpid consolations to ruthless monarchs. Recognising that his music was better than most, he took pleasure in humiliating court rivals and crudely abused them in letters back home.
A coprophiliac obsession with bodily functions, accurately evinced in Peter Shaffer's play and Milos Forman's movie Amadeus, was a clear sign of arrested emotional development. His marriage proved unstable, and his inability to control the large amounts he earned from wealthy Viennese patrons was a symptom of the infantile behaviour that hastened his early death and pauper burial. Musical genius he may have been, but Mozart was no Einstein. For secrets of the universe, seek elsewhere.

The key test of any composer's importance is the extent to which he reshaped the art. Mozart, it is safe to say, failed to take music one step forward. Unlike Bach and Handel, who inherited a dying legacy and vitalised it beyond recognition, and unlike Haydn, who invented the sonata form without which music would never have acquired its classical dimension, Mozart merely filled the space between staves with chords that he knew would gratify a pampered audience. He was a provider of easy listening, a progenitor of Muzak.