[Paleopsych] Economist: Technology Quarterly
Premise Checker
checker at panix.com
Tue Sep 28 14:47:03 UTC 2004
Economist: Technology Quarterly
[Too many articles to send separately. Something of interest to everyone!
I omitted some from the Quarterly and included articles that were not in
the issue but were linked from it.]
Twin studies, genetics and the environment: Claiming one's inheritance
http://www.economist.com/research/articlesBySubject/PrinterFriendly.cfm?Story_ID=3084532&subjectID=348894
SCIENCE & TECHNOLOGY
Aug 12th 2004 | TWINSBURG, OHIO
The relationship between genes and experience is becoming better
understood
THE scientific study of twins goes back to the late 19th century,
when
Francis Galton, an early geneticist, realised that they came in two
varieties: identical twins born from one egg and non-identical twins
that had come from two. That insight turned out to be key, although
it
was not until 1924 that it was used to formulate what is known as
the
twin rule of pathology, and twin studies really got going.
The twin rule of pathology states that any heritable disease will be
more concordant (that is, more likely to be jointly present or
absent)
in identical twins than in non-identical twins--and, in turn, will
be
more concordant in non-identical twins than in non-siblings. Early
work, for example, showed that the statistical correlation of
skin-mole counts between identical twins was 0.4, while
non-identical
twins had a correlation of only 0.2. (A score of 1.0 implies perfect
correlation, while a score of zero implies no correlation.) This
result suggests that moles are heritable, but it also implies that
there is an environmental component to the development of moles,
otherwise the correlation in identical twins would be close to 1.0.
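As a rough illustration of how such twin correlations are turned into
heritability estimates, the sketch below applies Falconer's classic
formula to the mole-count figures quoted above. This is a standard
twin-study shortcut, not a method described in the article, and the
numbers are only as good as the correlations fed into it.

# Back-of-envelope ACE decomposition from twin correlations (Falconer's method).
# The article reports r_MZ ~ 0.4 and r_DZ ~ 0.2 for skin-mole counts.

def falconer_estimates(r_mz: float, r_dz: float) -> dict:
    """Estimate additive-genetic (A), shared-environment (C) and
    unique-environment (E) variance shares from identical (MZ) and
    non-identical (DZ) twin correlations."""
    a2 = 2 * (r_mz - r_dz)   # heritability
    c2 = 2 * r_dz - r_mz     # shared (family) environment
    e2 = 1 - r_mz            # unique environment, incl. measurement error
    return {"A (heritability)": a2, "C (shared env)": c2, "E (unique env)": e2}

if __name__ == "__main__":
    for component, value in falconer_estimates(0.4, 0.2).items():
        print(f"{component}: {value:.2f}")
    # Prints A 0.40, C 0.00, E 0.60: moles are partly heritable, but most of
    # the variation is environmental, just as the article concludes.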
Twin research has shown that whether or not someone takes up smoking
is determined mainly by environmental factors, but once he does so,
how much he smokes is largely down to his genes. And while a
person's
religion is clearly a cultural attribute, there is a strong genetic
component to religious fundamentalism. Twin studies are also
unravelling the heritability of various aspects of human
personality.
Traits from neuroticism and anxiety to thrill- and novelty-seeking
all
have large genetic components. Parenting matters, but it does not
determine personality in the way that some had thought.
More importantly, perhaps, twin studies are helping the
understanding
of diseases such as cancer, asthma, osteoporosis, arthritis and
immune
disorders. And twins can be used, within ethical limits, for medical
experiments. A study that administered vitamin C to one twin and a
placebo to the other found that it had no effect on the common cold.
The lesson from all today's twin studies is that most human traits
are
at least partially influenced by genes. However, for the most part,
the age-old dichotomy between nature and nurture is not very useful.
Many genetic programs are open to input from the environment, and
genes are frequently switched on or off by environmental signals. It
is also possible that genes themselves influence their environment.
Some humans have an innate preference for participation in sports.
Others are drawn to novelty. Might people also be drawn to certain
kinds of friends and types of experience? In this way, a person's
genes might shape the environment they act in as much as the
environment shapes the actions of the genes.
Twin research: Two of a kind
http://www.economist.com/research/articlesBySubject/PrinterFriendly.cfm?Story_ID=3084541&subjectID=348894
SCIENCE & TECHNOLOGY
Aug 12th 2004 | TWINSBURG, OHIO
Researchers have descended on a small town in Ohio for two frenzied
days of work. Twin research has never been so popular
IN THE first weekend of every August, the town of Twinsburg, Ohio,
holds a parade. Decorated floats, cars and lorries roll slowly past
neat, white houses and clipped lawns, while thousands of onlookers
clap and wave flags in the sunshine. The scene is a perfect little
slice of America. There is, though, something rather strange about
the
participants: they all seem to come in pairs. Identical twins of all
colours, shapes, ages and sizes are assembling for the world's
largest
annual gathering of their kind.
The Twinsburg meeting is of interest to more people than just the
twins themselves. Every year, the festival attracts dozens of
scientists who come to prod, swab, sample and question the
participants. For identical twins are natural clones: the odd
mutation
aside, they share 100% of their genes. That means studying them can
cast light on the relative importance of genetics and environment in
shaping particular human characteristics.
In the past, such research has been controversial. Josef Mengele, a
Nazi doctor working at the Auschwitz extermination camp during the
second world war, was fascinated by twins. He sought them out among
arrivals at the camp and preserved them from the gas-chambers for a
series of brutal experiments. After the war, Cyril Burt, a British
psychologist who worked on the heredity of intelligence, tainted
twin
research with results that appear, in retrospect, to have been
rather
too good. Some of his data on identical twins who had been reared
apart were probably faked. In any case, the prevailing ideology in
the
social sciences after the war was Marxist, and disliked suggestions
that differences in human potential might have underlying genetic
causes. Twin studies were thus viewed with suspicion.
Womb mates
The ideological pendulum has swung back, however, as the human
genome
project and its aftermath have turned genes from abstract concepts
to
real pieces of DNA. The role of genes in sensitive areas such as
intelligence is acknowledged by all but a few die-hards. The
interesting questions now concern how nature and nurture interact to
produce particular bits of biology, rather than which of the two is
more important (see article). Twin studies, which are a good way
to
ask these questions, are back in fashion, and many twins are
enthusiastic participants in this research. Laura and Linda Seber,
for
example, are identical twins from Sheffield Village, Ohio. They have
been coming to Twinsburg for decades. Over the years, they have
taken
part in around 50 experiments. They have had their reactions
measured,
been deprived of sleep for a night and had electrodes attached to
their brains. Like many other twins, they do it because they find
the
tests interesting and want to help science.
Research at the Twinsburg festival began in a small way, with a
single
stand in 1979. Gradually, news spread, and more scientists began
turning up. This year, half a dozen groups of researchers were
lodged
in a specially pitched research tent.
In one corner of this tent, Paul Breslin, who works at the Monell
Institute in Philadelphia, watched over several tables where twins
sat
sipping clear liquids from cups and making notes. It was the team's
third year at Twinsburg. Dr Breslin and his colleagues want to find
out how genes influence human perception, particularly the senses of
smell and taste and those (warmth, cold, pain, tingle, itch and so
on)
that result from stimulation of the skin. Perception is an example
of
something that is probably influenced by both genes and experience.
Even before birth, people are exposed to flavours such as chocolate,
garlic, mint and vanilla that pass intact into the bloodstream, and
thus to the fetus. Though it is not yet clear whether such pre-natal
exposure shapes taste-perception, there is evidence that it shapes
preferences for foods encountered later in life.
However, there are clearly genetic influences at work, as well--for
example in the ability to taste quinine. Some people experience this
as intensely bitter, even when it is present at very low levels.
Others, whose genetic endowment is different, are less bothered by
it.
Twin studies make this extremely clear. Within a pair of identical
twins, either both, or neither, will find quinine hard to swallow.
Non-identical twins will agree less frequently.
On the other side of the tent Dennis Drayna, from the National
Institute on Deafness and Other Communication Disorders, in
Maryland,
was studying hearing. He wants to know what happens to sounds after
they reach the ear. It is not clear, he says, whether sound is
processed into sensation mostly in the ear or in the brain. Dr
Drayna
has already been involved in a twin study which revealed that the
perception of musical pitch is highly heritable. At Twinsburg, he is
playing different words, or parts of words, into the left and right
ears of his twinned volunteers. The composite of the two sounds that
an individual reports hearing depends on how he processes this
diverse
information and that, Dr Drayna believes, may well be influenced by
genetics.
Elsewhere in the marquee, Peter Miraldi, of Kent State University in
Ohio, was trying to find out whether genes affect an individual's
motivation to communicate with others. A number of twin studies have
shown that personality and sociability are heritable, so he thinks
this is fertile ground. And next to Mr Miraldi was a team of
dermatologists from Case Western Reserve University in Cleveland.
They
are looking at the development of skin diseases and male-pattern
baldness. The goal of the latter piece of research is to find the
genes responsible for making men's hair fall out.
The busiest part of the tent, however, was the queue for
forensic-science research into fingerprints. The origins of this
study
are shrouded in mystery. For many months, the festival's organisers
have been convinced that the Secret Service--the American government
agency responsible for, among other things, the safety of the
president--is behind it. When The Economist contacted the Secret
Service for more information, we were referred to Steve Nash, who is
chairman of the International Association for Identification (IAI),
and is also a detective in the scientific investigations section of
the Marin County Sheriff's Office in California. The IAI, based in
Minnesota, is an organisation of forensic scientists from around the
world. Among other things, it publishes the Journal of Forensic
Identification.
Mr Nash insists that the work on twins is being sponsored by the
IAI,
and has nothing to do with the Secret Service. He says the
organisation collects sets of similar finger and palm prints so that
improvements can be made in the ability to distinguish ordinary sets
of prints. Although identical twins tend to share whorls, loops and
arches in their fingerprints because of their common heredity, the
precise patterns of their prints are not the same.
Just who will benefit from this research is unclear. Although the
IAI
is an international organisation, not everyone in it will have
access
to the twin data. Deciding who does will have "a lot of parameters",
according to Mr Nash. He says that the work is being assisted by the
American government at the county, state and federal level, and that
government agencies will have access to the data for their research.
He takes pains to stress that this will be for research purposes,
and
says none of the data will be included in any criminal databases.
But
this cloak-and-dagger approach suggests that, while twin studies
have
come a long way, they have not shaken off their controversial past
quite yet. If they are truly to do so, a little more openness from
the
Feds would be nice.
Genetics and embryology: Nobody's perfect
http://www.economist.com/research/articlesBySubject/PrinterFriendly.cfm?Story_ID=2705295&subjectID=348894
BOOKS
May 27th 2004
GENETICS is a science of dysfunction. A smoothly running organism
provides few insights into the way it is built and run by DNA. But
if
part of that DNA goes wrong, inferences can be made about what the
faulty bit is supposed to do when it is working properly.
Nowadays this can be done systematically, as the technology exists
to
knock genes out, one at a time, from experimental animals. But
historically--and still today for people, as opposed to laboratory
rats--genetics has mainly been a study of those natural genetic
accidents called mutations. It is this history, as much as the
present, that Armand Marie Leroi, an evolutionary biologist at
Imperial College, London, addresses in his book, published recently
in
Britain.
Freaks of nature have always attracted the attention of the curious
and the prurient. Many of them are caused by genetic abnormalities,
and it is with famous human freaks of the past--both their
deformities
and their intensely human lives--that Mr Leroi fills much of his
book.
There are stories of tragedy. People whose sexual identities have
been
scrambled seem to have had a particularly difficult time throughout
the ages. There are also triumphs, such as that of Joseph
Boruwlaski,
a man who turned his short stature into a brand, and ended his days
rich and ennobled. But Mr Leroi never loses sight of the underlying
biology.
Boruwlaski's lack of height, for example, looks to Mr Leroi as if it
was caused by a failure of one of the genes involved in making, or
responding to, growth hormone. Achondroplasia, a more common
stature-reducing mutation, is caused by a change in the receptor
protein for a series of molecules called fibroblast growth factors.
Indeed, in 99% of cases a change in a single link in the amino-acid
chain that makes up the receptor is what causes the problem. On such
chances do lives turn. And sciences too. "Mutants" contains,
artfully
arranged, an excellent layman's guide to human embryology--much of
it
knowledge that has been built up by analysis of those chance
mistakes.
For those who truly wish to know their origins without consulting a
dry academic tome, this is a book to read. Nor are punches pulled
about the sometimes dubious history of the subject. The science of
dysfunction was often a dysfunctional science. Anatomists of earlier
centuries engaged in ruthless scrambles to acquire, dissect and boil
for their bones the bodies of those dead mutants who had come to the
public notice. And eugenics, with all its ghastliness and evil, was
the child of genetics when it thought it knew far more than it
really
did (a state of affairs which, to listen to some geneticists today,
has not completely gone away).
Mr Leroi does not fall into that error. Instead, he uses genetics's
ignorance to raise the question of who, exactly, is a mutant.
Red-headed people, for example, might balk at the description. Yet
genetics suggests it would be a justified one, since red-headedness
is
due to a dysfunctional gene for one of the two forms of melanin that
form the palette from which skin and hair colours are painted. If
red-headedness conferred some selective advantage to the Celts (in
whom it is common), then it would count in biological parlance as a
polymorphism. But if it does, that advantage has never been
discerned.
However, as Mr Leroi points out, the statistics of mutation rates
and
gene numbers suggest that everyone is a mutant many times over. The
average adult, according to his calculations, carries 295
deleterious
mutations. Moralists have long pointed out that nobody is perfect.
Genetics seems to confirm that.
Mutants: On Genetic Variety and the Human Body.
By Armand Marie Leroi.
Viking Press; 448 pages; $25.95. Harper Collins; £20
Mental health: Psycho politics
http://www.economist.com/research/articlesBySubject/PrinterFriendly.cfm?Story_ID=3182371&subjectID=348945
BRITAIN
Sep 9th 2004
The government responds to worries about mad people who kill
MENTAL health is the Cinderella of the National Health Service. It
generally registers on politicians' radar screen only when the
public
gets worked up about the dangers posed by mad people. New government
proposals to reform mental health law are a response to such
concerns.
The draft bill will make it easier to detain people with a mental
disorder who pose a threat to others.
Popular worries about this danger are stoked whenever someone with a
history of mental illness commits a murder. Such tragedies are often
blamed upon the switch to treating people in the community, which
has
gathered momentum in the past two decades.
When mentally ill people are discharged from hospital, they can fail
to follow the treatments they need. The bill deals with this by
allowing mandatory treatment within the community. It also closes a
loophole in the current legislation under which individuals with a
personality disorder cannot be detained unless there is a good
chance
that treatment will improve the condition. The new legislation
allows
them to be detained if psychiatrists think treatment is clinically
appropriate even if it may not work.
The proposals are a step back from the broader powers of detention
envisaged in an earlier version of the bill. However, Paul Farmer,
chair of the Mental Health Alliance, is "deeply disappointed at the
failure to address the fundamental flaws in the first draft".
The bill's opponents believe that it will infringe the rights of
mentally ill people. "It is discriminatory to say that people who
retain their full decision-making capacity can be forced to have
medical treatment," argues Tony Zigmond, vice-president of the Royal
College of Psychiatrists.
But according to Rosie Winterton, the health minister, the bill is
designed to take account of the Human Rights Act. Patients will be
able to choose for themselves who represents them; at present, this
role is automatically assigned to their closest relation. All
compulsory treatment beyond the first 28 days will have to be
authorised by a new, independent tribunal.
Aside from their ethical objections, the bill's opponents say that
the
case for more compulsory treatment is weak. The mentally ill are
responsible for a relatively small number of murders and other
killings (see chart). An historical analysis of homicides found that
the proportion committed by the mentally ill fell between the late
1950s and the mid 1990s.
Other research suggests that popular concern is misplaced. A recent
study in the British Medical Journal found that killings by
strangers
are more often linked to alcohol and drug misuse than to severe
mental
illness. Clearly there is some trade-off between the number of
mentally ill people who are detained and the homicide rate. But Mr
Zigmond says that up to 5,000 people with a mental disorder would
have
to be detained to prevent one homicide.
Such arguments have not swayed the government. Ministers know that
they are much more likely to be blamed for a murder committed by
someone with a history of mental disorder than for a stabbing after
a
night of binge-drinking. If nothing else, the reforms will allow
them
to enter a plea of diminished political responsibility.
Health spending: The older they get, the more they cost
http://www.economist.com/research/articlesBySubject/PrinterFriendly.cfm?Story_ID=3222332&subjectID=348945
UNITED STATES
Sep 23rd 2004
How much spending on health care goes to old people?
IN THIS election no group is treated with more respect than
"seniors"
are. One in six adult Americans is above 65 years old, but they may
well account for one in four voters; hence the attempts to spend
ever
more on their health. But do the elderly really get such a rough
deal?
The main chunk of public money spent on old people's health is
Medicare, which cost $281 billion--2.6% of GDP--in 2003. Not all
this
money goes to old people: around 15% of Medicare's 41m beneficiaries
in 2003 were under 65, because the programme also covers some of the
working-age disabled. But the oldsters more than make up for this
elsewhere.
Take, for instance, Medicaid, the joint federal-state programme that
pays for health care for poor people and cost about $270 billion
last
year, according to the Congressional Budget Office. Some
three-quarters of the 51m individuals enrolled in Medicaid in 2003
are
poor children, their parents and pregnant women. Yet they receive
little more than a quarter of the benefit payments. The lion's share
of the money goes to the old and the disabled. The old, who comprise
only 10% of the beneficiaries, account for 28% of Medicaid spending.
Put these numbers together, and the elderly, who make up about 12%
of
America's total population, consume nearly 60% of spending on the
country's two biggest health-care programmes: an amount equal to
around 3% of GDP. And that does not include other ways in which old
people get their health care paid for by the government--such as the
close-to-$30-billion that goes to veterans' health care.
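The back-of-envelope sum behind those figures can be reconstructed
roughly as follows. The 85% figure for the elderly's share of Medicare
spending is an assumption (the article gives only their share of
beneficiaries), so treat the result as an approximation.

# Rough reconstruction of the article's arithmetic, using its 2003 figures.
medicare_total = 281e9            # Medicare outlays, 2003
medicaid_total = 270e9            # Medicaid outlays, 2003 (CBO estimate)
gdp = medicare_total / 0.026      # article: Medicare was 2.6% of GDP

# Assumption: elderly share of Medicare spending ~ share of beneficiaries (85%)
elderly_medicare = 0.85 * medicare_total
elderly_medicaid = 0.28 * medicaid_total   # article: the old get 28% of Medicaid

elderly_total = elderly_medicare + elderly_medicaid
print(f"Elderly share of the two programmes: "
      f"{elderly_total / (medicare_total + medicaid_total):.0%}")   # ~57%
print(f"As a share of GDP: {elderly_total / gdp:.1%}")              # ~2.9%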
In one way there is nothing unusual in this. The costs of medical
care
are concentrated in the final years of life--something that is
reflected in health-care spending in every country. But two things
may
begin to irk younger Americans. First, America's generosity to the
old
stands in marked contrast to the lack of coverage offered to other
age
groups--notably the 45m people who have no health insurance.
Second, the elderly's share of the pie is bound to increase. In 2006
Medicare will start to pick up a big chunk of the cost of
prescription
drugs for the elderly, pushing up the cost of that programme alone
to
3.4% of GDP. At the end of this decade, the huge generation of
baby-boomers will start to swell the ranks of the old. By 2024,
outlays on Medicare will exceed those on the Social Security pension
system. By 2030, when there will be almost 80m beneficiaries,
spending
on Medicare alone will reach 7% of GDP--the same proportion as
Britain
spends on its National Health Service, which covers everyone.
Down on the pharm
http://www.economist.com/research/articlesBySubject/PrinterFriendly.cfm?Story_ID=3171546&subjectID=526354
TECHNOLOGY QUARTERLY
REPORTS
Sep 16th 2004
Biotechnology: Will genetically engineered goats, rabbits and flies
be
the low-cost drug factories of the future?
EARLIER this year, the regulators at the European Medicines Agency
(EMEA) agreed to consider an unusual new drug, called ATryn, for
approval. It was developed to treat patients with hereditary
antithrombin deficiency, a condition that leaves them vulnerable to
deep-vein thrombosis. What makes ATryn so unusual is that it is a
therapeutic protein derived from the milk of a transgenic goat: in
other words, an animal that, genetically speaking, is not all goat.
The human gene for the protein in question is inserted into a goat's
egg, and to ensure that it is activated only in udder cells, an
extra
piece of DNA, known as a beta-casein promoter, is added alongside
it.
Since beta casein is made only in udders, so is the protein. Once
extracted from the goat's milk, the protein is indistinguishable
from
the antithrombin produced in healthy humans. The goats have been
carefully bred to maximise milk production, so that they produce as
much of the drug as possible. They are, in other words, living drug
factories.
ATryn is merely the first of many potential animal-derived drugs
being
developed by GTC Biotherapeutics of Framingham, Massachusetts. The
company's boss, Geoffrey Cox, says his firm has created 65
potentially
therapeutic proteins in the milk of its transgenic goats and cows,
45
of which occurred in concentrations of one gram per litre or higher.
Female goats are ideal transgenic "biofactories", GTC claims,
because
they are cheap, easy to look after and can produce as much as a
kilogram of human protein per year. All told, Dr Cox reckons the
barn,
feed, milking station and other investments required to make
proteins
using transgenic goats cost less than $10m--around 5% of the cost of
a
conventional protein-making facility. GTC estimates that it may be
able to produce drugs for as little as $1-2 per gram, compared with
around $150 using conventional methods. Goats' short gestation
period--roughly five months--and the fact that they reach maturity
within a year means that a new production line can be developed
within
18 months. And increasing production is as simple as breeding more
animals. So if ATryn is granted approval, GTC should be able to
undercut producers of a similar treatment, produced using
conventional
methods, sales of which amount to $250m a year.
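A quick sketch of the economics behind those claims, using the article's
cost figures and a purely hypothetical annual demand of 20kg of
therapeutic protein for illustration:

# Comparing the production costs quoted in the article (hypothetical demand).
goat_cost_per_gram = 1.5            # article: $1-2 per gram with transgenic goats
conventional_cost_per_gram = 150    # article: roughly $150 per gram conventionally
grams_per_goat_per_year = 1000      # article: up to a kilogram of protein per goat

annual_demand_g = 20_000            # hypothetical: 20kg of protein a year

goats_needed = annual_demand_g / grams_per_goat_per_year
goat_cost = annual_demand_g * goat_cost_per_gram
conventional_cost = annual_demand_g * conventional_cost_per_gram

print(f"Herd required: about {goats_needed:.0f} goats")
print(f"Transgenic production: ${goat_cost:,.0f} a year")
print(f"Conventional production: ${conventional_cost:,.0f} a year")
print(f"Ratio: roughly {conventional_cost / goat_cost:.0f}x cheaper with goats")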
GTC is not the only game in town, however. Nexia, based in Montreal,
is breeding transgenic goats to produce proteins that protect
against
chemical weapons. TransOva, a biotech company based in Iowa, is
experimenting with transgenic cows to produce proteins capable of
neutralising anthrax, plague and smallpox. Pharming, based in the
Netherlands, is using transgenic cows and rabbits to produce
therapeutic proteins, as is Minos BioSystems, a Greek-Dutch start-up
which is also exploring the drugmaking potential of fly larvae.
It all sounds promising, but the fact remains that medicines derived
from transgenic animals are commercially untested, and could yet run
into regulatory, safety or political problems. At the same time,
with
biotechnology firms becoming increasingly risk-averse in response to
pressure from investors and threats of price controls from
politicians, transgenic animal-derived medicines might be exactly
what
the pharmaceuticals industry is lacking: a scalable, cost-effective
way to make drugs that can bring products to market within a decade
or
so, which is relatively quick by industry standards.
Just say no to Frankendrugs?
So a great deal depends on the EMEA's decision, particularly given
previous European scepticism towards genetically modified crops. But
as far as anyone can tell, the signs look promising. In a conference
call in August, Dr Cox told analysts that the EMEA had so far raised
no concerns about the transgenic nature of his firm's product.
But as the fuss over genetically modified crops showed, public
opinion
is also important. While some people may regard the use of animals
as
drug factories as unethical, however, the use of genetic engineering
to treat the sick might be regarded as more acceptable than its use
to
increase yields and profits in agriculture. Conversely, tinkering
with
animal genes may be deemed to be less acceptable than tinkering with
plant genes. A poll conducted in America in 2003 by the Pew
Initiative
on Food and Biotechnology found that 81% of those interviewed
supported the use of transgenic crops to manufacture affordable
drugs,
but only 49% supported the use of transgenic animals to make
medicines.
Even some biotech industry executives are unconvinced that medicines
made primarily from animal-derived proteins will ever be safe enough
to trust. Donald Drakeman of Medarex, a firm based in Princeton, New
Jersey, is among the sceptics. His firm creates human antibodies in
transgenic mice, clones the antibodies and then uses conventional
processes to churn out copies of the antibodies by the thousand.
"With
goat and cow milk, especially, I worry about the risk of animal
viruses and prions being transferred in some minute way," he says.
(Bovine spongiform encephalopathy, or "mad cow disease", is thought to
be transmitted by a rogue form of protein called a prion.)
Another concern, raised by lobby groups such as Greenpeace and the
Union of Concerned Scientists, is that transgenic animals might
escape
into the wild and contaminate the gene pool, triggering all kinds of
unintended consequences. There is also concern that an animal from
the
wild could find its way into GTC's pens, make contact with one of
the
transgenic animals, and then escape to "expose" other animals in the
wild. Or what if the transgenic animals somehow got into the human
food chain?
Short of sabotage, none of these scenarios seems very likely,
however.
Since transgenic goats, for example, are living factories whose
worth
depends on their producing as much milk as possible, every measure
is
taken to keep them happy, healthy, well fed and sequestered from
non-transgenic animals. As animals go, goats and cows are relatively
unadventurous creatures of habit, are more easily hemmed in than
horses, and are usually in no mood to run away when pregnant--which
they are for much of the time at places like GTC and TransOva.
The uncertainty over regulatory and public reactions is one of the
reasons why, over the past four years, at least two dozen firms
working to create drugs from transgenic animals have gone bust. Most
were in Europe. GTC, which leads the field, has nothing to worry
about, however, since it is sitting on around $34m in cash. Also
sitting pretty is Nexia, particularly since it began to focus on the
use of transgenic animals to make medicines that can protect against
nerve agents.
Nexia became known as the spider-silk company, after it created
transgenic goats capable of producing spider silk (which is, in
fact,
a form of protein) in their milk. It is now working to apply the
material, which it calls BioSteel, in medical applications. Using
the
same approach, the company has now developed goats whose milk
contains
proteins called bioscavengers, which seek out and bind to nerve
agents
such as sarin and VX. Nexia has been contracted by the US Army
Medical
Research Institute of Chemical Defence and DRDC Suffield, a Canadian
biodefence institute, to develop both prophylactic and therapeutic
treatments. Nexia believes it can produce up to 5m doses within two
years.
Today, the most common defence against nerve agents is a
post-exposure
"chem-pack" of atropine, which works if the subject has genuinely
been
exposed to a nerve agent, but produces side-effects if they have
not.
"You do not want to take this drug if you haven't been exposed,"
says
Nexia's chief executive, Jeff Turner. The problem is that it is not
always possible to tell if someone has been exposed or not. But
Nexia's treatment, says Dr Turner, "won't hurt you, no matter what."
The buzz around flies
But perhaps the most curious approach to making
transgenic-animal-derived medicines is that being taken by Minos
BioSystems. It is the creation of Roger Craig, the former head of
biotechnology at ICI, a British chemical firm, and his colleagues
Frank Grosveld of Erasmus University in the Netherlands and Babis
Savakis of the Institute of Molecular Biology and Biotechnology in
Crete. While others concentrate on goats, Minos is using flies.
"Mice
won't hit scale, cows take too damn long to prepare for research, GM
plants produce GM pollen that drifts in the wind, chickens have
long-term stability of germ-line expression issues, and they carry
viruses and new strains of 'flu--I quite like flies, myself," says
Dr
Craig.
A small handful of common house flies, he says, can produce billions
of offspring. A single fly can lay 500 eggs that hatch into larvae,
a
biomass factory capable of expressing growth hormone, say, or
antibodies, which can then be extracted from the larval serum. The
set-up cost of producing antibodies using flies would, Dr Craig
estimates, be $20m-40m, compared with $200m to $1 billion using
conventional methods. "In addition to getting some investors, the
key
here is gaining regulatory and pharma acceptance of the idea that
flies have to be good for something," he says. This will take time,
he
admits, and could be a hard sell. But if the idea of using
transgenic
goats to make drugs takes hold, flies might not be such a leap.
For the time being, then, everything hinges on GTC's goats. The
EMEA's
verdict is expected before the end of the year. Yet even if Dr Cox
wins final approval to launch ATryn next year, he too faces a
difficult task convincing the sceptics that transgenic animals are a
safe, effective and economical way to make drugs. As Monsanto and
other proponents of genetically modified crops have learned in
recent
years, it takes more than just scientific data to convince biotech's
critics that their fear and loathing are misplaced.
Digital bioprospecting: Finding drugs in the library
http://www.economist.com/research/articlesBySubject/PrinterFriendly.cfm?Story_ID=3219828&subjectID=531766
SCIENCE & TECHNOLOGY
Sep 23rd 2004
Researchers are searching for new medicines lurking in old herbal
texts
IT IS a miracle that the "Ambonese Herbal", a 17th-century medical
text compiled by Georg Everhard Rumpf, a German botanist, ever made
it
to the printing press. Rumphius, as the author styled himself in
Latin, was an employee of the Dutch East India Company. He was
stationed on Ambon, in the Malay archipelago (now part of
Indonesia).
He began collecting and drawing plants in 1657, and continued even
after going blind in 1670. Four years later he survived an
earthquake
that killed his wife and daughter, but he then lost all his work in
a
fire in 1687. Undaunted, Rumphius dictated a new version of his
book,
the first volume of which was shipped to Europe in 1692, only to be
sunk by the French. Fortunately there was a copy, and Rumphius went
on
to compile six more volumes, completing the last just before his
death
in 1702. His employers sat on the book for decades, however, fearing
that rival nations would benefit from the medical knowledge it
contained. Finally, a botanist in Amsterdam published the work
between
1741 and 1755.
The "Ambonese Herbal" explains the medical uses of nearly 1,300
species native to the Malay archipelago, based on Rumphius's
quizzing
of the local population. Medicines shipped from Europe were either
useless or unavailable in sufficient quantities, Rumphius complained
in the preface, so using local remedies made much more sense. His
epic
work is just one of many historical texts that contain such
"ethnomedical" information.
The medicinal value of plants is still recognised. Roughly half of
the
anti-cancer drugs developed since the 1960s, and about 100 other
drugs
on the market, are derived from plants. In the past, figuring out
which plants to screen for therapeutic potential involved
ethnomedical
study in which traditional healers--from village shamans to
tale-telling old wives--were asked to identify valuable species.
More
recently, this approach has given way to high-throughput screening,
in
which thousands of random specimens are methodically tested by robot
technicians.
But both methods have their drawbacks: the knowledge of traditional
healers is being lost as they die out, and high-throughput screening
has not proved to be very efficient. That is why a team led by Eric
Buenz, a researcher at the Mayo Clinic College of Medicine in
Rochester, Minnesota, has proposed a new, hybrid approach. Hundreds
of
unstudied herbal texts, dating from Ancient Greece to the modern
age,
are sitting in libraries around the world. By sifting through these
texts and comparing the results with modern medical databases, it
should be possible to identify promising candidate species for
further
examination and screening. The researchers explain this strategy in
a
paper published this month in Trends in Pharmacological Sciences.
To test their idea, Mr Buenz and his colleagues analysed the first
volume of the "Ambonese Herbal". The text, originally in Dutch and
Latin, is in the process of being translated into English. Two
reviewers went through the English translation of the first volume
and
extracted all the medical references. They then drew up a table
listing each species, the symptoms for which it was prescribed, and
hence its probable pharmacological function. The sap of Semecarpus
cassuvium, the wild cadju tree, for example, is listed as a
treatment
for shingles. This suggests that it has antiviral properties.
The list of species was then checked against a database called the
International Plant Names Index, to identify misspellings and
synonyms. After that, each species was looked up in NAPRALERT, a
database listing all known biochemical and ethnomedical references
to
plants, to see if it had been mentioned in the medical literature.
It
was thus possible both to determine how accurate the information in
the "Ambonese Herbal" is, and to identify candidates for further
investigation.
Of the 42 plants described in Rumphius's first volume as having
medical properties, 24 had biochemical matches in NAPRALERT, which
suggests that they are indeed effective. Nine of the others had
ethnomedical matches, which means their potential use as medicines
is
already known about, but has not been followed up by modern science.
But nine plants did not appear in NAPRALERT at all, and are
therefore
potential sources of novel drugs.
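A minimal sketch of that triage step, with tiny in-memory stand-ins for
the International Plant Names Index and NAPRALERT (real work would query
the databases themselves; the sample records below are illustrative, not
taken from the article):

from dataclasses import dataclass

# Hypothetical lookup tables standing in for the real databases.
IPNI_SYNONYMS = {"Semecarpus cassuvium": "Semecarpus cassuvium"}  # misspellings/synonyms -> accepted name
NAPRALERT = {"Semecarpus cassuvium": {"ethnomedical"}}            # name -> types of published reference

@dataclass
class HerbalEntry:
    species: str
    symptom: str             # e.g. "shingles"
    inferred_activity: str   # e.g. "antiviral"

def triage(entries):
    """Sort species into biochemically confirmed, ethnomedically known,
    and entirely novel candidates, as the Mayo team describes."""
    confirmed, known_folk, novel = [], [], []
    for entry in entries:
        name = IPNI_SYNONYMS.get(entry.species, entry.species)
        refs = NAPRALERT.get(name, set())
        if "biochemical" in refs:
            confirmed.append(name)    # activity already demonstrated in the lab
        elif "ethnomedical" in refs:
            known_folk.append(name)   # traditional use recorded but untested
        else:
            novel.append(name)        # absent from the literature: screen first
    return confirmed, known_folk, novel

print(triage([HerbalEntry("Semecarpus cassuvium", "shingles", "antiviral")]))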
The next step, says Mr Buenz, is to scale up and automate the
process.
"Our work with the Rumphius herbal was a proof of concept," he says.
"The push now is to make the project high throughput with
bioinformatics." Book scanners, he observes, have become cheaper and
more efficient in recent years. The latest models can scan 1,000
pages
an hour, yet are gentle enough to handle old and delicate tomes. And
by using natural-language processing software to look for particular
expressions, and cross-referencing potential matches with medical
and
botanical databases, the text can be analysed quickly.
Manually combing through the text of the first volume of the
"Ambonese
Herbal" took four weeks, says Mr Buenz, but his experimental
automated
system did the same work in a few hours. The big challenges are
dealing with foreign languages, old typefaces and variations in
terminology--but translation systems and databases are improving all
the time. Text mining will never replace other methods of drug
discovery, but tapping the accumulated medical expertise locked up
in
old documents could, it seems, provide some helpful shortcuts.
Supercharging the brain
http://www.economist.com/science/tq/PrinterFriendly.cfm?Story_ID=3171454
Sep 16th 2004
Biotechnology: New drugs promise to improve memory and sharpen
mental
response. Who should be allowed to take them?
DO YOU have an important meeting tomorrow, or perhaps an
examination,
for which you would like your mental powers to be at their peak?
Within a few years, you may have the option of taking a "cognitive
enhancer"--a drug that sharpens your mental faculties. During the
1990s--declared "decade of the brain" by America's Congress and the
National Institutes of Health--much progress was made in
understanding
the processes of memory and cognition. Advances in genetics,
molecular
biology and brain-imaging technologies allowed researchers to
scrutinise the brain's workings and gave them the potential to
create
drugs to enhance aspects of its performance. Though there are very
few
products on the market that reflect this increased understanding,
that
may soon change.
At least 40 potential cognitive enhancers are currently in clinical
development, says Harry Tracy, publisher of NeuroInvestment, an
industry newsletter based in Rye, New Hampshire. Some could reach
the
market within a few years. For millions, these breakthroughs could
turn out to be lifesavers or, at the very least, postpone the
development of a devastating disease. In America alone, there are
currently about 4.5m people suffering from Alzheimer's disease, and
their ranks are expected to grow to 6m by 2020. Mild Cognitive
Impairment (MCI), defined as memory loss without any significant
functional impairment, is estimated to afflict at least another 4.5m
people. Because the majority of MCI patients will eventually develop
Alzheimer's, many doctors believe that intervening in the early
stages
of the disease could significantly delay its onset.
But there is a fine line between curing the ill and enhancing the
well. The gradual deterioration of mental faculties is part of the
natural process of ageing. There are now about 85m people aged 50
and
over in America, many of whom may already fit the definition of
"age-related cognitive decline", a category so vague it includes
people who become distressed over such mild glitches as forgetting
their keys or glasses. Should they be offered "cognitive enhancers"
too?
And the interest in such drugs will not stop there, predicts James
McGaugh, who directs the Centre for the Neurobiology of Learning and
Memory at the University of California at Irvine. Next in line could
be executives who want to keep the names of customers at the tips of
their tongues, or students cramming for exams. "There's an awful lot
of sales potential," says Dr McGaugh. That is putting it mildly. But
are such drugs really feasible--and if they are, who should be
allowed
to take them?
Thanks for the memories
A handful of small companies are at the forefront of the fledgling
field of cognitive enhancement. Among them is six-year-old Memory
Pharmaceuticals, based in Montvale, New Jersey, which has two
compounds in early-stage clinical trials and recently went public.
The
company's visionary and Nobel prize-winning co-founder, Eric Kandel,
has been unravelling the processes of learning and memory for more
than four decades with the help of Aplysia, a type of colossal sea
slug that grows up to a foot in length. While it has only about
20,000
neurons (humans have 100 billion), individual neurons are large
enough
to be distinguished by eye--making them easy to study.
When a shock is applied to Aplysia's tail or head, messages travel
around a circuit of neurons, causing it to retract its gill for
protection. The same fundamental process occurs in humans too:
neurons
"talk" to each other across a gap, the synapse, via chemicals called
neurotransmitters, which bind to receptors at the receiving end. One
shock in Aplysia creates a memory that lasts for minutes; several
shocks spaced out over time will be remembered for days or longer.
Dr
Kandel showed that the process of acquiring long-term memories does
not change the basic circuitry of nerve cells. Rather, it creates
new
synaptic connections between them, and solidifies existing ones.
In 1990, Dr Kandel's laboratory at Columbia University found the
first
clue to one of the key elements underlying that process--"cyclic AMP
response element binding protein", or CREB. It turns out that CREB
functions like a molecular switch that can turn genes off or on,
thus
manipulating the production of proteins that bring on lasting
structural changes between neurons. Lowering the threshold for
turning
on that switch causes memories to be consolidated more easily. After
creating compounds that successfully manipulated the CREB pathway in
rodents, the company signed a partnership with Swiss pharmaceutical
giant Hoffmann-La Roche worth up to $106m.
Helicon Therapeutics of Farmingdale, New York, is pursuing the same
target, with competing patents, albeit more slowly. In the mid-1990s
the firm's co-founder, Tim Tully, a neuroscientist at Cold Spring
Harbor Laboratory of Long Island, New York, performed his own
groundbreaking CREB studies in fruit flies. In one particular
experiment, Dr Tully and his colleagues compared normal flies with
those that had been genetically engineered so that the CREB switch
was
permanently turned on. While crawling in a small tunnel in the
presence of an odour, the insects received an electric shock. Just
one
such jarring experience was enough to teach the enhanced flies to
run
away from the same odour in future: they had, in effect, perfect
recall, or what is sometimes called "photographic memory" in humans.
The normal flies, however, required a total of ten training sessions
to learn the same lesson. By the end of this year, Helicon hopes to
move one particularly promising compound into clinical trials.
You must remember this
Not everyone believes CREB-enhancers will boost human mental
performance, however. Among the sceptics is Joe Tsien, director of
the
Centre for Systems Neurobiology at Boston University, who created a
buzz a few years ago when he engineered "Doogie," a strain of
intelligent mice. Dr Tsien points to a study published in the
Journal
of Neuroscience last year, which found that mice with CREB "deleted"
from a part of the brain called the hippocampus showed little
impairment of long-term memory formation. Moreover, he notes, CREB
is
not a brain-specific molecule, but is present throughout the body.
"That doesn't bode well for the notion that it's a memory switch,"
argues Dr Tsien. Even if the drugs work, he adds, nasty side-effects
could appear--one of the main reasons promising compounds never make
it to the market.
Saegis Pharmaceuticals, based in Half Moon Bay, California, is
taking
a different approach--three of them, in fact. The company has
licensed
in three compounds, each one acting on a different pathway in the
brain. Moreover, all of them have already demonstrated efficacy in
animals, and two of them safety in humans. The company's lead
candidate, SGS742, which has just entered a mid-stage clinical trial
for Alzheimer's disease, appears to alter brain chemistry in several
distinct ways. Most importantly, the drug binds to GABA B receptors,
which act as pre-synaptic gatekeepers for various neurotransmitters.
By docking on to these receptors, SGS742 blocks their inhibitory
actions. This enables many more neurotransmitter messengers to
travel
from one nerve cell to another.
Besides pursuing compounds that originated elsewhere, Saegis is busy
developing its own drug pipeline. The firm enlisted Michela
Gallagher,
a research psychologist at Johns Hopkins University, to help
identify
new drug targets in animal models. Dr Gallagher, who has studied the
ageing brains of rats for more than a decade, has developed an
elaborate system with which she grades the rats based on their
ability
to master a variety of cognitive challenges, such as memorising a
specific location in a maze. Interestingly, she has found that both
humans and rats develop age-related memory loss gradually and in
similar proportion. By comparing the gene-expression profiles of
rats
of different ages and abilities, she has been able to pinpoint over
300 genes that play a part in the process. Because people share
those
genes, Dr Gallagher reckons her research will hasten the development
of memory drugs.
Currently only a handful of drugs to treat Alzheimer's are approved
in
America, and none for MCI. Most of them prevent the breakdown of
acetylcholine, a neurotransmitter. Unfortunately, these medications
are not that effective. While patients show small gains on tests,
many
doctors doubt that the scores translate into meaningful lifestyle
improvements, such as the ability to continue living at home.
Moreover, the drugs often have unpleasant side-effects, such as
nausea
and vomiting, which may be why they have failed to interest healthy
people. But that could change with the next generation of drugs.
Because of their huge market potential, any drug approved for MCI
will
have to show an immaculate safety profile, predicts Dr Tracy.
For an indication of what might happen if a safe and effective
cognitive enhancer were to reach the market, consider the example of
modafinil. Manufactured by Cephalon, a biotech company based in West
Chester, Pennsylvania, and sold under the names "Provigil" and
"Alertec", the drug is a stimulant that vastly improves alertness in
patients with narcolepsy, shift-work sleep disorder and sleep apnea.
Since it first reached the market in America in 1999, sales have
shot
through the roof, reaching $290m in 2003 and expected to grow by at
least 30% this year.
Much of the sales growth of modafinil has been driven by its
off-label
use, which accounts for as much as 90% of consumption. With its
amazing safety profile--the side-effects generally do not go beyond
mild headache or nausea--the drug is increasingly used to alleviate
sleepiness resulting from all sorts of causes, including depression,
jet lag or simply working long hours with too little sleep. Cephalon
itself is now focusing on moving the drug through late-stage
clinical
trials for attention deficit hyperactivity disorder in children.
Ritalin, an amphetamine-like stimulant now widely used to treat this
disorder, is in the same regulatory category as morphine because of its
addictive potential. Most
experts believe that modafinil, by contrast, is far less likely to
be
abused.
Nothing new under the sun
While there are those who scoff at the idea of using a
brain-boosting
drug, Arthur Caplan, a bioethicist at the University of Pennsylvania
in Philadelphia, does not think it would be particularly new, or
inherently wrong, to do so. "It's human nature to find things to
improve ourselves," he says. Indeed, for thousands of years, people
have chewed, brewed or smoked substances in the hopes of boosting
their mental abilities as well as their stamina. Since coffee first
became popular in the Arab world during the 16th century, the drink
has become a widely and cheaply available cognitive enhancer. The
average American coffee drinker sips more than three cups a day (and
may also consume caffeine-laced soft drinks).
Prescription drugs, though never intended for widespread use, have
followed suit. Ritalin, for example, is used by some college
students
to increase their ability to study for long hours. Not surprisingly,
some worry about the use of such drugs to gain an unfair advantage.
Modafinil has already surfaced in doping scandals. Kelli White, an
American sprinter who took first place in the 100-metre and
200-metre
competitions at last year's World Championships in Paris, later
tested
positive for the drug. Initially she insisted that it had been
prescribed to treat narcolepsy, but subsequently admitted to using
other banned substances as well. As a result, she was forced to
return
the medals she won last year and, along with a handful of other
American athletes, was barred from competitions for two years.
Nonetheless, such performance-enhancing properties are exactly why
the
armed forces have taken an interest in brain-boosting drugs. For
soldiers on the battlefield, who may sleep only four hours a night
for
weeks, a boost in alertness could mean the difference between life
and
death. Pilots on long missions are also at risk: fatigue means they
have slower reaction times and impaired attention spans, says John
Caldwell, a research psychologist at the US Air Force Fatigue
Countermeasures Branch, who has been studying the effects of sleep
deprivation in pilots for a decade. Worst of all, pilots are prone
to
"microsleeps"--short, involuntary naps that can last up to 30
seconds.
Since the second world war, pilots of American fighter jets have
been
known to use amphetamines, known as "go pills", to stop them dozing
off at the controls.
But there are drawbacks to amphetamines. Besides their addictive
potential, they are strong stimulants, which can prevent soldiers
from
sleeping when a legitimate opportunity arises. But with modafinil,
which has a much more subtle effect on the nervous system, napping
is
an option, says Dr Caldwell. Last December, America's air force
authorised the use of modafinil as an alternative to
dextroamphetamine
for two-seater bomber missions lasting more than 12 hours. While the
drug has not yet been approved for use by solo fighter pilots,
approval is expected soon.
Better than coffee?
Last year, Nancy Jo Wesensten, a research psychologist at the Walter
Reed Army Institute of Research in Silver Spring, Maryland, compared
the effects of three popular alertness drugs--modafinil,
dextroamphetamine and caffeine--head to head, using equally potent
doses. Forty-eight subjects received one of the drugs, or a placebo,
after being awake for 65 hours. The researchers then administered a
battery of tests. All of the drugs did a good job restoring
wakefulness for six to eight hours. After that, says Dr Wesensten,
the
performance of the subjects on caffeine declined because of its
short
half-life (a fact that could be easily remedied by consuming another
dose, she points out). The other two groups reached their
operational
limit after 20 hours--staying awake for a total of 85 hours.
When the researchers looked at the drugs' effects on higher
cognitive
functions, such as planning and decision-making, they found each
drug
showed strengths and weaknesses in different areas. Caffeine was
particularly effective in boosting a person's ability to estimate
unknown quantities. When asked 20 questions that required a specific
numeric answer--such as "how high off a trampoline can a person
jump?"--92% of volunteers on caffeine and 75% on modafinil showed
good
estimation skills. But only 42% on dextroamphetamine did so--the
same
proportion as the sleep-deprived subjects who had received a
placebo.
The Defence Advanced Research Projects Agency (DARPA), the research
arm of America's defence department, is funding an initiative to
find
new and better ways to sustain performance during sleep deprivation.
Among its collaborators are Yaakov Stern, a neuroscientist, and
Sarah
Lisanby, a psychiatrist, both of Columbia University. Using
functional
magnetic-resonance imaging, Dr Stern has been observing the brains
of
healthy volunteers before and after forgoing sleep.
In the process, he has discovered a neural circuit that is linked to
prolonged periods of wakefulness while performing memory tasks.
Interestingly, its areas of activation vary from person to person,
depending on the ability to tolerate sleep deprivation. Dr Lisanby
is
an expert in transcranial magnetic stimulation, the use of strong
magnetic fields to facilitate or impede the communication of nerve
cells using a coil held close to the head. She now plans to test
stimulating the very regions in the brain that appear to correspond
to
better cognitive performance during long hours of wakefulness.
DARPA is also supporting the research of Samuel Deadwyler, a
neuroscientist at Wake Forest University in Winston-Salem, North
Carolina, who is studying the effects of ampakines, so called
because
they bind to AMPA receptors. There, they amplify the actions of
glutamate, a neurotransmitter involved in two-thirds of all brain
communications. Roger Stoll, the boss of Cortex Pharmaceuticals,
which
has been developing the compounds, has called them "a hearing aid
for
the brain".
According to Dr Deadwyler's tests in primates, Cortex's new drug
candidate, CX717, which just entered human clinical trials, appears
to
eliminate the cognitive deficits that go hand in hand with sleep
loss.
Monkeys deprived of sleep for 30 hours and then given an injection
of
the compound even do slightly better in short-term memory tests than
well-rested monkeys without the drug. And unlike amphetamines, which
put the whole body in a state of alert, CX717 only increases
activity
in key brain areas--without any addictive potential.
What pills cannot do
Drugs that can boost wakefulness or provide a short-term improvement
in mental agility exist today, and seem likely to proliferate in
future. But since coffee does both already--caffeine is humanity's
most widely consumed drug--there is little reason to object to this
state of affairs, provided no laws are broken and the risks of
side-effects or addiction are minimal.
Besides, cognitive enhancers merely improve the working of the
brain:
they cannot help people remember something they never learned in the
first place. No single pill will make you a genius, says Fred Gage,
a
neuroscientist at the Salk Institute in California, as there is no
pharmaceutical substitute for a rich learning environment. In
experiments with genetically identical mice, he found that the ones
brought up with lots of toys and space had 15% more neurons in an
area
of the brain important for memory formation. And the brain had not
just created more cells: fewer of them were dying off. "Any pill
coming down the road", says Dr Gage, "is going to be taken in the
context of how you behave."
And too much enhancement might even be counter-productive--at least
for healthy people. As Dr Kandel and his colleague Larry Squire, of
the University of California, San Diego, point out in their book
"Memory: From Mind to Molecules", there is a reason why the brain
forgets things: to prevent cluttering up our minds. People with the
natural ability to remember all sorts of minute details often get
bogged down in them, and are unable to grasp the larger concepts. So
it remains to be seen whether a pill can be any more effective than
a
good night's sleep and a strong cup of coffee.
Deus ex machinima?
http://www.economist.com/science/tq/PrinterFriendly.cfm?Story_ID=3171417
Sep 16th 2004
Computer graphics: Hollywood movies increasingly resemble computer
games. Now a growing band of enthusiasts is using games to make
films
PAUL MARINO vividly recalls the first time he watched an animated
film
made from a video game. It was 1996, and Mr Marino, an Emmy
award-winning computer animator and self-described video-game
addict,
was playing "Quake"--a popular shoot-'em-up--on the internet with a
handful of friends. They heard that a rival group of Quake players,
known as the Rangers, had posted a film online. Nasty, brutish and
short, the 90-second clip, "Diary of a Camper", was a watershed. It
made ingenious use of Quake's "demo-record" feature, which enabled
users to capture games and then e-mail them to their friends. (That
way, gamers could share their fiercest battles, or show how they had
successfully completed a level.) The Rangers took things a step
further by choreographing the action: they had plotted out a game,
recorded it, and keyed in dialogue that appeared as running text.
Pretty soon, Mr Marino and others began posting their own "Quake
movies", and a new medium was born.
Is it a game or a film?
Eight years on, this new medium--known as "machinima" ("machine"
crossed with "cinema")--could be on the verge of revolutionising
animation. Around the world, growing legions of would-be digital
Disneys are using the powerful graphical capabilities of popular
video
games such as "Quake", "Half-Life" and "Unreal Tournament" to create
films at a fraction of the cost of "Shrek" or "Finding Nemo". There
is
an annual machinima film festival in New York, and the genre has
seen
its first full-length feature, "Anachronox". Spike TV, an American
cable channel, hired machinima artists to create shorts for its 2003
video game awards, and Steven Spielberg used the technique to
storyboard parts of his film "A.I." At machinima.com, hobbyists have
posted short animated films with dialogue, music and special
effects.
All of this is possible because of the compact way in which
multi-player games encode information about different players'
movements and actions. Without an efficient means of transmitting
this
information to other players across the internet, multi-player games
would suffer from jerky motion and time lags. Machinima exploits the
same notation to describe and manipulate the movements of characters
and camera viewpoints. The same games also allow virtual environments
to be created quickly and easily, which makes elaborate sets and props
possible.
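As a rough illustration of the idea (this is not the actual "Quake"
demo format, and the names and numbers below are invented), a game can
describe a scene as a compact stream of time-stamped events rather than
as rendered video; the same events can then be replayed from any camera
angle, which is precisely what machinima-makers exploit. A minimal
sketch in Python:

    # Toy event log, loosely in the spirit of a game "demo" recording (not
    # the real Quake format). Each event says where a character is and what
    # it does; a renderer could replay the same events from any camera.
    from dataclasses import dataclass

    @dataclass
    class Event:
        tick: int         # game time step
        actor: str        # character name
        position: tuple   # (x, y, z) in the virtual set
        action: str       # e.g. "idle", "fire", "say:<line of dialogue>"

    def record_demo():
        """Pretend gameplay: a ranger taunts a camper, who opens fire."""
        return [
            Event(0,  "camper", (0.0, 0.0, 0.0), "idle"),
            Event(10, "ranger", (5.0, 2.0, 0.0), "say:Is he camping in there?"),
            Event(20, "camper", (0.0, 0.0, 0.0), "fire"),
        ]

    def replay(events, camera=(10.0, 10.0, 5.0)):
        """Stand-in for the renderer: print what each frame would show."""
        for e in sorted(events, key=lambda ev: ev.tick):
            print(f"tick {e.tick:3d} | camera {camera} | {e.actor} at "
                  f"{e.position} does {e.action}")

    demo = record_demo()
    replay(demo)                            # the match as originally seen
    replay(demo, camera=(0.0, -5.0, 2.0))   # same "footage", different camera

The point is that the recording is a handful of positions and actions,
not video, which is why it travels easily over the internet and can be
re-shot at will.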
Games publishers have now begun to incorporate machinima into their
products. Epic Games has built a movie-making tool into its
spectacularly successful "Unreal Tournament" series, for example,
and
many games include level-design software that both gamers and
machinima artists can exploit. Later this year, Valve Software plans
to release "Half-Life 2", a long-awaited game that will include
tools
specifically geared toward machinima: in-game characters will have
realistic facial expressions with 40 different controllable muscles,
and eyes that glint. Conversely, machinima creators have built
movie-making tools on the foundations of games. Fountainhead
Entertainment licensed "Quake III" to create a point-and-click
software package called Machinimation, which it used to produce the
video for "In the Waiting Line" by the British band Zero 7. Last year
it became the first machinima music video to be shown on MTV.
Those in the video-games industry are fond of quoting the statistic
that sales of games now exceed Hollywood's box-office receipts.
Could
film-production technology also be overshadowed by games software?
"Machinima can be considered Hollywood meets Moore's law," says Mr
Marino, the author of a new book on machinima* and executive
director of the Academy of Machinima Arts & Sciences, which holds an
annual film festival in New York. He points out that a 30-strong
animation team at Pixar took four years and $94m to create "Finding
Nemo". Animation studios' desire to cut costs and production time,
coupled with advances in video-game graphics technology offering the
potential for photo-realistic "cinematic computing", could, he
believes, eventually allow machinima to take over the animated-film
business. It is affordable, allows for a great deal of creative
freedom and, when compared with conventional forms of manual or
computer-based animation, is both faster and, says Mr Marino, more
fun.
A glimpse of the future of animation?
This is not to say that machinima is ready for prime time just yet.
The production quality is good, and will only get better with the
next
generation of video games, such as "Doom 3". But it still has a long
way to go to match Pixar's "Monsters, Inc.", some frames of which
(there are 24 per second) took 90 hours to generate using over 400
computers. And because machinima movie-makers have been for the most
part video-game nerds, their films have historically lacked two
crucial elements: story and character. "There are no Ingmar Bergmans
yet," says Graham Leggat of the Film Society at Lincoln Centre.
"Last
year's machinima festival winner, "Red vs Blue", was based on sketch
comedy. Most other efforts are of the standard video-game
shoot-'em-up
variety." It is, in short, a situation akin to the earliest days of
cinema.
The tools will also have to improve. At the moment, machinima-makers
must use a patchwork of utilities developed by fellow enthusiasts.
"Quake", for example, has its own programming language that can be
used to build movie-making tools. This enabled Uwe Girlich, a German
programmer, to create a program called LMPC (Little Movie Processing
Centre), which translated a particular sequence of in-game actions
into text. David Wright, an American programmer, then released a
program called "KeyGrip" to convert this text back into visual
scenes,
and to allow simple editing. Other programs allowed machinima-makers
to add dialogue and special effects. As the games have advanced over
the years, so have their associated tools. But the machinima-making
process is still nowhere near as slick as desktop video-editing, for
example, which together with the rise of digital video cameras has
placed live-action film-making tools in the hands of everyday
computer
users.
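The translate-to-text-and-back idea behind tools such as LMPC and
KeyGrip can be sketched in a few lines. The pipe-separated format below
is invented purely for illustration; the real file formats are not
described in the article:

    # Hypothetical sketch of the round trip: dump recorded actions as
    # editable lines of text, tweak them in any text editor, then parse the
    # text back into events that a game engine could render again.
    def to_text(events):
        # each event becomes one line: tick|actor|action
        return "\n".join(f"{t}|{actor}|{action}" for t, actor, action in events)

    def from_text(text):
        events = []
        for line in text.strip().splitlines():
            tick, actor, action = line.split("|", 2)
            events.append((int(tick), actor, action))
        return events

    recorded = [(0, "ranger", "say:Diary of a camper."),
                (15, "camper", "fire")]
    script = to_text(recorded)
    # a machinima-maker edits the dialogue as ordinary text...
    script = script.replace("Diary of a camper.", "Action!")
    print(from_text(script))   # ...and the edited text drives the renderer again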
Another problem is that if a machinima-maker were to score a hit,
there might be legal trouble. So far, makers of video games have
looked the other way as their games were used in ways they never
intended. But if someone were to make money from a film that relied
on
one of its games, a game-maker might be tempted to get the lawyers
involved. For now, this does not concern Mr Marino, who believes
that
machinima is here to stay. "Five years ago, the games were not
nearly
as vivid as they are today," he says. "The same goes with Machinima.
We may not be on the level of "Shrek", but that will change. It's
inevitable."
* "[4]3D Game-Based Filmmaking: The Art of Machinima", Paraglyph
Press, $40.
4.
http://www.amazon.com/exec/obidos/tg/detail/-/1932111859/theeconomist
Science fiction? Not any more
http://www.economist.com/science/tq/PrinterFriendly.cfm?Story_ID=3171407
Sep 16th 2004
Communications: Taking its cue from "Star Trek", an American company
has devised a clever new form of voice-driven wireless communicator
SCIENCE fiction has often been the source of inspiration for new
technologies. The exo-skeletons and head-mounted displays featured
in
the film "Aliens", for example, spawned a number of military-funded
projects to try to create similar technologies. Automatic sliding
doors might never have become popular had they not appeared on the
television series "Star Trek". And the popularity of flip-top or
"clamshell" mobile phones may stem from the desire to look like
Captain Kirk flipping open his communicator on the same programme.
Now it seems that "Star Trek" has done it again. This month,
American
soldiers in Iraq will begin trials of a device inspired by the "comm
badge" featured in "Star Trek: The Next Generation". Like crew
members
of the starship Enterprise, soldiers will be able to talk to other
members of their unit just by tapping and then speaking into a small
badge worn on the chest. What sets the comm badge apart from a mere
walkie-talkie, and appeals to "Star Trek" fans, is the system's
apparent intelligence. It works out who you are calling from spoken
commands, and connects you instantly.
The system, developed by Vocera Communications of Cupertino,
California, uses a combination of Wi-Fi wireless networking and
voice-over-internet protocol (VoIP) technologies to link up the
badges
via a central server, akin to a switchboard. The badges are already
being used in 80 large institutions, most of them hospitals, to
replace overhead paging systems, says Brent Lang, Vocera's
vice-president.
Like its science-fiction counterpart, the badge is designed so that
all functions can be carried out by pressing a single button. On
pressing it, the caller gives a command and specifies the name of a
person or group of people, such as "call Dr Smith" or "locate the
nearest anaesthesiologist". Voice-recognition software interprets
the
commands and locates the appropriate person or group, based on
whichever Wi-Fi base-station they are closest to. The person
receiving
the call then hears an audible alert stating the name of the caller
and, if he or she wishes to take the call, responds by tapping the
badge and starting to speak.
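A rough sketch of that switchboard logic follows. This is not Vocera's
software; the names, roles, rooms and base-station distances are
invented to show the shape of the idea:

    # Invented example of the central-server logic described above: resolve
    # a spoken command to a person, pick the nearest member of a role from
    # Wi-Fi base-station associations, and let the callee accept or reject.
    BADGE_LOCATION = {             # badge -> base-station it last associated with
        "dr_smith": "ward_3",
        "dr_jones": "theatre_1",
        "dr_patel": "icu",
    }
    ROLES = {"anaesthesiologist": ["dr_jones", "dr_patel"]}
    HOPS_FROM_CALLER = {"ward_3": 2, "theatre_1": 1, "icu": 4}

    def locate_nearest(role):
        """Choose the member of a role closest to the caller's base-station."""
        return min(ROLES[role],
                   key=lambda badge: HOPS_FROM_CALLER[BADGE_LOCATION[badge]])

    def place_call(command, accept=lambda callee: True):
        verb, _, target = command.partition(" ")
        if verb == "call":                    # "call Dr Smith"
            callee = target.replace(" ", "_").lower()
        elif verb == "locate":                # "locate the nearest anaesthesiologist"
            callee = locate_nearest(target.split()[-1])
        else:
            return "command not understood"
        return (f"connected to {callee}" if accept(callee)
                else f"{callee} rejected the call")

    print(place_call("locate the nearest anaesthesiologist"))   # -> dr_jones
    print(place_call("call Dr Smith", accept=lambda c: False))  # callee is busy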
That highlights a key difference between the "Star Trek" comm badge
and the real-life version: Vocera's implementation allows people to
reject incoming calls, rather than having the voice of the caller
patched through automatically. But even the most purist fans can
forgive Vocera for deviating from the script in this way, says David
Batchelor, an astrophysicist and "Star Trek" enthusiast at NASA's
Goddard Space Flight Centre in Greenbelt, Maryland. For there are,
he
notes, some curious aspects to the behaviour of the comm badges in
"Star Trek". In particular, the fictional badge seems to be able to
predict the future. When the captain of the Enterprise says "Picard
to
sick-bay: Medical emergency on the bridge," for example, his badge
somehow connects him to the sick-bay before he has stated the
destination of the call.
Allowing badge users to reject incoming calls if they are busy,
rather
than being connected instantly, was a feature added at the request
of
customers, says Mr Lang. But in almost all other respects the badges
work just like their fictional counterparts. This is not very
surprising, says Lawrence Krauss, an astrophysicist at Case Western
Reserve University in Cleveland, Ohio, and the author of "The
Physics
of Star Trek". In science fiction, and particularly in "Star Trek",
most problems have technological fixes. Sometimes, it seems, those
fixes can be applied to real-world problems too.
Vocera's system is particularly well suited to hospitals, says
Christine Tarver, a clinical manager at El Camino Hospital in
Mountain
View, California. It allows clinical staff to reach each other far
more quickly than with beepers and overhead pagers. A recent study
carried out at St Agnes Healthcare in Baltimore, Maryland, assessed
the amount of time spent by clinical staff trying to get hold of
each
other, both before and after the installation of the Vocera system.
It
concluded that the badges would save the staff a total of 3,400
hours
each year.
Nursing staff often end up playing phone tag with doctors, which
wastes valuable time, says Ms Tarver. And although people using the
badges sometimes look as though they are talking to themselves, she
says, many doctors prefer it because it enables them to deal with
queries more efficiently. The system can also forward calls to
mobile
phones; it can be individually trained to ensure that it understands
users with strong accents; and it can even be configured with
personalised ringtones.
In Iraq, soldiers will use the Vocera badges in conjunction with
base-stations mounted on Humvee armoured vehicles. Beyond medical
and
military uses, Vocera hopes to sell the technology to retailers and
hotels. And the firm's engineers are now extending the system to
enable the badges to retrieve stored information, such as patient
records or information about a particular drug, in response to
spoken
commands. Their inspiration? Yet another "Star Trek" technology: the
talking ship's computer.
Home is where the future is
http://www.economist.com/science/tq/PrinterFriendly.cfm?Story_ID=3171381
Sep 16th 2004
THE idea of the smart, automated home of the future has a
surprisingly
long history. As early as 1893, Answers magazine enthused about the
electrical home of the future, "fitted throughout with electricity,
electric stoves in every room...all the stoves can be lighted by
pressing a button at the bed-side...doors and windows fitted with
electric fastenings". By 1959, the designers of the "Miracle
Kitchen"
that went on show at the American National Exhibition in Moscow
promised that "household chores in the future will be gone for the
American housewife at the touch of a button or the wave of a hand."
Modern visions of the smart home feature fridges that propose
recipes
based on available ingredients, cupboards that order groceries just
before they run out, and various internet-capable kitchen
appliances.
Despite all the hype, however, the home of the future has resolutely
remained just that.
Yet the idea refuses to die. In Seattle, for example, Microsoft's
prototype home of the future is so thoroughly networked that when
you
ask for the time, the house answers, and when you put flour and a
food
processor on the kitchen counter, it asks if you would like to make
some bread and offers to project the recipe on to the counter.
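A trigger of that kind reduces to a simple rule matched against
whatever the house's sensors report. The sketch below is not
Microsoft's code; the object names and rules are invented:

    # Invented rules: if everything in the left-hand set is detected on the
    # kitchen counter, the house makes the corresponding offer.
    RULES = [
        ({"flour", "food_processor"},
         "Would you like to make some bread? I can project the recipe here."),
        ({"flour", "eggs", "sugar"}, "Shall we bake a cake?"),
    ]

    def suggest(objects_on_counter):
        for required, offer in RULES:
            if required <= objects_on_counter:   # all required items present
                return offer
        return None                              # nothing recognisable: stay quiet

    print(suggest({"flour", "food_processor", "car_keys"}))   # bread offer
    print(suggest({"car_keys"}))                              # None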
Over at the Massachusetts Institute of Technology's Media Lab, Ted
Selker and his colleagues are experimenting with a smart spoon. It
has
salt, acidity and temperature sensors, and can even understand what
it
is stirring. "This spoon is basically a tongue," says Dr Selker.
Using
a simple display, the spoon advises you when, for example, you put
too
much salt or vinegar in the salad dressing, or your pudding is too
hot. Counter Intelligence, another Media Lab project, uses talking
ingredients to walk you through the preparation of various dishes.
Dr
Selker's group is also working with Stop & Shop, a retail chain, to
develop handheld computers that help shoppers find the ingredients
they want, and then suggest ways to prepare them.
Meanwhile, at Accenture's Sophia Antipolis laboratory in France,
researchers are developing a device called a "persuasive mirror".
Why
persuasive? Because it does not reflect what you actually look like,
but what you will look like if you fail to eat well and exercise
properly. Accenture's researchers are also dreaming up ways for the
elderly to share digital scrapbooks online with their grandchildren,
and smart systems that talk you through simple home-improvement
tasks,
such as installing a new light fixture.
Clearly, the dream of the smart home is alive and well. Indeed, the
spread of broadband internet links, mobile phones and, in
particular,
home wireless networks over the past few years has led some people
to
conclude that the dream might even be about to become a reality.
"The
thing that has changed over the last five years," says Franklin
Reynolds of Nokia's research centre in Boston, "is that five years
ago
we talked about all of this and there didn't seem to be any movement
in the marketplace. Now I see a hint that things are changing."
Wireless networks are a key building block for smart homes, since
they
enable devices to talk to each other, and to the internet, without
the
need to run cables all over the place. Always-on broadband links
help
too, since they enable appliances to send and receive information
whenever they want to. Wireless chips embedded into every device
could
transform an ordinary home into a distributed computing system. "The
era of the stand-alone device is over," says Jonathan Cluts,
Microsoft's head of consumer prototyping and strategy. "Soon, you
literally won't be willing to buy anything that doesn't somehow
communicate with the other things in your home or your life."
Proponents of the smart home are also heartened by the proliferation
of mobile phones, which are now the world's most ubiquitous digital
devices. Nokia, the world's largest handset-maker, hopes to turn its
tiny phones into universal remote-control devices, able to control
everything from the television to the lights to the microwave. "It's
the Swiss Army knife approach," says Mr Reynolds. "After all,
everyone--in our dreams at least--carries a mobile phone around with
them." You might, for example, use your mobile phone to turn on the
heating while you are out, or check the view from your holiday
home's
security camera.
But there are still several large obstacles to overcome. The first
is
the challenge of making devices easy to use and simple to connect to
each other. The aim, says Mr Cluts, "is to keep you free from
thinking
about the technology." Robert Pait of Seagate Technologies, a maker
of
hard disks, says that will not happen until technology companies
shift
the burden of learning from consumers to machines. "Humans should be
able to intuitively tell devices what to do," he says. "Today we're
all on a sharp learning curve with almost everything we buy."
Agreeing on standards will be just as much of a challenge. The smart
home will remain out of reach as long as devices from different
manufacturers cannot talk to each other. For a start, that means
agreeing on a wireless-networking standard: but there are several
rival technologies. HomeRF, once touted as the ultimate
home-networking standard, has been wiped out by Wi-Fi, but newer
technologies such as ZigBee and ultrawideband are now in the running
too. ZigBee is good for low-speed, low-power transmissions
(automated
meter readings, for example), while ultrawideband is ideal for
linking
up home-entertainment devices, though it is currently mired in a
standards war (see article).
But several standards initiatives are afoot, says Brad Myers, who is
in charge of the Pebbles project, an effort at Carnegie Mellon
University in Pittsburgh to streamline the connectivity of home
appliances. The Universal Plug and Play Forum, for example, aims to
encourage "simple and robust connectivity" between devices from
different manufacturers. The Internet Home Alliance, which has the
backing of companies such as IBM, Microsoft and Whirlpool, is also
working to improve connectivity between household devices. And in
Europe, 168 companies have joined the Digital Living Network
Alliance
to streamline communication between PCs, handheld devices and home
entertainment systems. "We're making progress," says Dr Myers.
Perhaps
believers in the smart home can take heart. For even a standards war
is a step forward, since it suggests that there will, someday, be a
market worth fighting over.
Pictures as passwords
http://www.economist.com/science/tq/PrinterFriendly.cfm?Story_ID=3171359
Sep 16th 2004
Computer security: Passwords are a cheap, cheerful and ancient
security measure. But might it make more sense to use pictures
instead?
HOW many passwords do you have? Of course, you do use separate
passwords for your various e-mail accounts, office or university
logons and e-commerce sites, and change them regularly--don't you?
Actually, the chances are that you don't. Despite the advice of
security experts, most people use the same one or two passwords for
everything, because it is simply too difficult to remember dozens of
different ones. Worse, many people use common words as
passwords--such
as hello, god and sex. About half choose family or pet names, and a
third choose the names of celebrities. This makes life easy for
malicious hackers: they can download dictionaries of the most
popular
passwords from the internet, and having worked out the password for
one account, often find that it works on the owner's other accounts
too.
A nonsense word made up of numbers and letters, or the first letters
of each word in a phrase, is more secure. But too many such
pA55w0rds
can be difficult to remember, particularly since office workers now,
on average, have to remember passwords for between six and 20
systems.
No wonder 70% of workers forget their password at some time or
another, forcing companies to spend an average of $18 per user per
year dishing out new ones. And forcing employees to use different
passwords, and to change them regularly, can be counterproductive:
they are then even more likely to forget their passwords, and may
end
up writing them down. Might the idea of the password, which is
thousands of years old, have finally had its day?
Proponents of graphic or pictorial passwords certainly think so. In
May, the United States Senate deployed a system called Passfaces,
developed by Real User, a firm based in Annapolis, Maryland, and
formerly a British-based company called ID Arts. In essence,
Passfaces
uses a random series of faces (photographs of British students, in
fact) as a password instead of a series of numbers and letters.
Users
are shown a series of faces, and are encouraged to imagine who each
face reminds them of, or what they imagine that person to be like.
When logging on, the same faces are then presented in order, but
each
one is shown together with eight other faces. The user clicks on the
familiar face in each case, and if the correct sequence of faces is
chosen the system grants access. Unlike a password, a series of
faces
cannot be written down or told to another person, which makes it
more
secure, says Paul Barrett, Real User's chief executive. And
recalling
a series of faces is easier than it sounds, because of the human
brain's innate ability to remember and recognise faces.
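The login check itself is simple enough to sketch. This toy version is
not Real User's code; the face identifiers are placeholders, the
enrolled sequence is assigned at random, and each round shows the
familiar face among eight decoys, as the article describes:

    import random

    FACE_POOL = [f"face_{i:02d}" for i in range(40)]   # stand-in photo identifiers
    enrolled = random.sample(FACE_POOL, 5)             # assigned at enrolment

    def make_challenge(correct_face):
        """One login round: the familiar face hidden among eight decoys."""
        decoys = random.sample([f for f in FACE_POOL if f != correct_face], 8)
        grid = decoys + [correct_face]
        random.shuffle(grid)
        return grid

    def authenticate(clicks):
        """Grant access only if every round's click matches the enrolled face."""
        return (len(clicks) == len(enrolled) and
                all(click == face for click, face in zip(clicks, enrolled)))

    # a legitimate user recognises the familiar face in each grid
    clicks = []
    for face in enrolled:
        grid = make_challenge(face)                       # what the screen shows
        clicks.append(next(f for f in grid if f == face)) # user spots the familiar one
    print("access granted" if authenticate(clicks) else "access denied")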
Passfaces builds on the results established by earlier
picture-recognition security systems. In the late 1990s, for
example,
Rachna Dhamija of the University of California at Berkeley developed
a
graphical password system called Déjà Vu, and asked students on the
Berkeley campus to test it. She found that over 90% of the students
could remember their pictorial passwords, while just 70% could
recall
character-based passwords. However, when allowed to choose their own
pictures, most students picked the most easily recognisable ones.
Over
half, for instance, chose a picture of the Golden Gate Bridge, which
can be seen from the campus. Using abstract images instead proved
far
more secure.
The study prompted two other computer scientists, Fabian Monrose of
Johns Hopkins University and Mike Reiter at Carnegie Mellon
University, to build a password system called Faces. Like Passfaces,
it uses mug shots. But the researchers found that allowing users of
the system to choose their own series of faces was a bad idea. They
demonstrated that given the race and sex of the user--neither of
which
is terribly difficult to guess in the US Senate--they could predict
the sequence of faces on the first or second attempt for 10% of
users.
People, it turns out, tend to favour faces of their own race and opt
for attractive people over ugly ones. So, like character-based
passwords, picture-based passwords are more secure when generated
randomly, rather than chosen by the user.
Pictorial passwords need not rely on faces, however, as two
Microsoft
Research projects demonstrate. The first, called Click Passwords,
replaces passwords with a series of clicks in particular areas of an
image. The clicks need not be pinpoint accurate: the required
accuracy
can be set to between ten and 100 screen pixels. Darko Kirovski, the
researcher who created the system, uses an image of 60 flags from
around the world, which allows users to click either on a whole flag
or on a detail of the flag. But any image can be used.
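The check behind such a scheme is just a distance test against a stored
list of points. The coordinates below are invented; the tolerance, as
the article notes, can be set anywhere between ten and 100 pixels:

    # Sketch (not the Microsoft Research code) of a click-sequence check:
    # the stored "password" is a list of points on an image, and each login
    # click must fall within a tolerance of the corresponding point.
    TOLERANCE = 30                                       # pixels
    STORED_POINTS = [(120, 80), (400, 300), (250, 150)]  # e.g. details of three flags

    def close_enough(click, target, tol=TOLERANCE):
        dx, dy = click[0] - target[0], click[1] - target[1]
        return dx * dx + dy * dy <= tol * tol

    def check_clicks(clicks):
        return len(clicks) == len(STORED_POINTS) and all(
            close_enough(c, p) for c, p in zip(clicks, STORED_POINTS))

    print(check_clicks([(118, 85), (395, 310), (260, 140)]))  # True: all within tolerance
    print(check_clicks([(118, 85), (200, 310), (260, 140)]))  # False: second click is off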
The second system was developed by Adam Stubblefield, a research
intern. While driving home from Microsoft's campus one day, he
realised that cloud formations reminded him of real-world objects.
By
substituting inkblots for cloud formations, he could draw on decades
of psychological testing using the Rorschach Inkblot test. In
particular, if the same inkblot is shown to different people they
will
come up with different associations--and individuals tend to make
the
same associations even after long intervals. With Mr Stubblefield's
method, users are shown a series of computer-generated inkblots, and
type the first and last letter of whatever they think the inkblot
resembles. This series of letters is then used as their password:
the
inkblots are, in other words, used as prompts.
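A minimal sketch of that letter-pair scheme follows; the associations
are invented examples, and the real system was presumably more careful
about very short words:

    def letter_pair(association):
        """First and last letter of whatever the inkblot brings to mind."""
        word = association.strip().lower()
        return word[0] + word[-1]

    def password_from_associations(associations):
        return "".join(letter_pair(a) for a in associations)

    # at enrolment the user free-associates on, say, five computer-generated blots
    enrolled = password_from_associations(
        ["butterfly", "crab", "mask", "engine", "tree"])
    print(enrolled)                           # "bycbmkeete"

    # at login the same blots reappear as prompts; the same associations
    # reproduce the same letter pairs, so no arbitrary string has to be memorised
    attempt = password_from_associations(
        ["butterfly", "crab", "mask", "engine", "tree"])
    print(attempt == enrolled)                # True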
Neither of these projects has made it out of the laboratory yet. But
Microsoft is clearly thinking beyond passwords. Speaking at the RSA
data-security conference earlier this year, Bill Gates, Microsoft's
chairman, predicted the gradual demise of passwords. "They just
don't
meet the challenge for anything you really want to secure," he said.
Like many people, Mr Gates believes that a combination of smart
cards
and biometric devices, such as fingerprint scanners and
facial-recognition systems, is the ultimate answer. But with an
average price tag of $50-100 per user and lingering questions about
their reliability, biometric devices have yet to spread beyond a few
niche markets. Password-based security, in contrast, is cheap to
implement since it requires no special hardware, but its limitations
are becoming daily more apparent. Using pictures as passwords seems
an
attractive middle ground, since it provides more security for very
little additional cost. It could be an idea whose time has come.
Gadgets with a sporting chance
http://www.economist.com/science/tq/PrinterFriendly.cfm?Story_ID=3171369
Sep 16th 2004
Consumer electronics: New sports equipment, from tennis rackets to
running shoes, uses processing power to enhance performance. Is that
fair?
WHY should aspiring athletes stand on the sidelines when a spot of
electronic assistance can put them in the middle of the game? That
is
the question many sports-equipment makers are asking as they sense
an
opportunity to boost their sales with high-tech products. You could
call it the revenge of the nerds: a new wave of microchip-equipped
sporting goods promises to enhance the performance of novices and
non-sporting types alike--and could even make difficult sports
easier.
Take cross-country skiing. Victor Petrenko, an engineer at Dartmouth
College's Ice Research Lab in New Hampshire, has invented some smart
ski-brakes that, he believes, will increase the popularity of
cross-country skiing by making the sport less challenging for
beginners. The brakes, currently being tested by a ski manufacturer
in
the Alps, offer the necessary friction for a bigger "kick-off force"
and make the skis less likely to slide backwards in their tracks. To
make this happen, an electric current from the bottom of the skis
pulses through the ice, melting a thin layer of snow that instantly
refreezes and acts as a sort of glue.
This is not the only form of smart ski to hit the slopes. Atomic, a
leading ski-maker based in Austria, plans to introduce a system
later
this year that runs a diagnostic safety check to ensure that the ski
binding is properly closed, with the result being shown on a tiny
built-in liquid-crystal display.
Meanwhile, tennis equipment manufacturers are hoping that innovation
will bring new zip to their business as well. They certainly need to
do something: according to SportScanInfo, a market-research firm
based
in Florida, sales of tennis rackets in America fell 12.5% during the
first half of 2004 compared with the first half of 2003.
With the ball clearly in their court, researchers at Head, a maker
of
sporting equipment, have devised a product that should appeal to
players suffering from tennis elbow. A chip inside the racket
controls
piezo-electric fibres, which convert mechanical energy from the
ball's
impact into electrical potential energy. This energy is then used to
generate a counter-force in the piezo-electric fibres that causes a
dampening effect. All of this, the firm says, translates into less
stress on the elbow. Head claims that residual vibrations in the
racket are dampened twice as fast as in conventional rackets,
reducing
the shock experienced by the player's arm by more than 50%.
No doubt purists will object that this is simply not cricket.
Rule-makers in many sports are now being forced to consider the
implications of equipment that promises to augment athletes'
performance with electronic muscle. The International Tennis
Federation, the body responsible for setting the rules of
the
game, has specified in its most recent guidelines that "no energy
source that in any way changes or affects the playing
characteristics
of a racket may be built into or attached to a racket."
Yet despite such wording, the guideline does not actually rule out
the use of Head's smart rackets, because there is no external energy
source--the damping effect relies solely on energy from the ball's
impact. Though high-tech equipment may cause controversy on the
court,
tennis clubs have to adhere to the guidelines set for the sport,
explains Stuart Miller, the ITF's technical manager. And if the
rules
allow self-generated forces to modify a racket's response, so be it.
Put on your smart shoes
Different sports have encountered different technologies, though the
future will undoubtedly bring more overlap. In golf, gadgets that
pinpoint the location of the green using the global positioning
system
(GPS), for example, face challenges from the game's
standards-setting
institutions. The rule-making body of the Royal and Ancient Golf
Club
of St Andrews, which oversees the game in all countries except
America
and its dependencies, currently prohibits the use of
distance-measuring devices. As a result, golfers cannot rely on GPS
aids in a tournament. While technological innovation in golf
equipment
should continue, the player's skill should remain the predominant
factor, says David Rickman, who is in charge of the club's rules and
equipment standards.
The trend towards high-tech assistance is not limited to sports with
a
reputation for expensive gear, however. Even running, that most
basic
of sports, provides scope for electronic enhancement. The Adidas 1
running shoe, which is due to be launched in December, incorporates
a
battery-powered sensor that takes about 1,000 readings a second. A
microprocessor then directs a tiny embedded electric motor to adjust
the characteristics of the sneaker, enabling it to change the degree
of cushioning depending on the surface conditions and the wearer's
running style and foot position. The race for the smartest use of
microchips in sporting equipment, it seems, has begun.
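The shoe's behaviour amounts to a sense-decide-adjust loop. The sketch
below is not Adidas's firmware; the sensor values, target and motor
commands are invented purely to show the shape of such a loop:

    import random

    TARGET_COMPRESSION = 0.5   # desired heel compression, arbitrary units
    DEADBAND = 0.05            # ignore tiny deviations to avoid motor chatter

    def read_sensor():
        """Stand-in for the heel sensor sampled about 1,000 times a second."""
        return random.uniform(0.3, 0.7)

    def adjust_cushioning(samples_per_decision=1000):
        readings = [read_sensor() for _ in range(samples_per_decision)]
        error = sum(readings) / len(readings) - TARGET_COMPRESSION
        if error > DEADBAND:
            return "motor: stiffen"    # sole is compressing more than intended
        if error < -DEADBAND:
            return "motor: soften"     # sole is compressing less than intended
        return "motor: hold"

    for _ in range(3):                 # one decision per batch of readings
        print(adjust_cushioning())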
Data you can virtually touch
http://www.economist.com/science/tq/PrinterFriendly.cfm?Story_ID=3171347
Sep 16th 2004
Computer interfaces: Is haptic technology, which allows users to
"feel" virtual objects, finally ready to come out of the laboratory?
IN THE virtual world of a computer's imagination, you can look, but
you can't touch. Advances in computer graphics have made it possible
to create images that can fool the eye, yet they remain out of
reach,
mere phantoms trapped behind the glass of a computer monitor. With
the
right technology, however, it is possible to create a physical
illusion to match the optical one. Such "haptic" technology is
currently restricted to a few niches. But it is falling in price,
and
could be about to become rather more widespread.
Haptics is the science of simulating pressure, texture, temperature,
vibration and other touch-related sensations. The term is derived
from
a Greek word meaning "able to lay hold of". It is one of those
technologies much loved by researchers, but rarely seen in
commercial
products. In the laboratory, haptic systems are becoming
increasingly
sophisticated and capable. William Harwin, the head of a
haptics-research team at the University of Reading, believes that
such
systems are now ready for much wider use.
"Our latest project has seen a significant step towards creating the
hardware, software and control foundations for a high-fidelity,
multi-finger, haptic interface device," he says. The user's fingers
fit into rubber cups mounted on robot arms, the movement of which is
carefully constrained by a computer to give the illusion of contact
with a hard surface. It is then possible to model free-floating
three-dimensional objects that can be explored from all sides.
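A common way to create that illusion of contact, though not necessarily
the Reading group's exact method, is a spring-like penalty force: when
the tracked fingertip penetrates the virtual surface, the robot arm
pushes back in proportion to the penetration depth. A minimal sketch,
with invented stiffness and positions:

    # Textbook-style penalty force for rendering a hard virtual surface with
    # a force-feedback device: push back in proportion to penetration depth.
    STIFFNESS = 800.0       # N/m; higher values feel "harder" but can oscillate
    SURFACE_HEIGHT = 0.0    # the virtual table top sits at z = 0

    def restoring_force(finger_z):
        """Upward force (newtons) commanded to the arm holding the finger."""
        penetration = SURFACE_HEIGHT - finger_z
        if penetration <= 0:
            return 0.0                    # finger above the surface: free space
        return STIFFNESS * penetration    # below the surface: push back out

    for z in (0.01, 0.0, -0.002, -0.01):  # finger heights in metres
        print(f"finger at z={z:+.3f} m -> force {restoring_force(z):6.1f} N")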
It is even possible to mimic impossible objects. By joining two
Möbius
strips along their boundaries, you create a structure known as a
Klein
bottle. The bottle has only one surface: its inside is its outside.
This strange mathematical object is impossible to construct in real
life, yet the Reading team has made a virtual one that you can reach
out and touch.
What can this technology be used for? So far, the most mature market
is in medicine, where haptics are often used in training devices for
doctors. Surgical-simulation devices are currently the bread and
butter of many haptics companies. Immersion, a firm based in San
Jose,
makes virtual "keyhole surgery" simulators and needle-insertion
simulators that provide a realistic "pop" as the needle enters the
virtual vein. It is a far cry from the days when oranges were used
as
training devices. Dean Chang, the firm's chief of technology,
believes
that eventually all surgical training will be done this way, just as
all pilots now train using flight simulators.
Recently, haptics have also been finding their way into consumer
products. Many video-game controllers, such as force-feedback
steering
wheels and joysticks, already contain simple haptic devices to
enable
virtual rally drivers and pilots to feel the bumps of artificial
roads
or the rumble of machine guns. Mobile phones are next: Immersion has
collaborated with Samsung, the world's third-largest handset-maker,
to
produce a technology called VibeTone, which will make its first
appearance at the end of the year. Just as existing phones can be
programmed to play different ring tones depending on the caller,
VibeTone allows for different vibrations. Without reaching into your
pocket, you will be able to tell whether it is your boss, spouse, or
babysitter who is calling.
The falling cost of processing power is helping to make haptics
feasible in new areas, says Mr Chang. "Every year when computing
power
gets cheaper, you can do haptics simulations with a cheaper
microprocessor," he says. The processing power required to control a
force-feedback steering wheel, for example, once required a
desk-sized
computer, but can now be handled easily by a simple commodity
microprocessor.
That still leaves the cost of hardware. But here, too, prices are
falling, notes Curt Rawley, chief executive of SensAble
Technologies,
a company based in Woburn, Massachusetts. In the past, he says, the
technology has been expensive, hard to program, and difficult to
integrate with other software. But where the prices of haptics
devices
used to start at $30,000, some systems now cost less than $3,000.
SensAble has just launched a development toolkit that allows haptics
to be added to almost any piece of software, and costs $1,950,
including hardware. The firm hopes to stimulate demand for its
haptic
gear, which is currently used in the design and visualisation of
products from running shoes to toys.
The ultimate goal is the integration of haptics with computer
graphics, to create touchable holograms. Just such a system was
demonstrated by SensAble last month at SIGGRAPH, a computer-graphics
conference in Los Angeles. The holographic virtual-reality home
theatre is still decades away, no doubt. But the advent of haptics
in
joysticks and mobile phones is a step in the right direction.
Last gasp of the fax machine
http://www.economist.com/science/tq/PrinterFriendly.cfm?Story_ID=3171500
Sep 16th 2004
Office technology: That most exasperating piece of equipment, the
fax
machine, is on its way out. But it will take a very long time to die
WHO hasn't felt the urge to smash up the office fax with a hammer at
least once? The machines are slow, testy and prone to
breaking--usually at the worst possible moment. They became
indispensable items of office life in the 1980s and 1990s, when huge
rolls of paper curled from out-trays as lengthy documents arrived.
(More advanced machines cut the paper, but then the individual pages
ended up on the floor in random order.) Such clunkiness was
nonetheless a major advance from 150 years earlier, when Alexander
Bain, a Scottish inventor, patented the first fax--a device that
connected two styluses using a pendulum and a telegraph wire.
Thank goodness, then, that faxes are now going the way of the
typewriter and carbon paper. E-mail is mostly responsible: it is
easier, cheaper (especially for communicating abroad) and paperless.
Whereas fax machines must be checked constantly to see whether
something has come in, e-mail simply pops up on screen. Stand-alone
fax machines have been especially hard-hit, though multi-function
machines--which combine the fax machine with a copier, printer and
scanner--have also struggled. Peter Davidson, a fax consultant, says
that sales of fax machines worldwide fell from 15m in 2000 to 13m in
2001 and are still falling. He estimates that faxes now account for
just 4% of companies' phone bills, down from 13% ten years ago.
Americans especially are shedding them fast: by 2006, Mr Davidson
predicts, their spending on fax machines will be less than half what
it was in 2002.
Junk faxing has helped to keep the machines whirring. But it too is
fading as governments crack down. In January, for example, America's
telecoms regulator, the Federal Communications Commission, fined
Fax.com, a marketing company based in California, $5.4m (the
biggest such penalty ever) for mass-faxing unsolicited
advertisements
in violation of a law passed in 1991. Fax.com had defended itself on
the grounds of free speech, an argument echoed by telemarketers, who
are also under fire as people rebel against intrusive salesmanship.
As well as fining six companies in the last five years, the FCC has
issued more than 200 warnings. Stronger limits on fax marketing,
requiring anyone sending an advertising fax to have written
permission
from the recipient, are due to come into force in January 2005,
though
Congress may yet soften this to allow businesses and charities
demonstrating an "established business relationship" with customers
to
send them faxes without prior permission.
Even so, new technologies and regulations will not kill off faxes
just
yet. The machines are still helpful for communicating with people in
rural areas or poor countries where internet access is spotty. They
also transmit signatures: although electronic signatures have been
legally binding in America since 2000, hardly anyone actually uses
them. Besides, some companies are only just adopting e-mail. Abbey,
a
British bank, used to rely heavily on faxes to transmit information
between its headquarters and branches. Personal e-mail for branch
employees was only installed this year as part of a technological
overhaul.
Publishers, among the first to embrace fax machines because they
sped
up the editing process, may be the last to bid them goodbye. Stephen
Brough of Profile Books, a London publisher affiliated with The
Economist, says that faxes are still useful in transmitting orders
to
distributors, and in allowing authors to indicate changes on page
proofs easily. (Electronic editing, in which multiple versions of
the
same file swiftly proliferate, can be a nightmare.) Publishing
contracts, which involve lots of crossing-outs and additions, can
also
be edited by fax. At Lonely Planet, a travel-guide company, a
publishing assistant says she is sometimes asked to fax pages of
company stationery to other publishers as proof of identity.
The persistence of the fax has much to do with the perils of e-mail.
Because it is such a pain to operate, the fax is generally used with
discretion (a relief after e-mail overload). Faxes also allow
lawyers,
among others, to have exchanges that they can later shred, without
leaving an electronic record. The biggest gripe about document
transmission via e-mail, however, is attachments: unless you have
the
right software, they are meaningless.
"One of the most common academic experiences is the failed
attachment:
a person sends you an attachment with incomprehensible formatting of
immense length that crashes your system," says Gillian Evans, a
history professor at Cambridge University. "Then there is an
irascible
exchange of often quite stylish e-mails--at the end of which one of
the parties says, 'For goodness' sake, send me a fax!'" This is
especially true, she says, during summers, when professors are often
at home using slow, dial-up internet connections. Unless e-mail
improves drastically, in other words, the fax machine seems likely
to
retain a devoted, if shrinking, following.
And the winners are...
http://www.economist.com/science/tq/PrinterFriendly.cfm?Story_ID=3171490
Sep 16th 2004
THIS newspaper was established in 1843 to take part in "a severe
contest between intelligence, which presses forward, and an
unworthy,
timid ignorance obstructing our progress". One of the chief ways in
which intelligence presses forward is through innovation, which is
now
recognised as one of the most important contributors to economic
growth. Innovation, in turn, depends on the creative individuals who
dream up new ideas and turn them into reality.
The Economist recognises these talented people through our annual
Innovation Awards, presented in five fields: bioscience,
communications, computing, energy and a special "no boundaries"
category. This year we added a sixth award for social and economic
innovation, to acknowledge the way in which social-policy and
business-model innovations can have just as much impact as high
technology. The awards were presented at a ceremony in San Francisco
on September 14th. And the winners were:
o Bioscience: David Goeddel, chief executive of Tularik, for gene
cloning and the expression of human proteins. In 1978, Dr Goeddel
went
to work at Genentech as its staff scientist--making him the first
employee of the first biotech firm. His pioneering work in the field
of gene cloning and expression research made it possible to produce
insulin in the laboratory for the first time, and led to the first
drug produced using recombinant DNA technology. He is now chief
executive officer of Tularik, a firm he co-founded.
o Communications: Vic Hayes, former chair of the Institute of
Electrical and Electronics Engineers (IEEE) 802.11 working group,
for
the development and standardisation of Wi-Fi wireless networks.
Considered the father of Wi-Fi, Mr Hayes chaired the IEEE 802.11
committee, which was set up in 1990 to establish a wireless
networking
standard. Wi-Fi now enables wireless connectivity in millions of
homes, schools and offices, and an increasing number of hotels and
airports.
o Computing: Linus Torvalds, Open Source Development Labs fellow,
for
the development of the Linux operating system. Mr Torvalds released
the first version of the Linux kernel in 1991, when he was a
21-year-old computer-science student at the University of Helsinki,
Finland. He made the source code behind Linux freely available so
that
others could modify it to suit their needs, or contribute their own
improvements. Linux now runs on millions of devices from handhelds
to
mainframes, and has attracted wide industry support.
o Energy: Takeshi Uchiyamada, senior managing director, Toyota, for
developing the Prius hybrid car. In 1994, Mr Uchiyamada joined
Toyota's project to develop an eco-friendly car for the 21st
century.
He became chief engineer for the Prius, the world's first
mass-produced petrol-electric hybrid car, in 1996. Given a free hand
in the design, his team developed a continuously variable
transmission
system that allows the petrol engine and electric motor to work
separately or in tandem. The hybrid design improves fuel efficiency
and dramatically cuts emissions. By 2003, Prius sales had topped
150,000 units worldwide.
o No boundaries: Gerd Binnig, Heinrich Rohrer and Christoph Gerber,
researchers at IBM's Zurich Research Laboratory, for the development
of the scanning-tunnelling microscope (STM). In 1981 Dr Binnig, Dr
Rohrer and Dr Gerber developed the STM, a device that made it
possible
to image and study structures and processes on the atomic scale and
in
three dimensions (see article). The STM, which now exists in
dozens
of variants, is a vital research tool in such fields as materials
science, nanotechnology and microbiology. In 1986, Dr Binnig and Dr
Rohrer shared half of the Nobel prize in physics for their work in
developing the STM.
o Social and economic innovation: Muhammad Yunus, founder, Grameen
Bank, for the development of microcredit. Dr Yunus is the managing
director of Grameen Bank, whose 1,300 branches serve more than 3.5m
people in 46,000 villages in Bangladesh. He devised the concept of
rural microcredit, the practice of making small loans to individuals
without collateral. Typical customers are women who borrow $30 to
start a small business by, for example, buying a sewing machine.
Grameen's repayment rate is 98%. The microcredit model has been
emulated in 50 countries around the world, including America.
We extend our congratulations to the winners, and our thanks to the
judges: Denise Caruso, executive director, The Hybrid Vigor
Institute;
Martin Cooper, chairman and chief executive, ArrayComm; Shereen El
Feki, bioscience correspondent, The Economist; Rodney Ferguson,
managing director, J.P. Morgan Partners; Hugh Grant, president and
chief executive, Monsanto; François Grey, head of IT communications,
CERN; Leroy Hood, president and director, Institute for Systems
Biology; Louis Monier, director of advanced technologies, eBay;
Shuji
Nakamura, director, Centre for Solid State Lighting and Displays,
University of California, Santa Barbara; Andrew Odlyzko, professor
of
mathematics and director, Digital Technology Centre, University of
Minnesota; Tim O'Reilly, founder and chief executive, O'Reilly &
Associates; Rinaldo Rinolfi, executive vice-president, Fiat
Research;
Paul Romer, professor of economics, Graduate School of Business,
Stanford University; Paul Saffo, director, Institute for the Future;
Vijay Vaitheeswaran, global environment and energy correspondent,
The
Economist; Carl-Jochen Winter, professor of energy and engineering,
University of Stuttgart.
Televisions go flat
http://www.economist.com/science/tq/PrinterFriendly.cfm?Story_ID=3171478
RATIONAL CONSUMER
Sep 16th 2004
Consumer electronics: TVs based on bulky cathode-ray tubes are
giving
way to flat-panel models. How will the market evolve?
TELEVISIONS, it seems, can never be too wide or too thin--and
increasingly, they are wide and thin at the same time, thanks to the
growing popularity of flat-panel televisions based on plasma and
liquid-crystal display (LCD) technology. Flat-panel TVs are stylish,
do not take up much room, and do justice to the crystal-clear images
produced by DVD players, digital-cable boxes and games consoles.
Sales
of LCD TVs in particular are expected to account for an ever larger
portion of the market (see chart) as consumers embrace these new
technologies at the expense of bulky models based on old-fashioned
cathode-ray tubes (CRTs). LCD-based models are expected to account
for
18% of televisions sold in 2008, up from just 2.2% in 2003,
according
to iSuppli, a market-research firm.
LCD TVs are the latest example of a technology from the computer
industry causing a stir in consumer electronics. For years, anyone
who
wanted to buy a flat-panel television had to buy a plasma screen, a
large and expensive (a 42-inch model costs around $3,500) option.
LCD
technology, already used in flat-panel computer monitors and laptop
displays, makes possible smaller, more affordable flat-panel TVs: a
17-inch model costs around $800, for example.
The prospect of a much bigger market has prompted new entrants,
including PC-makers such as Dell and HP, and established
consumer-electronics firms, such as Motorola and Westinghouse (both
of
which stopped making TVs decades ago), to start selling televisions
alongside the established television-set manufacturers. For
PC-makers,
which already sell flat-panel monitors, diversifying into TVs is no
big leap. For consumer-electronics firms, the appeal of flat-panel
TVs
is that they offer much higher margins than conventional
televisions.
During the late-2003 holiday season, makers of flat-panel TVs, both
LCD and plasma, succeeded in creating a tremendous buzz around their
products, says Riddhi Patel, an analyst at iSuppli.
But it did not translate into sales to the extent that the
manufacturers had hoped. Although more people are now aware of
flat-panel TVs, many are still deterred by their high prices. The
expense is difficult to justify, particularly since a 30-inch LCD
television can cost up to four times as much as a comparable
CRT-based
model, with no real difference in picture quality.
Flat-panel TV-makers have now, says Ms Patel, begun to cut their
prices. For one thing, they are sitting on a lot of unsold
inventory:
the panel-makers made too many panels, the TV-makers built too many
TVs, and the retailers ordered more than they could sell.
Prices are also expected to fall as production capacity is stepped
up.
Sharp opened a new "sixth generation" LCD factory in January. In
May,
Matsushita, the Japanese firm behind the Panasonic brand, announced
that it would build the world's biggest plasma-display factory. And
in
July, Sony and Samsung announced that their joint-venture, a
"seventh-generation" LCD factory at Tangjung in South Korea, would
start operating next year. There is concern that this year's record
investment in LCD plants could lead to overcapacity next year. For
consumers, however, this is all good news: a glut will mean lower
prices.
The prospect of sharp price declines over the next few years means
the
flat-panel TV market is on the cusp of change. At the moment, LCD is
more expensive than plasma on a per-inch basis: a 30-inch LCD TV
costs
around the same as a 40-inch plasma model. The vast majority of LCD
TVs sold are currently 20 inches or smaller; larger sizes cannot yet
compete with plasma on price. So plasma has the upper hand at larger
sizes for the time being, while LCDs dominate at the low end.
For anyone looking to buy a flat-panel TV, this makes the choice
relatively simple: if you want anything smaller than a 30-inch
screen,
you have to choose LCD; and if you are thinking of buying bigger,
plasma offers better value. (Above 55 inches, TVs based on
rear-projection are proving popular, having benefited from the buzz
around flat-panel displays.)
Watch out plasma, here comes LCD
As the new LCD plants start running, however, LCD TVs will
increasingly be able to compete with plasma at sizes as large as 45
inches. The new seventh-generation LCD plants will crank out screens
on glass sheets measuring 1.9 by 2.2 metres, big enough for twelve
32-inch or eight 40-inch panels. LCD could thus push plasma
upmarket,
unless makers of plasma TVs drop their prices too.
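A back-of-the-envelope check of those panel counts, assuming 16:9
screens and ignoring the cutting margins a real factory would need:

    import math

    SHEET = (2.2, 1.9)                       # glass sheet, metres
    ASPECT = (16, 9)

    def panel_size(diagonal_inches):
        d = diagonal_inches * 0.0254         # diagonal in metres
        k = d / math.hypot(*ASPECT)
        return ASPECT[0] * k, ASPECT[1] * k  # width, height in metres

    def panels_per_sheet(diagonal_inches):
        w, h = panel_size(diagonal_inches)
        upright = int(SHEET[0] // w) * int(SHEET[1] // h)
        rotated = int(SHEET[0] // h) * int(SHEET[1] // w)
        return max(upright, rotated)

    for size in (32, 40):
        print(size, "inch:", panels_per_sheet(size), "panels per sheet")
    # prints 12 panels for the 32-inch size and 8 for the 40-inch, matching
    # the article (the 40-inch panels have to be rotated on the sheet)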
The result is expected to be a fierce battle around the 42-inch
mark.
This may prompt buyers to look more closely at the relative merits
of
the two technologies, each of which has its pros and cons. Plasma
offers higher contrast, which means deeper blacks. But although the
longevity of plasma panels has improved in recent years, from 10,000
hours to 30,000 hours, LCD panels have a lifetime of 60,000 hours.
LCD
TVs also have the advantage that they can be used as computer
monitors. But their response is slower than that of plasma, so they
are less
suitable for watching sports.
With prices about to tumble, when is the right time to buy? There
will
be some good deals around in the next few months, says Ms Patel, as
manufacturers start hyping their products again during the holiday
season. Some prices will have fallen by as much as 40% compared with
the same time last year. That may prompt many more people to take
the
plunge.
You're hired
http://www.economist.com/science/tq/PrinterFriendly.cfm?Story_ID=3171466
REPORTS
Sep 16th 2004
Computing: By unloading work on to their customers, firms can grant
them more control--and save money in the process
MEET your airline's latest employee: you. You may not have noticed,
but you are also now working for your phone company and your bank.
Why? Because of the growth of the self-service economy, in which
companies are offloading work on to their own customers. It is, you
could say, the ultimate in outsourcing. Self-service can have
benefits
both for companies and customers alike. It is already changing
business practices in many industries, and seems likely to become
even
more widespread in future.
The idea is not new, of course. Self-service has been around for
decades, ever since Clarence Saunders, an American entrepreneur,
opened the first Piggly Wiggly supermarket in 1916 in Memphis,
Tennessee. Saunders's idea was simple, but revolutionary: shoppers
would enter the store, help themselves to whatever they needed and
then carry their purchases to the check-out counter to pay for them.
Previously, store clerks had been responsible for picking items off
the shelves; but with the advent of the supermarket, the shoppers
instead took on that job themselves.
On the heels of supermarkets came laundromats, cafeterias and
self-service car washes, all of which were variations on the same
theme. But now, with the rise of the web, the falling cost of
computing power, and the proliferation of computerised kiosks, voice
recognition and mobile phones, companies are taking self-service to
new levels. Millions of people now manage their finances, refinance
their home loans, track packages and buy cinema and theatre tickets
while sitting in front of their computers. Some install their own
broadband connections using boxes and instructions sent through the
post; others switch mobile-phone pricing plans to get better deals.
They plan their own travel itineraries and make their own hotel and
airline bookings: later, at the airport, they may even check
themselves in. And they do all of this with mouse in hand and no
human
employees in sight.
Self-service appeals to companies for an obvious reason: it saves
money. The hallmark of all of these self-service transactions is
that
they take place with little or no human contact. The customer does
the
work once done by an employee, and does not expect to be paid. So to
work well, self-service requires the marriage of customers with
machines and software. That union, says Esteban Kolsky of Gartner, a
consultancy, is now doing for the service sector what mass
production
once did for manufacturing: automating processes and significantly
cutting costs.
"From the corporate side you hear, `Well, we want to make sure the
customer gets what he wants,' or whatever, but, bottom line, it does
reduce costs," says Mr Kolsky. Francie Mendelsohn of Summit
Research,
a consultancy based in Rockville, Maryland, agrees. "People don't
like
to admit it, but self-service is used to reduce head count and
therefore improve the bottom line," she says. "It's not politically
correct, but it's the truth."
Netonomy, a firm that provides self-service software to telecoms
operators, reckons online self-service can cut the cost of a
transaction to as little as $0.10, compared with around $7 to handle
the same transaction at a call centre. As operators offer new
services, from gaming to music downloads, the logical way to manage
their customers' demands, says John Ball, Netonomy's co-founder, is
to
let customers do it themselves. There can be advantages for
customers,
too: convenience, speed and control, says Mr Kolsky. "Rather than
wonder if we're going to get good service, we'd much rather go to a
website or a kiosk or an ATM and just do it on our own," he says.
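The arithmetic behind that claim is stark. Taking Netonomy's figures at
face value, and an invented round number of transactions purely for
illustration:

    # Savings implied by Netonomy's figures: roughly $7 per call-centre
    # transaction against about $0.10 online.
    CALL_CENTRE, ONLINE = 7.00, 0.10
    transactions = 1_000_000
    saving = transactions * (CALL_CENTRE - ONLINE)
    print(f"${saving:,.0f} saved per {transactions:,} transactions moved online")
    # -> $6,900,000 saved per 1,000,000 transactions moved online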
A win-win situation, then, in which companies reduce their costs and
customers gain more control? Not necessarily. If companies extend
self-service too far, or do it in the wrong way, they could alienate
their customers. In particular, consumers will embrace self-service
only if the systems are well designed and easy to use. Shopping
online, for example, with round-the-clock access and no crowds,
traffic or pesky salespeople, lends itself to self-service. But when
customers want a question answered or a problem with a transaction
resolved, automated systems often fail them--and may deter them from
doing business with that company again.
If companies are going to jump on the self-service bandwagon, says
Mr
Kolsky, they had better be prepared to do it right. "They have to
look
at self-service strategically, not just as a cost-cutter," he says.
Most airlines, for example, are simply using online self-service to
cut costs, rather than to cater to their customers' needs, he
suggests. Booking a complex itinerary online is often difficult or
impossible. And, says Mr Kolsky, "you can book a ticket on the web,
but how many times have you tried to cancel a ticket online?"
Help yourself
Airlines are having more success with another form of self-service:
kiosks. Automated teller machines (ATMs) and self-service petrol
pumps
have been around for years, but other kinds of kiosk now seem to be
proliferating like rabbits. Most airports and large railway stations
in America, Europe and Japan are lined with touch-screen machines
that
will sell you a ticket or spit out a boarding pass in far less time
than it takes to queue up and deal with a human being. According to
this year's Airline IT Trends Survey, 33% of airlines expect that by
the end of the year, more than half of their domestic customers will
buy their tickets from kiosks.
Kiosks are also showing up in cinemas, shops and car-rental centres,
and moving into hotels, amusement parks and malls, allowing
customers
to buy what they want with the swipe of a credit card and then
quickly
move on. According to Ms Mendelsohn, the number of retail kiosks
worldwide will grow by 63% over the next three years, to 750,000.
This is partly because a new generation of customers is more
comfortable with using computers, keyboards and screens, whether at
home or in the mall. The technology has improved in recent years,
too.
"Kiosks have been around for decades, but the technology wasn't
always
up to the job and people were far more fearful of using them," says
Ms
Mendelsohn. But the main reason for kiosks' growing popularity, she
says, is that they let users jump the queue.
Kiosks are even proliferating at the birthplace of self-service
itself. Some retailers are experimenting with automated check-out
counters that allow shoppers to scan their own groceries. The most
sophisticated systems actually "talk" to customers, telling them
what
each item costs as it is scanned and walking them through the
process
step-by-step. Less fancy kiosks simply let shoppers scan purchases,
pay and move on. Either way, the customer is doing all the work. But
shoppers do not seem to mind. "People tell me, `This is faster. This
is fun'," says Ms Mendelsohn. "Actually, it is not faster, but when
was the last time you applied the word `fun' to shopping in a
supermarket?"
In a study commissioned by NCR (a maker of ATMs and other kiosks),
IDC, a market-research firm, found that nearly 70% of customers in
five different countries said they were willing to use
self-check-out.
In America, the figure was 78%. That would suit the supermarket
chains
just fine, since a kiosk can handle the workload of two-and-a-half
employees at a fraction of the cost.
Photo kiosks, which can make prints from digital-camera memory
cards,
are now popping up in many shops. After that, kiosks could start to
colonise fast-food restaurants. McDonald's is trying out several
systems with varying degrees of success. And Subway, a sandwich
chain,
is installing kiosks to free employees who make sandwiches from the
job of having to take orders and handle payments (though it has, so
far, stopped short of simply asking customers to make their own
sandwiches). Despite their growing popularity, however, kiosks have
not been universally embraced. Some, it seems, talk too much. "They
get vandalised," says Ms Mendelsohn--not by customers, but by people
who work in the vicinity, and who cannot stand to listen to their
incessant babbling.
Self-service need not involve websites or kiosks. It can also be
delivered over the phone. The latest systems do away with endlessly
branching touch-tone menus in favour of interactive voice-response
(IVR) technology, which supposedly allows customers to talk directly
to machines. IVR systems greet callers with a recorded human voice
and
then use voice-recognition software to engage in something like a
human conversation.
The talking cure?
In 2001, America's perennially cash-strapped rail system, Amtrak,
introduced a perky IVR system called "Julie" (after the human owner
of
the service's voice), created by SpeechWorks, a software firm based
in
Boston. Julie greets callers in a lively but businesslike manner,
and
then, very informally, explains how the system works. The same old
branching system is there, but since callers are answering "Yes" or
"No" or providing other simple one-word answers to Julie's
questions,
it does not feel quite as tedious.
If you say "reservation", for example, Julie walks you through the
process, asking for your starting point and destination, and filling
you in on schedules and costs. By keeping the "conversation" simple,
the software reduces misunderstandings and moves the process along
pretty smoothly. If you get stuck, you can still reach a human
simply
by asking for one. (Julie tells you how to do that, too.)
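The shape of such a dialogue is easy to sketch in code. The Python
fragment below is a purely hypothetical illustration of a branching
voice menu that accepts one-word answers and hands the call to a human
on request; it is not SpeechWorks' or Amtrak's actual software, and a
real system would use speech recognition rather than typed input.

class EscalateToHuman(Exception):
    """Raised when the caller asks for a human agent."""
    pass

def ask(prompt):
    # A real IVR system would play a recorded prompt and run speech
    # recognition; in this sketch the caller simply types an answer.
    answer = input(prompt + " ").strip().lower()
    if answer in ("agent", "operator", "human"):
        raise EscalateToHuman()
    return answer

def reservation_dialogue():
    origin = ask("What city are you leaving from?")
    destination = ask("And where are you travelling to?")
    day = ask("What day would you like to travel?")
    print(f"Listing trains from {origin} to {destination} on {day}...")
    if ask("Shall I hold a seat on the first one? (yes/no)") == "yes":
        print("Done. Your reservation is held.")

try:
    choice = ask("You can say 'reservations', 'schedules' or 'fares'.")
    if choice == "reservations":
        reservation_dialogue()
    else:
        print("This sketch only handles reservations.")
except EscalateToHuman:
    print("Transferring you to an agent...")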
Amtrak says the system now handles a third of the rail system's
bookings, and surveys show 80% of callers are happy with the
service.
In its first two years of operation, Julie saved Amtrak $13m. Last
year Julie was given the ability to handle credit-card transactions
directly, without passing the call on to a human agent, which should
lead to further savings.
Phone companies, brokerage firms, utility companies and insurance
firms are all now replacing old touch-tone systems with IVR. In
Britain, the Royal Mail installed an IVR system in 2003 that
combines
technologies from two software companies, Aspect and Nuance. Last
year
it handled 1m customer inquiries, reducing customer-service costs by
25%.
While a carefully designed IVR system can work well, a recent study
by
Forrester, a consultancy, suggests that not all of the kinks have
been
entirely worked out. The firm surveyed 110 large companies and found
that IVR systems met the needs of their customers a paltry 18% of
the
time, less than any other form of customer contact. "Clearly," says
Navi Radjou of Forrester, "usability needs to be improved."
And that seems to be the ultimate self-service challenge. Machines
are
fast, reliable workers with prodigious memories. But they are more
inflexible than even the rudest salesperson. "As customers realise
they can't get everything they need, they give up and then you have
dissatisfied customers coming through other channels," says Mr
Kolsky.
But when done correctly, self-service systems have proved that they
can both save money and make customers happy. This suggests that
they
could indeed transform the service economy in much the same way that
mass production transformed manufacturing, by allowing services to
be
delivered at low cost in large volumes. Though it may take five
years
before most transactions are conducted via self-service, says Mr
Kolsky, "we're definitely moving in that direction." In other words,
you never know who you might be working for next.
How Google works
http://www.economist.com/science/tq/PrinterFriendly.cfm?Story_ID=3171440
CASE HISTORY
Sep 16th 2004
Internet searching: With all the fuss over Google's IPO, it is easy
to
overlook its broader social significance. For many people, Google
made
the internet truly useful. How did it do it?
ONE thing that distinguishes the online world from the real one is
that it is very easy to find things. To find a copy of The Economist
in print, one has to go to a news-stand, which may or may not carry
it. Finding it online, though, is a different proposition. Just go
to
Google, type in "economist" and you will be instantly directed to
economist.com. Though it is difficult to remember now, this was not
always the case. Indeed, until Google, now the world's most popular
search engine, came on to the scene in September 1998, it was not
the
case at all. As in the physical world, searching online was a
hit-or-miss affair.
Google was vastly better than anything that had come before: so much
better, in fact, that it changed the way many people use the web.
Almost overnight, it made the web far more useful, particularly for
non-specialist users, many of whom now regard Google as the
internet's
front door. The recent fuss over Google's stockmarket flotation
obscures its far wider social significance: few technologies, after
all, are so influential that their names become used as verbs.
Google began in 1998 as an academic research project by Sergey Brin
and Lawrence Page, who were then graduate students at Stanford
University in Palo Alto, California. It was not the first search
engine, of course. Existing search engines were able to scan or
"crawl" a large portion of the web, build an index, and then find
pages that matched particular words. But they were less good at
presenting those pages, which might number in the hundreds of
thousands, in a useful way.
Mr Brin's and Mr Page's accomplishment was to devise a way to sort
the
results by determining which pages were likely to be most relevant.
They did so using a mathematical recipe, or algorithm, called
PageRank. This algorithm is at the heart of Google's success,
distinguishing it from all previous search engines and accounting
for
its apparently magical ability to find the most useful web pages.
Untangling the web
PageRank works by analysing the structure of the web itself. Each of
its billions of pages can link to other pages, and can also, in
turn,
be linked to. Mr Brin and Mr Page reasoned that if a page was linked
to many other pages, it was likely to be important. Furthermore, if
the pages that linked to a page were important, then that page was
even more likely to be important. There is, of course, an inherent
circularity to this formula--the importance of one page depends on
the
importance of pages that link to it, the importance of which depends
in turn on the importance of pages that link to them. But using some
mathematical tricks, this circularity can be resolved, and each page
can be given a score that reflects its importance.
The simplest way to calculate the score for each page is to perform a
repeating or "iterative" calculation (see "How PageRank Works" below).
To start with,
all pages are given the same score. Then each link from one page to
another is counted as a "vote" for the destination page. Each page's
score is recalculated by adding up the contribution from each
incoming
link, which is simply the score of the linking page divided by the
number of outgoing links on that page. (Each page's score is thus
shared out among the pages it links to.)
Once all the scores have been recalculated, the process is repeated
using the new scores, until the scores settle down and stop changing
(in mathematical jargon, the calculation "converges"). The final
scores can then be used to rank search results: pages that match a
particular set of search terms are displayed in order of descending
score, so that the page deemed most important appears at the top of
the list.
While this is the simplest way to perform the PageRank calculation,
however, it is not the fastest. Google actually uses sophisticated
techniques from a branch of mathematics known as linear algebra to
perform the calculation in a single step. (And the actual PageRank
formula, still visible on a Stanford web page, includes an extra
"damping factor" to prevent pages' scores increasing indefinitely.)
Furthermore, the PageRank algorithm has been repeatedly modified
from
its original form to prevent people from gaming the system. Since
Google's debut in 1998, the importance of a page's Google ranking,
particularly for businesses that rely on search engines to send
customers their way, has increased dramatically: Google is now
responsible for one in three searches on the web. For this reason,
an
entire industry of "search-engine optimisers" has sprung up. For a
fee, they will try to manipulate your page's ranking on Google and
other search engines.
The original PageRank algorithm could be manipulated in a fairly
straightforward fashion, by creating a "link farm" of web pages that
link to one another and to a target page, and thus give an inflated
impression of its importance. So Google's original ranking algorithm
has grown considerably more complicated, and is now able to identify
and blacklist pages that try to exploit such tricks.
Mr Page and Mr Brin made another important innovation early on. This
was to consider the "anchor text"--the bit of text that is
traditionally blue and underlined and forms a link from one page to
another--as a part of the web page it referred to, as well as part
of
the page it was actually on. They reasoned that the anchor text
served
as an extremely succinct, if imprecise, summary of the page it
referred to. This further helps to ensure that when searching for
the
name of a person or company, the appropriate website appears at the
top of the list of results.
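One way to picture this is an index that credits the words of a link's
anchor text to the page the link points to, as well as to the page it
sits on. The Python sketch below is a hypothetical illustration of that
idea, not Google's indexing code.

from collections import defaultdict

def build_index(pages, links):
    # pages: {url: body text}; links: list of (source, anchor text, target)
    index = defaultdict(set)                       # word -> set of urls
    for url, body in pages.items():
        for word in body.lower().split():
            index[word].add(url)
    for source, anchor, target in links:
        for word in anchor.lower().split():
            index[word].add(source)                # the page the link is on...
            index[word].add(target)                # ...and the page it points to
    return index

pages = {"home.example.com": "welcome to our home page"}
links = [("home.example.com", "Acme Widgets", "acme.example.com")]
index = build_index(pages, links)
print(index["widgets"])   # includes acme.example.com, where the word
                          # appears only in anchor text pointing at it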
Ranking the order in which results are returned was the area in
which
Google made the most improvement, but it is only one element of
search--and it is useless unless the rest of the search engine works
efficiently. In practice, that means compiling a comprehensive and
up-to-date index of the web's ever-changing pages. PageRank sits on
top of Google's extremely powerful and efficient search
infrastructure--one that draws on the lessons learned from previous,
and now mostly forgotten, search engines.
As the web grew in the early 1990s, a number of search engines, most
of them academic research projects, started crawling and indexing
its
pages. The first of these, the World Wide Web Wanderer and the World
Wide Web Worm, used very simple techniques, and did not even index
entire web pages, but only their titles, addresses and headers. A
number of commercial engines followed, springing out of academic
projects (as Google later did). WebCrawler, the first to index
entire
pages, emerged in 1994 at the University of Washington and was later
bought by America Online. It was followed by Lycos and InfoSeek. But
the first really capable search engine was AltaVista, unveiled by
Louis Monier of Digital Equipment Corporation in December of 1995.
The day before the site opened for business, on December 15th, it
already had 200,000 visitors trying to use it. That was because
AltaVista successfully met two of the three requirements that later
led to Google's success. First, it indexed a much larger portion of
the web than anything that had come before. This, says Dr Monier,
was
because AltaVista used several hundred "spiders" in parallel to
index
the web, where earlier search engines had used only one. Second,
AltaVista was fast, delivering results from its huge index almost
instantly. According to Dr Monier, all earlier search engines had
been
overwhelmed as soon as they became popular. But the AltaVista team
had
used a modular design right from the start, which enabled them to
add
computing power as the site's popularity increased. Among some
geeks,
at least, AltaVista came into use as a verb.
Seek, and Google shall find
Even so, AltaVista still lacked Google's uncanny ability to separate
the wheat from the chaff. Experienced users could use its various
query options (borrowed from the world of database programming) to
find what they were looking for, but most users could not. Although
AltaVista's unprecedented reach and speed made it an important step
forward, Google's combination of reach, speed and PageRank added up
to
a giant leap.
When you perform a Google search, you are not actually searching the
web, but rather an index of the copy of the web stored on Google's
servers. (Google is thought to have several complete copies of the
web
distributed across servers in California and Virginia.) The index is
compiled from all the pages that have been returned by a multitude
of
spiders that crawl the web, gathering pages, extracting all the
links
from each page, putting them in a list, sorting the links in the
list
in order of priority (thus balancing breadth and depth) and then
gathering the next page from the list.
When a user types in a query, the search terms are looked up in the
index (using a variety of techniques to distribute the work across
tens of thousands of computers) and the results are then returned
from
a separate set of document servers (which provide preview "snippets"
of matching pages from Google's copies of the web), along with
advertisements, which are returned from yet another set of servers.
All of these bits are assembled, with the help of PageRank, into the
page of search results. Google manages to do this cheaply, in less
than a second, using computers built from cheap, off-the-shelf
components and linked together in a reliable and speedy way using
Google's own clever software. Together, its thousands of machines
form
an enormous supercomputer, optimised to do one thing--find, sort and
extract web-based information--extremely well.
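In outline, then, answering a query means looking the search terms up
in index shards spread across many machines, merging the matching
pages, ordering them by score and fetching snippets from the document
servers. The sketch below is a toy, single-process Python illustration
of that division of labour, not Google's actual architecture.

def search(query, index_shards, doc_store, scores, snippet_len=80):
    # index_shards: list of {word: set of urls}, one per (imaginary) machine
    # doc_store: {url: stored copy of the page}; scores: {url: importance score}
    terms = query.lower().split()
    matches = set()
    for shard in index_shards:                     # in reality, thousands of machines
        per_term = [shard.get(t, set()) for t in terms]
        if per_term and all(per_term):             # every term found in this shard
            matches |= set.intersection(*per_term)
    ranked = sorted(matches, key=lambda url: scores.get(url, 0.0), reverse=True)
    # Return a crude "snippet" of each matching page from the stored copy.
    return [(url, doc_store[url][:snippet_len]) for url in ranked]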
Mr Page and Mr Brin created the prototype of Google on Stanford's
computer systems. However, as visionaries do, they thought ahead
clearly, and from the beginning had sound ideas both for searching
and
for creating the system of servers capable of handling the millions
of
queries a day that now pass through Google. It was the clarity of
their ideas for scaling the server architecture, and their ability
to
think big, that made it so easy for them to turn their research
project into a business. Andy Bechtolsheim, one of the founders of
Sun
Microsystems and an early investor in Google, did not even wait to
hear all the details: when Mr Page and Mr Brin approached him, he
reputedly said, "Why don't I just write you a cheque for $100,000?"
He
wrote the cheque to "Google Inc."--a firm which did not yet exist.
So
Mr Page and Mr Brin were forced to incorporate a business very
quickly, and the company was born.
What was still missing, though it was unfashionable to worry about
it
in the early days of the dotcom boom, was a way of making money.
Initially, Google sold targeted banner advertisements and also made
money by providing search services to other websites, including
Yahoo!
and a number of other, smaller portals. But, says John Battelle, a
professor at the University of California, Berkeley, who is writing
a
book about search engines, Google's revenues did not really take off
until 2000, when it launched AdWords--a system for automatically
selling and displaying advertisements alongside search results.
Advertisers bid for particular search terms, and those who bid the
highest for a particular term--"digital cameras", say--have their
text
advertisements displayed next to Google's search results when a user
searches for that term. Google does not simply put the highest
bidder's advertisement at the top of the list, however. It also
ranks
the advertisements according to their popularity, so that if more
people click on an advertisement halfway down the list, it will be
moved up, even if other advertisers are paying more. Google's
philosophy of ranking results according to their usefulness is thus
applied to advertisements too.
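A minimal way to express that rule is to order advertisements by bid
multiplied by click-through rate, so that expected revenue per
impression, rather than the bid alone, decides the order. The snippet
below illustrates the principle only; the real AdWords auction is
considerably more elaborate.

def rank_ads(ads):
    # ads: list of dicts with 'text', 'bid' (price per click), 'clicks'
    # and 'impressions'. Order by expected revenue per impression.
    def expected_revenue(ad):
        ctr = ad["clicks"] / ad["impressions"] if ad["impressions"] else 0.0
        return ad["bid"] * ctr
    return sorted(ads, key=expected_revenue, reverse=True)

ads = [
    {"text": "Cheap digital cameras", "bid": 1.00, "clicks": 5,  "impressions": 1000},
    {"text": "Best digital cameras",  "bid": 0.60, "clicks": 30, "impressions": 1000},
]
for ad in rank_ads(ads):
    print(ad["text"])   # the lower bid with more clicks ranks first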
The only fly in the ointment, from Google's point of view, was that
Overture, a rival firm, claimed to have patented the idea for
AdWords-style sponsored links. Overture filed a lawsuit against
Google
in 2002: it was settled out of court last month when Google agreed
to
give Yahoo! (which acquired Overture last year) 2.7m shares, worth
around $230m, to resolve the matter. Google was eager to settle the
AdWords dispute before its initial public offering, which took place
on August 19th.
Google now faces a three-way fight with Yahoo! and Microsoft, which
have both vowed to dethrone it as the dominant internet search
engine.
Yahoo!'s strategy is to interconnect its various online services,
from
search to dating to maps, in increasingly clever ways, while
Microsoft's plan is to integrate desktop and internet searching in a
seamless manner, so that search facilities will be embedded in all
its
software, thus doing away (the company hopes) with the need to use
Google. Both firms are also working to improve their basic search
technology in order to compete with Google.
Beyond searching?
In response, Google has gradually diversified itself, adding
specialist discussion groups, news and shopping-related search
services, and a free e-mail service, Gmail, which is currently being
tested by thousands of volunteers. It has also developed "toolbar"
software that can be permanently installed on a PC, allowing web
searches to be performed without having to visit the Google website,
and establishing a toe-hold on its users' PCs.
Google's technical credentials are not in doubt. The question is
whether it can maintain its position, as search, the activity where
it
is strongest, moves from centre stage to being just part of a bundle
of services. Yet the example of Gmail shows how search can form the
foundation of other services: rather than sorting mail into separate
folders, Gmail users can simply use Google's lightning-fast search
facility to find a specific message. So the technology that made
Google great could yet prove to be its greatest asset in the fight
ahead. Let battle commence.
How PageRank Works
http://www.economist.com/science/tq/PrinterFriendly.cfm?Story_ID=3172188
CASE HISTORY
Sep 16th 2004
1. Google's PageRank algorithm is a mathematical recipe that uses
the
structure of the links between web pages to assign a score to each
page that reflects its importance. In effect, each link from one
page
to another is counted as a "vote" for the destination page, and each
page's score depends on the scores of the pages that link to it. But
those pages' scores, in turn, depend on the scores of the pages that
link to them, and so on. As a result, calculating the scores is a
complicated business.
2. Initially, all pages are given the same score (in this case, 100
points).
3. Each page's score is recalculated by adding up the score from
each
incoming link, which is simply the score of the linking page divided
by the number of outgoing links. The "About us" page, for example,
has
one incoming link, from the "Home" page. The "Home" page has two
outgoing links, so its score of 100 is shared equally between them.
The "About us" page therefore ends up with a score of 50. Similarly,
the "Our products" page has three incoming links. Each comes from a
page with two outgoing links, and therefore contributes 50 to the
"Our
products" page's total score of 150.
4. Once all the scores have been recalculated, the process is
repeated
using the new scores, until the scores stop changing. In fact,
Google
uses sophisticated mathematical techniques to speed up the
calculation, rather than performing multiple calculations across the
entire web.
5. The final scores are used to rank the results of a search, which
are displayed in order of descending score. The "Home" page ends up
with the highest score, so that searching for "Widgets.com", which
appears on every page, produces a list with the "Home" page at the
top. Similarly, searching for "Product A" or "Product B" produces a
list with the "Our products" page at the top, since this page has a
higher score than either of the individual product pages.
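The numbers in steps 2-5 can be reproduced with a short calculation.
The Python snippet below assumes a five-page link structure consistent
with the description (Home linking to "About us" and "Our products";
"About us" linking back to Home; "Our products" linking to Home and to
the two product pages; each product page linking to Home and "Our
products"); the actual diagram may differ, so this is an illustrative
reconstruction only.

links = {
    "Home":         ["About us", "Our products"],
    "About us":     ["Home"],
    "Our products": ["Home", "Product A", "Product B"],
    "Product A":    ["Home", "Our products"],
    "Product B":    ["Home", "Our products"],
}

score = {page: 100.0 for page in links}            # step 2: everyone starts at 100

for _ in range(100):                               # step 4: repeat until stable
    new = {page: sum(score[src] / len(out)         # step 3: each link is a "vote"
                     for src, out in links.items() if page in out)
           for page in links}
    converged = all(abs(new[p] - score[p]) < 0.01 for p in links)
    score = new
    if converged:
        break

# After the first round "About us" scores 50 and "Our products" 150, as
# in step 3; once the scores settle, "Home" comes out on top, followed
# by "Our products" (step 5).
for page, s in sorted(score.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {s:.0f}")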
Touching the atom
http://www.economist.com/science/tq/PrinterFriendly.cfm?Story_ID=3171516
BRAIN SCAN
Sep 16th 2004
Scientists' ability to see individual atoms, and manipulate matter
one
atom at a time, is due in large part to Gerd Binnig, co-inventor of
the scanning-tunnelling microscope
THE mad scientist, with his white coat and frizzy hair, is a
familiar
figure from countless Hollywood movies. Gerd Binnig, by contrast, is
a
much rarer beast: a gleeful scientist. He is renowned for
co-inventing
the scanning-tunnelling microscope (STM), a device that allows
researchers to examine and manipulate matter at the atomic scale.
This
invention, made by Dr Binnig in 1981 with his colleagues Heinrich
Rohrer and Christoph Gerber, laid the groundwork for nanotechnology,
enabled new methods of semiconductor production and generally
broadened the understanding of the nature of matter. Yet to Dr
Binnig,
it was just an opportunity to play around in the laboratory. Indeed,
visit the site of his seminal work, IBM's Zurich Research
Laboratory,
and you will find a cartoon posted on the office wall. It depicts a
smiling Dr Binnig, surrounded by equipment and holding up a hand
clad
in an iron glove, with the caption: "Now I can really feel the
atoms!"
The characteristic playfulness evident in this cartoon is a hallmark
of Dr Binnig's career and interests. His life's work has taken him
down some unusual paths, but he has had a lot of fun along the way.
The ideas that led to the STM, and thence to the 1986 Nobel prize in
Physics, came mere months into Dr Binnig's work at IBM, which he
joined under Dr Rohrer in 1978. Good-natured cartoons are usually a
sign of being held in fond esteem, and Dr Binnig is indeed well
liked,
not least because he is quick to deflect individual praise and cite
the work and help of others. It was Dr Rohrer who set him the task
of
building a device that could detect tiny defects in thin films of
material--a problem that IBM was trying to overcome in order to
build
faster computers. But while Dr Binnig collaborated closely with
other
physicists at IBM to conceive and build the STM, the key ideas were
his, says Dr Gerber.
In particular, Dr Binnig solved a couple of significant problems
that
had plagued previous efforts to see atoms. He devised a cunningly
simple mechanical approach that involves scanning the fine tip of
the
microscope's probe--just a ten-billionth of a metre wide, or about
the
width of an atom--across the surface being studied. Even though the
tip does not touch the surface, a quantum-mechanical effect called
"tunnelling" causes an electric current to pass between them. By
measuring this current and adjusting the tip's position as it
travels
across the surface, the distance between the tip and the surface can
be kept constant. The record of the tip's movements can then be
turned
into an image of the surface's contours--an image so detailed that
the
lumps and bumps of individual atoms are visible. Dr Binnig's second
innovation was the clever method he devised to keep the probe
stable,
using the ingenious combination of a vacuum chamber, superconducting
levitation and Scotch tape.
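The feedback principle at the heart of the STM can be captured in a few
lines of code. The simulation below is an idealised sketch, not
instrument-control software: the tunnelling current is modelled as
falling off exponentially with the tip-surface gap, and a simple
feedback loop raises or lowers the tip at each scan position to hold
the current at a set-point. The record of tip heights then traces the
surface's contours.

import math

KAPPA = 10.0       # decay constant in 1/nm (typical order of magnitude)
SETPOINT = 1.0     # target tunnelling current, arbitrary units
GAIN = 0.02        # feedback gain, nm of tip movement per unit of current error

def tunnelling_current(gap_nm):
    # The current decays exponentially as the tip-surface gap widens.
    return 100.0 * math.exp(-2 * KAPPA * gap_nm)

def scan(surface_heights, tip_height=1.0, settle_steps=200):
    # surface_heights: height of the surface (in nm) at each scan position.
    image = []
    for z_surface in surface_heights:
        for _ in range(settle_steps):              # let the feedback loop settle
            current = tunnelling_current(tip_height - z_surface)
            tip_height += GAIN * (current - SETPOINT)   # too much current: lift the tip
        image.append(tip_height)                   # the recorded contour
    return image

surface = [0.0, 0.0, 0.1, 0.0, 0.0]                # one atomic-scale bump, 0.1nm high
print(scan(surface))                               # the tip heights mirror the bump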
To be sure, the STM was not the beginning of atomic-scale
microscopy.
Ernst Ruska, who would share the Nobel with Dr Binnig and Dr Rohrer,
invented the first electron microscope in 1931, while still a
graduate
student at the Technical College of Berlin. Erwin Mueller of
Pennsylvania State University, who invented the field-ion microscope
in 1951, became the first person to "see" atoms. But Dr Mueller was
secretive and his results were difficult to reproduce. In the late
1960s one of his students, Russell Young, working at America's
National Bureau of Standards, developed a device called the
Topografiner, which has been called a precursor to the STM. And in
the
1970s, Albert Crewe of the University of Chicago built a "scanning
transmission electron microscope" and used it to create an image of
a
single uranium atom.
But compared with the STM, previous atomic-resolution microscopes
were
difficult to use and had particular problems with surface atoms,
which
tended to interact with the tip of the probe, thus distorting the
results. Dr Gerber says only a "genius" like Dr Binnig could have
had
the idea for the STM: he puts his colleague on a par with Einstein,
Schrödinger or Feynman, with a talent for experimentation that
matches
those great scientists' ability as theorists. The STM was also
significant because other scientists were able to build their own
relatively easily, recalls Don Eigler, now also a research fellow at
IBM but a graduate student at the time of the STM's invention. It is
also quite an inexpensive device: atomic resolution can be achieved
for just $20,000.
Dr Binnig's talent almost went unrealised. Born in 1947, he studied
physics, but found the conventional approach to teaching the subject
tedious. It was dry, textbook stuff, revealing none of the
mis-steps,
mystery and mess that constitute the drama and delight of scientific
discovery. And it was only after serious reflection, anxious
discussions with his wife and some time on the excellent football
field near the Zurich laboratory that Dr Binnig finally decided to
go
to work for IBM. But once there, he thrived in its free-wheeling
research environment.
That Dr Binnig's design made sense, let alone that it would become
so
successful, was far from obvious: some people said it could not
possibly work. Indeed, even Dr Binnig's fellow physicists at IBM
were
initially dubious. "People thought I was crazy," Dr Binnig recalls.
There were whispers that Dr Rohrer might have hired the wrong man.
Initially, Dr Binnig had to steal time to work on the tunnelling
idea
from his other research work. But he managed to convince both Dr
Rohrer, who gave the project his support, and Dr Gerber, who
immediately saw that Dr Binnig's counterintuitive approach had the
potential to generate extremely high-resolution images.
Some might argue that the STM is a mere tool, and is therefore less
significant than a theoretical breakthrough in advancing the
understanding of nature. But George Whitesides, a distinguished
chemist and nanotechnologist at Harvard University, asserts that
tools
can have enormous influence in shaping scientific progress. Such is
the importance of the STM, he suggests, that in the past 50 years
only
recombinant DNA has had a similar impact. He estimates that some 50
variations on the basic idea of the STM have been devised in various
scientific and industrial fields.
Both the STM and its successor, the atomic-force microscope (AFM),
have become essential laboratory workhorses for researchers in
fields
such as lithography, nanotechnology, polymer science and
microbiology.
Even so, Steve Jurvetson, a Silicon Valley venture capitalist who
specialises in nanotechnology, thinks the STM's greatest impact has
been as a motivational tool: it has, he says, spurred an entire
generation of scientists to think about controlling matter at the
atomic scale. Just as the discovery of the structure of DNA
transformed biology into an information science, he suggests, the
ability to manipulate individual atoms could do the same for
physics.
Making the breakthrough of the STM in 1981, at the age of 34, was
the
start of a difficult time for Dr Binnig. He knew, he says, that he
"would never experience anything like it again". In his Nobel
lecture
he confessed to a period of disillusionment, during which he
wondered
what he could possibly do next. Dr Rohrer helped him through this
difficult time by encouraging him to produce a stream of papers.
Together they published the so-called "7x7" paper, which used the
STM
to reveal the atomic structure of a silicon surface for the first
time. Their controversial result was not immediately accepted by
other
scientists, but its publication generated a flurry of interest in
the
IBM team's new instrument.
Dr Binnig, Dr Rohrer and Dr Gerber became evangelists for their
microscope, opening their laboratory to other scientists and
travelling widely to promote their ideas. Their first convert was
Calvin Quate, an electrical engineer at Stanford University, who
flew
to Zurich to see the STM, and later joined with Dr Binnig and Dr
Gerber to create the AFM. (Unlike the STM, which can only produce
images of conducting or semiconducting materials, the AFM can
produce
atomic-scale images of non-conducting materials too.)
Whatever next?
Dr Binnig and other IBM researchers subsequently adapted ideas from
the AFM to create a nano-mechanical storage device called the
Millipede, which uses an array of tiny probe tips to store and retrieve
digital information. IBM is currently considering how to make this into a
commercial product. Now partly retired, Dr Binnig remains an adviser
to IBM, and is still very much a creative force. One young
researcher
working on the Millipede project sighs to a visitor that any session
with Dr Binnig means hearing far more ideas than one could possibly
pursue.
Dr Binnig thinks nanotechnology will be very important, and must be
pursued despite its supposed environmental risks. "There is always a
danger any time you do something new. But if you don't do it,
there's
also a danger," he says. For instance, he notes that in the very
long
term, the sun will burn out, dooming humanity--unless some clever,
playful scientist can figure out a way to manage without it, that
is.
Another of Dr Binnig's interests, and something that he believes
could
end up being even more important than the STM, is fostering
creativity
in computers. His aim is not to create artificial intelligence, a
term
Dr Binnig dislikes, but systems capable of creativity and deduction.
In 1989 he published a book on creativity which was well received,
and
in 1994 he helped to found a start-up, now called Definiens, to
develop software to emulate human thought processes. The company's
first products, which are designed to spot patterns in large volumes
of data, are being applied in the field of bioinformatics. With
characteristic playfulness, Dr Binnig says his ultimate hope is that
someday, two quantum computers will be chatting to each other--and
one
will say: "Binnig? Yeah, he's one of the guys who made us possible."
Down on the pharm
http://www.economist.com/research/articlesBySubject/PrinterFriendly.cfm?
Story_ID=3171546&subjectID=526354
TECHNOLOGY QUARTERLY
REPORTS
Sep 16th 2004
Biotechnology: Will genetically engineered goats, rabbits and flies
be
the low-cost drug factories of the future?
EARLIER this year, the regulators at the European Medicines Agency
(EMEA) agreed to consider an unusual new drug, called ATryn, for
approval. It was developed to treat patients with hereditary
antithrombin deficiency, a condition that leaves them vulnerable to
deep-vein thrombosis. What makes ATryn so unusual is that it is a
therapeutic protein derived from the milk of a transgenic goat: in
other words, an animal that, genetically speaking, is not all goat.
The human gene for the protein in question is inserted into a goat's
egg, and to ensure that it is activated only in udder cells, an
extra
piece of DNA, known as a beta-casein promoter, is added alongside it.
Since beta casein is made only in udders, so is the protein. Once
extracted from the goat's milk, the protein is indistinguishable
from
the antithrombin produced in healthy humans. The goats have been
carefully bred to maximise milk production, so that they produce as
much of the drug as possible. They are, in other words, living drug
factories.
ATryn is merely the first of many potential animal-derived drugs
being
developed by GTC Biotherapeutics of Framingham, Massachusetts. The
company's boss, Geoffrey Cox, says his firm has created 65
potentially
therapeutic proteins in the milk of its transgenic goats and cows,
45 of which were produced at concentrations of one gram per litre or
higher.
Female goats are ideal transgenic "biofactories", GTC claims,
because
they are cheap, easy to look after and can produce as much as a
kilogram of human protein per year. All told, Dr Cox reckons the
barn,
feed, milking station and other investments required to make
proteins
using transgenic goats cost less than $10m--around 5% of the cost of
a
conventional protein-making facility. GTC estimates that it may be
able to produce drugs for as little as $1-2 per gram, compared with
around $150 using conventional methods. Goats' short gestation
period--roughly five months--and the fact that they reach maturity
within a year means that a new production line can be developed
within
18 months. And increasing production is as simple as breeding more
animals. So if ATryn is granted approval, GTC should be able to
undercut producers of a similar treatment, produced using
conventional
methods, sales of which amount to $250m a year.
GTC is not the only game in town, however. Nexia, based in Montreal,
is breeding transgenic goats to produce proteins that protect
against
chemical weapons. TransOva, a biotech company based in Iowa, is
experimenting with transgenic cows to produce proteins capable of
neutralising anthrax, plague and smallpox. Pharming, based in the
Netherlands, is using transgenic cows and rabbits to produce
therapeutic proteins, as is Minos BioSystems, a Greek-Dutch start-up
which is also exploring the drugmaking potential of fly larvae.
It all sounds promising, but the fact remains that medicines derived
from transgenic animals are commercially untested, and could yet run
into regulatory, safety or political problems. At the same time,
with
biotechnology firms becoming increasingly risk-averse in response to
pressure from investors and threats of price controls from
politicians, transgenic animal-derived medicines might be exactly
what
the pharmaceuticals industry is lacking: a scalable, cost-effective
way to make drugs that can bring products to market within a decade
or
so, which is relatively quick by industry standards.
Just say no to Frankendrugs?
So a great deal depends on the EMEA's decision, particularly given
previous European scepticism towards genetically modified crops. But
as far as anyone can tell, the signs look promising. In a conference
call in August, Dr Cox told analysts that the EMEA had so far raised
no concerns about the transgenic nature of his firm's product.
But as the fuss over genetically modified crops showed, public
opinion
is also important. While some people may regard the use of animals
as
drug factories as unethical, however, the use of genetic engineering
to treat the sick might be regarded as more acceptable than its use
to
increase yields and profits in agriculture. Conversely, tinkering
with
animal genes may be deemed to be less acceptable than tinkering with
plant genes. A poll conducted in America in 2003 by the Pew
Initiative
on Food and Biotechnology found that 81% of those interviewed
supported the use of transgenic crops to manufacture affordable
drugs,
but only 49% supported the use of transgenic animals to make
medicines.
Even some biotech industry executives are unconvinced that medicines
made primarily from animal-derived proteins will ever be safe enough
to trust. Donald Drakeman of Medarex, a firm based in Princeton, New
Jersey, is among the sceptics. His firm creates human antibodies in
transgenic mice, clones the antibodies and then uses conventional
processes to churn out copies of the antibodies by the thousand.
"With
goat and cow milk, especially, I worry about the risk of animal
viruses and prions being transferred in some minute way," he says.
(Bovine spongiform encephalitis, or "mad cow disease", is thought to
be transmitted by a rogue form of protein called a prion.)
Another concern, raised by lobby groups such as Greenpeace and the
Union of Concerned Scientists, is that transgenic animals might
escape
into the wild and contaminate the gene pool, triggering all kinds of
unintended consequences. There is also concern that an animal from
the
wild could find its way into GTC's pens, make contact with one of
the
transgenic animals, and then escape to "expose" other animals in the
wild. Or what if the transgenic animals somehow got into the human
food chain?
Short of sabotage, none of these scenarios seems very likely,
however.
Since transgenic goats, for example, are living factories whose
worth
depends on their producing as much milk as possible, every measure
is
taken to keep them happy, healthy, well fed and sequestered from
non-transgenic animals. As animals go, goats and cows are relatively
unadventurous creatures of habit, are more easily hemmed in than
horses, and are usually in no mood to run away when pregnant--which
they are for much of the time at places like GTC and TransOva.
The uncertainty over regulatory and public reactions is one of the
reasons why, over the past four years, at least two dozen firms
working to create drugs from transgenic animals have gone bust. Most
were in Europe. GTC, which leads the field, has nothing to worry
about, however, since it is sitting on around $34m in cash. Also
sitting pretty is Nexia, particularly since it began to focus on the
use of transgenic animals to make medicines that can protect against
nerve agents.
Nexia became known as the spider-silk company, after it created
transgenic goats capable of producing spider silk (which is, in
fact,
a form of protein) in their milk. It is now working to apply the
material, which it calls BioSteel, in medical applications. Using
the
same approach, the company has now developed goats whose milk
contains
proteins called bioscavengers, which seek out and bind to nerve
agents
such as sarin and VX. Nexia has been contracted by the US Army
Medical
Research Institute of Chemical Defence and DRDC Suffield, a Canadian
biodefence institute, to develop both prophylactic and therapeutic
treatments. Nexia believes it can produce up to 5m doses within two
years.
Today, the most common defence against nerve agents is a
post-exposure
"chem-pack" of atropine, which works if the subject has genuinely
been
exposed to a nerve agent, but produces side-effects if they have
not.
"You do not want to take this drug if you haven't been exposed,"
says
Nexia's chief executive, Jeff Turner. The problem is that it is not
always possible to tell if someone has been exposed or not. But
Nexia's treatment, says Dr Turner, "won't hurt you, no matter what."
The buzz around flies
But perhaps the most curious approach to making
transgenic-animal-derived medicines is that being taken by Minos
BioSystems. It is the creation of Roger Craig, the former head of
biotechnology at ICI, a British chemical firm, and his colleagues
Frank Grosveld of Erasmus University in the Netherlands and Babis
Savakis of the Institute of Molecular Biology and Biotechnology in
Crete. While others concentrate on goats, Minos is using flies.
"Mice
won't hit scale, cows take too damn long to prepare for research, GM
plants produce GM pollen that drifts in the wind, chickens have
long-term stability of germ-line expression issues, and they carry
viruses and new strains of 'flu--I quite like flies, myself," says
Dr
Craig.
A small handful of common house flies, he says, can produce billions
of offspring. A single fly can lay 500 eggs that hatch into larvae,
a
biomass factory capable of expressing growth hormone, say, or
antibodies, which can then be extracted from the larval serum. The
set-up cost of producing antibodies using flies would, Dr Craig
estimates, be $20m-40m, compared with $200m to $1 billion using
conventional methods. "In addition to getting some investors, the
key
here is gaining regulatory and pharma acceptance of the idea that
flies have to be good for something," he says. This will take time,
he
admits, and could be a hard sell. But if the idea of using
transgenic
goats to make drugs takes hold, flies might not be such a leap.
For the time being, then, everything hinges on GTC's goats. The
EMEA's
verdict is expected before the end of the year. Yet even if Dr Cox
wins final approval to launch ATryn next year, he too faces a
difficult task convincing the sceptics that transgenic animals are a
safe, effective and economical way to make drugs. As Monsanto and
other proponents of genetically modified crops have learned in
recent
years, it takes more than just scientific data to convince biotech's
critics that their fear and loathing are misplaced.