News this Week

Science  11 Apr 2003:
Vol. 300, Issue 5617, pp. 224

    Deferring Competition, Global Net Closes In on SARS

    1. Martin Enserink,
    2. Gretchen Vogel

    Four weeks after the sudden appearance of severe acute respiratory syndrome (SARS), researchers still aren't completely sure what causes the potentially fatal disease that has sickened thousands around the world. But they have scored at least one victory: From the chaos of the widening epidemic has emerged a globe-spanning team effort dedicated to finding the culprit as fast as possible.

    With a little help from modern communication technology, the World Health Organization (WHO) has set up a global network of labs that has largely survived the fierce rivalries traditionally dominating the competitive field of virology. “I would say this is historic,” notes James Hughes, director of the U.S. National Center for Infectious Diseases in Atlanta. By pooling resources and intellect, “we're marching two or three times faster,” adds virologist Ab Osterhaus of Erasmus University in Rotterdam, the Netherlands.

    This week, members of the network published an article in The Lancet and submitted four papers to The New England Journal of Medicine (NEJM) to document their work. Most scientists now agree that a new coronavirus is the most likely cause of SARS, which had caused 2671 cases in 19 countries by Tuesday, including 103 fatalities. But some believe the virus may have an accomplice, such as the recently discovered human metapneumovirus that Frank Plummer and colleagues at the National Microbiology Laboratory in Winnipeg isolated from several SARS patients.

    Growing concern.

    Children in Hong Kong don masks to shield against viral infection. By Tuesday, nearly 1000 cases had been reported in the city.


    The heart of the worldwide scientific onslaught on SARS is the office of German-born virologist Klaus Stöhr on the fourth floor of the communicable diseases building at WHO headquarters in Geneva. Stöhr has no lab of his own at WHO, but shortly after SARS was detected, he decided to try to weld together research groups into “a global virtual laboratory.” Even he was skeptical that it would work. During past outbreaks, labs often fought fiercely to be the first to finger a culprit, and sharing data or samples was often out of the question. “Scientists by nature are very competitive,” Stöhr says.

    But all 11 labs he invited to participate accepted, and since 17 March, Stöhr has chaired daily teleconferences during which researchers share their findings. His standard greeting—“good morning, good day, good evening”—has come to symbolize the network's global reach. Genetic sequences, photos, and other data are posted on a secure Web site, and reagents are shipped around the world within hours of a collaborator's request.

    The first hints about the probable culprit came on 21 March, just 4 days after the initial teleconference. Late that evening, Malik Peiris of the University of Hong Kong e-mailed group members that he had isolated a virus from patient tissues. It grew more slowly when exposed to blood serum from patients recovering from a SARS infection, he said, suggesting they had developed antibodies to the virus. Serum from healthy controls had no effect on the virus. An initial electron microscope image suggested a coronavirus, Peiris reported.

    Soon the findings were replicated in other labs, and on 24 March, Julie Gerberding, director of the U.S. Centers for Disease Control and Prevention (CDC) in Atlanta, announced the finding to the world. Scientists have detected the virus or antibodies that target it in many infected patients but not in more than 800 healthy controls, Stöhr says.

    To obtain more definitive evidence, Osterhaus has infected monkeys with the virus to see if they develop a SARS-like disease. As Science went to press, a similar study was awaiting approval at the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) in Fort Detrick, Maryland.


    A team in Beijing is the 12th group to join the unprecedented “global virtual laboratory” tracking the SARS epidemic.

    Still, some suspect a role for human metapneumovirus, which has been isolated from SARS patients in some parts of the world—including Canada and Hong Kong—but not others. Last month, an independent group from the University of Liverpool, U.K., published a study showing that this virus appears to exacerbate symptoms in infants infected with respiratory syncytial virus. The Canadian researchers suspect it may do the same in SARS patients. “I'm convinced that they are right, that there are metapneumoviruses in their patient samples” that may be playing a role in the disease, says Christian Drosten, a virologist at the Bernhard Nocht Institute for Tropical Medicine in Hamburg.

    But attention is focusing on the coronavirus. A subset of the group is working to develop a test sensitive enough to detect infections early in the course of the disease, says Stöhr. Meanwhile, USAMRIID is testing thousands of antiviral compounds against the virus in cell culture and plans to slog through all drugs currently approved for any condition by the Food and Drug Administration, says Army virologist Peter Jahrling. If an approved drug works against SARS, it could be available much faster than a new one. CDC and other labs are comparing the virus's genome sequence to those of other coronaviruses—which can infect a range of avian and mammalian species—to determine its likely origin.

    For network scientists, Stöhr has tried to orchestrate the fair distribution of a key commodity: scientific credit. He initially proposed that they submit three papers to NEJM: one produced by three groups in Hong Kong, one co-authored by German researchers and CDC, and one by groups that found the metapneumovirus. That plan fell apart when CDC, which had been invited by NEJM to write a paper, decided it preferred to go it alone. Fortunately, NEJM editors said they would consider publishing all four. Drosten has teamed with colleagues across Germany as well as Osterhaus and a group from the Pasteur Institute in Paris to describe the methods they used to track the coronavirus. “It appears there's enough flesh on the bones for everybody,” says Osterhaus.

    Meanwhile, Stöhr is also compiling a paper for The Lancet chronicling the current collaboration. He concedes he was slightly taken aback last week when, after each lab had submitted 250 words about its own role, Gerberding stole some of the network's thunder in an NEJM editorial that was posted online 2 April. But his hope is that the example set by the SARS network will long outlast any debate over who came first.


    Misguided Chromosomes Foil Primate Cloning

    1. Gretchen Vogel

    While governments around the world debate how to prevent human reproductive cloning, it seems that nature has put a few hurdles of its own in the way. On page 297, a team reports that in rhesus monkeys, cloning robs an embryo of key proteins that allow a cell to divvy up chromosomes and divide properly. Unpublished data from this and other groups suggest that the same problem may also thwart attempts to clone humans.

    There are potential ways around the newfound obstacle, but for now, the groups that have controversially claimed they will use the techniques that produced Dolly the sheep to create human babies are unlikely to succeed.

    It is almost as if someone “drew a sharp line between old-world primates—including people—and other animals, saying, ‘I'll let you clone cattle, mice, sheep, even rabbits and cats, but monkeys and humans require something more,’” says Gerald Schatten of the University of Pittsburgh School of Medicine, a leader of the rhesus monkey study.

    Schatten and his colleagues have tried hundreds of times to clone monkeys, only to fail. Indeed, although several groups have attempted it, no one has yet produced a monkey through somatic cell nuclear transfer, the process by which a nucleus from one cell is extracted and injected into an egg whose own nucleus has been removed. “The failure to clone any primate has so far been startling,” says Rudolf Jaenisch of the Massachusetts Institute of Technology in Cambridge, who studies cloning in mice.

    The scientists had suspected for several years that something was disturbing cell division in cloned embryos. The embryos seemed normal at their earliest stages, but none developed into a pregnancy when implanted. When the researchers looked more closely, they realized why: Many of the cells in a given embryo had the wrong number of chromosomes. Some had just a few, whereas others had twice as many as they should. Although embryos can survive for a few cell divisions with such defects, soon the developmental program becomes hopelessly derailed.

    To find out what was interfering with proper cell division, the team fluorescently labeled the cell-division machinery. The cells' mitotic spindles, which guide chromosomes to the right place during cell division, were completely disorganized. And two proteins that help organize the spindles, called NuMA and HSET, were missing.

    A look at unfertilized rhesus oocytes explained why. The team found that the spindle proteins are concentrated near the chromosomes of unfertilized egg cells—the same chromosomes that are removed during the first step of nuclear transfer. In most other mammals, Schatten says, the proteins are scattered throughout the egg, and removing the egg's chromosomes seems to leave enough of the key proteins behind for cell division to proceed.

    Missing in action.

    In human embryos, as in this egg fertilized by two sperm (red lines), proteins from egg and sperm combine (yellow) to guide cell division. Embryos formed by nuclear transfer lack these proteins.


    The work “explains why no one has yet succeeded in achieving normally developing embryos from human nuclear transfer,” says Roger Pedersen of the University of Cambridge, U.K., who attempted human nuclear transfer experiments at his previous laboratory at the University of California, San Francisco. “Primate eggs are biologically different.” Schatten says preliminary data suggest the proteins are also concentrated near the nuclear material in unfertilized human eggs.

    A cloning lab might surmount the hurdle, says Schatten, by reversing the order of the traditional nuclear transfer procedure: First add an extra nucleus, then activate cell division, and finally remove the egg's DNA. The find “will make people think differently about the optimum sequence of nuclear transfer procedures,” says Ian Wilmut of the Roslin Institute in Midlothian, Scotland, a leader of the team that cloned Dolly.

    Even if scientists could overcome the obstacles, however, another study suggests that further developmental problems threaten clones of all species. Jaenisch and his colleagues report in the 15 April issue of Development that genes important to early development frequently fail to turn on in mouse embryos cloned from adult cells. That failure helps explain the low survival rate of such embryos, Jaenisch says. But he notes that the team's work—which examined the expression of just 11 genes—is only the tip of the iceberg. In other experiments, the researchers have found that even apparently healthy cloned mice show abnormal levels of gene expression. “There may be no normal clones,” Jaenisch says.

    Although revising the nuclear transfer procedure might help solve the cell-division problem, it is harder to imagine a solution for the faulty gene regulation that Jaenisch and his colleagues see. “We're looking at a more fundamental problem,” he says.

    The biological roadblocks would seem to be good news for those worried about the ethical implications of human cloning, says Schatten. “This reinforces the fact that the charlatans who claim to have cloned humans have never understood enough cell or developmental biology” to succeed, he says. The debate will go on, but nature already seems to have imposed its own limits on cloning.


    Cannibalism and Prion Disease May Have Been Rampant in Ancient Humans

    1. Elizabeth Pennisi

    Some call it the laughing disease; others, kuru. This neurodegenerative disorder is universally fatal and 40 years ago killed almost 10% of a small New Guinea tribe called the Fore. Now molecular biologists propose that similar epidemics plagued prehistoric humans. Both then and more recently, kuru, a prion disease, was transmitted through cannibalism, Simon Mead and John Collinge of University College London and their colleagues claim in a report online in Science this week. They base their conclusions on the worldwide distribution of variants of the prion gene.

    The work lends support to the idea that ancient people once regularly munched on their peers. This conclusion will be controversial, says John Hardy, a geneticist at the National Institute on Aging in Bethesda, Maryland. Nonetheless, “I think [Collinge and colleagues] might be right.”

    Until 50 years ago, the Fore reportedly had a tradition of eating the dead. In the 1960s, Carleton Gajdusek of the National Institute of Neurological Disorders and Stroke in Bethesda demonstrated that kuru was an infectious disease: Once cannibalism was banned, kuru disappeared.

    Gajdusek blamed a slow-growing virus for the disease, but now the prime suspect in kuru is a malformed miniature protein called a prion. Contorted prions cause other, native prions to misfold, clump together, and kill brain cells. A similar process is believed to cause Creutzfeldt-Jakob disease (CJD) in humans and scrapie in sheep. Although some prion diseases occur spontaneously, in many cases, humans or other animals contract them by eating infected tissue.

    A decade ago, Collinge showed that people carrying two identical copies of the gene for the prion protein are more susceptible to developing CJD than people who carry two unmatched gene variants. Although the variants create proteins that differ by only one amino acid, the mismatch somehow protects people against the disease.

    To understand the history of the prion gene, Collinge's team looked at DNA from the Fore and also from 1000 people representing other groups around the world. All ethnic groups examined carried two versions of the prion gene.

    The variants' widespread existence suggests that they have been conserved throughout human history, the team claims. Based on additional comparisons across cultures and with chimp DNA, the group estimates that the variants arose 500,000 years ago. For most genes, one variant improves an organism's fitness more than the other; eventually the less beneficial one becomes rare or disappears. Because both prion variants are so prevalent, they “must have undergone strong natural selection pressure,” Collinge claims.

    Among the Fore, the number of women carrying one copy of each variant—so-called heterozygotes—rather than two copies of one or the other was particularly high. Collinge's group found that 23 of the 30 women over age 50—those old enough to have practiced cannibalism—were heterozygotes, whereas statistically only 15 were expected.
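    The expected figure of 15 follows from the standard Hardy-Weinberg proportions. A minimal sketch of that arithmetic, assuming the two prion-gene variants occur at roughly equal frequency in the population (an assumption for illustration; the article does not give the allele counts):

    ```python
    # Hardy-Weinberg expectation for heterozygotes: 2pq of the sample.
    # p and q are the frequencies of the two prion-gene variants;
    # p = q = 0.5 is an assumed value, not reported in the article.
    n = 30            # Fore women over age 50 in the sample
    p = q = 0.5       # assumed allele frequencies
    expected_het = 2 * p * q * n
    print(expected_het)   # 15.0, versus the 23 heterozygotes observed
    ```

    The excess of observed heterozygotes (23) over this chance expectation (15) is what points to a survival advantage for carriers of mismatched variants.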

    The high percentage of heterozygotes suggests that having one copy of each variant protected the Fore against kuru. Early research had pointed in this direction, but this is the first demonstration that the variants underwent “balancing selection,” Collinge notes. Only a few genes are known to have been selected in this way; one is the hemoglobin gene. One version causes sickle cell anemia if someone inherits two copies, but the variant persists because it protects against malaria if paired with another variant.

    The distribution of prion gene variants in human populations suggests that prion diseases were widespread early in human history. Less clear, however, is what made the diseases so common. There is a chance that epidemics of prion diseases arose from eating tainted meats. But the cannibal's diet may have been one of the most important sources of prion-tainted tissue. Kuru is the only prion disease known to reach epidemic proportions. Regardless of their source, “prion diseases may have ravaged human populations in the distant past,” says Adriano Aguzzi of the University Hospital in Zürich.

    William Arens, a social anthropologist at the State University of New York, Stony Brook, thinks Collinge and his colleagues are on the wrong track. Gajdusek never witnessed cannibalism, he insists. Nor has “cannibalism as a custom ever been seen,” he points out. “I have a basic problem with [Collinge's] hypothesis.”

    But Tim White, an integrative biologist at the University of California, Berkeley, favors the cannibalism theory. “Their conclusion is consistent with findings from archaeology,” he says. Evidence includes the discovery of bones with human teeth marks and fossilized human feces containing human proteins. White says the implications of the new line of evidence should be taken seriously: “Modern genetics have been structured by the past.”


    NASA Calls Russia's Bluff

    1. Elena Savelyeva,
    2. Andrei Allakhverdov,
    3. Andrew Lawler*
    1. Andrei Allakhverdov and Elena Savelyeva are writers in Moscow.

    MOSCOW—Prospects for the $95 billion international space station looked a little dicey for a few days last week. Yuri Koptev, director of the Russian space agency Rosaviakosmos (RAKA), warned that Russia would be unable to fly some critical missions to ferry people and supplies to and from the station unless NASA paid for them. The flights are needed to make up for the grounding of the U.S. shuttle fleet. When NASA refused, Koptev said the station would have to be mothballed for 6 months. By the end of the week, however, the impasse was resolved when Russian Prime Minister Mikhail Kasyanov announced that his government would come up with some funds.

    The immediate fate of the space station lies in Russia's hands. Its Soyuz spacecraft are the only ticket home for the three astronauts on the station, and its Progress cargo spacecraft are the only means for ferrying vital supplies such as fuel, food, and materials and equipment for continuing the station's construction and carrying out experiments.

    The issues came to a head when NASA and RAKA officials met from 25 to 27 March in the Netherlands to negotiate how to keep the station running in the wake of the Columbia disaster. NASA hopes to accelerate the launch of several Progress resupply missions to ensure enough food, water, and supplies for the crews. But as a prelude to the talks, Russia had asked NASA for $100 million in extra funding for additional flights. NASA balked, on the grounds that Russia's main spacecraft builder is on a list of entities suspected of having transferred sensitive technologies, such as ballistic missile equipment or know-how, to Iran; U.S. law prohibits the government from funding blacklisted organizations. In the end, NASA declined to pony up any extra funds, prompting Koptev's threat.

    Empty threat?

    RAKA chief Yuri Koptev warned that the station would have to be mothballed.


    Kasyanov came to the rescue on 3 April, when he announced that his government had found $38 million to fund an extra cargo mission later this year. RAKA was already committed to launching a Soyuz on 26 April to bring in a two-person replacement crew and to add one extra Progress mission this year to those already scheduled.

    Kasyanov also promised to find an additional $130 million by the end of the year to pay for two more Progress flights needed to replace shuttle flights in 2004. At the moment, there is enough water—the most critical issue—to last until August, even without the next Progress flight scheduled for 8 June. But Bill Gerstenmaier, space station chief at Johnson Space Center in Houston, acknowledged NASA worries that a major component failure might force planners to ship equipment, which could take up precious cargo space.

    NASA is evidently pleased with the outcome. “The partners have all responded well” to the Columbia crisis, Gerstenmaier said in a 7 April press conference. “We're really pulling together.”

    The ultimate losers in this game of musical space flights, RAKA officials claim, are Russia's space researchers. “NASA's refusal to fund the [extra space station] missions … will severely damage Russia's own space program,” says RAKA spokesperson Sergey Gorbunov. Work will proceed with Russian space station experiments such as saliva sampling and other medical research, greenhouse experiments, and crystal growth experiments, says science specialist Alexandr Spirin of RAKA's Mission Control Center. However, Gorbunov says, several nonstation missions to launch geodesic, navigational, and mapping satellites, among others, are now facing the ax.

    Meanwhile, NASA officials say they see little chance that the White House—at odds with Russia over its opposition to war with Iraq—will move to soften the nonproliferation act that forbids it from funneling cash RAKA's way. And they hope that the September 2004 launch of a new European robotic supply spacecraft will ease the pressure on Russia—and ease the doubts preying on their own minds.


    Dates Boost Conventional Wisdom About Solomon's Splendor

    1. Constance Holden

    Carbon-14 dates from Israel may help settle a scholarly debate that has raged over the past decade: whether David and his son Solomon, founders of the ancient kingdom of Israel, were the larger-than-life nation builders the Bible describes or largely mythical figures, as some recent historians have claimed. The new dates from Tel Rehov, a major Iron Age site in northern Israel, favor the traditional view that King Solomon was both real and powerful. “The implications are enormous for recreating the history of ancient Israel,” says archaeologist Lawrence Stager of Harvard University.

    Researchers led by Amihai Mazar of the Hebrew University of Jerusalem based their conclusions on olive pits and charred grain from one of Tel Rehov's three “destruction layers”—strata marking times when Rehov was ravaged before being rebuilt. The results, reported on page 315, place the layer between 940 and 900 B.C. Mazar and colleagues say the dates peg the devastation to a whirlwind plundering tour of Palestine by the Egyptian Pharaoh Shoshenq, a well-documented historical event that both Egyptian records and biblical writings date at about 925 B.C. The Bible adds another key detail: According to the books of I Kings and II Chronicles, the pharaoh (whom the Israelites called Shishak) launched his invasion 5 years after Solomon's death.

    If Mazar and colleagues are right, the destruction layer at Rehov—along with contemporary layers that archaeologists have identified at other sites—gives a definitive glimpse of Solomon's realm. That information may make clear which of two radically different versions of Solomon fits the facts. The mainstream view, Stager says, holds that the great leader Solomon transformed the “rather rustic” early 10th century B.C. Israel of his father David into a sophisticated kingdom, with architecture and material culture to match. In the mid-1990s, a handful of “revisionist” scholars rocked the establishment with an audacious alternative: that biblical accounts of Solomonic splendor were mostly hype.

    Royal city?

    Excavations at Tel Rehov may undermine a revisionist view of biblical history.


    According to archaeologist Israel Finkelstein of Tel Aviv University, the temples, palaces, and political structures usually attributed to David and Solomon had nothing to do with those rulers at all (Science, 7 January 2000, p. 31). Finkelstein says his excavations of Megiddo, 40 kilometers west of Rehov, show that the so-called Solomonic palaces and gates there actually belonged to later, 9th century B.C. rulers known as the Omrides. In his alternative “Low Chronology,” Solomon and his family were at best minor chieftains, not the great kings of biblical fame, and at worst myths.

    The status of 3000-year-old monarchs is a politically charged issue in modern Israel, says archaeologist William Dever of the University of Arizona in Tucson. “In the current Israeli-Palestinian conflict, people are increasingly invoking archaeology in support of their cause,” he explains. “Some Israelis think that the very foundations of Zionism's claim to the land have been undermined” by the Low Chronology. For that and other reasons, he says, “we've all been waiting for science to come to our aid with carbon-14 dates.”

    That's where Tel Rehov comes in. If Finkelstein is right, Dever says, the 10th century B.C. Shoshenq was “laying waste to ephemeral … settlements and not to the royal cities.” Instead, the Rehov of Mazar's grain and olives was a well-planned 10-hectare urban center whose material culture connects it with sites of Solomonic ruins. The carbon-14 date, Dever says, strongly bolsters the case for “a historical Solomon and a real ‘United Monarchy’ in the 10th century.” Stager says Mazar's study “puts the nail in the coffin” of Finkelstein's theory.

    Undeterred, Finkelstein says he is collaborating with Tel Aviv University physicist Eliezer Piasetsky to reinterpret Mazar's new data. New, unpublished carbon-14 readings from Megiddo and elsewhere, he says, contradict the Rehov readings and show that the “Solomonic” layer was destroyed a century later by someone other than Shoshenq. “The diehards of the conventional dating are in a desperate situation,” Finkelstein concludes. “The Low Chronology is as solid as ever, if not stronger now.”

    But most biblical archaeologists say Finkelstein is swimming against the tide. The Bible may well have overexalted Solomon, says archaeologist Steven Rosen of Ben Gurion University, a specialist in radiocarbon dating, but radical revisionism is not the answer. “Demythologizing Davidic and Solomonic glories does not require that we redate everything,” Rosen says, “only that we put it in better perspective.”


    Academy Report Proposes Network of State-Based Centers

    1. Jeffrey Mervis

    The National Academy of Sciences (NAS) wants the nation to invest $500 million in research on how to improve student learning. In a report issued last week, an academy panel proposes a novel, state-based structure to funnel new research findings into the classroom—and to make them readily available to educators all over the country. If adopted, the initiative would also give a major shot in the arm to educational research, which has traditionally been a scientific stepchild.

    “Academics tend not to value applied research on education,” says NAS President Bruce Alberts. “As a consequence, the current level of spending isn't up to the task, nor to the importance of the issue.” In a foreword to the report, Alberts says he hopes the proposal to create what the report calls Strategic Education Research Partnerships (SERPs) triggers as much enthusiasm as the academy's 1988 report that helped launch a project to map and sequence the human genome.

    States would play the key role, says Joe Wyatt, chancellor emeritus of Vanderbilt University and chair of the SERP panel: “Education is a state-centric activity, and it's clear that anything we do will have to be state friendly.” He envisions a network of a half-dozen or more centers distributed around the country tackling the most important issues shaping student achievement, from new curricula to improved teacher training, and then helping implement what works best. One model, he says, is the nation's system of teaching hospitals, which provide a way to continually incorporate advances in biomedical research into patient care. “While education researchers now have the best intentions,” he says, “there is no systematic mechanism for applying what they learn.”

    Enthusiastic educator.

    NAS President Bruce Alberts calls SERP “a great idea.”


    Several federal agencies fund education research. “But there is no such thing as a state research budget for education,” notes Judith Ramaley, head of the education directorate at the National Science Foundation, which among its many projects runs a $127-million-a-year math and science partnerships program that links university researchers with local school districts. “I'm happy to hear about any effort that focuses attention on education research.”

    Federal officials could also create problems, warns the chair of the academy panel that oversaw the SERP committee, psychologist James Pellegrino of the University of Illinois, Chicago. “If the various federal agencies choose to see SERP as competition—usurp rather than SERP—then it will have a hard time getting off the ground.” Wyatt agrees, naming “turf wars” as the single biggest hurdle to be cleared.

    Wyatt hopes to raise $3 million to $5 million from private foundations and companies by the end of the year to launch the partnerships. The projected $500 million budget over 7 years—drawn from public as well as private sources—includes the cost of in-house research staff and a large database as well as support for a dozen or so projects involving researchers and local schools. Even that sum, notes Ramaley, amounts to less than “half of 1%” of the nation's annual spending on education.


    Ebola, Hunting Push Ape Populations to the Brink

    1. Jocelyn Kaiser

    Researchers are warning that a relentless epidemic of Ebola hemorrhagic fever in central Africa, combined with hunting, could push Africa's apes close to extinction within the next decade. A new analysis published online in Nature this week by ecologist Peter Walsh of Princeton University in New Jersey and others estimates the toll in Gabon, one of the last ape strongholds: More than half of the lowland gorillas and chimpanzees have disappeared since 1983. The concurrent human outbreak of Ebola in the neighboring Democratic Republic of Congo, where more than 100 people have died this year, has raised questions about the role of apes in the disease's spread.

    Conservationists sounded the alarm about the role of Ebola in February, after a different group of researchers found that up to two-thirds of the gorillas—600 to 800 animals—in the Lossi sanctuary in Congo had likely fallen to Ebola since late 2002, based on positive tests for the virus on several carcasses. “It is a disaster,” says primatologist Magdalena Bermejo of the University of Barcelona, Spain, who is also a co-author on the Walsh paper. Only seven of the 143 gorillas she has been studying since 1994 have survived, says Bermejo. She fears a worse outcome if the virus jumps 20 kilometers to Odzala National Park, which holds the highest known density of apes in the world.

    Until recently, Walsh says, some experts believed that ape populations in Gabon and Congo, home to 80% of the world's gorillas and most common chimps, were stable because these countries retain much of their original forest cover. But Walsh and 22 U.S., European, and Gabonese co-authors suspected that hunting and Ebola were having a heavy impact. To quantify the losses in Gabon, they compared a survey of ape nest sites in the early 1980s with survey results from 1998 to 2002.

    Remains of the day.

    Veterinary scientist Annelisa Kilbourn (who died last November in a plane crash) holds a femur from a lowland gorilla that died in an Ebola outbreak.


    The team found that the number of nest sites fell drastically, especially close to towns, where demand for bushmeat is high. But apes were also sparser close to sites of human Ebola outbreaks, suggesting that the ape population was infected and was spreading the disease to humans. (People are thought to contract the virus from eating infected animals.) Across Gabon, ape populations have declined by 56% since 1983, says Walsh, who predicts they could fall another 80% within 1 to 3 decades.

    Heavy Ebola outbreaks in Congo and civil unrest in the Democratic Republic of Congo suggest major losses of apes there as well (Science, 31 March 2000, p. 2386). In Congo and Gabon, international wildlife disease experts have suspected that Ebola caused some die-offs since 1994, although firm data have been lacking, says William Karesh of the Wildlife Conservation Society in New York City. “It looks more serious than people had been thinking,” says anthropologist Alexander Harcourt of the University of California, Davis.

    The Walsh-led team urges that the status of lowland gorillas and chimps be upgraded from endangered to critically endangered. The Nature authors argue that the only near-term means of stemming bushmeat hunting is “massive investment” in law enforcement in parks. Battling the Ebola epidemic is less straightforward, partly because experts disagree about what's driving it. Some believe that logging is sparking outbreaks by disturbing the apes' habitat and putting them in closer contact with the as-yet-unknown Ebola reservoir—which could be bats, mice, or birds. If so, there might be little scientists can do except monitor.

    Others, including Walsh, suspect that the sporadic human outbreaks in the 1990s resulted from a single, spreading epidemic transmitted ape to ape, as well as via other species. If so, Walsh thinks an experimental Ebola vaccine that has been shown to work on monkeys could help stem the epidemic in apes. But Ebola vaccine researcher Gary Nabel of the National Institutes of Health in Bethesda, Maryland, cautions that the vaccine could not be used on wild apes without more evidence that it would work, which could take 1 to 2 years.

    Conservation groups, scientists, and African officials met in Brazzaville, Congo, in March to discuss options and appeal to donors; they plan to meet again in May in Washington, D.C. Any action plan will first have to gain villagers' trust, Karesh says. Partly because of Ebola and the resulting social breakdown, unrest is high in Congo; vehicles are frequently attacked, and doctors investigating an Ebola outbreak were turned away from one village last year. Says Karesh: “It's just this downward spiral.”


    Ontario Plans Major Cancer Institute

    1. Wayne Kondro*
    1. Wayne Kondro writes from Ottawa.

    OTTAWA—Aiming to secure its place in biomedicine's major leagues, the government of Canada's largest province has announced plans to spend $680 million over the next decade to create the Cancer Research Institute of Ontario. Officials hope to attract hundreds of the best researchers in the world to an enterprise that so far exists mostly in the mind of Calvin Stiller, president of the Canadian Medical Discoveries Fund and chair of the Ontario Research and Development Challenge Fund.

    “This is an extraordinary initiative. I'm not aware of any jurisdiction that has made this kind of a commitment to a disease,” says Stiller, an immunologist at the University of Western Ontario in London. “It's like $100 [Canadian] for every man, woman, and child.” Stiller says he and others hatched the idea of a cancer institute 3 years ago as a way to marshal the government's resources behind “an important human problem” and expand the province's existing scientific base.

    Although the institute is mentioned in the provincial budget released late last month, the details are scanty. Stiller and Bette Stephenson, chair of the Ontario Innovation Trust and former provincial education minister, have been asked to craft a business plan over the next few months that includes infrastructure and staffing needs. Stiller expects that hundreds of scientists will eventually be hired, but first he plans to assemble a board of directors and a presidential search committee. “We want the very best individual in the world,” says Stiller.

    That shouldn't be a problem, says Harvard University cancer researcher Judah Folkman. Given the institute's projected budget, he predicts, it “will be able to attract more than one big name.”


    Singapore Trial Halted, British Scientist Fired

    1. Martin Enserink*
    1. With reporting by Dennis Normile in Tokyo.

    A scathing report has led to the sudden removal of a well-known British epilepsy researcher as the director of the National Neuroscience Institute (NNI) in Singapore. Simon Shorvon, 54, was fired from the institute on 4 April and left the next day for the United Kingdom after an investigation found he had compromised patients' safety and well-being during a clinical trial involving patients with Parkinson's disease.

    “This shows that people can't get away with shortcuts in Singapore,” says Lim Pin, a member of the investigative panel and chair of Singapore's Bioethics Advisory Committee. “We're very protective and jealous about our reputation.”

    A four-member panel began an investigation in January after patients had complained about their treatment; its 180-page report was released on 3 April. Shorvon, while acknowledging he made mistakes, says the panel used some extraordinary tactics, such as locking him out of his office and going through years of e-mails, and that its overall conclusion was too harsh.

    The $5.6 million study was funded by the Singaporean government and aimed at elucidating the genetic basis of Parkinson's disease, epilepsy, and tardive dyskinesia. In the study, Parkinson's patients donated blood and underwent an extensive investigation. Among other things, they were asked to stop taking any Parkinson's medication at least 12 hours before their assessment. Then they were given L-dopa, a common Parkinson's drug. A positive response would confirm they had Parkinson's rather than other disorders.

    When recruitment for the trial at NNI was lagging, the panel says, Shorvon and his colleague, Ramachandran Viswanathan, obtained lists of Parkinson's patients from two hospitals and started contacting patients directly. That was a breach of confidentiality, the panel concluded. Equally serious was Shorvon's failure to inform the ethical oversight committee and the patients themselves that participation would require them not only to donate blood but also to halt their medication and undergo extensive tests. Neither step was mentioned in the consent forms signed by patients. Although the procedures weren't life-threatening, the panel says, the assessment caused severe discomfort in some patients and put them at risk of complications. The 127 patients involved “were treated like experimental subjects, without any rights,” the panel concludes.

    Shorvon concedes that it “might be wrong” not to seek ethical approval for the testing procedure and not to fully inform patients ahead of time. But he says that it's not uncommon for researchers to obtain medical records and approach patients directly. The panel also produced a “supplementary report” implicating him in alleged tampering with patient files during the investigation, and it filed a police report. Shorvon “absolutely” denies the tampering and says that he accepted the main report only to forestall a further inquiry and to be allowed to leave Singapore.

    Shorvon has returned to his position at University College London (UCL), where he led an epilepsy study group before coming to Singapore 3 years ago in what was considered a major scientific coup. Robert Walker, the secretary of UCL's Institute of Neurology, says the university will do its own investigation of the allegations.


    Do Some Galaxies Lack Shrouds of Dark Matter?

    1. Robert Irion

    In a challenge to the idea that all galaxies contain far more mass than meets the eye, a novel survey has turned up three galaxies that seem barren of cocoons of dark matter. “This is surprising, and we're a little puzzled about it,” says astronomer Aaron Romanowsky of the University of Nottingham, U.K. But other researchers say it will take stronger evidence to change their minds about how massive galaxies form.

    For decades, astronomers have gauged the heft of galaxies by examining how fast they spin. In spiral galaxies like our Milky Way, gas clouds far from the galactic center orbit at the same rapid pace as those in the inner sections. That points to a strong gravitational pull in the outskirts—far stronger than stars and gas alone can produce. Astronomers explained the motions by invoking massive shrouds of dark matter, containing perhaps 10 times more mass than we can see.
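    That inference can be made concrete with the standard circular-orbit relation, M(r) = v²r/G: if the orbital speed v stays flat as the radius r grows, the enclosed mass must grow in proportion to r. A minimal sketch (the 220 km/s speed is a typical Milky Way orbital value, used here purely for illustration):

    ```python
    # For a circular orbit, the mass enclosed within radius r follows from
    # the orbital speed: M(r) = v^2 * r / G. A flat rotation curve (constant
    # v at growing r) therefore implies mass growing linearly with radius --
    # far more than the visible stars and gas supply.

    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    KPC = 3.086e19           # one kiloparsec in meters
    M_SUN = 1.989e30         # one solar mass in kg

    def enclosed_mass(v_m_per_s, r_m):
        """Mass (kg) enclosed within radius r_m for circular speed v_m_per_s."""
        return v_m_per_s**2 * r_m / G

    v = 220e3                # ~220 km/s, a typical Milky Way orbital speed
    for r_kpc in (8, 16, 32):
        m = enclosed_mass(v, r_kpc * KPC) / M_SUN
        print(f"{r_kpc:2d} kpc: {m:.1e} solar masses")
    ```

    Doubling the radius at constant speed doubles the implied mass, which is why flat rotation curves point to unseen matter in galaxies' outskirts.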

    That technique fails for the giant, featureless blobs of stars called elliptical galaxies. Ellipticals have little gas, so astronomers must try to track the motions of their stars. That's hard to do when starlight grows faint in a galaxy's outskirts. An alternate method, as Romanowsky reported this week at the U.K./Ireland National Astronomy Meeting in Dublin, Ireland, relies on locating planetary nebulae. These puffs of gas, which middleweight stars like our sun eject at the ends of their lives, are full of excited oxygen atoms that make them shine like beacons at a single green wavelength. By measuring the motions of the nebulae, astronomers can trace the overall distributions of mass far from the galaxies' centers.


    Slow-moving planetary nebulae (smaller dots) on the outskirts of M105 suggest the galaxy has no dark matter.


    Using a specialized spectrograph at the 4.2-meter William Herschel Telescope in La Palma, Canary Islands, Romanowsky and colleagues have discovered three galaxies that contain scores of surprisingly slow-moving nebulae in their remote outer regions. That dawdling pace suggests that no hidden mass tugs the nebulae along. “There's nowhere near as much dark matter as one would expect, and the motions are consistent with no dark matter at all,” says team member Michael Merrifield, also at the University of Nottingham.

    Other scientists are intrigued but skeptical. Only ironclad data will make astrophysicists believe that ellipticals are naked, says Joshua Barnes of the University of Hawaii, Manoa. Many planetary nebulae, he notes, may swoop through space on highly elongated orbits that only make it appear as if their galaxies lack dark-matter shrouds.

    Still, astronomers are taking note—especially those who control access to the Canary Islands telescopes. “They were sufficiently shocked and horrified by what we were finding,” says Merrifield, that they gave his group's proposal the highest ranking for more observing time. The team expects to study 22 more ellipticals within 2 years.


    Iceball Mars?

    1. Richard A. Kerr

    Incessant wobbling of Mars seems to trigger intermittent ice ages that could explain everything from watery gullies that have been seen as seeps to geologic layering taken to be signs of lakes

    HOUSTON, TEXAS—Planetary scientists poring over the latest data returned by Mars-orbiting spacecraft have reached a startling conclusion: Half the Red Planet appears to have been encrusted with ice in the relatively recent past. A layer of dirty ice still covers Mars poleward of latitudes that, on Earth, would encompass Anchorage, Moscow, and South America's tip. But several lines of evidence suggest that, within the past million years or so, a now-vanished ice layer cloaked Mars down to the latitudes of Buenos Aires, New Orleans, and Baghdad.

    This icy coating would not have been the first to cover large areas of Mars, according to new climate modeling reported last month at a workshop here.* Mars has a tendency to wobble back and forth on its axis, and the new modeling suggests that this instability would have triggered a succession of ice ages throughout the planet's history. The tilting would have shifted polar climes to lower latitudes, vaporizing the polar ice caps and layering dirty ice toward the equator. That would help explain much geology that has puzzled researchers for as long as 30 years: swaths of “softened” martian terrain that look like they're made of ice cream scooped on a hot day, slopes that ooze like wet paint dripping down a wall, and even the enigmatic gullies where water seems to have flowed on a frozen planet.

    More speculatively—and farther back in Mars history—extreme tilting of the planet may have repeatedly driven ice sheets right down to the equator. If so, that ice is gone now, but the dust left behind could have formed the mysterious layered sediments of the equatorial region.

    An unraveling sheath

    The notion that half of today's dry, dusty planet was recently covered by ice may seem a bit far-fetched at first glance. But it begins to look more plausible in the light of new data on the extent of current polar ice.

    Ice was finally confirmed in the polar regions last year, after the Mars Odyssey spacecraft turned its gamma ray- and neutron-detecting instruments toward the martian surface. By measuring the gamma rays and neutrons given off when cosmic rays strike the surface, these instruments can gauge the amount and depth of water to a meter beneath the surface. At the Houston workshop, Odyssey team members William Feldman and Robert Tokar of Los Alamos National Laboratory (LANL) in New Mexico reported that the latest measurements show that ice is buried beneath several centimeters of dry martian soil in both hemispheres from the poles to a latitude of 60°. And there's a lot of it: In the high southern hemisphere, ice is 40% to 73% of the soil by volume, averaged over hundreds of kilometers, according to Odyssey data.

    Icy, dirty Mars.

    Mars Odyssey found ice buried a few centimeters deep above 60° latitude (bottom, white). It looks as if it fell as dirty snow, softening and smoothing the terrain (top).


    These data mesh neatly with the view from the Mars Global Surveyor, laid out at the workshop by researchers from Brown University in Providence, Rhode Island—including James Head, John Mustard, Mikhail Kreslavsky, and Ralph Milliken. The spacecraft's Mars Orbiter Camera (MOC) provides images of isolated, narrow strips of the planet's surface at the highest resolution ever, resolving features as small as a couple of meters across. Kreslavsky reported that poleward of about 60° latitude, wherever the camera looks, the surface seems smooth when viewed at low resolution; it looks as if meters of heavy snow lie upon the land.

    But when viewed at the highest resolution, this smooth mantling has a texture to it. The most typical texture is a pattern of closely spaced, meter-scale knobs that give a stippled or “basketball” look to the surface. The equatorward edge of the ice mapped by Odyssey is “amazingly” coincident with the equatorward edge of the smooth-looking mantling, according to Tokar. The texturing of this high-latitude mantling could well be the result of partial loss of the very shallowest ice to the atmosphere, says Mustard. Permafrost on Earth can likewise get lumpy with warming.

    Equatorward, between 30° and 60° in both hemispheres, Mustard has found what appear to be scraps of this thin mantling scattered across the landscape (Science, 19 January 2001, p. 422). In the latest analysis by Mustard and Milliken, they find a progressive disruption of the complete mantling seen at higher latitudes as they approach the martian tropics. In places, the mantling has been partially stripped away, revealing multiple layers that total a few to 10 meters in thickness. This dissected mantling is most abundant at about 40° latitude. It looks as if the ice has been stripped from a mantling layer or layers, says Mustard, leaving scraps of crusty layering where winds haven't just blown it all away.

    Head and Kreslavsky's analysis of topography returned by the Global Surveyor's Mars Orbiter Laser Altimeter (MOLA) reinforces this picture. It shows high-latitude terrain, smooth at the scale of tens to hundreds of meters, extending to 60°, where it roughens just as the dissected, scrappy mid-latitude terrain appears in the high-resolution images. The topography and imagery therefore both give the impression of a thin layer or layers of dirty ice that were once continuous above 30° latitude but now look different because warming has driven out the ice up to a latitude of 60°.

    Cosmetic alterations

    A dirty ice sheet that once extended over Mars's mid-latitudes would explain several enigmatic features there. Some of these have puzzled planetary scientists ever since the two Viking orbiters returned images in the late 1970s that looked for all the world like ice-related surface features on Earth. Most suggestive of ice, perhaps, were places where the surface seemed to be sloughing off the land and oozing downhill. Generically termed “viscous flow features,” they looked like the work of Earth's rock glaciers, streams of flowing rock-laden ice. Many more have since turned up in close association with the dissected mantling spotted by Mars Global Surveyor. In the 13,000 MOC images they have inspected, Milliken and his Brown colleagues reported at the workshop, viscous flow features are restricted to the same 30° to 60° mid-latitude bands as the dissected mantling. The flows peak in abundance at the same 40° latitude as the dissected mantling.

    Furthermore, the mysterious gullies—where liquid water seems to have flowed down steep slopes in the recent past—follow the same latitudinal distribution as viscous flow features and dissected terrain; they even tend to cluster in the same three or four places as the viscous flow features do. The currently favored explanation for gullies is that lingering patches of dirty snow melted there (Science, 28 February, p. 1294).

    Taken together, the Brown researchers say, these features could be explained by the warming of an ice-rich mantling. That could have produced meltwater that formed gullies, ice softening that gave rise to viscous flow, and ice loss through sublimation that weakened the mantling and allowed dissection. “That's consistent with what I've seen” in images, says planetary geologist James Zimbleman of the Smithsonian Institution's National Air and Space Museum in Washington, D.C. “I'd call it a working hypothesis.”

    Wobbly ice ages

    Those pieces all fit together neatly, but one big part of the puzzle remains: What drove the ice so far toward the equator and later caused it to retreat? Mustard believes the answer lies in the planet's wobbling.

    Planetary dynamicists long ago realized that Mars is more than a bit tipsy on its pins. It lacks a large moon like Earth's to steady it against the gravitational tug of nearby Jupiter, so it wobbles over and back every 100,000 years or so. For the past few hundred thousand years, according to various calculations, it has been gently nodding a few degrees about its present tilt of 25.2°. But half a million years ago and more, Mars was swinging between a modest 15° tilt up to an inclination of 35°.

    Falling apart.

    Warm, dirty ice may flow like a glacier (top) or vaporize and let some of the dirt blow away (bottom).


    Wobbling doesn't change how much solar energy Mars as a whole receives from the sun, but it does drive huge swings in the temperatures of the seasons. When Mars was tilted 35° on its side, martian summers at the poles would have been much warmer, driving water frozen in the ice caps into the atmosphere. While the summer poles warmed, the equatorial region would have cooled. Researchers have pointed out that during those relatively warm polar summers, water in the caps would have sublimed into the atmosphere, been carried toward the equator, and gotten trapped at lower latitudes as snow and frost by the intensified cold there. Along with the new ice would have come dust from the intensified dust storms expected with heightened seasonal extremes on an already dusty planet. Once the tilt had eased, in this scenario, the water would have begun to go back where it came from. Water would have sublimed from the warming dirty ice and returned to the ice caps, leaving behind a crusty dust layer in the mid-latitudes subject to dissection by martian winds.

    In a test of this wobbling climate scenario, Michael Mischna and Mark Richardson of the California Institute of Technology in Pasadena and their colleagues ran a Mars climate model under varying degrees of planetary tilt. At a higher-than-present tilt, water did migrate from ice caps to mid-latitudes in the model, fast enough to accumulate a few tens of meters of ice in the 10,000 years of a cycle's maximum tilt. On returning to a lower tilt, the mid-latitudes began to lose their ice. However, Mischna expects that the dust left behind, which was not included in the model, would have built a layer thick enough to insulate and protect some lingering ice. If the process repeated, layers of dust and possibly dirty ice could have built up. “It all works out,” says Mischna.

    Equatorial ice?

    Having half the planet covered with ice may seem surprising, but Richardson and his colleagues are raising an even more dramatic possibility: that the whole of Mars, right down to the equator, has been mantled at one time or another. When they crank up the tilt in their model into the 45° to 60° range—the upper limit for Mars—ice sheets grow down to the equator. And they note that ancient, sometimes rhythmically layered deposits have been found throughout the equatorial region, although their origins are much debated (Science, 8 December 2000, p. 1879).

    As a hint that ice has in fact been laid down in the martian tropics, Head and David Shean of Brown and glacial geologist David Marchant of Boston University make a case that the odd flowlike deposits on each of the three giant volcanoes of Tharsis Montes may be the remains of glaciers. Straddling the equator, the three volcanoes each have fan-shaped deposits off their northwest flanks. Taking a close look at Arsia Mons, the southernmost of the three, Head, Shean, and Marchant see a strong resemblance between its deposits and those left by the withdrawal of the so-called cold-based glaciers of the Dry Valleys of Antarctica, which are too cold to lubricate their flow across the land with melted water. It's as if prevailing northwesterly winds dropped so much snow on the volcanoes' upwind flanks, they say, that thick ice deposits formed and began to flow.

    That a wobbling Mars periodically coats itself with dirty ice “is a very interesting scenario,” says planetary scientist Bruce Jakosky of the University of Colorado, Boulder, “but I have questions.” The polar and mid-latitude layers may well be related, he says, but “there are a lot of ways to emplace ice at low and mid-latitudes.” Planetary scientist Stephen Clifford of the Lunar and Planetary Institute in Houston allows that the ice and dust may well have been deposited together as dirty snow or frost. But the dirt could just as easily have formed first and the water diffused in from the atmosphere and frozen in the soil, he cautions. Then temperature variations might have concentrated the ice to the high abundances seen by Odyssey.

    Confirmation of a pivotal geologic role for large amounts of ice may come with some of the next missions to Mars. In December, the Mars Express orbiter arrives at the Red Planet. The European spacecraft carries a deep-penetrating radar capable of detecting ice lingering below the reach of Odyssey. NASA's Mars Reconnaissance Orbiter, slated for launch in August 2005, will carry a similar instrument.

    • *Microsymposium 37, “Mars: Formation and Evolution of the Late Amazonian Latitude-Dependent Ice-Rich Mantling Layer,” 15 to 16 March, Houston, Texas; Brown University and the V. I. Vernadsky Institute of Geochemistry and Analytical Chemistry, Moscow.


    Recruiting Genes, Proteins for a Revolution in Diagnostics

    1. Robert F. Service

    As companies create medicines for special conditions that require molecular testing, they are helping change the way common diseases are diagnosed

    If you want a glimpse of medicine's future, take a look at Herceptin. This new drug from Genentech homes in on a specific cell receptor to block an aggressive form of breast cancer. Approved by the U.S. Food and Drug Administration (FDA) in September 1998, it racked up sales last year of $385 million, making it one of the most successful new anticancer compounds ever released.

    But what makes Herceptin (also known as trastuzumab) noteworthy is not just its precise targeting and impressive sales. It is a highly specialized medicine, intended only for 25% to 30% of women with breast cancer—those whose cancer cells overexpress a growth-related protein receptor called her-2/neu. Designed for patients who can be identified by this molecular marker, the therapy has been hailed as one of the first examples of personalized medicine. Lost in the fanfare, however, is one of the keys to Herceptin's success: Like a cruise missile, its effectiveness depends on sound intelligence. Patients who can benefit from the drug must be identified with a diagnostic test that flags the presence of the cell receptor.

    Several companies already offer her-2 tests to pinpoint women likely to benefit from the drug. In an industry in which a product with $100 million in sales qualifies as a blockbuster, this test is off to a fast start, generating more than $22 million. “Soon there will be many more examples like this,” predicts Jonathan Peck, vice president of the Institute for Alternative Futures, a think tank in Alexandria, Virginia. Herceptin, in other words, could be a model for a new wave of diagnostics.

    Early warning.

    FDA's Emanuel Petricoin (left) and NCI's Lance Liotta have teamed up on methods of testing blood to identify complex protein patterns associated with cancer.


    Drug companies are creating a host of new drugs linked to molecular indicators of disease, while medical diagnostic companies and academic teams are pursuing a bevy of new testing strategies. Long viewed as the drug industry's stepchild, medical diagnostics are starting to benefit from the molecular technology boom. Researchers aim to use gene- and protein-sensing devices to flag patients with genetic susceptibilities, identify those who might suffer side effects of drugs, and spot diseases even before they produce symptoms.

    “This is going to have a really dramatic impact on cancer,” predicts Lee Hartwell, who heads the Fred Hutchinson Cancer Research Center in Seattle, Washington. Even mainstream cancer therapies—surgery, radiation, and chemotherapy—that have changed little in recent decades are expected to benefit from better diagnostics. Their success rates have already improved dramatically because cancers are being spotted earlier, Hartwell notes. According to one recent diagnostics market survey, the technology boom could generate $9 billion in new revenues for diagnostic companies over the next decade, a growth of almost 50%.

    But the transition isn't likely to be straightforward. Many factors—including the insurance industry's reluctance to reimburse for anything new and physicians' conservatism about tests—could throw a wet towel on corporate interest in novel diagnostics, and possibly on personalized medicine. And even when new diagnostics do make it to market, they're likely to face a host of ethical questions. Nevertheless, researchers are betting their academic careers—and, in some cases, fortunes—on the conviction that molecular diagnostic tools will soon revolutionize medicine.

    Kits and drugs

    The first wave of products to reach the clinic, many expect, will be similar to her-2: test kits for use in combination with targeted therapies. They might be designed to get drugs to the patients most likely to benefit from them or to avoid giving drugs to those with a specific risk of having an adverse reaction. Both uses could lead to wider interest in molecular testing.

    DiaDexus, a start-up company in South San Francisco, for example, has a test that helps guide a new treatment for heart disease. The company has developed a diagnostic test for lipoprotein-associated phospholipase A2 (Lp-PLA2), a marker associated with inflammation, fingered 3 years ago as a strong indicator of heart attack risk in men with high cholesterol levels. Ronald Lindsay, the company's chief scientist, says diaDexus expects to release the new test next year. But it may not see an immediate surge in sales; that will likely have to wait for results from GlaxoSmithKline. The pharmaceutical giant is currently developing a drug to reduce the buildup of potentially deadly plaque deposits in arteries by targeting Lp-PLA2. If the drug pans out, it could mean a windfall for diaDexus.

    The hope of a drug-linked payoff is driving other diagnostic companies as well, including Millennium Pharmaceuticals of Cambridge, Massachusetts. Geoff Ginsburg, vice president of molecular medicine, says, “Our goal is not necessarily to develop diagnostics for diagnostics' sake but … to give drugs to the right patients.” He thinks tests that are used to monitor treatment will help sell the treatment: “Patients are more likely to stick with a drug if you have a biomarker showing it works.”

    Both Ginsburg and Lindsay say they believe that for the foreseeable future the best bet is to stick with diagnostics that track just single biomarkers, such as the Lp-PLA2 test. Many others in the industry, however, are beginning to think otherwise. They note that existing single-marker tests are problematic because they don't necessarily indicate that disease is present. Take the test for prostate-specific antigen (PSA), which is used to detect prostate cancer. Very high or low scores are relatively easy to interpret, but those in the middle leave the doctor guessing whether the growth is benign or malignant. Far better would be a diagnostic more tightly associated with disease, one whose level indicates the severity of disease.

    Finding a good single marker is difficult, though. The multitude of differences among patients—age, gender, diet, genes—means that results rarely look the same. “There is so much disease heterogeneity that no one marker is good for covering everything,” says Samir Hanash, a proteomics expert at the University of Michigan, Ann Arbor. “This is what has plagued diagnostics in recent years: too much emphasis on one thing. It just does not work.”

    Scientists continue to do research on single biomarkers associated with disease—or sometimes groups of three or four. But the biggest buzz in the industry is at the other end of the spectrum: looking at dozens or hundreds of molecular markers simultaneously. “We need to start thinking about this very differently than looking for the smoking gun” of one or a few proteins indicative of disease, says Emanuel Petricoin, a proteomics expert with FDA's Center for Biologics Evaluation and Research in Bethesda, Maryland.

    Proteins in profile

    In a novel approach, Petricoin and his colleague Lance Liotta of the National Cancer Institute (NCI) in Bethesda are now trying to use molecular data to spot telltale signs of cancer. The experiment is remarkable because it uses dozens of proteins and because the functions of these proteins are largely unknown. Yet this trial is showing great potential. The FDA-NCI duo published a paper in The Lancet last year reporting that a test that uses an advanced computer algorithm to look for patterns of proteins in a blood sample was able to identify women with ovarian cancer 95% of the time.

    High definition.

    A DNA microarray is changing the way lymphoma is diagnosed.


    Liotta and Petricoin have dubbed the technique “clinical proteomics.” They take blood samples from cancer patients and feed them into a mass spectrometer that weighs the constituent proteins and spits out the results as a spectrum of peaks. An artificial intelligence software program—based on algorithms originally developed for classified military signal discrimination—then looks for patterns common to patients with benign and malignant disease. Once these templates are set, it tries to match each tested sample to a template, a scan that takes just seconds, says Liotta. The method doesn't have the same weaknesses that older, single-biomarker tests have, says Petricoin: “We're looking at a whole pattern of biomarkers,” and the sheer quantity “can transcend heterogeneity of cancer and disease.”
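    The matching step can be illustrated with a toy nearest-template classifier. This is only a sketch of the general idea: the actual Correlogic software is proprietary, and every number and label below is invented for illustration.

    ```python
    # Toy illustration of matching a binned mass spectrum to the closest
    # class template by cosine similarity. The real algorithm is proprietary;
    # all templates and sample values here are hypothetical.

    import math

    def cosine(a, b):
        """Cosine similarity between two equal-length intensity vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    def classify(spectrum, templates):
        """Return the label of the template most similar to the spectrum."""
        return max(templates, key=lambda label: cosine(spectrum, templates[label]))

    # Hypothetical templates: averaged peak intensities per diagnostic class,
    # learned beforehand from many labeled patient spectra.
    templates = {
        "benign":    [0.90, 0.10, 0.40, 0.05],
        "malignant": [0.20, 0.80, 0.30, 0.60],
    }

    sample = [0.25, 0.75, 0.35, 0.55]   # binned peaks from one blood sample
    print(classify(sample, templates))   # → malignant
    ```

    Once templates are fixed, scoring a new spectrum is a handful of arithmetic operations per bin, consistent with a match “that takes just seconds.”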

    The initial Lancet paper had a powerful impact, says Leigh Anderson, a diagnostics and proteomics expert who now heads the Plasma Proteome Institute in Washington, D.C., because it suggested that, contrary to current practice, researchers could identify disease without knowing how the molecular markers are linked to that disease. But the technique is anything but perfect. The initial test for ovarian cancer gave false positives 5% of the time—a very high rate. Ovarian cancer is a rare disease, occurring in 20 out of 100,000 women. If a clinical proteomics test with a 5% false positive rate were used to screen women in the general population, it would needlessly alarm about 250 women for every real case it found. “It would never see the light of day,” concedes Petricoin. “It has to work every time.”
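The screening arithmetic above can be reproduced directly from the figures in the text (prevalence of 20 per 100,000, 95% sensitivity, 5% false-positive rate):

```python
population = 100_000
prevalence = 20 / 100_000   # ovarian cancer cases per woman screened
sensitivity = 0.95          # fraction of true cases the test detects
false_positive_rate = 0.05  # fraction of healthy women wrongly flagged

cases = population * prevalence                               # 20 true cases
true_positives = cases * sensitivity                          # 19 detected
false_positives = (population - cases) * false_positive_rate  # 4,999 false alarms

print(round(false_positives / true_positives))  # ≈ 263 flagged healthy women per detected case
```

If perfect sensitivity is assumed instead, the ratio is 4,999 false alarms for 20 cases, or roughly 250 to 1, matching the article's rounder figure.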

    The test may prove useful as a secondary screen for women who already have some indication of disease. And because the computer algorithm improves its predictive power the more cases it reviews, Petricoin says, it will get better. NCI and FDA will begin a clinical trial later this year. And in November, Correlogic Systems, the Bethesda company that designed the pattern matching software, licensed its software to two major reference labs for commercial development of an ovarian cancer diagnostic test. Even U.S. Health and Human Services Secretary Tommy Thompson jumped on the bandwagon last July, announcing a new NCI-FDA research program “to revolutionize cancer detection and care.”

    The NCI-FDA team aims to diagnose a wide variety of cancers, vascular diseases, heart disease, inflammation, and transplantation rejection, say Liotta and Petricoin. And they, along with an independent team led by John Semmes of the Eastern Virginia Medical School in Norfolk, published results last year showing that the technique also works for flagging patients with prostate cancer. “The sky is the limit,” Petricoin says.

    Still, diagnostics experts remain cautious. The data are “very provocative,” says Samuel Wells, an oncologist at Duke University in Durham, North Carolina, who is looking at protein profiling to diagnose breast and lung cancer. “But there are people out there who feel it's not ready for prime time.” The major issue, he and others say, is that researchers have no idea what proteins they are tracking. Most doctors may feel far more comfortable if they know there is a cause-and-effect relationship between the proteins they are tracking and the disease. “I think ultimately you will have to identify those [proteins],” says Lindsay. But Liotta, Petricoin, and others fire back that doctors often use techniques before they are fully understood.

    Some researchers have adopted a different protein-based strategy that identifies a large variety of proteins associated with disease and then creates arrays that detect them (Science, 7 December 2001, p. 2080). Called protein biochips or microarrays, these devices use antibodies or other agents to capture proteins of interest on a glass slide or slab of silicon. Test samples are then washed over the array, and fluorescent dyes or other probes are used to highlight the spots on the array where proteins are bound, identifying disease markers that may be present. Although many groups are working on this idea, they have not yet produced a diagnostic tool that's ready for clinical use.

    A chip-based future?

    Clinical researchers say that a related technology—gene chips—is much further along toward guiding the treatment of cancer and other diseases (see sidebar, p. 236). These chips, say proponents, are well suited to sifting through complex genetic diseases such as cancer to spot subtypes of specific illnesses. But for now such tests remain more invasive than protein profiles, since they require taking a biopsy from the diseased tissue itself rather than working with blood or another easily sampled body fluid. The field of gene chip diagnostics got a jumpstart 3 years ago, when a team led by NCI oncologist Louis Staudt developed the Lymphochip, which contains more than 18,000 snippets of genes associated with normal and abnormal lymphocyte development. The researchers used it to diagnose distinct types of a blood cancer called diffuse large B-cell lymphoma (DLBCL), the most common type of non-Hodgkin's lymphoma. Today, only 40% of people diagnosed with the disease survive past 5 years. And conventional methods don't pinpoint patients likely to benefit from therapy.

    Gene chipper.

    Louis Staudt leads the NCI group that worked on the Lymphochip.


    Staudt and his colleagues reported that a computer algorithm was capable of spotting patterns in the gene expression of 40 DLBCL patients that effectively split them into two groups with good and poor prognoses. They followed this work with studies showing that patterns of gene expression in tumor cells could predict whether chemotherapy was likely to be effective in DLBCL patients. Ultimately, tracking just 17 genes allowed them to divide their patients into four different groups whose 5-year survival rates in response to chemotherapy were 73%, 71%, 34%, and 15%, respectively. “What we have found so far is that the profiles of tumors can predict what is going to happen years later,” Staudt says. Other reports suggest that gene chips may spot cancers most likely to metastasize or require aggressive therapy.

    Running the gauntlet

    Despite the early progress of protein and gene profiling, both techniques have a long way to go before they are ready to be approved as clinical diagnostics. And the track record in the diagnostics business is “pretty dismal,” Petricoin says. “The highway is littered with the remains of biomarkers that looked promising early on,” he adds. Even if they do get approved, they still face an uphill battle to change the minds and practices of doctors and laboratories. “It's a very conservative business,” says Lindsay. “Changing a physician's approach to how they treat people is very slow.”

    Poor profits and cycles of hype and disappointment have long plagued medical diagnostics. Today that industry rakes in some $20 billion a year in revenues, 80% of which goes for lab tests at doctors' offices and hospitals to measure things such as cholesterol and electrolyte levels. Although the total sounds impressive, it's only half the revenue that a single drug company—Merck—racked up in the year 2000.

    This low-wattage status has made life difficult for people trying to create new medical diagnostics, says Anderson of the Plasma Proteome Institute. A single blockbuster drug, he explains, can bring in billions, justifying the hundreds of millions of dollars risked on research and development. Diagnostics, by contrast, typically bring in only a few tens of millions. “It changes the amount of research money diagnostic companies are willing to spend,” says Anderson.

    The financial picture gets worse when the way insurance companies pay for tests is considered. “There are huge price pressures on clinical labs to keep the lowest possible cost,” says Anderson. That, Anderson adds, has tended to make the diagnostics industry very conservative, unwilling to introduce anything doctors might view as unfamiliar.

    The industry's own statistics seem to bear that out. Anderson notes in a recent review that over the past decade the number of new medical diagnostics approved has declined to an average of just one per year. While novel genomic and proteomic technologies pour out of research labs, innovation in the diagnostics business has so far been stagnant.

    Equally challenging are the ethical questions raised by these technologies. If new diagnostics make it possible to distinguish which patients are least likely to benefit from a drug, asks Peck of the Institute for Alternative Futures, will insurers stop paying for their treatment? And if insurers don't pay to treat otherwise incurable diseases, will rich patients be in a position to give themselves a slim chance at recovery by taking a modestly effective drug while poor patients are left to die of their illness? Another tough question: If someone has a gene or protein profile suggesting a predisposition to cancer or heart disease, will insurers reject the applicant? For now, such considerations are largely academic, says Peck. But they loom as the dark side of personalized medicine. “New diagnostics will give patients more information for their therapeutic decisions. They won't necessarily make those decisions easier,” Peck says.

    Easy or not, such decisions are coming, says Staudt. “The big picture is that the diagnosis of disease will be based on molecules. That will change the way we teach medical students and practice medicine. That will happen over the next 5 years,” he says. If he's right, novel diagnostics could bring changes in preventing, diagnosing, and treating disease that drugs by themselves could never accomplish.


    Putting Gene Arrays to the Test

    1. Malorye Branca*
    1. Malorye Branca is an editor at Bio-IT World in Framingham, Massachusetts.

    European cancer specialists will soon start recruiting several thousand women for an unusual clinical trial. The focus is common enough—treating breast cancer—but the study will be one of the first ever to assign patients to standard or aggressive therapy based on a gene scan. Half the women will be assessed by clinicians using the conventional European St. Gallen tumor classification. The other half will be evaluated by comparing gene activity in their tumors with patterns seen in deadly tumors. If the signatures are similar, the patient will be steered into more aggressive therapy.

    It is a “courageous study,” says oncologist Emiel Rutgers of the Netherlands Cancer Institute (NKI), one of the trial designers. Researchers are betting that they understand gene expression patterns well enough now to use them in life-altering treatment decisions.

    The trial will use DNA microarrays—devices that can scan the activity of thousands of DNA segments simultaneously. The Dutch team is using a 70-gene signature identified by researchers at NKI and Rosetta Inpharmatics, a bioinformatics firm in Seattle, Washington, that's now part of Merck in Whitehouse Station, New Jersey. The researchers culled the prognostic set from a much larger collection of 25,000 genes whose expression patterns they first profiled in about 80 tumor specimens, as described in a January 2002 letter to Nature. In a further test of almost 300 samples, the signature outperformed any other prognostic indicator of breast cancer, the team reported in the December 2002 issue of The New England Journal of Medicine.

    The NKI-Rosetta work is remarkable in part because it is one of the few examples in which a prognostic gene set has been applied to two different collections of tumors and performed equally well. “I've only seen one other set like this described in a paper,” says Paul Meltzer of the U.S. National Cancer Institute (NCI), citing a pair of studies looking at diffuse large B-cell lymphoma, led by NCI's Louis Staudt.

    The two sets of studies have “galvanized” the community, says Marc Lippman, an oncologist at the University of Michigan, Ann Arbor. The result has been a surge of interest in using microarrays in clinical trials. “Everyone is talking about it and hoping to do it soon,” says Staudt, who is planning a trial himself.

    A group led by Daniel Haber of Massachusetts General Hospital in Boston is also doing a large-scale breast cancer study looking at the NKI-Rosetta signature, but the U.S. group won't use the test results to decide treatment. “The European trial is based on the premise that this works, but we felt we needed a couple more steps,” says Haber.

    Companies think DNA microarrays hold clinical promise, too. Affymetrix in Santa Clara, California, is developing a chip expressly for clinical use. Millennium Pharmaceuticals in Cambridge, Massachusetts, is investing in clinical trials of DNA scans for a number of cancers. U.S. Food and Drug Administration (FDA) spokesperson Steve Guttman estimates that about a dozen clinical trials and hundreds of investigational or feasibility trials using microarrays may be under way. He says, “I would be astounded if microarrays don't become diagnostics.”

    But the case for gene signatures is not ironclad. Replicating the studies has been difficult, for example. One reason, says Todd Golub of the Dana-Farber Cancer Institute and the Whitehead Institute for Biomedical Research in Massachusetts, is that the technology is moving rapidly in a “natural evolution.” People have been improving, not replicating, results, he adds. “The arrays are getting better, the analysis is improving, and more samples are being done.”

    Some groups have, however, tried to replicate the pivotal studies but failed. One of the biggest challenges has been that, until a couple of years ago, many labs had different sets of genes on their chips. But as the human genome is polished up, gene sets are becoming more homogeneous; this should make it easier to replicate findings. Indeed, a fresh crop of studies evaluating known signatures is on the way, researchers say. “We're testing the [NKI-Rosetta] signature, and we're not the only ones,” says Charles Perou, a geneticist at the University of North Carolina, Chapel Hill (see table).

    View this table:

    The NKI group and the Dutch authorities, meanwhile, are planning an “implementation” study, which they hope will lead to this test's becoming a standard clinical tool in Dutch hospitals in 2 to 3 years. The NKI researchers have even spun off a company called Agendia (for Amsterdam Genome Diagnostics), which will distribute and process such chips as well as develop new DNA microarray-based signatures.

    The chief hurdle will be convincing health authorities that the test is practical and worthwhile. The arrays, says Guttman, offer immense “potential—and headaches.” The main problem is the relatively large number of data points, which dwarfs what FDA is used to seeing and requires expert statistical handling. Challenges abound, particularly getting enough good samples. “Study design is everything,” warns Eric Hoffman of Children's National Medical Center in Washington, D.C. Many researchers continue to look at microarray studies with skepticism. “I see a lot of bad data,” says Michael Liebman of the Abramson Cancer Center in Philadelphia. “And people still publish it.”

    DNA arrays likely will get the traditional bumpy biotech ride. Says oncologist Larry Norton of New York City's Memorial Sloan-Kettering Cancer Center: “This is the Wright Brothers' airplane. … It's just the beginning of the era of molecular characterization of cancer.”


    Emerging From the Shadow of Abdus Salam

    1. Daniel Clery

    Once a scientific oasis during the Cold War, a unique center for theoretical physics, together with its scientific neighbors in Trieste, wants to update its mission for today's political climate

    TRIESTE, ITALY—Arbab Ibrahim Arbab had reached a dead end. He'd earned a bachelor's degree at the University of Khartoum, Sudan, in the early 1990s but was unable to continue his studies in astrophysics. “There is hardly any money for research in Sudan,” he explains. Then one of his former professors suggested that he apply for a new diploma course at the United Nations-backed Abdus Salam International Centre for Theoretical Physics (ICTP) in Trieste.

    He got in but was in for a shock. “From the first moment, I had to do lots of very new stuff,” Arbab says. Taking open-book exams and defending his views in seminars were a far cry from the rote learning in Khartoum. But Arbab considers his time in Trieste a lucky break. “We had the best teachers from all over the world,” he says, and the friends he made in this academic bastion in northeastern Italy helped him on his path to a faculty position in Saudi Arabia.

    Arbab's experience resonates across the developing world. For the past 4 decades, ICTP has played host to more than 80,000 visits from Third World physicists who have immersed themselves in the center's intellectual melting pot. Its open-door policy made it one of the few venues for Soviet and U.S. physicists to meet during the Cold War. Nowadays, Indians and Pakistanis sip espressos together while Iranians and Libyans rub shoulders with Americans. This cosmopolitan spirit inspired the creation of other science centers in Trieste that have turned the city into a kind of scientific Shangri-la. Together, these institutions refer to themselves as the Trieste System.

    But some fear that the missionary zeal that established and nourishes this utopia may be fading. Many of the leading lights of ICTP's early years are retiring, and the “new people don't have the same fervor,” says Katepalli Sreenivasan, an Indian-born physicist who took over as ICTP director last month and is cautiously exploring ways to reinvigorate it.

    Einstein's dream

    After World War II, several prominent physicists, including Albert Einstein, Robert Oppenheimer, and Niels Bohr, floated the idea of a physics institute under a U.N. flag where researchers from across the globe could work free of military interference. The idea went nowhere until the early 1960s, when celebrated Pakistani physicist Abdus Salam made a formal proposal to the International Atomic Energy Agency.

    Salam mobilized developing countries to support the center concept, which at first was opposed by big U.N. players wary of losing top talent. After 3 years of wrangling, it was “a rare case of the poor winning against the rich,” says Paolo Budinich, then head of physics at the University of Trieste. In 1964, at Budinich's suggestion, the center was sited in Trieste, which at the southern end of the Iron Curtain seemed symbolically ideal.

    Melting pot.

    New director Katepalli Sreenivasan wants ICTP to remain a refuge for researchers away from the divisions of world politics.


    Embraced by the physics community, ICTP quickly established itself as a first-class research and training center where dozens of Nobel laureates came to teach. It received a further boost when founding director Salam shared a Nobel Prize in 1979 with Sheldon Glashow and Steven Weinberg for the unification of electromagnetism and the weak nuclear force. For 30 years, Salam's inspiring personality dominated ICTP, which was renamed in his honor after his death in 1996. Salam remains a formidable presence. His former office is something of a shrine: Bookshelves are glassed in to keep the volumes as he left them, while other glass cases preserve his walking sticks, prayer mat, and scores of awards and honors from across the globe. “Salam became a demigod,” says Sreenivasan. “He cared for what he was doing. But he has made it difficult for his successors.”

    Despite the U.N. badge, 85% of ICTP's $22.5 million annual budget comes from the Italian government. Italy views the center as an important foreign policy tool and, in 1983, was happy to add the International Centre for Genetic Engineering and Biotechnology (ICGEB) and the Third World Academy of Sciences to the Trieste roster. The International Centre for Science and High Technology was established in 1988 and, 3 years ago, the InterAcademy Panel, a network of 80 national science academies, took up residence.

    A grander role

    Some think that the Trieste System should capitalize on this drawing power and seek official status as a U.N. field office devoted to promoting science in the developing world. Budinich, in particular, is pushing hard for a grander U.N. role for the Trieste System, seeing it as the fulfillment of Salam's dream. Others are equivocal. “Recognition by the U.N. or the E.U. is something we strive for,” says ICGEB director Arturo Falaschi, “but we do not yet have the critical mass.” Sreenivasan has other doubts. “It's a slippery slope. I don't know where we'd end up,” he says. “I would like us to be lean and mean, not grow too much, and stay effective.”

    Sreenivasan, who was director of the Institute for Physical Science and Technology at the University of Maryland, College Park, before coming to Trieste, is feeling his way on how to achieve those goals. One possibility is to create affiliated centers in countries such as India, Brazil, and South Korea whose physicists profited greatly from the ICTP experience and could now support colleagues in poorer neighboring countries. He also hopes to tap such transitional countries for financial contributions that could build up a reserve for expansion or new programs.

    The one thing that Sreenivasan does not want to see is an erosion of ICTP's unique role as a scientific crossroads, a meeting place for researchers separated by politics. “So many people think ICTP belongs to them. I don't want to fail them.”


    Synthetic Chemists Cast DNA as Molecular Dance Master

    1. Robert F. Service

    NEW ORLEANS, LOUISIANA—Approximately 11,000 chemists, materials scientists, and physicists swung into the Big Easy from 23 to 27 March for the 225th National Meeting of the American Chemical Society.

    Over the past century, organic chemists have devised hundreds of reactions to tweak molecules into just the right shape. But they've never mastered biology's way of sculpting molecules: evolution. New molecules evolve when natural mutations in DNA alter proteins that help synthesize needed small organic molecules. If the change helps an organism survive, natural selection makes it standard issue for the organism's descendants. Now, Harvard University chemist David Liu and colleagues have adapted this strategy to evolve a diverse set of small organic compounds as well.

    At the meeting, Liu reported that by linking organic reactants to a series of DNA molecules preprogrammed to bind to one another, his team coaxed the organic molecules to react together in multiple steps to form desired compounds. The molecules can then be exposed to another reactant to alter their shape and function. By starting with an assortment of DNA molecules, the researchers can choreograph reactants to assemble themselves into a wide variety of products in the same beaker. Liu's team used the technique to create a collection of 65 cyclic compounds closely related to known protease inhibitors, drugs used to fight AIDS and other diseases.

    “This is something other chemists will want to know about,” says Craig Townsend, an organic chemist at Johns Hopkins University in Baltimore, Maryland. “It's potentially quite useful.” But Townsend notes that for now the technique relies on reactions that take place in water, whereas most organic reactions are designed to work in organic solvents.

    Find your partner.

    Short DNA tags (colored) bind to complementary sequences on a long DNA “template,” causing linked organic molecules (shapes) to react.

    Liu's team began working on using DNA to control organic reactions 3 years ago. The researchers started with organics that react with one another only at high concentrations. By harnessing the ability of precise sequences of single-stranded DNA to bind to one another, Liu hoped to bring the reactants close enough together to increase their effective concentration. The scheme worked, and Liu's team has since used the approach to forge more than a dozen different types of organic reactions.
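The programmable binding that Liu's scheme relies on is ordinary Watson-Crick base pairing: a short DNA tag hybridizes to a template region only if it is that region's reverse complement. A minimal sketch of the bookkeeping, with made-up sequences:

```python
# Watson-Crick pairing rules: A binds T, G binds C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Return the strand that hybridizes to `seq` (read 5'-to-3')."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def binds(tag, template_region):
    """True if the tag is the exact reverse complement of the template region."""
    return tag == reverse_complement(template_region)

template = "ATGCCGT"                 # one region of the long template strand (invented)
tag = reverse_complement(template)   # "ACGGCAT"
print(tag, binds(tag, template))     # prints: ACGGCAT True
print(binds("AAAAAAA", template))    # prints: False — a mismatched tag won't hybridize
```

In the actual experiments, each reactant's tag is designed against a different template region, so mixing everything in one beaker still routes each reactant only to its intended partner.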

    Last year, the team took another key step by making an organic compound undergo a series of preprogrammed chemical reactions, a prerequisite for making complex synthetic molecules. The researchers started by attaching an initial organic compound to a long “template” strand of single-stranded DNA. Then, one after another, they introduced three different organic complexes, each attached to a short length of single-stranded DNA programmed to bind to a different region on the template strand. Liu's team found that the scheme created three sequential modifications to the initial compound (see figure).

    In their most recent work, the researchers extended their multistep scheme to create a collection of 65 related compounds—a “library,” in chemical jargon. The researchers followed standard combinatorial chemistry techniques to link a series of building blocks together in all their possible combinations. But they added their own twist: They used 65 different template DNAs to direct the organic groups to their appropriate reaction partners. To prove that the technique had worked, they pulled a single member of the library out of solution and verified that it was the compound they desired.

    Right now, a library of 65 compounds pales beside the legions of chemicals created by other combinatorial techniques. But unlike other approaches, Liu notes, in this case molecules found to be reactive can easily be identified, selected, and put to work, just by borrowing tools nature has used for billions of years.


    Tiny Transistors Scout for Cancer

    1. Robert F. Service


    Nanoscale-electronics experts are often ribbed for creating circuitry that microchip companies already make millions of times better. But at the meeting, researchers described a nanoscale device that does something microchips can't touch: tracking proteins indicative of two different forms of cancer. Down the road, arrays of such sensors could enable doctors to screen patients instantly for multiple diseases.

    Researchers have been experimenting with electronic diagnostics for years, but with mixed success. The detectors typically use field effect transistors (FETs), devices in which a voltage applied to one electrode, called a gate, changes the current flow between two other electrodes. To make a diagnostic, researchers replace the gate with a material coated with proteins or other compounds designed to capture molecules of interest. As charged target molecules latch on to the coating, they change the conductance between the transistor's two electrodes, creating a current that betrays the molecules' presence. Such devices usually are not very sensitive, however, because it takes a huge number of target molecules to trigger the current.

    In hopes of improving the sensitivity, Harvard University chemist Charles Lieber and his students tried the same setup with nanoscale transistors. Lieber's group has made such transistors for years, typically out of silicon wires as small as 1 to 2 nanometers across. For this study the researchers fashioned gates from silicon nanowires coated with antibodies for prostate-specific antigen (PSA), a marker for prostate cancer. They then capped the transistors with a plastic pad patterned with tiny channels that carried fluid over the devices.

    When the team injected a dilute solution containing PSA, the negatively charged protein bound to the antibody and altered the conductivity of the FET, Lieber's graduate student Wayne Wang reported at the meeting. Wang says he and his colleagues detected PSA down to levels of 0.025 picograms per milliliter, making their device the most sensitive PSA detector yet created—more than 100 times as sensitive as commercial PSA tests. A similar sensor picked up the presence of CEA, a marker for colorectal cancer, and a simple array of nanosensors detected both cancer markers simultaneously.

    “They have made really beautiful progress,” says Jie Liu, a chemist at Duke University in Durham, North Carolina. Nanowire sensors still have a long way to go before reaching the market, Liu says. But if they can be mass-produced at a reasonable price, they might someday be as ubiquitous as the stethoscope.


    Nanomaterials Show Signs of Toxicity

    1. Robert F. Service


    Nanosized materials are often hailed for their extraordinary electronic, light-emitting, and catalytic properties. But their unique physical characteristics raise concerns that tiny clumps of metals, ceramics, and organics could prove uniquely toxic as well. In New Orleans, three groups of researchers described early efforts to test whether that is the case.

    Two studies focused on single-walled carbon nanotubes (SWNTs), tiny tubes of rolled-up graphite that hold enormous promise for everything from nanoscale electronics and sensors to lightweight materials. At the moment, SWNTs are made in gram-scale batches that pose little threat to the general public. But chemistry Nobel laureate Richard Smalley of Rice University in Houston, Texas, has predicted that if a cheap, large-scale production method could be devised, SWNTs could be sold by the metric ton, undoubtedly increasing exposures.

    At the meeting, groups led by Chiu-Wing Lam of NASA's Johnson Space Center in Houston and David Warheit of DuPont in Wilmington, Delaware, reported that nanotubes can damage lung tissue in mice. Lam's team exposed groups of mice to one of four substances: newly made SWNTs mixed with tiny grains of the metal catalyst used in making the nanotubes; SWNTs treated to remove the metals; carbon black—another all-carbon material shaped like amorphous globs; or nanosized quartz particles, which have well-characterized toxicity.

    Small concern?

    Carbon nanotubes could hold the key to molecular electronics but might be bad for your health.


    They then spritzed the mice's lungs with a solution containing either a moderate or large concentration (0.1 or 0.5 micrograms of material suspended in inactivated mouse serum) of the materials and left the animals alone for 7 or 90 days. Standard histological tests showed that all the particles made their way into the alveoli, the tiny air sacs in the lung, and most remained there intact even after 90 days. The carbon black particles triggered little inflammation. But even at the lower concentration, the nanotubes—with or without metal particles—triggered the formation of granulomas, a combination of dead and live tissue surrounding the material that's a significant sign of toxicity, Lam says. Warheit reported seeing granuloma formation in a similar study but noted that the inflammation seemed to tail off after 3 months. Both researchers cautioned that conclusions about nanotubes' toxicity must wait until researchers learn how the animals' lung tissue reacts to airborne particles. Such studies are likely to begin soon, Lam says.

    Nanotubes weren't the only material to raise red flags. Nanoparticles made from polytetrafluoroethylene, or PTFE, showed even more dramatic effects. Toxicologist Günter Oberdörster of the University of Rochester School of Medicine and Dentistry in New York reported that when his lab exposed rats to air containing 20-nanometer-diameter PTFE nanoparticles for 15 minutes, most of the animals died within 4 hours. By contrast, those exposed to air with PTFE particles 130 nanometers in diameter (the size of a small virus) suffered no ill effects. Oberdörster cautioned against reading too much into the comparison, because the technique for making the smaller particles could have altered them chemically. But histology studies showed that macrophage cells that clear out foreign material had trouble ridding tissue of the smaller particles. In another study, Oberdörster found that inhaled carbon-13 and manganese nanoparticles reached rats' olfactory bulbs and then migrated throughout the brain.

    Most researchers present cautioned that such preliminary toxicity studies don't warrant drastic action, such as regulating nanoparticles. But they agree that further work is essential. “There's issues of risk with every new technology,” says Vicki Colvin, a chemist at Rice University. “It would be silly to think we have nothing to consider.” Concerns about nanoparticles' toxicity must be addressed while the field is still young and exposures limited, Colvin says: “I'd much rather face this now than after nanotechnology becomes widespread.”


    Snapshots From the Meeting

    Preferred Painkiller. Morphine can be powerful medicine, but it's addictive and carries intestinal side effects. At the meeting, Robin Polt, a chemist at the University of Arizona in Tucson, reported devising a novel peptide that in animal studies appears even more potent than morphine and is less addictive. Polt's team found that linking a small sugar molecule to the peptide helped it find its way into the brain. You can expect to see similar sugars decorate other peptide-based brain drugs in the future.

    Heads Up. Sure, a massive meteor impact killed off the dinosaurs 65 million years ago. But new calculations by John Birks of the University of Colorado, Boulder, suggest that smaller, more frequent meteor impacts may wipe out global ozone as often as once every 40,000 years. Look for future ice core measurements to see if paleoclimatologists can spot the expected chemical signature.

    DNA Computing Twist. Conventional DNA computers are prized for their ability to solve complex tasks, such as the classic traveling salesman problem. But they don't use conventional computer logic based on 1's and 0's. At the meeting, Reza Ghadiri of the Scripps Research Institute in La Jolla, California, reported creating a scheme to coax DNA molecules to work as a standard logic circuit. So far, the circuits carry out only three operations, but efforts are under way to boost their complexity.

  19. A Hothouse of Molecular Biology

    1. Elizabeth Pennisi

    Green thumbs at a British lab helped cultivate the achievements of the much-feted Watson and Crick and a slew of other luminaries. Can its success be duplicated, or even sustained?

    Shortly after Sydney Brenner learned that he and two former labmates had won the 2002 Nobel Prize, he received this e-mail from a Chinese researcher: “I wish also to win a Nobel Prize. Please tell me how to do it.” The answer, Brenner announced at the award ceremony last December, is simple. “First you must choose the right place … with generous sponsors to support you.” In addition, he urged, “choose excellent colleagues.” For Brenner and a dozen other Nobel laureates, the right place was Cambridge, U.K., and the right people were their peers at one of the world's first laboratories devoted to molecular biology.

    What started about 55 years ago as a pilot program in biophysics at the University of Cambridge eventually became the Laboratory of Molecular Biology (LMB), now home to about 300 researchers and alma mater to hundreds of molecular biology's most influential researchers. Among the dozen Nobelists the lab has spawned are James Watson and Francis Crick, who co-discovered the structure of DNA there 50 years ago. Watson has called LMB “the most productive center for biology in the history of science.”

    The lab's recipe for success dates back to its early days, when leaders such as Max Perutz had the luck and insight to pick the best and the brightest (among them some quite unorthodox choices) and secure them almost unlimited support, both financial and collegial. In the budding field of molecular biology, “his operation became known as the place to be,” says John Sulston, who shared last year's Nobel Prize in physiology or medicine with Brenner and H. Robert Horvitz.

    The lab welcomed researchers who wandered across disciplines and then encouraged them to interact closely. Even today, when interdisciplinary work has become de rigueur, LMB stands out for its cross-fertilization and community spirit. Lab groups there remain small and flexible, sharing equipment and often budgets. The fiefdoms that plague university departments are absent. “All these elements add up to a strong formula for doing good science,” says LMB molecular biologist Matthew Freeman.

    Although Watson and Crick are perhaps the most famous, the list of 750 or so alumni reads like a Fortune 500 of biology. Scientists there essentially created the field of structural biology. Over the past 5 decades, they invented key technologies such as DNA sequencing. And they have helped to answer some of the most fundamental questions in biology: how genes carry the instructions for proteins, for instance, and how a single cell develops into an animal.

    One great mind begat another, as the lineage of Nobel laureates makes clear (see graphic). Prize winner William Lawrence Bragg, director of the physics lab where LMB was conceived, brought in Max Perutz, who in turn recruited John Kendrew and Crick. Crick attracted Watson and Brenner. Brenner's protégés included Sulston and Horvitz.

    When then-director Perutz moved the lab to its own building in 1962, Frederick Sanger and Aaron Klug came on board. Sanger brought in César Milstein and John Walker. Perutz, Kendrew, Crick, and Watson won their prizes in 1962. Sanger claimed one in 1958 and a second in 1980; Klug garnered the prize in 1982; Milstein and his student, Georges Köhler, in 1984; and Walker in 1997. The most recent winners are Brenner, Sulston, and Horvitz.

    Can such stellar results continue? Big labs that churn out lots of data already threaten the lab's preeminence. And the lab's own exponential growth threatens to dilute the intense interactions that have characterized the place. But LMB's current director, structural biologist Richard Henderson, feels confident it will still provide fertile soil. As Perutz once wrote, “Well-run laboratories can foster [creativity in science]. But discoveries cannot be planned; they pop up, like Puck, in unexpected corners.”

    Prized moment.

    Francis Crick, Maurice Wilkins, John Steinbeck (Nobel laureate in literature), James Watson, Max Perutz, and John Kendrew (left to right) all left Stockholm with Nobel Prizes in hand.


    Roots in physics

    LMB had its origins in the illustrious 19th-century Cavendish Laboratory, part of the University of Cambridge. Cavendish scientists excelled in physics. J. J. Thomson discovered the electron there, and Ernest Rutherford smashed the atom.

    Nobel lineage.

    Many scientists at the Laboratory of Molecular Biology (orange) and its predecessor (green), have received honors at Stockholm and, over the decades, attracted new talent destined to win prizes. (Dates reflect years spent at the lab.)

    In 1915, Bragg, working with his father at the Cavendish, became the youngest person to win a Nobel Prize. The father-son team's success set the stage for a new direction for the lab: biophysics. They had figured out how to use x-ray crystallography to probe the inner nature of crystals. In so doing, they created a window into the molecular structure of biological materials as well.

    After World War II, Bragg was finally able to sneak biology through the lab's back door. Knowing that the Medical Research Council (MRC)—even then the United Kingdom's biggest research supporter—was keen on melding physics and biology, he convinced it to create the MRC Unit for Research on the Molecular Structure of Biological Systems in 1947. Its two members were Perutz, a chemist who wanted to try x-ray crystallography on proteins, and his student, physical chemist Kendrew. For the next 10 years, Perutz and Kendrew raced to identify the molecular makeup of two key blood proteins, myoglobin and hemoglobin. They devised better ways of doing x-ray crystallography and faster ways of analyzing the reams of data generated—harnessing the mathematics department's primitive electronic digital computer for their calculations.

    Perutz recruited allies from diverse backgrounds: Crick was a physicist, Watson a zoologist, and Brenner a physician. Space was tight. Crick and Watson—and, later, Crick and Brenner—sat back to back in one office. “By 1956,” Perutz wrote, “the Unit had grown so large, I spent my time scrounging for a little bench space in a butterfly museum here or the abandoned cyclotron room there.” Together—closely—they began to turn biology on its ear.

    Crowded spaces.

    When Max Perutz and his budding molecular biologists outgrew their space in the Cavendish lab (bottom), they moved into a hut behind this famous physics center (top).


    Life's secret

    Watson and Crick shared a common passion: to figure out what genes were made of. Crick, like many of his contemporaries, thought genes were proteins; Watson believed they consisted of DNA.

    Soon after Watson arrived in Cambridge in 1951, the brash young American—he was 23 at the time—convinced Crick that DNA was the stuff of genes, and they became obsessed with determining DNA's structure. They assumed that the structure would reveal how genetic information was passed from one generation to the next. Their first attempts were a flop, however, and Perutz instructed the pair to leave DNA to Rosalind Franklin and Maurice Wilkins, who were using x-ray crystallography to study this molecule at King's College in London. Ostensibly working on other projects, Watson and Crick continued debating possible structures during almost daily lunches, walks, teas, dinners, and even outings on the Cam River.

    Only after Linus Pauling of the California Institute of Technology in Pasadena, seen as the lab's fiercest rival, came up with an incorrect view of DNA did they get the go-ahead to continue their work. With the help of one of Franklin's prize x-ray diffraction images, they finally figured out how the components of DNA fit together.

    On 28 February 1953, they began to build a paper-metal model demonstrating the pairing of the bases in this double helix. As the story goes, at lunch that day in the Eagle Pub, Crick couldn't contain his excitement, announcing, “We've found the secret of life.”

    Breaking through.

    John Kendrew stands over a model of the structure of myoglobin, the first protein structure to be solved and the beginning of a new era in protein science.

    But beyond the Eagle, their colleagues and the world didn't take much notice. At the time, “there was much more excitement about the Slinky wire-frame spring walking down the stairs,” recalls Michael Fuller, then a lab assistant and now the lab's special projects coordinator. Unraveling the mathematics of how this new toy worked “seemed to excite [lab scientists] a lot more than the DNA model itself.”

    It took another decade for Crick and others to work out some of the basic principles underlying gene function, information that gradually helped confirm his boast. Watson left the lab for Harvard in 1958, where he sought out the secrets of a different nucleic acid, RNA. Brenner became Crick's closest collaborator, working with bacterial viruses, or phages, to verify Crick's ideas. Together they helped crack the triplet nature of the DNA code and track the path of information from gene to protein. Molecular biology was gathering steam, and Perutz's crew was stoking its engines.

    Dynamic duo.

    César Milstein (left) and his student Georges Köhler developed a method for making monoclonal antibodies, now used all over the world.


    A very good year

    1962 was a vintage year for LMB. By then, Kendrew and Perutz had gotten their first real look at the structures of myoglobin and hemoglobin. Kendrew's student Hugh Huxley of University College London had begun his groundbreaking work demonstrating that sliding filaments powered muscle contraction. And Watson and Crick's model had inspired a slew of work on the gene-to-protein transition and the replication of DNA.

    That was also the year that Perutz's crew left the Cavendish and set up their own shop: the official Laboratory of Molecular Biology. They moved into a new six-story building on the outskirts of town. Gone were the buttoned-up lab coats, required attire at the Cavendish. “Things became much freer, lighter,” says Fuller. Per Perutz's order, there were no doors, no locked cabinets—no secrets among scientists.


    While working on virus structures (modeled in photo), Aaron Klug developed new crystallographic methods.


    Sanger, whose work at the University of Cambridge on insulin had earned him a Nobel in 1958, helped open the doors of the new lab. Klug came, too, attracted to LMB's shiny new space and colleagues interested in the structure and function of proteins.

    Champagne corks flew that fall when telegrams arrived telling not just Watson and Crick but also Perutz and Kendrew that they had won Nobel Prizes, the latter pair for chemistry. The reputation of the lab soared. By the late 1960s, American postdocs “were the engines that powered the lab,” says John White, one of those postdocs, who is now at the University of Wisconsin, Madison. White calls the group “Jim Watson wannabes.” They were a stark contrast to their British colleagues, who liked to solve problems at tea instead of at the lab bench. The synergy worked well.

    One transatlantic transplant from the Salk Institute for Biological Studies in California was actually a Brit: Sulston. He was among the first of the dozens of young scientists who would eventually come to work with Brenner on his fondly named “worm project.” In 1974, Horvitz made the trip from Harvard for the same reason. What lured them was Brenner's goal of using the nematode to help figure out how genes affect development. “Most people thought it was rather a joke,” Sulston recalls.

    True to the lab's tradition, Sulston wasn't given much space, not even a desk. As far as Brenner was concerned, “desks encouraged time-wasting,” says Sulston. Perched at a lab bench, Sulston began the painstaking task of watching the cells of the nematode embryo multiply under a microscope and then drawing what he saw. “When [Horvitz] came and found me looking through the microscope, he didn't think it was very scientific,” Sulston recalls. Before long, “he started doing it with me.”

    Together, they tracked how individual cells divide and specialize to make the adult worm. They noticed that cells that had fulfilled their functions sometimes died, a phenomenon they dubbed programmed cell death. By working out the genetics of this process, Sulston and Horvitz opened up a new field in cell biology, earning the 2002 Nobel.

    One floor away, Klug was chasing after the structure of viruses. The new imaging method he and his colleagues developed, which used electron micrographs to work out three-dimensional structures, earned Klug a Nobel in 1982. But he, too, was drawn to the unfinished business of RNA and DNA. Klug and his colleagues eventually worked out the structure of transfer RNA, which shuttles amino acids to where they can be added to the nascent protein. In addition, Klug determined the structure of RNAs that act as enzymes to cut up other RNAs; he also helped show how DNA is packaged as a chromosome (see p. 282).

    New generation.

    Laboratory of Molecular Biology director Richard Henderson (center) hopes his young researchers will follow in their forerunners' footsteps.


    Quiet tenacity

    Sanger and some of his protégés were as quiet and unassuming as Watson, Brenner, and Crick were outspoken. Sanger “was not known to be spouting ideas at 100 miles per hour like Francis Crick or Sydney Brenner,” says Alan Coulson, a nematode biologist who began his career as a technician for Sanger in the 1960s. Like Perutz and Kendrew before him, however, Sanger had tenacity. He spent day after day for years trying to devise a way to sequence DNA. Sanger eventually figured out a relatively efficient way to label the different bases and decipher their order, winning his second Nobel in 1980.

    For several soon-to-be laureates, including Walker and Milstein, Sanger was just the “right” person. At first, to check the accuracy of the sequencing, Walker determined the amino acid sequence of those proteins encoded by the genes that Sanger was deciphering. They began with the DNA of bacterial viruses but later moved on to the mitochondria, where Walker found the enzyme that became the focus of his career, ATP synthase.

    Creative energy.

    Milstein, Klug, Walker, and Sanger each pushed molecular biology in a new direction with their achievements.


    In keeping with the bare-bones bureaucracy of the place, Walker never needed to write a proposal about this new research direction. At the time, MRC relied on the lab chiefs to decide how to spend the money it allotted to the lab; they, in turn, trusted their colleagues to come up with good projects. Thus, Sanger merely asked a few questions before saying, “‘Why don't you get on with it?’” Walker recalls.

    At the outset, the project did not garner much support. “Quite a number said I was a fool and that I was going to wreck my career,” says Walker. Instead, he shared the 1997 Nobel for determining the structure of ATP synthase. He then went on to figure out how this key membrane protein works.

    Milstein also sought out Sanger's leadership, collaborating with him while a Cambridge Ph.D. student. Later, when political unrest forced him to leave his native Argentina, he joined Sanger at LMB. Even more unassuming than his mentor, he didn't strike his colleagues as Nobel material. LMB researcher K. J. Patel likens Milstein to the seemingly bumbling—yet effective—TV detective Columbo. Sometimes Milstein could be seen out in a nearby field pacing, tape recorder in hand. He'd walk through the halls distracted and oblivious. “But he was really quite a deep thinker,” says Klug.

    When Milstein joined LMB, Sanger suggested that he focus on antibodies instead of enzymes. It turned out to be sage advice. With Köhler, Milstein eventually developed a way to make unlimited amounts of monoclonal antibodies, a uniform set of proteins that all home in on the same target, paving the way for a Nobel Prize in physiology or medicine in 1984.

    Right place, right people

    A British newspaper once described LMB as a Nobel factory. But Klug takes issue with that characterization: “It's more like a plantation, where you plant the seed.” The fertilizer came in many forms—money, equipment, collegiality, to name a few.

    For about a decade following World War II, MRC's science budget grew about 17% annually. “Anything could be done. There were no limits,” says Hugh Pelham, an LMB cell biologist. And to get those funds, all the researchers had to do was ask. After 30 years with MRC, Walker boasts, “To this day, I have only ever written one grant.” The support has been consistent, although modest at times; there was no money for wood-paneled offices or elegant oak desks, for example.

    Money flowed even without clear “results.” Perutz, for instance, spent decades before his hemoglobin work panned out. Similarly, the worm researchers were far from prolific during the project's first 10 years. Even in today's “publish or perish” climate, that attitude still prevails: “It's not whether you have published a lot of papers, it's more whether you have done some fundamental work,” says LMB bioinformaticist Sarah Teichmann.

    When LMB researchers needed a new instrument, Perutz made sure technicians and engineers were there to build it, a model he learned at the Cavendish. “We were interested in topics that stretched the techniques,” says Walker, explaining how the lab developed technologies such as x-ray crystallography, DNA sequencing, and confocal microscopes.

    That left researchers free to concentrate on their work. “Your time was almost entirely devoted to research,” says Thomas Pollard, a cell biologist at Yale University and LMB alum. Even senior scientists worked at the bench—a tradition that continues today and goes far to explain the lab's vitality, says Gerald Rubin of the Howard Hughes Medical Institute in Chevy Chase, Maryland, who did his graduate work at LMB. Perutz spent 90% of his working time at the bench until his death last year, focusing most recently on neurodegenerative disease. Klug still maintains a lab, although he's officially retired.

    The tight space only intensified the camaraderie. Rubin recalls being assigned the lab bench between the pH meter and the balance—about a meter wide. Individual offices, even for the top scientists, were out of the question until more space was recently added. “I think [crowding] was a good thing,” says Brenner. Similarly, the lack of alternative dining options—then and now—meant that everyone ate together, and conversations and critiques were free-flowing.

    Ultimately, it was the people who made the place. “The LMB was able to concentrate in one place very exceptional scientists,” says Pollard. Today, some of that talent would probably not make the first cut for a university position, given the apparent discrepancy between the scientist's experience and the job description. Sulston, for instance, was a chemist working on the origins of life when he came to study the worm. “Max [Perutz] had this uncanny ability to see something special, not necessarily academic ability, and to home in on that,” says Fuller. “There's a history of people with no qualifications who are now senior.”

    Past as prologue

    At the end of April, hundreds of former LMB researchers will converge on Cambridge to celebrate the 50th anniversary of Watson and Crick's DNA paper. They include numerous Nobel laureates whose prizewinning research came after their time at LMB, as well as prominent department heads, institute directors, and journal editors. There is no doubt in their minds that LMB is unique. “I don't think if you had put the same people in a U.S. institution that they would have done as well,” says Rubin.

    But can it continue to be so special? Thirty years ago, “the field was much smaller. It was the place for U.S. postdocs to go, and the best went,” Rubin explains. “Now there are many good places.” Although funds still flow relatively freely, paperwork, regulations, and other constraints have crept in, Henderson notes. And while he and his colleagues pride themselves on their small labs, which range in size from 1 to 10 people, they worry that they will fall behind. “There's so much more you can do with more manpower,” says Pelham.

    Biological incubator.

    Hundreds of budding molecular biologists got their start at the Laboratory of Molecular Biology, opened in 1962.


    To keep pace with the burgeoning scientists and staff—about 400, more than twice the number 30 years ago—the building has doubled in size every decade since 1962. A new building is in the works. Says Klug, “I am worried that we will get too big and lose the ethos on which the lab has been built.”

    LMB now relies on a glossy annual report rather than word of mouth to publicize its accomplishments. Commercial considerations are also gaining prominence. For instance, 25 years ago MRC didn't bother to patent Milstein's technique for making monoclonal antibodies, now a fundamental tool in many industries. The same was true of Sanger's sequencing technology. Today, patenting is encouraged, says Henderson, and several companies, such as Celltech, are associated with the lab.

    Klug and Henderson suspect that the place is good for at least a couple more Nobels. Even today, with universities, medical foundations, and other organizations working to create hotbeds of scientific creativity, LMB still earns strong kudos. Says Yale's Joan Steitz: “There have been very good research institutions that have tried to capture the flavor and spirit, but they haven't got it.”

  20. DNA's Cast of Thousands

    1. Elizabeth Pennisi

    Watson and Crick's discovery revealed much, suggested more, but left many questions unanswered. Ever since, researchers have been discovering the proteins that unlock DNA and the genetic code.

    When James Watson and Francis Crick elucidated the structure of DNA, they discovered an elegantly simple molecule. With cardboard cutouts, metal, and wire, they showed how DNA's two chains wound around each other, with the paired bases inside, one full rotation every 10 bases. Their model immediately suggested how DNA copied itself and enabled genetic information to flow from one generation to the next. They boasted that they had found the “secret of life”—essentially, biology's master molecule that controlled the fate of the cell and, consequently, of the organism.

    Fifty years of research since then has shown that, despite its precision design, this molecule can't dance without a team of choreographers. Like a puppet, DNA comes alive only when numerous proteins pull its “strings.” At the time of their discovery, Watson and Crick had only the haziest of ideas about how this double helix interacted with proteins. But rebuilt today, Watson and Crick's bare-bones model would be draped with proteins that kink and curl, repair, and otherwise animate DNA.


    Although B-DNA is the most common form and the one first described, certain conditions force the molecule into the A-DNA or Z-DNA configuration.


    DNA ascendant

    The age of DNA began well before Crick and Watson were born. In the 1860s, Friedrich Miescher, a Swiss scientist working in Tübingen, Germany, isolated a strange, phosphorus-rich material from the cell nucleus. Within decades, it was clear that this peculiar substance—later identified as nucleic acids—was fundamental to the cell's chemistry. Somehow.

    Throughout the early part of the 20th century, biochemists argued about DNA's role. Some postulated that it was the stuff of genes; others insisted that proteins carried the genetic code. Even though Oswald Avery, Colin MacLeod, and Maclyn McCarty of Rockefeller University in New York City demonstrated in 1944 that DNA and not proteins carried the genetic code, the debate continued; even Crick and Watson at first disagreed on this point.

    Soon after Watson joined him at the University of Cambridge, U.K., in 1951, Crick was persuaded. Across two continents, they and others set out to discover just what DNA looked like. Tapping a newly developed imaging technique called x-ray crystallography, Rosalind Franklin and Maurice Wilkins of King's College in London produced images that showed DNA was helical. Others were busy envisioning how DNA's bases were arranged to enable it to function. Watson and Crick's discovery settled once and for all that genes were made of DNA. But it took eight more years—and the efforts of many researchers—to crack the genetic code contained in the nucleotide bases.

    Duplicating DNA.

    The ring section of DNA polymerase helps this enzyme's core work more efficiently as it carries out DNA replication. The crystal structure shows the ring (red and yellow) sliding down the DNA.


    Watson and Crick, particularly Crick, had many ideas about how DNA worked, something their landmark 1953 paper hinted at in its last sentence: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.” The idea was that, as the double helix uncoiled, each strand of an existing DNA molecule could act as a template for building another copy of the molecule. In the late 1950s, Matthew Meselson and Franklin Stahl of the California Institute of Technology in Pasadena showed this to be the case.
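    The template idea behind that famous sentence is simple enough to state in code: because A always pairs with T and G with C, each strand fully determines its partner. A minimal Python sketch (the sequence is invented, and strand directionality is ignored for simplicity):

```python
# Watson-Crick base pairing: A pairs with T, G pairs with C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement_strand(template: str) -> str:
    """Build the strand that pairs with `template`, base by base."""
    return "".join(PAIR[base] for base in template)

template = "ATGGCA"                      # an arbitrary example sequence
new_strand = complement_strand(template)
print(new_strand)                        # -> TACCGT
# Complementing the new strand regenerates the original template,
# which is why each strand can serve as a template for the other.
assert complement_strand(new_strand) == template
```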

    First glimpse.

    This x-ray diffraction pattern hinted that DNA was helical, thereby helping Watson and Crick come up with the right structure.


    Shortly afterward, Arthur Kornberg of Stanford University and his colleagues demonstrated that an enzyme they had discovered several years earlier orchestrates the synthesis of each new DNA strand. The enzyme, DNA polymerase, adds just the right nucleotide base along each separated DNA strand, ensuring that the new strand precisely complements its template. More than 20 additional proteins are also known to perform distinct functions in copying DNA. Some help unwind DNA, for example; others mark the starting point of replication. And since then researchers have found at least two more DNA polymerases: one specializes in making new DNA; another helps repair damaged DNA.

    Some mistakes are introduced long after DNA polymerase has finished its job—for instance, when radiation or toxic chemicals cause the wrong base or bases to be substituted or others to be deleted altogether. This necessitates a more extensive repair system. Faced with such errors, the cell calls in its molecular repairers. These enzymes mark the bad DNA, cut it out, and replace it with the right code. One of the best-studied examples is that of bacteria as they recover from exposure to ultraviolet light. First, a complex of UvrABC proteins recognizes the damage. Then the UvrABC enzyme cuts at two sites a few bases to either side of the defective DNA and whisks away that piece. DNA polymerase then fills in the gap with the correct bases.
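    The mark-cut-replace cycle described above can be caricatured as string surgery. A toy Python sketch, with invented sequences, a placeholder character for damaged bases, and a deliberately simplified two-base flank (real nucleotide excision repair removes a stretch of roughly a dozen bases and operates on the double helix, not on a string):

```python
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def excision_repair(strand: str, damage_start: int, damage_len: int,
                    template: str) -> str:
    """Cut out a damaged stretch and resynthesize it from the template.

    Mimics the three steps in the text: recognize the damaged span,
    excise it (plus a few flanking bases), and let "polymerase" fill
    the gap with bases complementary to the undamaged template strand.
    """
    lo = max(0, damage_start - 2)                 # cut a few bases before...
    hi = min(len(strand), damage_start + damage_len + 2)  # ...and after
    patch = "".join(PAIR[b] for b in template[lo:hi])
    return strand[:lo] + patch + strand[hi:]

# Toy example: `template` is the undamaged partner strand.
template = "ATGGCATTAC"
damaged  = "TACCXXAATG"   # 'X' marks two corrupted bases at positions 4-5
print(excision_repair(damaged, 4, 2, template))   # -> TACCGTAATG
```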

    DNA's messenger

    Watson and Crick's discovery left wide open the question of how DNA specifies which proteins are made. It was more than a decade before the “code” itself was worked out, along with all the intricate details of the gene-to-protein transition.

    Simple to complex.

    Researchers first thought individual proteins latched onto and regulated DNA activity (W-shaped ribbon). But now they realize that these proteins rarely act alone. Varying combinations of transcription factors, along with protein activators or repressors, exert dynamic and finely tuned control over gene expression.


    The first clues that genes specified amino acids came from V. M. Ingram of the University of Cambridge. He studied the sickle cell trait, in which two defective genes can lead to severe anemia, while one causes just mild problems. In 1957, he tracked the defect down to a single amino acid change, a mistake in whatever specified the order of amino acids in the hemoglobin protein. The work suggested that DNA was that template. Then, in 1961, Marshall Nirenberg of the U.S. National Institutes of Health in Bethesda, Maryland, showed that a three-base RNA sequence, UUU, specified the amino acid phenylalanine; soon the rest of the triplets were deciphered.

    Reading the code.

    To transfer a gene's coding data into protein production, an enzyme called RNA polymerase (peach) latches onto DNA, unwinds the double helix, and begins to build messenger RNA based on a template strand of the DNA (top). It moves along the DNA as the nascent RNA forms (bottom). The crystal structure shows the RNA polymerase (gray) with its clamp (orange) building the RNA based on the DNA strand (blue).


    We now know that DNA's triplet code is transmitted through an intermediary called messenger RNA (mRNA), a closely related, single-stranded nucleic acid. The mRNA ferries this information out of the nucleus to the ribosome, which builds the protein one amino acid at a time.
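    In code, the ribosome's job amounts to reading the mRNA three bases at a time and looking each triplet up in the genetic code. A minimal Python sketch using a few real assignments from the standard code (the table covers only a handful of the 64 codons; a real cell also needs tRNAs, initiation factors, and much else):

```python
# A few entries from the standard genetic code (mRNA codon -> amino acid).
CODON_TABLE = {
    "AUG": "Met",  # methionine; also the start signal
    "UUU": "Phe",  # phenylalanine, the first codon deciphered by Nirenberg
    "GGC": "Gly",  # glycine
    "UAA": None,   # stop codon: translation ends here
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid is None:       # stop codon reached
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCUAA"))     # -> ['Met', 'Phe', 'Gly']
```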

    Early on, there were clues that a short-lived molecule might be involved. The prime suspect was RNA, because it appears in large quantities outside the nucleus where proteins are being made by the ribosome. But researchers had no idea how the RNA got there or what it really did. Then, in 1960, four groups, Jerard Hurwitz of Memorial Sloan-Kettering Cancer Center in New York City and Sam Weiss of the University of Chicago among them, independently discovered that the enzyme RNA polymerase strings together the RNA's bases much as DNA polymerase does for DNA. A year later, Sol Spiegelman of the University of Illinois, Urbana-Champaign, showed that RNA polymerase reads this code from the DNA template, providing one of the strongest clues that RNA was somehow involved with DNA.

    At about the same time in Paris, the Pasteur Institute's François Jacob and Jacques Monod proposed that a short-lived messenger molecule shuttled DNA's coding information from the nucleus to the cytoplasm. Working with Meselson and Sydney Brenner at the Laboratory of Molecular Biology, Jacob then verified the idea. Thus, by the early 1960s, there was little doubt that mRNA linked the gene to a protein's production and that RNA polymerase was central to this process.

    Again, the process is proving to be even more complicated than researchers initially realized. It turns out that there are three RNA polymerases: one for protein-coding genes and two for genes that code for RNAs that are never translated into proteins. In addition, RNA polymerase has help. For instance, in bacteria, RNA polymerase works much more efficiently when it links up with a protein called cAMP-CAP. And in the more complex eukaryotes, a whole series of steps precedes RNA polymerase activity. Other proteins and protein-RNA complexes are needed to process the RNA that peels off the DNA before it's ready to exit the nucleus. A key change is the removal of the noncoding stretches, known as introns, that are transcribed from the noncoding sections of DNA.

    Twirled around.

    Thanks to an eight-protein complex, DNA's double helix is twisted into a “beads on a string” array. Called histones, those proteins make up the nucleosome, and about 200 base pairs wind around each nucleosome, then loop to the next. Once considered no more than scaffolding, nucleosomes help control gene activity when groups of enzymes such as deacetylation complexes chemically modify the histones (illustration). The crystal structure (top right) reveals DNA (white) encircling various histones, and the bird's-eye view (bottom left) shows those histones with DNA in peach and green.


    DNA's advisers

    The entire DNA code is not expressed all at once. The human genome's 31,000 or so genes are turned on and off, singly and in combination, depending on which suites of proteins are needed for specific cell functions. Few researchers gave much thought to the idea that proteins might regulate genetic activity until 1961. In the same paper in which Monod and Jacob suggested the existence of mRNA, they proposed that cells also have regulatory elements that affect gene expression. Then, in 1967, Walter Gilbert and Benno Müller-Hill of Harvard University isolated a protein that bound to DNA and repressed gene activity. Independently, Harvard's Mark Ptashne isolated another transcription factor, as these proteins are now called. Scores more have since been discovered. Some latch onto DNA where the synthesis of mRNA begins, to either suppress or stimulate gene activity; others work from afar.

    Over time, researchers have come to realize that multiple proteins, interacting in different ways, exert exquisite control over gene expression. The same factor can alternate between activating and repressing transcription, depending on its protein partners. For example, in normal human colon cells a protein called Groucho links with a transcription factor called Lef/Tcf and keeps an oncogene called Myc quiet. But if β-catenin takes the place of Groucho, Myc is turned on.
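That partner-dependent switch can be captured in a toy rule. The sketch below is purely illustrative, standing in for protein-protein interactions with a simple lookup; the protein names come from the example above.

```python
# Toy sketch of combinatorial control: the same transcription factor
# (Lef/Tcf) represses or activates the Myc oncogene depending on which
# partner protein is bound to it.
def myc_state(partner):
    """Return Myc activity given Lef/Tcf's binding partner."""
    if partner == "Groucho":
        return "off"   # Groucho + Lef/Tcf keeps Myc quiet
    if partner == "beta-catenin":
        return "on"    # beta-catenin displaces Groucho and switches Myc on
    return "unknown"

print(myc_state("Groucho"))       # off
print(myc_state("beta-catenin"))  # on
```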

    Transcription factors often make their mark by bending the DNA so that the enzymes that transcribe the DNA code into mRNA can position themselves at the right place on DNA and still interact with one another. For example, in a prerequisite for expression of most eukaryotic genes, the transcription factor TFIID causes the DNA molecule to bend, paving the way for other transcription factors.

    Often, transcription factors work in combination. A half-dozen factors link together to activate the β-interferon gene, for instance. Their association is facilitated by several copies of yet another protein, called HMGI, which causes DNA to bend sharply so that the various transcription factors align themselves elbow to elbow as they work. And four varieties of specific transcription factors, together with more than a half-dozen other proteins, are required to switch on the TTR gene in liver cells, so they make that blood protein, transthyretin.

    DNA's attire

    At the time of Watson and Crick's discovery, it was already clear that DNA was not really naked in the cell nucleus but was adorned with proteins. For decades, these proteins were considered mere dressing; 50 years later, researchers are still figuring out how they interact with DNA to regulate gene expression.

    In the cell nucleus, those proteins—primarily histones—together with DNA make up a complex called chromatin, so named because of how it stains in cells. In 1974, Stanford's Roger Kornberg (son of Arthur) proposed that chromatin was quite structured—made up of repeating units, each containing 200 base pairs of DNA wrapped around two copies each of four different histones. Those units are now called nucleosomes, and they pack and organize DNA so that under an electron microscope, it looks like beads on a string.
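Kornberg's repeating unit invites a back-of-envelope calculation. The sketch below is an illustration under stated assumptions: the ~200-base-pair repeat from the article, and a rough human genome size of 3.1 billion base pairs, which is this sketch's own assumption rather than a figure from the article.

```python
# Back-of-envelope sketch: if roughly 200 base pairs of DNA go with each
# nucleosome (Kornberg's repeating unit), a genome needs on the order of
# genome_size / 200 nucleosomes to package its DNA.
GENOME_BP = 3_100_000_000   # assumed approximate human genome size, base pairs
BP_PER_NUCLEOSOME = 200     # repeating unit proposed by Roger Kornberg (1974)

nucleosomes = GENOME_BP // BP_PER_NUCLEOSOME
print(f"~{nucleosomes:,} nucleosomes")  # ~15,500,000 nucleosomes
```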

    For the next 20 years, few researchers thought histones were anything more than structural supports. Because the nucleosomes remained intact during transcription, they didn't seem to be involved in gene regulation. But it turns out that although the nucleosome remains intact, the histones need to loosen their grip on DNA for transcription factors to gain access. Otherwise, RNA polymerase has a hard time getting into position, and transcription is hampered.

    As early as 1964, Vincent Allfrey of Rockefeller University realized that histones were often chemically modified by the addition of acetyl side groups, which seemed to cause them to slacken their hold on DNA. The observation was all but forgotten, however, until 1996, when a slew of researchers discovered that histones, too, are puppets: Various proteins cause them to change shape, which in turn alters gene activity. In a matter of months, the cast of puppeteers included four enzymes that add acetyl groups to histones and five that remove them. Acetylation prompts gene activity, whereas the removal of a histone's acetyl groups silences nearby genes.
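The acetylation switch can be modeled as a toggle. The sketch below is a deliberately simplified illustration of the rule just described, with the mark represented as a single flag rather than the many modified histone sites found in a real nucleosome.

```python
# Toy model of the acetylation switch: adding acetyl groups to histones
# loosens their grip on DNA and promotes transcription; deacetylation
# silences nearby genes.
def acetylate(marks):
    """Acetyltransferase step: add the acetyl mark."""
    return marks | {"acetyl"}

def deacetylate(marks):
    """Deacetylase step: strip the acetyl mark."""
    return marks - {"acetyl"}

def gene_active(marks):
    """A gene near an acetylated histone is open for transcription."""
    return "acetyl" in marks

marks = acetylate(set())
print(gene_active(marks))              # True
print(gene_active(deacetylate(marks))) # False
```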

    Today, that cast has expanded to include enzymes that add and remove methyl groups and others that do the same with phosphates. Histones are becoming such prominent players in DNA activity that two researchers—Thomas Jenuwein of the Research Institute of Molecular Pathology in Vienna, Austria, and C. David Allis of the University of Virginia Health Science Center in Charlottesville—now argue that there is a “histone code” as complex and important as the DNA code, one that fine-tunes gene activity and adds more depth to the information encoded in the genes. The idea is slowly catching on, leaving pioneering molecular biologists to shake their heads.

    Forty years ago, Brenner and others were convinced that the central questions in molecular biology would be answered well before the turn of the century. Now they know better. The nature of the histone code is just one of many problems whose complexities are left to be unraveled.
