News this Week

Science  21 Apr 2000:
Vol. 288, Issue 5465, pp. 410
1. SCIENCE AND BUSINESS

Patent Prompts Rochester to Sue for Slice of Drug Profits

1. David Malakoff

The University of Rochester in New York has won what it calls the most lucrative patent ever awarded to an academic institution, triggering a major confrontation with the makers of several best-selling drugs.

Last week, after an 8-year review, the U.S. Patent and Trademark Office (PTO) gave the university a sweeping patent on the science underlying a new class of anti-inflammatory drugs known as COX-2 inhibitors. The market for such drugs is estimated at $10 billion by 2008. Within hours of the 11 April award, Rochester officials filed suit against the makers of a top-selling drug based on the discovery. The suit claims that the university is entitled to significant royalties on Celebrex, an antiarthritis drug made by G. D. Searle & Co. of Skokie, Illinois, and the Pfizer Co. of New York City, which generated nearly $1.5 billion in sales last year. Rochester's claim could eventually expand to cover Vioxx, a Merck product that earned the Whitehouse Station, New Jersey, company nearly $500 million in 1999.

Searle attorneys argue that the patent is “overly broad. … Our preliminary look at their claim is that it will not be shown to be valid,” says Robert Bogomolny, a senior attorney with Pharmacia Co. of Peapack, New Jersey, Searle's parent corporation. Without providing specifics, Pharmacia attorneys claimed that the work is not “novel,” because other researchers published papers on similar COX-2 work prior to Rochester's 1992 patent application.

The new patent is based on research conducted in the 1980s and 1990s by Rochester physician and cancer researcher Donald Young and two colleagues, Michael O'Banion and Virginia Winn. Along with teams at several other universities, Young's group was investigating enzymes, now called cyclooxygenase-1 (COX-1) and COX-2, that appeared to play a role in inflammation (Science, 22 May 1998, p. 1191). Young's team found and cloned the gene that produces COX-2, then engineered cells to produce one or both enzymes. Experiments showed that COX-2 appears to be primarily responsible for inflammation, while COX-1 helps protect the stomach lining and kidneys.
Compounds that inhibit COX-2 but leave COX-1 alone appear to be free of the negative side effects—such as ulcers and gastrointestinal bleeding—caused by aspirin and similar drugs that block both enzymes. Related studies were also done by teams led by Phil Needleman at Washington University in St. Louis, who is now Pharmacia's chief scientist; Harvey Herschman at the University of California, Los Angeles; and Daniel Simmons at Brigham Young University in Provo, Utah. But in 1992, Rochester became the only school to move ahead with a patent application.

Last September, patent examiners informed the university that they were ready to approve the claim. While such notifications often prompt universities to begin negotiating a licensing deal with a company, Rochester officials instead hired attorney Gerald Dodson of Morrison and Foerster in San Francisco. Last year, Dodson led the legal team that won a $200 million settlement for the University of California, San Francisco, in a patent fight with the biotech company Genentech (Science, 26 November 1999, p. 1655). “It's not like this was some stealth effort,” says Rochester's attorney, Terrance O'Grady. “We just didn't want anything to jeopardize the patent.”

Pharmacia executives say that news of the patent and lawsuit caught them off guard. “We are surprised, dismayed, and irritated about how this all began,” says Bogomolny. “It is unusual, to say the least, for a patent dispute to begin with a lawsuit and press conference.” He says the company is still deciding on its strategy, but will “defend its interests.”

O'Grady believes that the PTO's 8-year review will help Rochester prevail. “The examiner examined every issue brought before [the PTO], so any challenger will have to raise a new concern,” he says. But the university's extensive paper trail to justify its claims could also prove to be a weak link, says an outside attorney. Challengers “like to see big boxes full of paper,” says Richard Aron Osman of Hillsborough, California. “The more there is, the more likely you are to find inconsistent statements” and other potentially damaging evidence.

Rochester officials say they want to resolve their claim at the bargaining table, not in the courtroom. “We're eager to negotiate,” says O'Grady. But the school's opening bid of a royalty in the 10% range—which could generate more than $1 billion over the patent's 17-year life—is well above the 2% to 4% that industry sources say is typical. However, even lower royalties could produce payments that dwarf those from the basic Cohen-Boyer gene-engineering patent held by Stanford University and the University of California, which has generated about $250 million since 1980. Some patent watchers are skeptical that the COX patent will prove so special. But they say the debate itself highlights the growing willingness of universities to seek a cut of the profits produced by taxpayer-funded research. “Universities are getting very active in protecting their intellectual property,” says James Sieverson, head of the Cornell Research Foundation in Ithaca, New York, and president of the Association of University Technology Managers, which monitors patenting activity on U.S. campuses. Still, he notes, “the kinds of patents that have this kind of potential financial impact are relatively rare, maybe one out of 1000.”

2. PALEOANTHROPOLOGY

Is Alexander the Great's Father Missing, Too?

1. Robert Koenig

The remains of Alexander the Great—the warrior who conquered much of the known civilized world in the fourth century B.C.—have been lost for more than 1500 years. But in 1977, Greek archaeologists unearthed a tomb in the town of Vergina, in northern Greece, that they claimed contained a worthy consolation prize: the remains of Alexander's powerful father, Philip II, who had started expanding the Macedonian empire and had enlisted Aristotle to tutor the precocious prince. A team of British scholars confirmed this identification in 1984. But on page 511, a paper by Greek paleoanthropologist Antonis Bartsiokas argues that close-up photographic analysis of the remains suggests that they are not those of battle-scarred Philip II, but belonged to a less important historical figure: Philip's son (and Alexander the Great's half-brother), Philip III Arrhidaeus.

The new report also offers the tantalizing possibility that some of the tomb's artifacts, including a helmet and a ceremonial shield, may have actually belonged to the great conqueror himself. “Are we lucky enough to have found the helmet of Alexander the Great?” wonders Eugene N. Borza, a leading expert on the ancient Macedonians and professor emeritus of ancient history at Pennsylvania State University, University Park. “It's too good to be true, but a tempting thought nonetheless.”

There is general agreement that the Great Tumulus of Vergina, originally excavated by Greek archaeologist Manolis Andronicos, was a burial ground for some members of Alexander's royal family. Of the four tombs, one almost certainly contains the remains of Alexander's only son—murdered at a young age—but questions have always surrounded the identity of the male and female remains in the richest tomb, Royal Tomb II. A team including University of Bristol anatomist Jonathan H. Musgrave concluded in 1984 that markings on the skull and other bones appeared to correlate with reported injuries suffered by Philip II, including an arrow wound to his right eye during the siege of Methone in 354 B.C., 18 years before his assassination. That identification forms the centerpiece of a new museum at Vergina.

But Bartsiokas, the director of Greece's Anaximandrian Institute of Human Evolution and an assistant professor at Democritus University of Thrace, says his new close-ups of the bones, taken in 1998, do not reveal such damage. He scrutinized the bones to see whether the previously reported “notch” and “bone pimple” on the skull were consistent with healed wounds from an arrow injury. He rejects this hypothesis, reporting that the marks “bear no evidence of healing or callus formation” and are simply normal anatomical features. “Despite the severe injuries suffered by Philip II, there is no skeletal evidence whatsoever of any injuries to the male occupant of Royal Tomb II,” he concludes.

He also notes that historical records show that Philip III Arrhidaeus—who ruled for 6 years after Alexander's death and was murdered in 317 B.C.—had been buried for about 6 months before his exhumation and cremation by Cassander. And the condition of the skeletal remains was consistent with bones that had been “dry” of flesh before cremation. “Only the bones of Arrhidaeus would show these characteristics,” Bartsiokas contends. As for the Vergina museum's description of Tomb II as that of Philip II, Bartsiokas says: “I hope they change that.”

The later burial date leaves open the possibility that some of the artifacts in the tomb—including a helmet, a gilded silver diadem, an iron-and-gold cuirass, and the ceremonial shield—may have belonged to Alexander himself, who had died in 323 B.C. and whose remains were interred in Egypt before being lost. Borza says the Vergina tomb's items fit several historical descriptions of Alexander's paraphernalia.

Despite that exciting prospect, Musgrave—who made the original identification along with Manchester archaeologist John Prag and medical artist Richard Neave—stands by his group's findings. He insists that the facial bones he examined showed signs of healed wounds and suggests that the bones' condition may have degraded during the 15 years between his examination and that of Bartsiokas. He also contends that several details of Tomb II—including its apparent hasty construction—argue against it being the tomb of Philip III Arrhidaeus. Bartsiokas “has concentrated on evidence that is limited in the extreme to postulate a hypothesis that cannot be sustained,” Musgrave says.

But Borza and Bartsiokas dismiss those objections. Borza argues that two artifacts in the tomb—a piece of ceramic pottery and a Macedonian silver wine strainer—are clearly dated later than Philip II's death. The new analysis of the bones, along with the late artifacts, says Borza, “drive the final nail in the coffin of the Philip II identification. Clearly, this is not the tomb of Philip II, but of the next generation.”

3. GERMAN VOTE

Animal Rights Amendment Defeated

1. Robert Koenig

German scientists who experiment on laboratory animals can breathe a bit easier—for now. On 13 April Germany's lower house of parliament narrowly defeated an effort to amend the nation's constitution to guarantee animal welfare. Such an amendment could have led to court challenges of much of the country's lab-animal research.

The amendment, supported by the ruling coalition of Social Democrats and Greens, won a majority in the 669-member Bundestag. But it failed by 54 votes to garner the two-thirds margin necessary to alter the constitution, thanks to opposition from the center-right Christian Democrats. “It is frightening that what could have been a major disaster to science and research … was only prevented by a single party,” says Andreas Kreiter, a Bremen University neuroscientist whose brain research using macaques has been a high-profile target of animal rights activists.

Germany already has one of Europe's toughest laws requiring researchers to treat animals humanely by providing adequate caging and food while minimizing suffering. The amendment would have tacked onto the constitution a one-sentence guarantee of animal rights with no allowances for lab animals. If the amendment had been adopted, Kreiter asserts, activists would have brought “a huge number of court trials” to halt experiments involving animals. This, he says, “would have, in effect, stopped biomedical research in Germany.”

The close vote energized animal rights leaders, who have vowed to make the Christian Democrats pay, politically, for their stance. Eisenhart von Loeper, who heads the animal rights group Bundesverband der Tierversuchsgegner, says the battle is heating up in Germany's 16 states, half of which already have added animal rights provisions to their own constitutions. (These have far less impact than would a national amendment.) Adds Wolfgang Apel, president of the Deutscher Tierschutzbund: “We are not giving up.”

4. GERMANY

Panel Urges New Slots for Young Researchers

1. Robert Koenig

Four years after getting his Ph.D. from Cologne University, physicist Norbert Pietralla is on a fast track. In the rigid, tradition-bound German academic system, that also makes him a pioneer. Pietralla, a postdoc at Yale University's Wright Nuclear Structure Laboratory, is one of 100 young scientists chosen last year for a new fellowship program named after noted German mathematician Emmy Noether. When he returns to Germany after 2 years abroad, Pietralla will receive 3 years' funding for an independent research position—a step ahead of his contemporaries.

The Noether program is the most visible effort so far to loosen up the country's hidebound university research structure and speed young scientists' passage into independent academic research. But more sweeping changes may be on the way: Last week, a panel of German scientists and public officials recommended phasing out the notorious post-Ph.D. Habilitation requirement—a kind of extended postdoc that puts young researchers under the thumb of senior professors for 10 years or more—and replacing it with “junior professor” slots similar to assistant or associate professors at U.S. universities. And last month, the DFG, Germany's major granting body, added peer reviewers in part to speed up its review procedures.

German Research Minister Edelgard Bulmahn supports the “junior professor” concept as a way to “give more independence to bright young researchers.” So do Pietralla and other Noether grantees. “The Habilitation slows everything down and immobilizes you,” says Noether fellow Christine Thomas, an earth sciences postdoc at the U.K.'s Leeds University. But the proposals, which the German parliament may debate later this year, face some tough opposition. An influential organization of university professors objects to the idea, warning that such new posts would simply lead to a second tier of “cheap professors” who lack the rigorous training of the Habilitation degree.

Another weak link in the scientific chain, say critics, is the DFG's peer-review system, which sometimes stumbles over interdisciplinary projects and includes too few women and young scientists. Objections to the system came to a head last month when Mark Benecke, a 29-year-old forensic entomologist whose application for a Noether grant was rejected, wrote a scathing op-ed in Munich's main newspaper. His commentary led to a flood of comments on the newspaper's Web site, prompting a letter defending the DFG that has been signed by several hundred German scientists.

“We are moving as quickly as we can … to promote independent research by talented young scientists,” says DFG president Ernst-Ludwig Winnacker, who was a member of the Research Ministry panel that reported last week. Noting that Benecke's application had been voted down by four reviewers, Winnacker says “there are always unhappy researchers who think their grant applications should have been approved.” But he concedes that interdisciplinary research projects can pose challenges for reviewers, and says that the DFG has begun to assemble “study sections”—with reviewers from different fields—to listen to applicants' presentations. In an effort to shine light on the reviewing process, the DFG will soon publish the first-ever list of its outside reviewers.

Last month the DFG also expanded its roster of elected peer reviewers by 25%, to 650, tapping more representatives from specialized fields. The percentage of women reviewers increased slightly, from 4.4% to 7.7%, although the average age of reviewers remains fairly high, at 53. (Women make up 6% of the country's full professors, few of whom are under age 40.) Although Winnacker defends the DFG review system, he agrees that the proportion of women reviewers “is still not high enough” and that the roster should contain “a greater percentage of women and younger scientists.”

5. EVOLUTION

When Fittest Survive, Do Other Animals Matter?

1. Richard A. Kerr

It was a classic confrontation. The main branch of the Bryozoa—small, coral-like “moss animals” encrusting shells and other hard surfaces of the early Cretaceous seas—had been around for more than 300 million years when a new sort of bryozoan showed up, looking for a fight. Who would prevail? Given the newcomer's ability to grow over its rival and knock it out, a simple reading of Darwin would predict a speedy victory for the newcomer. But in recent years, some prominent paleontologists have questioned whether such competition among animals has all that much to do with who wins and who loses in the evolutionary wars. High school biology lessons notwithstanding, it has been difficult to find hard evidence that interactions among animals matter, they noted, so external events, such as the meteorite that did in the dinosaurs, might be more important. Now, three paleontologists report in the latest issue of Paleobiology that, at least in the case of the bryozoans, competition does appear to have mattered.

The new explication of how two branches, or clades, of the bryozoans interacted is “one of the most rigorous, comprehensive looks at what happens when clades collide,” says paleontologist David Jablonski of the University of Chicago. In the study, paleontologists John Sepkoski of the University of Chicago, who died last year at age 50, Frank McKinney of Appalachian State University in Boone, North Carolina, and Scott Lidgard of The Field Museum of Natural History in Chicago show how the newcomer group appears to have interacted with its rival group over 140 million years. The new arrivals did eventually rise to dominance, but failed to drive their rivals to extinction. Paleontologist Richard Bambach of Virginia Polytechnic Institute and State University in Blacksburg calls the work “truly consistent with competition being a major factor” in evolution.

The bryozoans are naturals for a starring role in the study of competition and evolution. The two groups—the original cyclostomes and the newcomer cheilostomes—have left a clear record of “battles frozen in time,” as Jablonski puts it. By surveying almost 3000 fossil examples of the two bryozoan groups growing on the same surface during the past 100 million years, McKinney found that the younger cheilostome group grew over the rival cyclostomes about 66% of the time on average. Credit the cheilostomes' higher growth rate, says Lidgard, or perhaps their ability to grow a thicker layer of zooids—the individual animals that form a colony—at their encroaching edges. Thicker edges give the cheilostomes a height advantage and presumably a better chance to become large enough to reproduce.

Given those advantages, says Jablonski, from a simple Darwinian perspective, “you might expect the superior group would wipe out the inferior group”—and quickly. But what the bryozoans actually did appears to have been more complicated. The cheilostomes languished for 40 million years after their first appearance, even as the number of genera of cyclostomes grew. Then about 100 million years ago, the cheilostomes took off, adding new genera far faster than the cyclostomes until the impact-induced mass extinction 65 million years ago knocked down the diversity of both groups. The cheilostomes, however, bounced back and reached new heights of diversity while the cyclostomes stagnated and slowly declined.

To tease out the role of competition, if any, in the rise and fall of these bryozoans, Sepkoski, McKinney, and Lidgard tried to predict, in hindsight, how the two clades would fare assuming competition mattered. They used two “coupled logistic equations” developed by Sepkoski. Assuming that the world can hold only so many genera of each clade—that is, each clade has its own diversity limit—the equations predict a clade's change of diversity over time given its current diversity, its innate rate of diversification in the absence of the competing clade, and a factor that includes the diversity of the competing clade. The higher the diversity of its competitor, the more a clade's diversification is damped, as might happen if members of the two clades were going head to head for the chance to grow large enough to reproduce and pass on their genes.
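The coupled logistic framework Sepkoski describes can be sketched numerically. The snippet below is an illustration only: the diversification rates, diversity limits, and initial diversities are invented for demonstration and are not the values fitted to the bryozoan fossil record.

```python
# Illustrative sketch of a two-clade coupled logistic model of the kind
# described above. All parameter values are invented for demonstration.

def coupled_logistic(d1, d2, r1, r2, k1, k2, dt=0.1, steps=1000):
    """Integrate two clades' diversity through time. Each clade's
    diversification is damped by the COMBINED diversity of both clades
    relative to that clade's own diversity limit."""
    for _ in range(steps):
        d1 += dt * r1 * d1 * (1 - (d1 + d2) / k1)
        d2 += dt * r2 * d2 * (1 - (d1 + d2) / k2)
    return d1, d2

# Incumbent clade (cyclostome-like): diverse at the start, lower ceiling.
# Newcomer clade (cheilostome-like): starts rare, diversifies faster,
# and has a higher diversity limit.
cyclo, cheilo = coupled_logistic(d1=40.0, d2=1.0,
                                 r1=0.05, r2=0.15,
                                 k1=60.0, k2=150.0)
print(cheilo > cyclo, cyclo > 0)
```

With a higher limit and a faster innate diversification rate for the newcomer, the toy model reproduces the qualitative pattern the fossil record shows: a long initial lag while the newcomer is rare, then a rise toward its higher limit as the incumbent's diversity is slowly squeezed rather than abruptly eliminated.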

When Sepkoski and his colleagues extracted the required numbers from the fossil record and plugged them into their mathematical model, it produced “a remarkable fit between the model and the empirical data,” says Paul Taylor of the British Museum of Natural History in London. In the model, much as in life, the newcomer cheilostome clade expands slowly at first under the burden of the more diverse cyclostomes, which were already occupying many ecological niches and therefore denying them to the cheilostomes. But the cheilostomes eventually win out as the clade's diversity rises toward its natural limit, which the fossil record suggests is larger than that of the cyclostomes. The mass extinction hits both groups hard, but the cheilostomes bounce back thanks to their innate ability to diversify faster. The cyclostomes can't rebound under the weight of their more diverse competitor, but neither do they collapse, thanks to their lower rate of extinction, a handy attribute of uncertain origin. This match between model and fossil record, plus the obvious competition recorded in bryozoan overgrowths, is consistent with “competition as a significant influence on diversity histories” of bryozoans, the group concludes.

The study represents “the most persuasive analysis yet of an apparent competitive displacement” of one clade by another, says paleontologist Alan Cheetham of the Smithsonian Institution's National Museum of Natural History in Washington, D.C. However, “they don't claim they've proven what happened.” Indeed, “the model is a description rather than an explanation,” notes Bambach. Although competition looks like a promising explanation, he says, others are possible. Taylor agrees, offering the possibility that a new type of cheilostome larval stage, rather than overgrowth of the competition, may have given cheilostomes an edge.

To strengthen the case for competition in evolution, paleontologists agree, researchers must learn more about all the ways bryozoa compete today. In the meantime, although Sepkoski's “death was hard for a lot of us,” says paleontologist Arnold Miller of the University of Cincinnati, “he left us some things to think about.”

6. COLLABORATIVE RESEARCH

Plans for Mars Unite Cancer, Space Agencies

1. Andrew Lawler

Cancer research and sending humans to Mars may seem light-years apart, but technological advances have put them on the same flight path. Last week NASA and the National Cancer Institute (NCI) announced that each intends to spend $10 million a year for the next 5 years in a coordinated effort to develop devices that could both speed detection of cancer on Earth and keep astronauts healthy during long sojourns from home. To drum up enthusiasm for the idea, NASA Administrator Dan Goldin and NCI director Richard Klausner brought together two dozen molecular biologists, geneticists, pharmacologists, and chemists to discuss how nanotechnology and bioengineering can revolutionize health care on Earth and in space. “We're bringing medicine out of the hospital,” says David Baltimore, president of the California Institute of Technology in Pasadena and chair of the NASA-NCI working group on biomolecular systems and technology. “It's a terrific opportunity.” However, he warned that interagency efforts are “the most cumbersome activity on Earth.”

Goldin and Klausner hatched the idea for a collaboration 3 years ago at a dinner party hosted by Bruce Alberts, president of the National Academy of Sciences, and shepherded it through their agencies. Last year, as part of a pilot program in unconventional innovations, NCI awarded five grants, worth $11 million, for technologies to detect and diagnose cancer that involved nanoscience, near-infrared optical techniques, and new polymers. One went to a scientist at NASA's Ames Research Center in Mountain View, California. Last June, a joint NASA-NCI meeting at Ames drew 150 investigators to examine advances in sensors to detect the signature of specific biomolecules.

Under the new agreement, the agencies will disburse grants separately but be free to supplement one another's projects. Klausner gains access to the space agency's expertise in building small and lightweight hardware, while Goldin bolsters the scientific credibility of NASA's human space flight program and strengthens ties to the burgeoning biological community. Astronauts bound for Mars may be in space for more than 4 years, bombarded by dangerous radiation and facing situations where even minor accidents—such as a rip in a space suit—could prove disastrous. Combating such threats may call for machines that can screen for genetic damage at a very early stage, robotic sensors injected into astronauts that continuously monitor their health, and a self-repairing space suit. Such innovations have revolutionary implications for improving health care on Earth, adds Klausner.

Of course, the health issue could be moot if humans don't take any long trips in space. “Why not learn to build robots to do business on Mars?” asked Stanford geneticist David Botstein at a 13 April public panel discussing the new collaboration. Even Baltimore noted that the “radiation problem is very severe” and predicted it will be hard to overcome. But Goldin says both robots and humans ultimately will be needed for Mars exploration.

The working group has been asked to set near- and long-term goals for the kinds of technology and research infrastructure needed. A self-described “cheerleader” for the program, Baltimore says that the venture “is a real opportunity for thinking in novel ways about programs that cut across two agencies. … Interdisciplinary science is on everyone's tongue. Today, it's a reality.”

7. PSYCHIATRY

Are Placebo-Controlled Drug Trials Ethical?

1. Martin Enserink

Houston—The psychiatric research community is increasingly polarized by a seemingly simple question: Is it ethical to use placebos in drug trials? Specifically, can you give some of your patients a dummy pill, if it means suspending their regular medication and possibly worsening their symptoms? Critics argue that doing so makes patients suffer unnecessarily and may drive some to suicide. But the Food and Drug Administration (FDA) has insisted that placebo-controlled trials are the only scientifically sound way to test the efficacy of most psychiatric drugs. Drug companies say they have no choice but to comply, and most researchers have gone along—some of them reluctantly.

Now, the FDA has two large meta-analyses to defend its policy. Both show that being in a placebo group does not increase the risk of suicide. But those studies—the most definitive to date—seem unlikely to quell the controversy. At a meeting* earlier this month where one of the studies was presented, critics lashed out again at placebo-controlled trials, calling them “unethical” and “immoral.” Karin Michels, a clinical epidemiologist from Harvard University, asked: “If the patient was your son or your mother, would you withdraw active treatment from them for the sake of science?”

Critics argue that, instead of using fake pills, researchers should compare new psychiatric drugs to one of the many drugs already on the market. Nobody would even consider using placebos for such treatable diseases as cancer and AIDS, Michels pointed out, adding that the practice violates the Declaration of Helsinki. Worse still, some psychiatric patients sign up for such trials without fully understanding what they're getting into, said Harold Vanderpool of the University of Texas (UT) Medical Branch in Galveston. For these reasons, Institutional Review Boards (IRBs), the panels at universities and hospitals that scrutinize trials for human risks, are increasingly loath to approve the use of placebos in psychiatric trials, says Paula Knudson, who administers an IRB at UT Houston.

But FDA maintains that psychiatric drugs—and some others, such as antihypertensives—are a special case because their effects are notoriously hard to prove. It's common in these trials for 30% to 50% of the patients in the placebo group to improve, thanks to a phenomenon called the placebo effect, while those on the real drug improve just a bit more. Indeed, even already-approved drugs regularly fail to beat placebos in later trials. If all FDA demanded was that a new drug perform as well as an old one, it would have no way of knowing how much of that improvement was caused by the placebo effect, officials say.
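The regulators' logic can be made concrete with a toy example; the response rates below are invented for illustration only.

```python
# Invented response rates for a hypothetical three-armed psychiatric trial.
placebo_rate = 0.40    # patients who improve on the dummy pill alone
old_drug_rate = 0.45   # approved comparator (may fail to beat placebo in some trials)
new_drug_rate = 0.45   # new drug matches the old one

# An active-control-only trial would see "new drug == old drug" and call the
# new drug effective. Only the placebo arm reveals how small the margin over
# the placebo effect actually is.
advantage_over_placebo = new_drug_rate - placebo_rate
print(round(advantage_over_placebo, 2))
```

If the old drug itself happened to be no better than placebo in that particular trial, matching it would demonstrate nothing at all, which is the FDA's core argument for keeping the placebo arm.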

Because the placebo effect is so fickle, and nobody knows how to reduce it, FDA usually recommends that psychiatric drugs be tested in three-armed trials, in which a new drug is compared to a placebo and an existing treatment. If both drugs fail to beat the placebo, the whole trial is written off as a failure. Some people may be worse off from getting a placebo instead of an approved treatment, admits Thomas Laughren, team leader of FDA's Psychiatric Drug Products Group, but that's an acceptable risk as long as patients' lives are not at stake.

Laughren believes they are not. He and his colleagues conducted a meta-analysis of every recent trial submitted to FDA to win approval for eight new antidepressants (including blockbusters such as Prozac and Zoloft) and four antipsychotic drugs. More than 42,000 patients took part in these trials. In the antidepressant trials, 0.02% of the patients in the placebo group committed suicide during the trial, compared with 0.10% in the experimental drug groups and 0.13% in the groups that received an older drug. Laughren cautioned that those results must still be corrected for the amount of time patients spent in the trial, but he was confident that won't significantly alter the outcome. Trials with schizophrenic patients showed similar results. And the outcomes also match those from a smaller study led by psychiatrist Arif Khan from the Northwest Clinical Research Center in Bellevue, Washington, who looked at FDA data from seven antidepressant trials that together enrolled almost 20,000 people. The study, published in the April Archives of General Psychiatry, also failed to see an increased suicide risk.

But the critics are unimpressed. The Khan paper, for instance, is accompanied by six commentaries—three of them arguing that the study doesn't justify the use of placebo controls. Suicide is just one risk psychiatric patients face, says Michels. “Shouldn't we also think of the quality of life of the people we're withholding the active treatment from?” she asks. And Vera Hassner Sharav, president and founder of a New York City lobby group called Citizens for Responsible Care and Research, adds that suicide rates in all trial groups may have been increased; simply enrolling in a study that includes a placebo arm causes great stress and anxiety in some patients, she says. Michels and Hassner Sharav argue that trials that compare two active drugs could produce statistically relevant outcomes if they had more patients, time, and money.

Despite the arguments, participants at the Houston meeting did find common ground. Only patients fully capable of making a sound decision should be enrolled in trials, and they should be informed as fully as possible. Researchers can also reduce risks by screening out patients likely to harm themselves or others and by making sure there's a friend or family member on standby. IRBs should be especially vigilant, says Knudson. Her own panel has sometimes required that severely depressed or psychotic patients be hospitalized for the first 3 weeks of a trial to make sure they're okay. Pharmaceutical companies and researchers don't necessarily like these restrictions, she concedes, but they do provide a safeguard.

• *Placebo in mental health research: Science, ethics and the law, UT Houston, 7 to 8 April.

8. PALEONTOLOGY

Revealing a Dinosaur's Heart of Stone

1. Virginia Morell

In the fall of 1998, Andrew Kuzmitz, a physician in Ashland, Oregon, invited seven cardiologists to a local hospital to view a computerized tomography (CT) scan. The experts all agreed on what they were seeing: two large, oval chambers or ventricles, divided by a septum. “Every one of them said, ‘It's a heart,’” says Kuzmitz. But this was no ordinary heart: It belonged to a 66-million-year-old dinosaur. “As soon as I put the scan on the screen, the [cardiologists] were like little kids again,” says Kuzmitz. “They couldn't believe what they were seeing.”

The rare specimen—the first dinosaur heart known—was discovered inside the nearly complete skeleton of Thescelosaurus, a small, plant-eating dinosaur, as Kuzmitz and his co-authors report on page 503 of this issue. The heart's anatomy is more like that of birds and mammals than crocodiles or other reptiles, the team says. And a heart that beats like a bird's suggests that this dinosaur's metabolic rate would have been more like that of an endotherm (a warm-blooded animal) than an ectotherm (a cold-blooded animal), providing yet another feature shared by dinosaurs and their putative feathered relatives, the team asserts.

“I'm afraid that this little dinosaur would have had an ‘unreasonably’ high metabolic rate, one approaching that of birds,” says co-author Dale Russell of the North Carolina State Museum of Natural Sciences and North Carolina State University in Raleigh. Until now, Russell has been skeptical about the evidence suggesting that dinosaurs' metabolic rates reached those of birds.

Some researchers want to inspect the evidence personally before accepting that the concretion is a heart, and others are already challenging the anatomical interpretations, but most are impressed. “It is a fantastic discovery,” says Jack Horner, a dinosaur paleontologist at Montana State University's Museum of the Rockies in Bozeman. “It just shows you what you can find if you keep your eyes and mind open.”

Michael Hammer, a professional fossil preparator from Jacksonville, Oregon, chanced on the skeleton in South Dakota's Hell Creek Formation in 1993. Tucked beneath the dinosaur's upper ribcage in its thoracic cavity was a rust-colored concretion, the kind of annoying rocky material that paleontologists typically chisel away when cleaning a specimen. Hammer, however, knew that such concretions sometimes carry a surprise; he has unearthed similar concretions along the Pacific Northwest coast and carefully broken them open to find entire fossilized crabs or the heads of seals. “I think of them like little treasure boxes,” he says. “You never know what's going to be in them.” But Hammer also knew that soft tissue preservation is unlikely in such sandstone sediments.

“That's not where you normally find these kinds of things,” agrees Paul Sereno, a paleontologist at the University of Chicago. “It's not an anoxic environment. … That, and the absence of any other traces of nonskeletal tissues, raises a major red flag for me. I'd need to examine this before I'd agree that it's a heart.”

But the paper's authors suggest that chemical reactions between the blood- and iron-rich heart and minerals in the groundwater preserved the shape of the once-beating organ. And they think their CT scans will convince skeptics. “The CT scans basically sliced the heart up like a loaf of bread,” explains Paul Fisher, director of the Biomedical Imaging Facility at North Carolina State University and the study's lead author.

To study the heart, the team realigned the “slices” with a special software program, turning the two-dimensional images into several three-dimensional ones that could be rotated and manipulated on a computer screen. The images revealed the heart's ventricles and a single, large blood vessel—the systemic aorta—leading from the heart toward the back of the chest. One image of the rib cage and heart is so clear, says Kuzmitz, that it looks like a “carcass that should be hanging from a meat hook.”
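The realignment the team describes is, in essence, stacking two-dimensional slices into a three-dimensional volume that can then be cut along any axis. The sketch below illustrates that idea only; the array sizes are invented and have nothing to do with the team's actual scan resolution or software:

```python
import numpy as np

# Stand-in for a stack of 2D CT slices: 128 dummy slices of 256x256
# pixels each (sizes invented for illustration).
slices = [np.zeros((256, 256)) for _ in range(128)]

# Stacking the slices along a new axis yields a 3D volume...
volume = np.stack(slices, axis=0)   # shape: (128, 256, 256)

# ...which can then be resliced along a different axis, giving a new
# "cut" through the specimen without rescanning it.
side_view = volume[:, :, 128]       # shape: (128, 256)
```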

Despite the detail, the scan does not show all of the expected blood vessels. The pulmonary vessels, carotid arteries, and both atria are not visible; they may have collapsed at death or simply are not resolved in this scan, the team says. But other researchers say that the missing pieces leave the heart's anatomy and physiology open to other interpretations. “There's no apparent trace of other major vessels that we know would have been there in life,” says John Ruben, a physiologist at Oregon State University in Corvallis. “So they can't say that there wasn't a second major systemic vessel leading from the right side of the heart”—as seen in cold-blooded, ectothermic crocodiles and alligators, but not in birds or mammals. “I think it's premature for them to say this is a heart of an endotherm.”

The team plans a finer CT scan to address this question. In the meantime, they argue that their interpretation is the “most logical and parsimonious,” says Russell.

The find is certain to propel paleontologists to look for similarly preserved internal organs. “It's going to open up a field of research about how things like this get preserved, especially in a sandstone environment,” says Thomas Holtz, a paleontologist at the University of Maryland, College Park. “And it's certainly going to change the way we prepare fossils.” Indeed, Horner has a nearly complete dinosaur skeleton that will be getting an entirely different examination than it would have a week ago, he says: “We're only just beginning to learn what things can be preserved. I don't think this is the last dinosaur heart we're going to find.”

The dinosaur heart can be viewed at http://www.dinoheart.org/

9. HUMAN GENOME

DOE Team Sequences Three Chromosomes

1. Elizabeth Pennisi

Last week, the U.S. Department of Energy (DOE) elbowed its way into the spotlight with other groups that have been getting attention for sequencing the human genome. Although the Human Genome Project originated in part in DOE laboratories in the 1980s, DOE's contributions have been overshadowed in recent years by giant sequencing operations supported by the Wellcome Trust in the United Kingdom and the National Human Genome Research Institute in the United States, and the private sequencing venture pursued by Celera Genomics of Rockville, Maryland. But DOE is back: On 13 April, DOE Secretary Bill Richardson announced that it had finished the working drafts of human chromosomes 5, 16, and 19. These chromosomes are the first to reach this stage since Britain's Sanger Centre and its partners finished chromosome 22 to a higher standard of accuracy in December.

The DOE's dedicated program, the Joint Genome Institute (JGI), is part of a 16-member consortium that is scrambling to complete a draft of the human genome's 3 billion DNA bases by June and produce a 99.99% complete version by 2003. Like other members of the consortium, DOE releases sequence data to the public daily with no restrictions. Later this year, Celera plans to release its human genome data through its own Web site, strictly for noncommercial use.

DOE's chromosomes, sequenced at least three times each, contain 300 million bases and an estimated 12,000 genes, Richardson reported at the R&D Colloquium, a meeting in Washington, D.C., sponsored by the American Association for the Advancement of Science (Science's publisher). “I'm very proud,” Richardson said, noting that some of the genes are involved in diabetes, cancer, and other important diseases. “We're close to completing three chapters on the book of human life.”

Those “chapters” are still in draft form, however, containing up to one mistake in every 100 bases and only about 90% of the gene-containing DNA in each chromosome. In contrast, chromosome 22 is 99.99% accurate (Science, 24 September 1999, p. 2038). Next in line for completion is chromosome 21, which the public consortium expects to have in final form in a few weeks. Meanwhile, JGI labs at Stanford University and the Los Alamos National Laboratory in New Mexico plan to produce a fully finished version of chromosome 19 by the end of 2000; they hope to complete chromosomes 5 and 16 a year later, says JGI sequencing chief Trevor Hawkins, in Walnut Creek, California. In addition, Hawkins says, JGI is working on stretches of DNA from the mouse that are equivalent to the three human chromosomes, to speed discoveries about the functions of the genes.
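Taking the stated rates at face value, the gap between draft and finished quality is easy to quantify. This back-of-the-envelope sketch assumes the error rates apply uniformly across all 300 million bases:

```python
bases = 300_000_000               # bases in the three chromosomes combined

draft_error_rate = 1 / 100        # up to one mistake per 100 bases (working draft)
finished_error_rate = 1 - 0.9999  # 99.99% accuracy standard for finished sequence

draft_errors = bases * draft_error_rate        # up to ~3 million mistakes
finished_errors = bases * finished_error_rate  # ~30,000 mistakes
```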

JGI's news represents a milestone for DOE management, which had invested in new technology and a slightly different approach to sequencing the genome. Unlike other labs, which use a bacteriophage called M13 in the final step of sequencing, DOE chose to use bits of DNA called plasmids, which provide order and orientation even for draft sequence. Moreover, JGI decided not to rely on the popular Applied Biosystems capillary sequencing machines but to buy 84 machines made by a competitor, Amersham Pharmacia Biotech. Working out the kinks took a lot of dedication: One night, lab manager Susan Lucas was discovered in the lab in her pajamas. But the effort paid off, Hawkins notes: “We've had a 10-fold increase in throughput and efficiency in the last 8 months.”

His colleagues are impressed: “Their performance has been remarkable,” says Richard Gibbs, who heads the sequencing effort at Baylor College of Medicine in Houston. Leroy Hood, president of the Institute for Systems Biology in Seattle, goes further, saying that JGI's process improvements suggest that its performance has been “even better” than the labs funded by the National Institutes of Health.

10. SCIENCE EDUCATION

Ehlers Offers Remedies for Ailing U.S. System

1. Jeffrey Mervis

Joined by a fellow Republican and a Democrat, and with backing from scientific, professional, and educational societies, Representative Vern Ehlers (R-MI) last week unveiled three bills to improve science and math education in the nation's elementary and secondary schools.

The long-awaited proposals would create several programs at the National Science Foundation (NSF) and provide tax breaks and incentives for teachers to join and remain in the profession. The legislation tackles problems that Ehlers identified in his 1998 report on the state of U.S. science (Science, 31 July 1998, p. 635). But unlike that policy report, which was requested by the House leadership and sank without a trace, Ehlers says he will fight to get his education bills enacted.

Ehlers knows that he faces an uphill battle to win the support—or even the attention—of a majority of his colleagues in both houses of Congress, currently wrangling over a host of bills that would revise federal education policies. So he has assembled an impressive roster of supporters, including the American Chemical Society, the National Science Teachers Association, and Nobelist Leon Lederman, to lobby for his measures. “The formula for safe passage through the rapid technological change in our society is science education,” says Lederman, the former director of the Fermi National Accelerator Laboratory. “We already know how to improve things, but we need leadership and a central strategy. I applaud Mr. Ehlers for taking the initiative.”

The centerpiece of the reforms is the National Science Education Act (H.R. 4271). It directs NSF to create a variety of programs, including grants for schools to hire master teachers in math, science, and technology education; professional development in new technologies; and scholarships for teachers to carry out collaborative research. It would also set up an advisory panel to spread the word about exemplary curricula, an exercise that has already gotten the Department of Education into hot water with many conservative groups (Science, 11 February, p. 956). Companion bills would give a 10-year, $1000-a-year tuition tax credit to science and math teachers and beef up in-service and summer training institutes.

Ehlers pointedly refuses to put a price tag on his proposals, saying only that they would cost a tiny fraction of the current $20 billion federal investment. At a press conference last week, fellow House Science committee member Representative Sherwood Boehlert (R-NY) took a dig at his more fiscally conservative colleagues by urging them “to resist the temptation to cut taxes too much and instead put adequate resources into areas of need, including science education.” Representative Eddie Johnson (D-TX) noted that a better trained workforce would cut down on the cost of remedial programs and reduce the need to hire foreign scientists and engineers.

NSF officials praise Ehlers's interest but express concern over whether they would receive additional funds to pay for the added responsibilities. The bills will be referred to the science, education, and tax committees. Even if passed, however, they would need the support of a separate set of spending panels before any of Ehlers's ideas could be implemented.

11. ASTROPHYSICS

LIGO's Mission of Gravity

1. Robert Irion*
1. Robert Irion is co-author of One Universe: At Home in the Cosmos (Joseph Henry Press).

Physicists are preparing the first experiments with a real chance to spy the elusive gravitational ripples predicted by Einstein to fill space. But success may take many years—and come in unexpected guises

Livingston, Louisiana—The road from New Orleans to this small town in Livingston Parish crosses great expanses of water and swamps. Mist often engulfs the trees, and the air feels as turbid as the prose in a bad novel by Anne Rice. It seems an unlikely place to build an astrophysical observatory. Nonetheless, a bold new observatory has arisen here, and it promises to expose details of the most exotic events in the cosmos today.

This rural outpost will not collect visible light with giant mirrors or radio waves with vast dishes. Its quarry is another sort of signal entirely: gravitational waves, first predicted by Albert Einstein in 1916 as part of his general theory of relativity. If Einstein's equations are correct, then the gravitational chaos spawned by colliding black holes, exploding stars, and other cataclysms must rend and distort space itself. Imprints of those violent motions should spread through the universe like watery surges from a joyous plunge into a lake.

To catch those waves, physicists have constructed an elaborate apparatus here and a near-twin across the continent in Hanford, Washington, in which lasers gauge precise distances between mirrors that hang 4 kilometers apart. Together, the two facilities form a $365 million venture known as the Laser Interferometer Gravitational-Wave Observatory, or LIGO. Scientists and dignitaries marked the end of construction by inaugurating the facilities last November. Physicists have now started to send their first arrows of light down LIGO's dark tunnels in a commissioning and fine-tuning process that will take about 2 years.

If LIGO succeeds, it will conquer Einstein's own doubts that scientists would ever detect gravitational waves. The physical challenge is daunting. By the time a wave from a disturbance deep in the cosmos reaches Earth, it has grown exceedingly weak. It might stretch and squeeze the 4-kilometer space between a pair of LIGO's mirrors by less than 1/1000th the diameter of a single proton. Only now have technological advances and political wherewithal—courtesy of the National Science Foundation (NSF), which paid for the project—combined to make it feasible for physicists to search for such tiny ripples.

All involved with LIGO acknowledge that it may take years to identify the first gravitational waves, let alone do research with them. The project won't start its first scientific run until mid-2002, and the waves may elude detection until upgrades make the instruments more sensitive by 2007. But the wait should be worth it, says Massachusetts Institute of Technology physicist Rainer Weiss, one of the field's pioneers. LIGO and similar observatories around the world promise the sternest possible tests of general relativity, and they may open a new window on regions of the universe where extreme events are warping space and time on a grand scale.
Says Weiss: “It would be the crowning achievement for this field to see physics from a place where gravity is pure Einstein.”

Eight jumbo jets

LIGO may be the most impressive effort so far to detect gravitational waves, but it's not the first. In the 1960s, physicist Joseph Weber of the University of Maryland built an aluminum bar that he hoped would quiver in response to a passing gravitational wave, much as a wine glass shatters at just the right high-pitched frequency. Several such “bar detectors” around the world today are tuned to signals from space. They include ALLEGRO, a 2300-kilogram bar at Louisiana State University in Baton Rouge, just down the road from LIGO. With further increases in sensitivity, ALLEGRO and other bars in Italy and Australia may help spot bursts of waves from explosive astrophysical events, such as supernovas or merging neutron stars. However, because of their fixed sizes and modes of vibration, the bars resemble radios that pick up just a few stations while the rest of the programs stream past.

Interferometers, on the other hand, can sense a broad range of gravitational-wave frequencies. These L-shaped devices use the wavelength of light as their measuring stick. In LIGO, a beam splitter at the corner of the L sends infrared laser light down the two identical tubes. The tubes, a bit more than a meter wide and 4 kilometers long, contain one of the largest vacuums on the planet: nearly 10,000 cubic meters, about the same as the volume within the cabins of eight Boeing 747 jets. The light bounces back and forth between two pairs of thick mirrors made of fused silica, specially coated to transmit a little light and reflect the rest. This optical trick effectively stretches the arms of the interferometer as the light bounces back and forth between the mirrors dozens of times. Enough light leaks through the mirrors during each round trip for the detectors to analyze.
When the light beams recombine, a photodetector gauges whether they have traveled the same distance. If so, the beams will produce a bright patch of light, because their waves are in phase and reinforce each other. If their path lengths differ by half a wavelength of light because one of the mirrors has shifted relative to the other, the waves cancel out exactly. That creates a dark “interference fringe,” which gives the interferometer its name. (To minimize the photodetector's exposure to light, physicists actually arrange the light paths so that the beams produce a dark fringe when the mirrors aren't moving.)

That's the idea in principle. In practice, however, the technical hurdles seem absurdly high. To catch a gravity wave, LIGO must be sensitive to phase shifts as small as a few 10-billionths of a single interference fringe. To accomplish that, physicists and optical engineers are resorting to wizardry never before attempted on such a large scale. “This would be the most powerful device on Earth to measure relative motions,” says LIGO physicist Joe Kovalik. “To actually have this built and see a gravitational wave would be amazing.”

The key, Kovalik says, is to beat down the noises that constantly threaten to blur the light beams into indecipherable smudges. Counterintuitively, the way to do that is not to bolt the mirrors onto a rock-solid platform but to let them hang freely—almost as if they were in low-Earth orbit, says LIGO physicist Anthony Rizzi: “We want no other forces acting on the test masses except gravity.” That means isolating the mirror supports from the thumps of civilization and the whistling winds with a series of thick, fist-sized springs. To separate the mirrors from the rest of the equipment, the physicists dangle them from loops of fine wire that look like high-tech dental floss. Other sources of noise are more devilish, such as random thermal motions within the mirrors and the surrounding equipment.
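The bright-and-dark fringe behavior of the recombined beams follows the standard two-beam interference formula. The sketch below uses that textbook relation; the wavelength is a typical infrared laser line chosen purely for illustration, not a figure from the article:

```python
import math

def fringe_intensity(path_difference_m, wavelength_m=1.064e-6):
    """Relative intensity of two recombined equal-amplitude beams.

    Bright fringe (1.0) when the path lengths match; dark fringe (~0)
    when they differ by half a wavelength.
    """
    phase = math.pi * path_difference_m / wavelength_m
    return math.cos(phase) ** 2

bright = fringe_intensity(0.0)           # equal paths: waves reinforce
dark = fringe_intensity(1.064e-6 / 2)    # half-wavelength difference: waves cancel
```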
Quantum mechanics makes life even more complicated by slightly altering the intensity of light—literally, the number of photons—measured within each fraction of a second. That “shot noise” makes it harder to trace the finest of the interference fringes.

Rainer Weiss was well aware of these issues in 1970 when he first broached the idea of using a huge interferometer to detect wiggles far smaller than an atomic nucleus. “People threw me out of the room,” he recalls. “They were polite about it, but you could see the snickers on their faces.” Weiss withstood the skepticism and enlisted some key physicists to refine his notions, notably theorist Kip Thorne of the California Institute of Technology—who became enamored of Joseph Weber's work as a Princeton graduate student in 1963—and experimentalist Ron Drever of the University of Glasgow, Scotland, who was lured to Caltech. Weiss, Thorne, and Drever managed the budding LIGO project as a troika through the mid-1980s. After much debate in the community, NSF sponsored LIGO as a joint Caltech-MIT initiative, with a 40-meter prototype built on the Caltech campus.

When it became evident in 1987 that NSF would back a full-scale LIGO, Caltech physicist Robbie Vogt took over as director. He shepherded the scientific development as political winds blew around a contentious site-selection process for the two interferometers. As the project grew and pressures mounted for a more accountable management style, internal squabbles arose, especially between Vogt and Drever. In 1994, Caltech and NSF replaced Vogt with Caltech physicist Barry Barish, fresh from his tenure as director of one of the two detector groups for the doomed Superconducting Super Collider (SSC). Barish brought in SSC physicist Gary Sanders as project manager, and many other SSC refugees joined the staff. “LIGO is one of the silver linings in the dark cloud of the demise of the Super Collider,” says Sanders, now the project's deputy director.
“We were able to attract some very talented individuals who are used to the rhythm of designing and constructing a project that takes years.”

The truth is out there

Whereas the SSC was designed to hunt for the Higgs boson, a pleasing theoretical construct with no experimental basis, LIGO will search for a theoretical wave with convincing—albeit indirect—experimental support. In 1974, physicists Russell Hulse and Joseph Taylor, both now at Princeton University, observed a binary system in our Milky Way consisting of two ultradense neutron stars. One of the stars was a pulsar; its steady blips allowed Hulse and Taylor to characterize the mutual orbits of the stars with great accuracy. They found that the two stars are slowly spiraling in toward each other, presumably as they lose energy to space in the form of gravitational radiation. The rate of that process matches that predicted by general relativity, an observation for which Hulse and Taylor won the 1993 Nobel Prize in physics.

The stars in the now-famous Hulse-Taylor binary will merge violently in about 240 million years. Astrophysicists estimate that there are about 100 such neutron-star binaries in our galaxy. If LIGO becomes sensitive enough to extend its reach to the nearest 1 million or 2 million galaxies, it might see gravitational ripples from about one neutron-star merger per year. And indeed, that's a primary goal of the second phase of LIGO after a series of upgrades in the middle of this decade. Such a merger “will give us high-precision measurements of relativistic effects in the gravitational field of a binary system,” says Kip Thorne. “We will get an exquisitely detailed picture of the curvature of space-time between the stars.” Several groups of theorists use supercomputer models to predict the shapes and durations of gravitational waves from such mergers, which will allow LIGO physicists to distinguish them from other events.
Binary systems involving two black holes should be even more promising for LIGO. Their greater masses and more intense gravity should make them powerful sources, shining like beacons above the rest of the gravitational fog in the universe. Unfortunately, no one yet knows how to model them accurately, nor is it known how abundant such binaries are. A paper in the 1 January issue of Astrophysical Journal Letters by theorists Simon Portegies Zwart of Boston University and Stephen McMillan of Drexel University in Philadelphia suggested that interactions within globular clusters—old, dense knots of stars that swarm around most galaxies—may create black-hole binaries by the bushel. But most LIGO physicists regard such calculations as interesting guidance rather than gospel. “It gives us some hope, but far from a guarantee, that LIGO [in its first phase] will see gravitational waves from the mergers of these black holes,” says Thorne.

A disruption of the cosmos stemming from a single object may trigger LIGO as well. However, any such event must have a high degree of lopsidedness; according to the equations of relativity, symmetric explosions or smoothly rotating spherical objects do not emit gravitational waves. Some astrophysicists think that supernova blasts are inherently asymmetric, perhaps enough to rattle LIGO or even the bar detectors with short bursts.

Another possible source, championed by physicist Lars Bildsten of the University of California, Santa Barbara, is a rapidly rotating, slightly lumpy neutron star that rips gas from a companion star like our sun. The accretion of the gas should make the neutron star spin faster and faster. Astrophysical surveys show an unusual clustering of “x-ray binary” pulsars that spin between 300 and 500 times per second, but no faster—even though the theoretical limit is 2000 times per second, at which point they would fly apart.
“There is a wall beyond which it is difficult to spin up the star because it sheds angular momentum by emitting gravitational radiation,” Bildsten claims. LIGO might be able to detect such gravity waves as periodic, ongoing signals from about 10 such binaries in our galaxy, he says.

Perhaps most intriguing of all, the big bang may have stamped the universe with a gravitational-wave background similar to the cosmic microwave glow that pervades space. Theories vary wildly on how accessible this “stochastic background” will be to LIGO and its successors, and it may prove too faint to perceive. If physicists do manage to detect this background and decipher its signatures, Thorne says, they may glimpse the primordial era when all four of nature's fundamental forces acted as a single force—a “quantum gravity” unification that was severed less than a trillion-trillion-trillionth of a second after the big bang.

It is even possible that a new theory of gravity lies beyond general relativity, waiting for hints from LIGO to expose it. For example, Einstein predicted that gravitational waves are polarized, like sunlight reflecting off a lake. If LIGO should encounter such polarized waves, one arm of the interferometer will expand while the other shrinks. But alternate theories of gravity, such as so-called scalar-tensor theories, put forth a different possibility: a “breathing” mode in which both arms expand and contract at the same time. Such theories also maintain that the graviton—the hypothesized force-carrying particle of gravity—has a dash of mass. If so, it would travel through space at slightly less than the speed of light. Physicists may be able to use LIGO and its descendants to check the speed of gravitational waves by correlating their arrival with visible signs of a cataclysm in deep space, such as a gamma ray burst.
If some new gravitational theory should arise, it might contain general relativity as a special case, just as Einstein's theory circumscribes Newton's laws. “There is a potential for LIGO to find something new that would modify general relativity in very strong gravitational fields,” says physicist Clifford Will of Washington University in St. Louis. “Many of us in our heart of hearts believe that it won't, but even a null result is an important advance. Zero is as good a number as any other number.”

Be careful what you say

If LIGO physicists do detect a gravitational wave, they will tread carefully before shouting “Eureka!” Everyone wants to avoid repeating what happened in 1969, when Joseph Weber claimed that a passing wave had set his aluminum bar ringing. That claim fell apart under close scrutiny, casting a shadow over the nascent interferometry plans as well. “The whole field was considered very risky,” MIT's Weiss recalls.

The LIGO team will safeguard against false detections in several ways. First, “real” signals must appear in both interferometers within 10 milliseconds of each other. That's the travel time of a gravitational wave across the 3030-kilometer distance between Livingston and Hanford. If the delay is longer, it means the blips probably came from local noise rather than from a source in the sky. The shape and amplitude of the wave should be nearly the same in each detector as well. “If it isn't seen in both detectors, it's just not viable,” Weiss emphasizes.

As an additional check, the Hanford observatory has a separate interferometer housed in the same tunnel, but with arms that span 2 kilometers rather than four. Real waves should trigger the half-size detector at Hanford, but with half the amplitude of the signal in the full-size interferometer. And ideally, other detectors around the world will observe a wave passing through Earth at the same time.
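The 10-millisecond coincidence window is just the light-travel time across the baseline between the two sites, since gravitational waves are expected to travel at the speed of light. A one-line check:

```python
c = 299_792_458.0          # speed of light, in meters per second
baseline_m = 3_030_000.0   # Livingston-to-Hanford separation quoted in the text

# A real wave must hit both sites within this interval: ~0.0101 s, i.e. ~10 ms.
max_delay_s = baseline_m / c
```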
The leading candidate is VIRGO, an Italian-French interferometer under construction near Pisa, Italy. At 3 kilometers, VIRGO's beam tubes are shorter than LIGO's, but its more advanced design for seismic isolation may compensate for that. A German and British team is using daring optical technology to build GEO, an interferometer near Hannover, Germany. However, GEO's size—600-meter arms, less than one-sixth of LIGO's scale—is a significant handicap. A Japanese group has built a 300-meter interferometer called TAMA near Tokyo and plans to scale it up in a few years, while physicists in Australia are pondering plans for a LIGO-sized interferometer later in the decade.

Detection of a short gravitational-wave pulse in three or more detectors worldwide would let researchers triangulate back to its point of origin on the sky. Otherwise, LIGO physicists would need a stroke of luck—simultaneous observation of a gamma ray burst, for instance—to trace the source. If the gravitational waves are periodic and long-lasting, physicists could pinpoint their origin with a single detector by observing how the signals vary as Earth revolves around the sun.

Collaboration among the international teams is collegial—up to a point. “We're still in an era where we are talking about cooperating, but we are competing,” says LIGO director Barish. “Everyone wants to be first.” LIGO physicists and the other teams recently took a big step toward eventual teamwork by agreeing to record all data in the same format. “That's never been done in high-energy physics,” Barish notes. In addition, GEO scientists will design a delicate optics suspension system for the next phase of LIGO, known simply as LIGO 2. For a future orbiting version of LIGO on a grand scale, called LISA (see sidebar), international collaboration has been built in since the first proposals.

Still in the daytime

The upgrade of LIGO, scheduled to begin in 2005, is critical to the project's success.
The chances of detecting a bona fide gravitational wave between 2002 and 2004—the two planned years of full-time operation for LIGO 1—might be “slightly better than 50-50,” says Syracuse University physicist Peter Saulson. (Over a lunch of Louisiana seafood, Saulson's colleagues chided him for being overoptimistic.) With the proper improvements to key systems, he says, “the odds for LIGO 2 become darn good.”

Even with that rosier future scenario, NSF and LIGO physicists opted to go with tried-and-true technology for LIGO 1 to demonstrate that the concept will work. NSF has budgeted $6 million per year for research and development toward new systems for LIGO 2. For instance, a more powerful laser will pump more photons of light between the mirrors. That will reduce the uncertainty in counting the photons, leading to better measurements of any slight differences in length between the two arms. Better cushioning will shield the laser and mirrors even more carefully from environmental noise. The improved system may include active cancellation of small seismic motions, just as modern astronomical telescopes cancel the blurring of Earth's atmosphere with small flexible mirrors. In the GEO-designed suspension, each mirror will hang within a nested set of three or four pendulums, isolating them far from any moving parts.

The mirrors themselves also will change. Crystals of nearly pure sapphire will replace the fused silica now being used, giving each mirror much more heft and making it less prone to internal vibrations. Sapphire also conducts heat more efficiently than silica, so it can handle the more powerful laser planned for LIGO 2.

Even incremental improvements will make a big difference, Barish observes. For instance, he estimates that replacing the fused silica mirrors with sapphire ones will double the interferometer's sensitivity to gravitational waves. By extending LIGO's reach twice as far into space, the new mirrors will let the instrument probe eight times as much volume—and detect eight times as many sources. Barish believes that all of the planned improvements will make LIGO 2 about 15 times more sensitive than its predecessor. That would increase the volume of the universe accessible to LIGO by a startling factor of about 3000. Depending on funding, Barish envisions a LIGO 3 sometime after 2010 using even bolder technology—such as mirrors cooled to within a whisper of absolute zero for maximum stillness.
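The scaling Barish invokes is simple geometry: amplitude sensitivity sets the distance out to which a source can be detected, and the surveyed volume grows as the cube of that distance. A minimal sketch of the arithmetic:

```python
def volume_gain(range_factor):
    """Sources fill space in three dimensions, so the volume a detector
    surveys grows as the cube of its range (its amplitude sensitivity)."""
    return range_factor ** 3

print(volume_gain(2))   # doubling the range: 8x the volume, 8x the sources
print(volume_gain(15))  # LIGO 2's ~15x sensitivity: 3375x, "about 3000"
```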

In other words, the gravitational window on the universe has barely begun to open. Only the strongest signals will penetrate the initial crack in the window, and LIGO physicists are restless with uncertainty about when those signals will first appear. “The history of astronomy is that each increase in telescope sensitivity has led to a fundamental leap in our knowledge about the universe,” says Saulson. “Where we differ with gravitational waves is that we're still in the daytime. No one will tell us how dark we have to make our sky to begin to see the stars.”

12. ASTROPHYSICS

LIGO's Laser-Packing Big Sister LISA May Hunt Black Holes From Space

1. Robert Irion

Within 10 years, three satellites may orbit far from Earth in a precise triangular formation, aiming lasers at each other. That may sound like the Strategic Defense Initiative run amok, but in fact it's the next possible step in the search for gravitational waves. Its proponents call it LISA: the Laser Interferometer Space Antenna, a $500 million gleam in the eyes of astrophysicists in the United States and Europe.

Researchers envision LISA as a big sister to the Laser Interferometer Gravitational-Wave Observatory (LIGO) and other gravitational-wave detectors being built on the ground (see main text). But LISA is to LIGO as a radio telescope is to an optical telescope: It would “see” a completely different part of the gravitational spectrum. Each of LISA's three arms would stretch 5 million kilometers, making it sensitive to gravitational waves that span vast distances in space. Such waves might flow for months from binary systems of dense objects, such as a pair of supermassive black holes nearing a titanic merger at the heart of distant galaxies in collision.

Indeed, black holes in all their dark variety would be the primary targets of LISA, says astrophysicist Peter Bender of the University of Colorado, Boulder. “The origin of massive black holes is an open question,” Bender says. “We don't know whether they form from the collisional growth of seed black holes or the sudden collapse of gas clouds.” With the obvious difficulty of peering optically into such environments—especially halfway across the universe—LISA may offer the best chance to resolve that question.

LISA would consist of three identical spacecraft with Y-shaped cores, each aiming its optical assemblies at the other two. Small telescopes would collect the laser light after its 16-second trek across space. The “test masses” determining the positions of the spacecraft would not be mirrors, but metal cubes just 4 centimeters across.
Each cube would free-float in space, surrounded by but not touching the rest of the spacecraft. LISA's designers will strive to gauge the relative positions of the cubes to within two-tenths of an angstrom—tiny, to be sure, but about a million times bigger than LIGO's goal of 1/1000th the diameter of a proton.

The project would be a joint endeavor between NASA and the European Space Agency, with the European efforts led by physicist Karsten Danzmann of the University of Hannover in Germany. Although LISA is not yet officially on the drawing board, the team hopes to fly a test mission within a few years and to launch the full-scale array by 2010.

If it works as advertised, LISA should see plenty of sources of long-period gravitational waves. For instance, binary systems of white dwarfs in our own galaxy would create a persistent buzz of gravitational noise. Above a certain frequency, physicists should be able to resolve the signals from thousands of such binaries in the Milky Way, as well as distinct hums from the interactions of massive black holes throughout the universe.

Beyond that, any number of new sources could flood LISA's airwaves. A few scientists even worry that there may be an embarrassment of riches. “I'm concerned that we're going to fly this thing, and there will be so much going on that we will have to work very hard indeed to have a clue what we're looking at,” says astronomer Douglas Richstone of the University of Michigan, Ann Arbor. To wave-starved LIGO physicists, that potential data analysis nightmare sounds like a sweet dream.

13. SOCIETY OF TOXICOLOGY MEETING

Hazards of Particles, PCBs Focus of Philadelphia Meeting

1. Jocelyn Kaiser

Some 5300 toxicologists met in Philadelphia from 19 to 23 March for the 39th annual meeting of the Society of Toxicology. Among the highlights were a symposium on the health effects of breathing fine particles and a study finding that PCBs may suppress toddlers' immunity.
Soot's Health Effects Strengthened

In the early 1990s, Harvard researchers published startling evidence implicating soot—or more specifically, particulate matter (PM)—in 60,000 deaths a year in the United States. Carol Browner, administrator of the U.S. Environmental Protection Agency (EPA), took these and related findings to heart, proposing stringent new regulations to control fine soot in 1996. But critics in industry and the scientific community immediately blasted the EPA for imposing costly regs on the basis of what they said was shaky science.

New results, some of which were reported at the toxicology meeting, now bolster the case against fine particulates. Some of the work is in fact an offshoot of the original brouhaha. The EPA went ahead with the new regs in July 1997, building in an 8-year delay for implementing them. Congress reacted by launching a massive research effort on the health effects of PM, approving around $50 million of EPA's budget for the research over the past 3 years and mandating that the National Research Council (NRC) oversee the studies.

The earlier work was criticized partly because other air pollutants that rise along with PM—nitrogen oxide, for example—can exacerbate cardiopulmonary disease and thus they, rather than PM, may have caused the effects the Harvard researchers saw. Critics also complained that direct evidence of how breathing fine particles makes people sick was needed (Science, 25 July 1997, p. 466). At the symposium, Harvard epidemiologist Douglas Dockery, a key author of the original studies, addressed the first issue with new work that tightens the link between PM and worsened heart disease.

In one experiment, his group and Arden Pope at Brigham Young University in Provo equipped seven Utah retirees with heart monitors during periods when the researchers knew air pollution would rise as an idle local steel mill cranked into operation. As reported last November in the American Heart Journal, the researchers found that during the periods of high pollution, the subjects' heart rates became less variable—an indication that they were at increased risk of having heart attacks. In another study, published in the January issue of Epidemiology, Dockery's team found that heart arrhythmias in 100 men with implanted heart defibrillators increased along with daily particulate levels.

Results from the Health Effects Institute (HEI), an industry and EPA-supported nonprofit organization in Cambridge, Massachusetts, further dispel doubts about the 1990s findings. In one study, epidemiologists led by Johns Hopkins University's Jonathan Samet found a “robust” association between PM monitor readings and deaths and hospitalizations in 90 U.S. cities. In addition, a reanalysis by an HEI panel of the original Harvard data shows that the results hold up even after trying several different statistical approaches.

While these and other studies are firming up the epidemiologic case against PM, new animal research points to the metals, such as vanadium and nickel, that are found in PM as a prime source of its toxicity. At the meeting, EPA toxicologist Penn Watkinson reported that he and his colleagues had exposed rats with weakened cardiopulmonary systems to mixtures of the metals. The team found harmful changes in the rats' heart function and increased deaths. By contrast, rats exposed to volcanic ash, which contains essentially no metals, were little affected.

Despite the plethora of research now under way, a host of questions remain about how PM does its dirty work. The metal studies, for example, used doses much higher than city dwellers typically encounter, warns toxicologist Joe Mauderly of the Lovelace Respiratory Research Institute in Albuquerque, New Mexico, who is a member of the NRC panel overseeing PM research. Other compounds in PM such as organics also likely play a role in its toxicity, but little is known about their effects. With definitive answers still not in, expect another wave of controversy in 2002 when the agency next reviews the data on the health effects of PM—a step it must take before implementing the more stringent PM standard in 2005.

Low-Level PCB Dangers

Without question, PCBs are nasty chemicals. These polychlorinated biphenyls, as they are properly known, accumulate in the food chain and cause a variety of ill effects in lab animals, from liver damage to cancer. For that reason, most developed countries banned the use of PCBs and many similar chemicals decades ago. Despite that, PCBs can still be found, often in minute levels, in the body fat of all people examined. New evidence presented at the meeting now suggests that even these low levels of PCBs may affect development in young children.

The work comes from Nynke Weisglas-Kuperus, a developmental pediatrician at Sophia Children's Hospital in Rotterdam, the Netherlands, and her colleagues. They found that PCBs and related chemicals called dioxins, passed by Dutch mothers to their babies during pregnancy and in breast milk, appeared to weaken the infants' immune systems. This in turn contributed to more infections in the first 3 1/2 years of life, Weisglas-Kuperus reported. “These are really subtle effects,” says dioxin toxicologist Linda Birnbaum of the U.S. Environmental Protection Agency. But, she adds, if they translate into lots more childhood infections over a population, “that has an impact.”

Previous work in both animals and humans had suggested that PCBs and dioxins, which are still produced as byproducts of incineration and industrial bleaching, suppress the immune system. For example, they've been blamed for spurring a 1988 virus outbreak that killed 20,000 European harbor seals feeding on PCB-tainted fish. And in Taiwan, a group of infants born to mothers who had accidentally consumed high doses of PCBs in 1979 had an elevated rate of infections.

To find whether health effects might also arise from the lower exposures more typically seen in developed countries, in 1990 Weisglas-Kuperus and colleagues began a long-term study of 207 mothers and infants living outside Rotterdam. Roughly half the mothers nursed their babies and the others fed them formula, which was not contaminated with PCBs.

When the infants were 18 months old, the researchers detected slight changes in the immune cells of some of them, particularly those who had been breast-fed, suggesting that their immune systems had been influenced by PCB exposure and might be less able to fight infections. These changes correlated with PCB and dioxin levels in blood from the babies' umbilical cords and in the mothers' blood and breast milk. At that time, however, those with greater immune changes didn't get sick more often than the other babies.

That changed when the researchers reexamined the children at age 3 ½, when a typical child has had many infections. As before, they found that the toddlers whose mothers had more PCBs in their blood had higher levels of certain T cells. The researchers then looked at the current level of PCBs in the children's blood and their history of infections. After adjusting for confounding factors such as parental smoking, which tends to increase infection rates in children, and breast feeding, which, while exposing the babies to PCBs, is also well known to boost immunity, they found that children with high PCB exposures at age 3 ½ were eight times more likely to have had chickenpox, and three times more likely to have had at least six ear infections than those with lower exposure.

The Weisglas-Kuperus team continues to monitor the children for neurological and other effects. But she says the immune suppression alone underscores the importance of strict regulations on the release of PCBs and dioxins.

14. AMERICAN CHEMICAL SOCIETY MEETING

Chemists Unveil Molecular Wizardry in San Francisco

1. Robert F. Service

San Francisco, California—Baseball fans looking to check out the new stadium of the hometown Giants aren't the only ones flocking to the City by the Bay this spring. The 219th meeting of the American Chemical Society (ACS) drew nearly 20,000 chemists, physicists, biologists, and engineers from around the globe from 26 to 30 March. Among the meeting's home runs were reports of novel organic molecules that give old plastics new life and a new alternative to DNA chips.

Ribbons Make Tough Stuff

In fairy tales, a sprinkling of magic dust can make your dreams come true. In the world of plastics chemistry, that makes Samuel Stupp a wizard. At the ACS meeting, Stupp, a chemist at Northwestern University in Evanston, Illinois, reported developing new three-part molecules, a pinch of which can drastically change the strength and optical properties of commodity polymers such as those used to make coffee mugs and Plexiglas. Down the road, the new molecular seasoning could be used to lend everyday plastics like polystyrene the strength and toughness of a bulletproof vest at a fraction of the cost.

“It's really a unique approach” to polymer chemistry, says Timothy Long, a polymer chemist at Virginia Polytechnic Institute and State University in Blacksburg. “It's really remarkable how the properties [of the plastics] changed by adding so little of the new material.”

Initially, Stupp and his colleagues weren't looking for a way to spice up tired plastics. Stupp's team had previously made two-part molecules called rodcoils, so named because half of each molecule was rigid, the other half flexible. And this unique structure, they found, caused the molecules to assemble into mushroom-shaped clumps that stacked themselves into sheets (Science, 18 April 1997, p. 354).

For their current project, they wanted to see if they could tweak the chemistry of the rodcoils to coax them to form one-dimensional chains. They started by grafting a third section onto the rigid end of the molecules. This new portion, called a dendron, begins as a “Y” shaped group with two arms jutting from the end of the molecule. Each arm is capped by hydroxyl groups that can readily form weak hydrogen bonds with their neighbors. That gave them three-part molecules, which they dubbed “dendron-rodcoils,” or DRCs. When the researchers dissolved a dash of DRCs in an organic solvent, they were surprised to find that the liquid solvent instantly turned into a gel. Evidently the DRCs were sticking together. Closer inspection showed that the DRCs had indeed lined themselves up, not into chains, but into ribbons.

When DRCs are added to a solvent, Stupp explains, the hydroxyl-tipped arms of one dendron meet up and form hydrogen bonds with hydroxyls on a second dendron, linking two DRCs together like pencils fused at their erasers. Afterward, the DRCs can still form additional hydrogen bonds, so as other molecular pairs drift by they line up alongside the first one. The result is a zipperlike structure only 10 nanometers wide but up to 500 nanometers long, with the hydroxyl “teeth” locked in the middle and the long bodies of the molecules trailing behind them. Still more hydrogen bonds in other parts of the molecules then link the ribbons into a web.

But the DRCs don't just force one another into line, Stupp says. The ribbons in the newly formed web also attract solvent molecules to nuzzle alongside, as doing so allows the ribbons to lower a property known as their surface energy. As this association occurs, the liquid solvent stops flowing and becomes a blue gel. (The color, Stupp adds, likely results from the fact that the ribbons are about the same size as the wavelength of blue light and therefore scatter it more effectively than other colors.) And this entire transformation occurs with a mix of just 0.3% nanoribbons to 99.7% solvent.

And that isn't all the magic the nanoribbons have in store. Stupp and his collaborators—postdoc Eugene Zubarev and graduate student Martin Pralle of the University of Illinois, Urbana-Champaign, along with graduate student Eli Stone of Northwestern—decided to see what would happen if they replaced their original solvent with styrene, the liquid precursor to the common plastic polystyrene. To their surprise, they found that the solution again formed the blue gel. And the ordering remained even after they linked the styrene into polymer chains. What's more, when they then melted the nanoribbon-spiked plastic and pulled the plastic melt into a fiber, they found that all of the nanoribbons and associated polymer chains lined up in the same direction, making the fibers far stronger than those in which the polystyrene molecules are randomly oriented.

In addition to working with polystyrene, Stupp's team showed that by adjusting the chemistry they could prompt other types of plastic and rubber building blocks to assemble around the nanoribbon webs. Using the ribbons as templates, they also deposited semiconductors to create nanosized wires, which could be used to wire up future miniaturized computer chips. Not quite on a par with Harry Potter's sorcerer's stone, perhaps, but blue-ribbon magic all the same.

Tracking DNA When The Chips Are Down

Tracking which genes are turned on or off in different tissues, or during the course of a disease, is becoming de rigueur in biology labs these days. Eventually, if the enthusiasts are right, physicians will routinely check your patterns of gene expression to determine which medicines will work best for you. The key to this new world of gene tracking is the DNA chip—a small slice of silicon dotted with thousands of snippets of DNA corresponding to genes of interest. But the technique has one big downside: A single chip can cost thousands of dollars, and it's used just once and thrown away. However, help may be on the way. At an ACS symposium that explored the latest research in DNA diagnostic technologies, Wlodek Mandecki of the New Jersey-based start-up company Pharmaseq reported a potentially cheap way to track active genes with tiny radio transmitter tags.

DNA chips have surged in popularity in recent years in part because they are so simple to use. To track active genes, researchers first synthesize short snippets of single-stranded DNA, link them to a glass or silicon chip in a checkerboard array, and use a computer to keep track of the DNA sequence of each spot on the array. DNA bases bind only with complementary pairs—A's with T's, G's with C's. So short DNA segments on a chip can be used to bind genes with a complementary sequence. To find out if an organism is expressing one of those genes, researchers convert its cellular messenger RNA (mRNA)—which is made by active genes—into a single-stranded version of DNA and tag it with a fluorescent compound. They then wash the tagged DNA over the chip, see which spots on the array light up, and check those positions in the computer to find which genes are active.
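The complementary-pairing rule that makes the chip work (A binds T, G binds C) can be sketched in a few lines. The helper names here are illustrative, not part of any chip vendor's software, and real hybridization tolerates mismatches that this exact-match check ignores:

```python
# Watson-Crick base-pairing rule used by DNA chips: A pairs with T, G with C.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(seq):
    """Return the strand complementary to a DNA snippet."""
    return "".join(PAIRS[base] for base in seq)

def hybridizes(probe, target):
    """A probe spot on the chip lights up only if the tagged target strand
    matches the probe's complement (a simplification of real binding)."""
    return target == complement(probe)
```

For instance, a probe reading ATGC would capture a tagged strand reading TACG, so fluorescence at that spot flags the corresponding gene as active.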

To find a cheaper alternative, Mandecki needed a new way to tell DNA snippets apart. The solution he hit upon was what he calls microtransponders, essentially tiny silicon chip-based devices each just a few hundred micrometers on a side, about the width of a few hairs. Each transponder stores an ID number in memory and when prompted emits an identifying radio signal that can be picked up by a nearby receiver. Much larger transponders—about the size of aspirin tablets—are already used as ID tags for everything from luggage to cars. But because they are powered by incoming radio-frequency energy, they require comparatively bulky embedded devices to convert that energy to electricity and back into a radio signal.

To make a transponder small enough to track molecules, Mandecki teamed up with researchers at the Sarnoff Corp. in Princeton, New Jersey, who designed a version that incorporates a tiny photocell to absorb laser light and then uses the energy to broadcast a unique ID number programmed into the device.

The transponders are relatively straightforward to use as DNA tags, Mandecki says. First Mandecki and his Pharmaseq colleagues synthesize short DNA fragments called oligonucleotides, tailored to bind to complementary sequences in known genes. They attach these probes to their silicon ID tags, using a standard chemical process. As the oligo is attached, a computer records its sequence along with the number of the transponder to which it is paired. Next, the Pharmaseq team follows the same strategy as with DNA chips to convert a cell's mRNA to single-stranded DNA with fluorescent tags, which they then mix into a solution containing the radio-tagged oligos.

The researchers then simply flow their solution through a device akin to a flow cytometer, a common lab instrument for counting cells one at a time. As the transponders and their genetic cargo stream through a narrow channel, the researchers shine a laser light on them. When the light hits a transponder, the device emits a radio signal revealing its identity and, by association, that of its attached oligo. The laser light also triggers the fluorescent tags on the DNA to emit light. So if a flash of light accompanies the radio signal, Mandecki and his colleagues know that the oligonucleotide has a stretch of DNA in tow—and therefore that the gene is actively expressed. If no light flash occurs, DNA didn't bind to the oligo, and thus the corresponding gene is inactive.
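The readout described above amounts to a simple decision per transponder. A sketch, with hypothetical names standing in for Pharmaseq's actual software:

```python
def classify_readout(transponder_id, light_flash):
    """Interpret one microtransponder passing through the laser beam.

    transponder_id: the ID number the chip broadcasts, which the lab's
    records map back to the oligonucleotide attached to it.
    light_flash: whether a fluorescent flash accompanied the radio signal.
    """
    if light_flash:
        # Tagged DNA is bound to the oligo: the corresponding gene is active.
        return (transponder_id, "expressed")
    # No fluorescence: nothing bound, so the gene is inactive.
    return (transponder_id, "inactive")
```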

“It looks like an impressive technology,” says Wai Tak Law, an analytical chemist at PortaScience Inc. in Moorestown, New Jersey. Law says that the new technology has a way to go to catch up with DNA arrays. But he adds that it could also push into new areas, such as tracking proteins and small molecules used in pharmaceutical research.

Sharp Jump in Teaching Fellows Draws Fire From Educators

1. Jeffrey Mervis

A drop in the number of prestigious research fellowships and a rapid rise in a new program that sends students into public schools prompt questions

For almost half a century, the National Science Foundation's graduate research fellowships (GRFs) have been among the most prestigious and sought-after awards for aspiring young researchers. They have helped launch the careers of tens of thousands of scientists. Next year, however, the agency is shrinking the program while it expands a new effort that puts graduate students to work in public schools. The shift, contained in NSF's 2001 budget request to Congress, has led to pointed questions at two recent NSF hearings and is drawing flak from some university officials.

Critics say that NSF's spending plan reflects a continued erosion of the $55 million GRF program (see graph) and an overhasty expansion of the teaching fellows program—a proposed 165% increase, to $28 million—before it has been proven effective. NSF officials dispute both points. They argue that they are strengthening the research fellows program by raising stipends, which are now so low that they are driving away good applicants. Scaling up the teaching fellows program, they add, is part of a broader and much-needed push to encourage universities to help improve precollege science education.

The research fellows program is much loved in the research community. Fellows compete for a 3-year grant to work in a lab of their choice. It is “the sterling silver program in federal graduate education,” says Chris Simmons of the Association of American Universities. “Any cuts are a cause for concern.” Indeed, last fall, as part of its 50th anniversary celebration, NSF compiled a loving tribute to the GRF program in which Harvard biologist E. O. Wilson, a member of the inaugural class of 1952, writes that news of the awards “fell like a shower of gold” on him and other recipients and opened up a career in research.

Although the GRF program is NSF's most prestigious program to support graduate students, the agency funds far more students through grants to investigators. These tie the student, typically hired as a research assistant, to a particular professor and institution, and NSF has little control over their training. NSF also runs a program, called Integrated Graduate Education and Research Training (IGERT), that gives awards to universities to pick their own students in selected fields. Although NSF can attach strings to these grants, their major purpose is to build a cadre of researchers in burgeoning fields.

Rita Colwell added a new program to this portfolio shortly after becoming NSF director in August 1998. It gives budding researchers a chance to share their knowledge and enthusiasm for science with beleaguered public school teachers and their students. The graduate teaching fellows (GK-12), she argued, might also help to reverse recent test results that ranked U.S. high school students at the bottom in global measures of science and math achievement. Awardees spend 10 to 15 hours a week in school, aiding teachers and supplementing class lessons. “This program may seem small,” she told legislators last year about an initial $7.5 million investment, “but it has a potential impact well beyond the dollars. It will broaden graduate education and boost the science, engineering, and technology content in K-12 classrooms.”

By next year Colwell hopes to spend $28 million on the program, nearly matching the $31 million IGERT program and more than half the size of the GRF program. In contrast, the GRF budget is scheduled to drop by $500,000 in 2001, and the number of GRF fellows it supports would fall from 900 to 850, after peaking at 1000 in 1997. The cuts will allow NSF to finance a $1000 boost in the stipend, to $16,200; GK-12 fellows receive $18,000. “We'd like to get up to $18,000 by 2002, and we also want to boost the size of the educational allowances [universities receive $10,400 in administrative expenses for each fellow],” says NSF education chief Judy Sunley. “To do that we needed to make a small cut in the research fellowships.” Ultimately, says Sunley, “we would like all three [graduate programs] to be the same size.”

University lobbyists, unhappy with NSF's decision to hold GRF's budget flat, are walking a fine line. They don't wish to jeopardize the agency's overall request for a 17% increase by making a stink over a tiny piece of the pie. But they hope to convince Congress to raise the GRF stipend to $18,000 this year.
“It shouldn't be a competition,” says Nan Wells of Princeton University's Washington office, which has pushed hard for the research fellowships. “The GRF is a national talent search—and it's produced 18 Nobel laureates. I don't understand why NSF can't expand both programs.”

Some admit, however, that they have reservations about the underlying premise of the GK-12 program. “A lot of graduate students don't know how to teach. So why are we sticking them in the schools?” asks Michael Lubell of the American Physical Society. Gerald Wheeler, executive director of the National Science Teachers Association, worries that many professors might regard the teaching fellowships as “just another way to get additional lab help. … Most professors don't care about who's teaching their introductory lab courses,” he argues, “so why should they care about improving instruction in the public school down the street?”

That message has registered on Capitol Hill. At separate hearings, Representative Nick Smith (R-MI), chair of the Science Committee's basic research panel, and Representative Rodney Frelinghuysen (R-NJ), a member of the spending panel that oversees NSF, pushed NSF officials to explain their actions during a review of the agency's overall budget. Smith was particularly concerned about expanding the GK-12 program before any evaluation had been carried out. “How do you know it's working?” he asked Sunley. “It's too new to know,” she acknowledged, “but we plan to monitor it.”

If resources are tight and the teaching fellows program has no track record, many educators wonder, why then is it growing so rapidly? “Even if it turns out to be a good idea, wouldn't it make more sense to do [the GK-12 program] in a few schools and see what happens?” asks Lubell. But Colwell is convinced that the GK-12 fellows program will work. “I've had so many young people tell me they are excited to have a chance to do this,” she says. “I'm sure that it's going to be a great success.”

16. PLANT BREEDING

Hopes Grow for Hybrid Rice to Feed Developing World

1. Dennis Normile

U.S. company builds on successes in China as improved techniques and better management deliver higher yields

Los Baños, the Philippines—Sant Virmani, who heads hybrid rice-breeding efforts at the International Rice Research Institute (IRRI) here, remembers when the number of scientists interested in the subject could fit into his living room. But this month, organizers of an international conference marking the 40th anniversary of IRRI* had to fold back a room divider and bring in more chairs to handle the throng that gathered to hear his talk.

That heightened interest reflects the growing number of researchers who hope that hybrid rice will help feed the billions of people who rely on the crop. “Hybrid rice is really the only [technique] at hand that has proven to boost yields in farmers' fields,” Virmani says.

Although rice breeders have created improved, higher producing rice varieties, they haven't been able to take advantage of a natural phenomenon that jacks up the yields of grains such as corn. Thanks to an imperfectly understood effect called heterosis, the first generation, or F1, hybrid of a cross of two different varieties grows more vigorously and produces from 15% to 30% more grain than either parent. But because rice is self-pollinating, with each plant producing its own fertilizing pollen, producing hybrid rice was commercially impractical. Now, 3 decades of effort has produced hybrid rice varieties and commercially viable methods of producing the hybrid seed. “Finally, hybrid rice is ready to take off,” Virmani says.

Such a jump is needed because increases in rice yields have leveled off in the 1990s, while the population continues to grow. But others counsel caution. They warn that the quality of the hybrid rice hasn't yet matched that of current varieties and that growing hybrid rice requires changes in farming practices, in particular, the purchase of new seeds for every growing season. “Hybrid rice is not a success story—yet,” says Wayne Freeman, a retired agronomist who formerly oversaw The Rockefeller Foundation's food programs in India.

The road to the current progress has been long and arduous. In the late 1960s, Chinese researchers discovered a wild male-sterile rice variety. Because male-sterile plants don't produce pollen of their own, researchers could fertilize them with pollen from other varieties. Not all crosses work, however: Some produce lots of vegetation but little grain. Yuan Longping, director of China's National Hybrid Rice Research and Development Center in Changsha, Hunan Province, has spent 2 decades breeding this male-sterility trait into the indica rice varieties grown in China and improving seed-production techniques. Hybrid rice now covers about 50% of China's rice acreage and accounts for 60% of its rice production.

For a long time, however, China was the exception. Its success rested on its vast pool of cheap labor and heavy government subsidies. Producing hybrid seed requires growing the male-sterile line together with a second parental line, which provides the pollen. Large teams of Chinese workers spray the male-sterile plants with a growth hormone that induces the panicles, or grain clusters, to emerge from the rice leaf sheath to catch pollen more easily. The pollen must then be shaken loose from the second line by workers dragging ropes or sticks over the plants. In a separate area, the male-sterile line must be grown alongside a third line, which provides pollen to reproduce the male-sterile line for the next seed-growing season.

Elsewhere, however, the lure of potential payoffs has proven irresistible. A hybrid rice project launched by the Indian Council of Agricultural Research has boosted yields from less than 100 kilograms per hectare to about 1.5 metric tons per hectare through a painstaking trial-and-error breeding effort that involved more than 1000 experimental hybrid lines. Commercial cultivation began in 1994, and some 150,000 hectares are now planted in hybrid rice. Hybrid rice is also being grown in Vietnam and the Philippines, and scientists in Bangladesh, Sri Lanka, and Indonesia are developing hybrid rice varieties for farmers.

In the United States, an effort in the 1980s by a subsidiary of Occidental Petroleum Corp. fell flat. But in the early 1990s a company controlled by the prince of Liechtenstein picked up the rights to use the Chinese hybrid techniques and plant materials and underwrote a research program by RiceTec Inc. of Alvin, Texas, to commercialize the technology. In addition to transferring the male-sterility trait into varieties suitable for the United States, RiceTec has mechanized the seed-production process to eliminate the hand labor used in China. "We are doing with just two workers what the Chinese are doing with 100," boasts Robin Andrews, the company's president, who says that the details are proprietary. Last year, field trials in Arkansas and Missouri produced hybrid plots with an average yield 33% greater than that of a variety of the farmers' choice. As a result, says Andrews, "every one of the farmers who participated in our trials has bought seed to plant this year."

Farmers in other parts of the world remain to be convinced that hybrid rice is better, however. A small study in India by Aldas Janaiah, an agricultural sociologist at IRRI, found that actual harvests often fell short of projected yields and that most farmers do not plan to plant hybrid rice again. And Virmani admits that hybrids require farmers to monitor fertilizer applications more closely and take greater care of the transplanted seedlings. Breeders also need to improve the quality and taste of hybrid rice.

Still, Virmani believes that the growing cadre of researchers around the world will eventually solve those problems. “Ten years ago you couldn't get even 20 people interested in talking about hybrid rice,” he says. “Hybrid [rice] research efforts are really just getting started.”

• *Rice Research for Food Security and Poverty Alleviation, 31 March to 3 April.