News this Week

Science  17 Jan 2003:
Vol. 299, Issue 5605, pp. 320
1. GENE THERAPY

Second Child in French Trial Is Found to Have Leukemia

1. Eliot Marshall*
1. With reporting by Jocelyn Kaiser.

For the second time in 4 months, a child has developed a leukemia-like disease after receiving gene therapy at the Necker Hospital for Sick Children in Paris. Concerned about the safety of such trials, the panel that monitors U.S. research in the field scheduled a public meeting this week to review the clinical data and weigh its next steps.

The news of a second case of cancer in the French trial reached the U.S. Food and Drug Administration (FDA) on 20 December and quickly spread to other gene-therapy researchers. FDA responded by putting a “clinical hold” on U.S. studies that, like the French trial, use retroviruses to shuttle therapeutic genes into the chromosomes of target cells; most were already on pause for a safety review (see table). Meanwhile, the Necker Hospital's lead investigator, Alain Fischer, asked colleagues around the world to keep the information confidential until after he had spoken with the families of the 10 children in the experiment. At press time, Fischer had not gone public and declined to discuss the details. He said that he intended to present his data on 17 January to the Recombinant DNA Advisory Committee (RAC) of the U.S. National Institutes of Health (NIH).

This adverse event, according to researchers in the field, mimics another one that came to light in September (Science, 4 October 2002, p. 34). In that case, Fischer told health authorities in France and abroad that one of 10 boys treated by his group for a disorder called X-linked severe combined immunodeficiency (X-SCID) had developed a blood problem. A type of immune cell—a γδ T cell—whose growth had been boosted by gene therapy was discovered to be replicating at an unhealthy rate. French clinicians treated it as though it were leukemia and reported within several weeks that they had successfully stopped its proliferation. But Fischer himself raised the possibility that his X-SCID cure—which is credited as the first unequivocal success for gene therapy—could have caused the cancerlike response.

FDA ordered a pause in all three U.S. clinical trials using retroviruses. The only trial of this type that had not already paused for review, one run by Adrian Thrasher at the Institute of Child Health in London, suspended therapy this week.

When the first adverse event appeared last fall, FDA and RAC sought outside advice on whether to suspend similar trials in the United States or to impose new reporting requirements. Some experts warned officials not to overreact, saying that the cancer might have been triggered by a harmful genetic mutation not related to the therapy. They noted that two other people in the child's family also had developed cancer at an early age. Others, however, felt that the clinical data strongly suggested that the gene therapy itself had triggered the dangerous T cell proliferation. In the end, U.S. authorities decided to allow the potentially risky trials to continue, with closer monitoring (Science, 18 October 2002, p. 510). But none actually resumed.

The second adverse event in the Necker trial could have a devastating impact on researchers' plans and the hopes of patients who volunteered for these trials. “It kind of threw a wet blanket over everything,” said Joseph Glorioso, president of the American Society of Gene Therapy and a molecular biologist at the University of Pittsburgh. “We will have to take a hard look” at possible causes, he says, including the chance that selectively promoting the growth of certain target cells in blood—as this type of gene therapy aims to do—could increase the likelihood of cancer.

Nobody knows how great the risks to patients are, admits Brian Sorrentino, leader of a gene-therapy group based at St. Jude Children's Research Hospital in Memphis, Tennessee, which treats JAK-3 deficiency, an immune disorder similar to X-SCID. “I don't think we can use any of the prior data” to develop risk estimates, he says, because clinicians may not have focused on the relevant parameters. In October, the risk of cancer in X-SCID therapy looked to be no higher than 1 in 10, he says, but now it could be 2 in 10. The new odds “raise a real red flag,” says Glorioso.

Jennifer Puck, who leads a gene-therapy group at NIH's National Human Genome Research Institute that is targeting X-SCID, suggests that the age of patients in gene-therapy trials may be important. She notes that both of those who experienced adverse events in Fischer's trial were younger than 3 months when they received therapy.

U.S. health officials and researchers are scrambling to assess the news, says Philip Noguchi, FDA's leading gene-therapy expert. “We're trying to learn whether this is a coincidence or something similar to the first case,” in which the retrovirus inserted itself into a gene that's known to promote cancer. “Those data are coalescing to make it look similar: The therapy that clearly works may also be responsible for adverse events.”

If that's true, says Noguchi, patients may face a “poignant dilemma” that offers them the chance of better health through an experimental gene therapy at the risk of contracting leukemia. FDA's Biological Response Modifiers Advisory Committee will discuss the policy implications at its next meeting on 28 February.

2. SCIENCE AND SECURITY

Researchers Urged to Self-Censor Sensitive Data

1. David Malakoff

Do it yourself—before the government does it to you. That's the advice U.S. bioscientists received last week at a workshop on developing guidelines to handle research findings that could threaten national security.

The current debate over what kinds of biomedical research findings shouldn't be published began after the anthrax mail attacks in fall 2001. But it picked up steam after major scientific journals published papers containing data that critics said could aid terrorists. Science, for instance, last year published a paper by Eckard Wimmer of the State University of New York, Stony Brook, showing how to assemble a working poliovirus from off-the-shelf chemicals. Although experts pointed out that build-it-yourself virology would be a poor way to construct a bioweapon, members of Congress wanted the U.S. government to block the publication of such results. That prompted the American Society for Microbiology (ASM) to ask the U.S. National Academies to help scientists come up with voluntary guidelines for handling sensitive information (Science, 2 August 2002, p. 749). Last week, the academies joined with the Center for Strategic and International Studies (CSIS) to host a 1-day meeting in Washington, D.C.

Presidential science adviser John Marburger told the gathering that nuclear physicists and mathematicians have a long history of keeping secrets. But time-tested methods of protecting nuclear technologies and computer codes may not work for biology, he said, because they could restrict the flow of basic data needed to develop both desirable and undesirable products. Past policies “do not give adequate guidance for the technology of bioterrorism,” he concluded, adding that the government will need biologists to help it identify and censor truly sensitive findings.

The editors of major journals—including Science, Nature, the Proceedings of the National Academy of Sciences (PNAS), and 11 ASM titles—said they are already giving special scrutiny to papers that raise security concerns. PNAS has subjected about 20 papers to extra review, said its editor-in-chief, Nicholas Cozzarelli, but none was rejected. “We think we will know [information that shouldn't be published] when we see it, but so far we haven't seen it,” he said.

That approach might not be enough in today's climate of heightened security, say government officials. The lack of “defensible criteria for what constitutes appropriate science” could lead panicked policy-makers to impose “onerous and ineffective” rules, warned physicist Penrose Albright, who is examining the issue for the White House offices of Homeland Security and Science and Technology Policy. “The scientific community needs to get its act together, or someone is going to do it for them,” he said.

Even strict U.S. rules won't work in the absence of international cooperation, noted several speakers. “If guidelines aren't universally adopted, we are going to have problems,” predicted biodefense expert Stephen Morse of Columbia University in New York City.

To help craft a better definition of taboo science, the academies and CSIS plan to convene periodic meetings of top science and security leaders. This spring, an academies panel led by geneticist Gerald Fink of the Massachusetts Institute of Technology in Cambridge will recommend strategies for preventing the misuse of biotechnology. And journal editors held a private workshop following last week's meeting to begin hammering out some common guidelines for handling sensitive information. Gerald Epstein, a security expert with the Institute for Defense Analyses in Alexandria, Virginia, proposes a simple question scientists can ask themselves before submitting a paper: “Would you like [it] to be found in a cave in Afghanistan with sections highlighted in yellow?”

3. CHEMICAL ENGINEERING

Chemists Concoct Quick-Change Surface

1. Robert F. Service

A flick of a switch lights up the room, powers up the television, and turns on a host of other gadgets large and small. Now a group of chemists and chemical engineers in Massachusetts and California has thrown the switch on the once stagnant world of surfaces. On page 371, the researchers report the ability to make a thin gold plate alternate between attracting and repelling water by electronically twisting molecules arrayed on top.

“The idea that you can change the surface is extremely exciting, because many technologies are based on surface structures and most surfaces are static,” says Edith Mathiowitz, a chemist at Brown University in Providence, Rhode Island. Mathiowitz and others hope the technique, modified to work with compounds with different surface properties, will make possible novel schemes for releasing drug compounds on cue, trapping and releasing proteins for large-scale proteomics studies, and manipulating liquids in microfluidic chips. “I can see how this strategy could find general use,” Mathiowitz says.

Teams have created switchable surfaces before, but only by altering them or the surrounding solvent chemically, says Robert Langer, a chemical engineer at the Massachusetts Institute of Technology, who led the current effort. Langer's team is the first to make the switch by using an external force to change the molecules' shape rather than their chemical makeup.

Last spring, Langer and postdoc Andreas Lendlein reported crafting novel shape-memory polymers that can return to their native shape if deformed (Science, 31 May 2002, p. 1673). Another postdoc, physical chemist Joerg Lahann, then wondered whether such malleable polymers could be used to make surfaces that could switch their physical properties. Langer gave him the green light, freeing Lahann to work with three colleagues in Langer's lab as well as Samir Mitragotri and Jagannathan Sundaram of the University of California (UC), Santa Barbara, and Saskia Hoffer and Gabor Somorjai of UC Berkeley.

The team turned to chainlike polymers called alkanethiols, which naturally assemble into what looks like rows of tightly packed miniature cornstalks—a sharp contrast to the spaghettilike tangle of most polymers. If they could synthesize alkanethiols with different chemical properties on their tops and sides and then attach them to a plate, the researchers thought, they could alter the surface properties of the plate simply by making the molecules stand straight or bend over, exposing their stalks.

Getting the alkanethiols to stand up was easy. Sulfur atoms at one end of the molecules naturally bind to gold surfaces, and the molecular stalks have little choice but to stick straight up if packed in densely enough. To bend over, however, the alkanethiols needed breathing room. Lahann supplied it by synthesizing novel alkanethiol stalks with bulky mushroomlike heads and pouring a solution of them over a gold plate. As the molecules—which go by the jawbreaking name of (16-mercapto) hexadecanoic acid (2-chlorophenyl)diphenylmethyl ester, or MHAE—latched onto the gold surface, the bulky headgroups prevented them from packing tightly together. Lahann and his colleagues then used a standard chemical reaction called hydrolysis to lop off the tops of the mushrooms, leaving each molecular cornstalk tipped with a negatively charged, water-loving carboxylic acid group.

To persuade the MHAEs to bend over, Lahann and his colleagues needed only to wire up the gold surface to a power supply. When the researchers applied a negative electric potential, the gold repelled the negatively charged carboxylic acids, causing the MHAEs to stand straight up. But when they switched the potential to positive, the plate yanked down on the carboxylic acids, bending the MHAEs and exposing their water-repelling hydrocarbon chains to the surface.

“The effects were not enormous” on a macro scale, Langer says. But even small changes in surface properties could create new ways to manipulate molecules in fluids, he says. The technique could be useful for separating out protein species of interest or coaxing an implanted semiconductor chip to release a cargo of therapeutic drugs, Langer says. Langer's team hasn't demonstrated either of those applications yet. But if it does, many researchers may have to switch to a new way of thinking about surfaces.

4. PHYSICS

Confirmation of Gravity's Speed? Not So Fast

1. Robert Irion

SEATTLE—Maybe it's just as well that Albert Einstein isn't around to read the newspapers. Last week, the media was awash with reports that astronomers had confirmed a prediction of his general theory of relativity: that gravity's tentacles cross space at the speed of light. Not necessarily, say some physicists. The work described at the annual meeting of the American Astronomical Society here, they warn, may have been nothing more than a test of the speed of light itself.

Most scientists expect the first measurements of gravity's speed to come from experiments seeking gravitational waves, theorized ripples in the fabric of space. However, physicist Sergei Kopeikin of the University of Missouri, Columbia, proposed another way. About once a decade, he noted, the planet Jupiter passes near a quasar, a bright radio beacon in the distant universe. Kopeikin calculated that Jupiter's gravity should deflect the quasar's radio waves in a subtly different pattern that would depend on whether gravity acts at light speed or instantaneously, as Isaac Newton believed.

When Jupiter skirted the quasar J0842+1835 on 8 September 2002, Kopeikin and astronomer Ed Fomalont of the National Radio Astronomy Observatory in Charlottesville, Virginia, monitored the encounter with 11 radio telescopes, including the U.S.-spanning Very Long Baseline Array. Their analysis showed that gravity's speed is 1.06 times that of light, Fomalont announced, with a 20% margin of error (±0.21). The usual statistical standard of 95% confidence limits would double the uncertainty, he noted later. Still, says Fomalont, “we can rule out an infinite speed [of gravity] with very high confidence.”
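The error arithmetic Fomalont describes is easy to check; a minimal sketch using only the numbers quoted above (doubling the uncertainty for the 95% confidence standard is the rule of thumb he invokes):

```python
# Fomalont and Kopeikin's quoted result: gravity's speed as a multiple
# of the speed of light, with the stated ~20% margin of error.
best_estimate = 1.06   # speed of gravity / speed of light
sigma = 0.21           # quoted 1-sigma uncertainty

# The usual 95% confidence standard roughly doubles the uncertainty.
low_95 = best_estimate - 2 * sigma
high_95 = best_estimate + 2 * sigma

print(f"95% interval: {low_95:.2f} to {high_95:.2f} times light speed")
# The interval comfortably contains 1.0 (gravity at light speed),
# while an infinite speed lies far outside it.
assert low_95 < 1.0 < high_95
```

This is only the interval arithmetic from the announcement; it says nothing about the separate objection of whether the experiment measured gravity's speed at all.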

The scientific reception was cool. Seven weeks before the celestial event, physicist Hideki Asada of Hirosaki University in Japan published a paper in Astrophysical Journal Letters claiming that the test would measure the speed of light, not gravity. Prominent gravity theorist Clifford Will of Washington University in St. Louis, Missouri, posted a preprint (arxiv.org/abs/astro-ph/0301145) on 9 January asserting that terms in the tortuous equations of relativity cancel the slight effect that Kopeikin claims to measure. “The speed of gravity should have a second-order effect, but it would be far too small to see,” Will says.

When Kopeikin and Fomalont submitted their results to Astrophysical Journal Letters in late December, the initial referee rejected the paper. Kopeikin says the review was unfairly dismissive, and he contends that Asada's and Will's analyses rest on mathematical errors.

The row is hard to resolve because the speeds of light and gravity are so intertwined in general relativity, says physicist Steven Carlip of the University of California (UC), Davis. “This is going to take a while to shake out,” Carlip says. In any case, notes mathematical physicist John Baez of UC Riverside, the stakes are low. “At best, it confirms a theory I already believe,” says Baez. “At worst, it just measures the speed of light in a spectacularly imprecise new way.”

5. STEM CELL RESEARCH

Same Results, Different Interpretations

1. Gretchen Vogel

Some of the most promising results with embryonic stem (ES) cells yet published may need a second look. In the past few years, several groups have reported tantalizing progress toward one of the most sought-after goals in this research: the creation of insulin-producing pancreatic β cells, which are damaged in diabetes. But now a research team claims that some of those results might have been misleading.

If researchers could coax ES cells into producing a ready supply of insulin-producing cells, doctors could potentially treat millions of patients with type I diabetes, freeing them from insulin injections and complications of the disease. But the goal has been difficult to achieve. The normal development of the pancreas is poorly understood, and regular pancreatic cells are relatively fragile. Nevertheless, several groups have reported evidence for the development of insulin-producing cells from either mouse ES cells or adult-derived human stem cells (Science, 8 June 2001, p. 1820).

But on page 363, developmental biologist Douglas Melton and postdoc Jayaraj Rajagopal of Harvard University report that rather than producing insulin, the cells may instead be concentrating the hormone from their surroundings. Rajagopal and his colleagues used a protocol developed by Ron McKay and Nadia Lumelsky of the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland, for triggering mouse ES cells to differentiate into clusters of cells resembling the islet cells in the pancreas (Science, 18 May 2001, p. 1389). Hoping to reproduce the result with human ES cells, Melton's team studied the protocol with Lumelsky. Rajagopal found that both mouse and human ES cells formed the clusters described, and between 10% and 30% of the cells took up an antibody stain that binds to insulin.

However, further tests made Melton and his team wonder whether the cells were really producing insulin. When they used a procedure called RT-PCR to detect messenger RNA that codes for insulin, they found almost none. And when the scientists tried the protocol on genetically altered ES cells that stain blue when they produce insulin, only 1 out of 100,000 cells turned that color, although between 10% and 30% of the cells were stained by the insulin antibody. When Melton's team cultured cells in a medium that lacks insulin, the clusters lost their insulin-positive staining—more evidence suggesting that the cells are somehow concentrating insulin from the medium.

Other researchers say the Harvard results underscore the need for vigilance, but they still believe their cells are producing insulin. “In light of this result, people will be more careful,” says developmental biologist Seung Kim of Stanford University, who has used a protocol with some similarities to McKay and Lumelsky's to make insulin-producing cells, as described in the 10 December 2002 online edition of the Proceedings of the National Academy of Sciences. But he points out that “we see something very different [from Melton's lab] when we use our protocol.” What's more, the cells they produced were able to rescue mice with an induced form of diabetes. Treated mice survived for up to 21 days, whereas controls died within 2 weeks.

Melton cautions that although the rescue looks impressive, the mouse model of diabetes may also be misleading. Despite his warnings, he remains optimistic. “We still haven't learned anything that makes me think it will be impossible to turn ES cells into insulin-producing cells. But at the same time, we must admit we don't yet know how to do it.”

6. EVOLUTIONARY BIOLOGY

On Ant Farm, a Threesome Coevolves

1. Elizabeth Pennisi

One of nature's oddest partnerships is that between certain ants and the fungi they cultivate. The two have evolved in synchrony for millions of years. But there is a third wheel in this relationship—a pathogen that infects the fungi. And now Cameron Currie of the University of Kansas, Lawrence, and his colleagues report on page 386 that, in terms of evolutionary history, this pathogen is as tightly entwined with the other two as they are with each other.

The data show that “almost immediately after this unique and beautiful cooperative system [between ants and cultivated fungi] evolved, the fungal parasites were there, and they've never gone away,” says Koos Boomsma, an evolutionary ecologist at the University of Copenhagen, Denmark.

Attine ants, which include leaf-cutter ants that can defoliate a tree in one night, can't digest plant matter themselves. But they retrieve leaves and other detritus from their surroundings and heap them up in their nests as offerings for hungry fungi. Thus nourished, the fungi send out nutrient-filled threads that are eaten by their faithful keepers.

Six years ago, researchers demonstrated that ant farming of fungi developed 50 million years ago. Since then, the ants and fungi have maintained their intimate symbiosis even as new species of both arose. Other research has shown that early on in evolutionary history, it's likely that the ant species weren't that picky about which fungal species they grew. But today, many of the 210 attine ant species are faithful to a particular fungus.

This happy relationship can be wrecked by the pathogen Escovopsis. Infections of this microfungus can reduce both the size of the “farm” and the ant workforce; some have destroyed entire colonies. The ants fight back by weeding out infected bits of fungi and removing the pathogen's spores.

To better understand the pathogen, Currie and his colleagues analyzed DNA from 17 strains, focusing on 2600 bases from several genes. Using the differences in the bases, they built an evolutionary tree. It pointed to a common ancestor that dated back to the days of the first cultivation of fungi by ants.
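In miniature, the kind of analysis described above starts from counts of base differences between each pair of strains, which feed tree-building methods. The sketch below is illustrative only: the strain names and 10-base sequences are invented stand-ins for Currie's 17 strains and 2600 bases, and real phylogenetics uses substitution models rather than raw counts:

```python
from itertools import combinations

# Toy aligned sequences (invented placeholders, not Currie's data).
strains = {
    "strainA": "ACGTACGTAC",
    "strainB": "ACGTACGTTC",
    "strainC": "ACGAACCTTC",
    "strainD": "TCGAACCTTG",
}

def hamming(s1, s2):
    """Number of differing bases between two equal-length sequences."""
    return sum(a != b for a, b in zip(s1, s2))

# Pairwise distance matrix: the raw material for distance-based
# tree-building methods such as neighbor joining.
distances = {
    (a, b): hamming(strains[a], strains[b])
    for a, b in combinations(sorted(strains), 2)
}

# Pairs with the fewest differences are grouped first when building the tree.
for pair, d in sorted(distances.items(), key=lambda kv: kv[1]):
    print(pair, d)
```

Rooting such a tree and dating its common ancestor, as the researchers did, requires outgroup sequences and calibration that this toy example omits.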

The researchers are not sure how Escovopsis initially got involved with this pair. Currie and his colleagues at first suspected that it was once an insect pathogen and switched hosts when the ants started cultivating fungi. But now they think Escovopsis started out as a pathogen of the free-living ancestors of the fungi currently farmed.

The evolutionary history also revealed that different branches of Escovopsis appeared in parallel with new branches of ants and fungi. “It looks to me as if the pathogen was locked into the relationship” early on, notes Daniel Janzen, an evolutionary biologist at the University of Pennsylvania in Philadelphia. Today, there are four lineages of the microfungus, and each is associated with a particular ant-fungi system. “It's a nice, clean example” of coevolution, Janzen adds.

The social circle isn't complete, however. Currie showed previously that there's a fourth partner that has yet to be studied. Many of the ants host bacteria on their bodies that produce antibiotics targeted against the pathogens. These too are likely to show some signs of coevolution, and DNA studies should help reveal their relationship to the ant and the fungi, he predicts.

Rod Page, a theoretical systematist at the University of Glasgow, U.K., knows of only one other instance in which researchers have attempted to understand a three-way partnership: that between a fig, a fig wasp, and a nematode that infects the wasp. Now, he adds, the ant-fungus-microfungus threesome “might encourage people to think about how many layers are in these associations and what [species] they are tracking” as these organisms evolve.

7. PLANETARY EXPLORATION

Scientists Pick Two Sweet Spots for Rovers on Mars

1. Richard A. Kerr

In the end, the choice of where to land NASA's two Mars Exploration Rovers next year turned out to be a no-brainer for planetary scientists. They just obeyed their thirst.

Researchers have pored over 185 potential landing sites for more than 2 years looking for technically practicable, reasonably safe, and scientifically interesting choices. In a meeting last week near the Pasadena, California, campus of NASA's Jet Propulsion Laboratory, they agreed that two sites stood head and shoulders above the rest. Those choices—the Terra Meridiani (now called Meridiani Planum) and Gusev Crater sites (Science, 10 May 2002, p. 1006)—satisfied NASA's desire to ferret out sites where water and therefore life might once have existed. One of the selections poses some lingering safety issues, says John Grant of the National Air and Space Museum in Washington, D.C., co-chair of the workshop. But the safer alternatives were mostly too “big, flat, ugly, and boring,” as one wit put it, to tempt the scientists.

The participants were intrigued by new evidence gleaned from 30-year-old spacecraft data suggesting that the hematite spotted from orbit at Meridiani Planum has an aqueous origin—perhaps an ancient hot spring. And new imaging from the Odyssey spacecraft alleviated concerns that all the deposits on the floor of Gusev, where water pooled billions of years ago, might now be covered by deep dust or volcanic ash. Instead, small impacts have blasted out debris that a rover could inspect, although the crater floor might be a tad rougher and windier for the lander than engineers would like.

Safety and science ruled against the two remaining alternatives. Doubts arose about whether the Isidis impact basin really would have water-washed rocks from the adjacent highlands, as hoped. The fourth potential target—sited in smooth, flat, and boring Elysium for its low winds—looks too inscrutable to merit the trip. The final decision rests with NASA's space science chief, Ed Weiler, who will make the call in early April.

8. SCHOLARLY CONDUCT

Skeptical Environmentalist Labeled 'Dishonest'

1. Lone Frank*
1. Lone Frank is a science writer in Copenhagen.

COPENHAGEN—A Danish panel decided last week that Bjørn Lomborg's controversial 2001 best-selling book, The Skeptical Environmentalist, is “scientifically dishonest.” The government misconduct committee also may be asked to examine whether Lomborg's views have colored the work of the environmental institute that he heads. At the same time, the Danish Research Agency (DRA) plans to review the panel itself, which is under fire for its vaguely worded report.

In The Skeptical Environmentalist, Lomborg, a 38-year-old political scientist, argues that ills ranging from air pollution to global warming are less injurious to the environment than has been claimed—a message at which many scientists take umbrage. After receiving three detailed complaints, DRA's Committee on Scientific Dishonesty mounted a 6-month investigation. It concluded that although Lomborg was not deliberately deceptive, his naiveté resulted in “systematic one-sidedness.” “Lomborg is highly selective in his use of references in practically every field he covers. This is not in accord with scientific standards,” committee chair Hans Henrik Brydensholdt, a high-court judge, told Science.

It's “an unusually hard ruling by a committee known for being immensely difficult to convince of any wrongdoing,” says ecologist Carsten Rahbek of Copenhagen University. The ruling, adds Stuart Pimm, an ecologist at Duke University who authored one of the complaints lodged with the panel, “serves as a warning to people who think they can hijack the scientific process.”

Lomborg defends his book and protests that the committee's 16-page report “does not actually give examples” of any missteps. Brydensholdt doesn't dispute that, saying that the details can be found in 600 pages of supplemental materials that the committee analyzed. Included there are allegations that Lomborg disregarded known extinction rates when estimating species loss and that he glossed over the effects of uncurbed population growth in some regions when discussing the reassuring implications of a global slowdown.

The controversy could also embroil the Institute for Environmental Assessment, which Lomborg heads. Prime Minister Anders Fogh Rasmussen told Danish TV last week that he still “has full confidence” in Lomborg but that it would be a “good idea” to have an impartial investigation into eight reports from the institute. One environmental group says that it plans to file a request with the scientific misconduct committee to investigate an institute report touting the benefits of burning aluminum cans instead of recycling them.

Meanwhile, some critics accuse the committee of having tailored its criteria for scientific honesty to fit the Lomborg case. DRA has agreed to hold a meeting later this month to look into this allegation.

9. TOXICOLOGY

Academy Panel Mulls Ethics of Human Pesticide Experiments

1. Jocelyn Kaiser

Over the past 6 years, a number of companies have been deliberately exposing human volunteers to pesticides to see how much is needed to trigger a metabolic response or even make subjects sick. Such “dosing” experiments offer the best safety data, industry officials assert. Yet despite their admitted utility, these tests have posed a quandary for the U.S. Environmental Protection Agency (EPA). If it accepts these data in its safety reviews, is the agency condoning practices that many consider unethical?

Such information “is valuable,” says EPA office of risk assessment director William Farland, and the agency “would like to find a way to bring human data into the process.” But, he adds, “whether it's ethical is the question we're all struggling with.”

In late 2001, the agency turned to the National Academy of Sciences (NAS) for advice. Last week, the new NAS panel heard from both advocates and opponents of dosing experiments. Their vehement debate underlines the difficulty the panel faces in trying to untangle the scientific and ethical questions.

The panel's recommendation, due in December, could be far reaching. In addition to pesticides, EPA has recently received data on humans deliberately exposed to groundwater contaminants, and the agency hopes to continue to use outside human studies testing the toxicity of air pollutants.

The trigger for this recent spate of testing was the 1996 Food Quality Protection Act, which mandated that EPA reduce acceptable levels of pesticides in foods to protect children. Up to that point, EPA had set a limit several orders of magnitude smaller than the minimum dose that causes effects in animals. Faced with the new law, pesticide companies began supplementing animal studies with human data in an effort to avoid a 10-fold safety factor built in to account for possible higher sensitivity in people; this could offset the tighter limits for children. Since the new law was enacted, companies have submitted about two dozen human toxicity studies to EPA (see table).
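The safety-factor arithmetic behind this maneuver can be sketched simply. All dose numbers below are invented for illustration; the 10-fold factors are the standard interspecies and intraspecies uncertainty factors plus the children's factor added by the 1996 law:

```python
# Illustrative only: how uncertainty factors shrink an animal-derived dose
# into an acceptable human exposure level. The NOAEL value is invented.
animal_noael_mg_per_kg = 5.0   # hypothetical no-observed-adverse-effect level

interspecies_factor = 10       # animals may underpredict human sensitivity
intraspecies_factor = 10       # people vary in sensitivity
children_factor = 10           # extra factor mandated by the 1996 law

limit = animal_noael_mg_per_kg / (
    interspecies_factor * intraspecies_factor * children_factor
)
print(f"acceptable level: {limit} mg/kg/day")  # 1000-fold below the NOAEL

# A human dosing study aims to replace the 10x interspecies factor,
# which would raise the acceptable exposure level tenfold.
limit_with_human_data = animal_noael_mg_per_kg / (
    intraspecies_factor * children_factor
)
```

The commercial incentive described above falls out of the last two lines: human data can loosen the limit by a factor of ten.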

In 1998, the Environmental Working Group (EWG) in Washington, D.C., questioned the ethics of these studies, in which volunteers (mostly in the United Kingdom) were paid $600 or more. EPA officials had become concerned as well and had shelved the studies until an advisory committee weighed in (Science, 1 January 1999, p. 18). That committee issued a report in 2000 saying that some human tests, such as metabolism studies, were acceptable under strict conditions—but most dosing experiments were not. Switching gears, EPA under the Bush Administration indicated that it would consider the tests but, facing heavy criticism, it held off and requested the academy study. Much of the ethics debate hinges on what are perceived to be industry's motives. Pesticide-dosing tests are unethical because they are done expressly for the benefit of industry and offer no conceivable advantage to society, asserted EWG's Richard Wiles. But Ray McAllister of CropLife America, an industry group, argued that dosing tests of pesticides are in fact no different from phase I clinical trials of drugs, which test toxicity and don't directly benefit the subject. Some of the toxicologists on the panel suggested that if the tests were well designed ethically and scientifically, the public might benefit from the data. John Doull of the University of Kansas Medical Center in Kansas City pointed out that some human studies have shown that people are more sensitive than animals to certain substances, such as lead, so human dosing experiments might sometimes result in more protective standards. But Jennifer Sass, a toxicologist at the Natural Resources Defense Council, argued that the dosing studies are often too small to be scientifically meaningful. Lynn Goldman, a pediatrician at Johns Hopkins University who headed the EPA pesticides office from 1993 to 1998, argued that EPA should ban dosing tests with well-studied organophosphate pesticides because they don't add much new information. 
But EPA should permit human-dosing studies of environmental pollutants such as ozone to which people are “exposed daily anyway,” she suggested. Goldman added that mechanistic studies involving human subjects might sometimes be justified, for example, with new pesticides. The overarching problem with all human data used by EPA, said Goldman, is that unlike the Food and Drug Administration, EPA has no protocols for human studies and lacks a stringent policy for ethics reviews of human data. The agency “needs strong and enforceable standards,” she says, an issue the panel will likely consider.

10. EVOLUTIONARY BIOLOGY

Uphill Dash May Have Led to Flight

1. Elizabeth Pennisi

A century-long flap among evolutionary biologists concerns how the ability to fly evolved in birds. Some propose that avian ancestors took wing by gliding from trees; others say early birds got a running start and lifted off the ground as they beat their feathered forelimbs. A new study suggests that neither idea is quite right. Instead, flight may have evolved in protobirds that used their wings to scale inclined objects and trees, says Kenneth Dial, an experimental functional morphologist and behavioral ecologist at the University of Montana, Missoula.

Dial's 15-year-old son clued him in to this new possibility. He claimed that he saw half-kilogram chukar partridges, whose flight development Dial studies, running straight up bales of hay. On page 402, Dial reports that the birds indeed flap their way up steep inclines—although not the way he and his colleagues would have thought—and suggests that avian ancestors may have done the same.

Dial hypothesizes that in evolving the ability to climb ever steeper slopes, these animals came to move their forelimbs as modern birds do—up and down—instead of just back and forth like reptiles. This switch set the stage for flight, he explains.
His finding “has blown the field wide open,” says Kevin Padian, an evolutionary biologist at the University of California, Berkeley.

Chukars are related to chickens, quails, and turkeys. These galliform birds' flight and running dynamics might reflect those of their great, great ancestors—the birdlike dinosaurs. Like them, the modern descendants have wings but don't fly well, and their legs are strong.

Working with his son Terry and another high school student, Ross Randall, Dial monitored chukars' movements and found that newly hatched birds could walk up slopes of 45 degrees and could master steeper inclines by flapping their baby wings. They tackled ever steeper slopes as they matured. Even more remarkable, adults could sprint up overhangs of 105 degrees, sometimes climbing 5 meters. These skills declined when the researchers clipped or removed the birds' feathers.

Using high-speed video recordings and devices that monitor acceleration, Dial analyzed wing strokes and the effects of flapping on the bird's body. As the birds run up an incline, the films reveal, they flap their wings at a different angle than when they are flying. The net effect pushes the bird into the incline so that its feet don't slip—akin to spoilers on a race car. On a vertical surface, they hold their wings as if flying. “The films are amazing,” says Padian. “[They] tell us something about living birds that we didn't know.”

Researchers interested in the evolution of bird flight are taking note, and some interpret the results as bolstering their own ideas. For those who think flight evolved from birds parachuting from trees, this behavior could solve the problem of how the birds got into the trees in the first place. In contrast, Luis Chiappe, a paleontologist at the Natural History Museum of Los Angeles County, sees the findings as supporting his theory that flapping wings led to ever faster running speeds that eventually made it possible to lift off.
“Although [Dial's] view falls between the strict application of the ground-up and trees-down theories, I would place it closer to the realm of ground-up theories,” he notes. But Dial thinks his findings add a new scenario to the debate. “These animals are doing something that none have proposed,” he says. The key innovation that allowed avian ancestors to fly, he claims, came as they evolved a new way of moving their forearms. Being able to flap wings up and down as well as back and forth was advantageous because it got the animals up steep surfaces. Once thus equipped, they could flap away as nature's first flyers, Dial says.

It might be impossible to determine when this new ability developed, but analyses of some fossils indicate that protobirds—much like chukars—were able to flap their wings either back and forth or up and down. Dial is now studying other, more primitive birds, such as South America's tinamous, to rule out the possibility that this locomotor skill evolved late in bird history.

However, neither Dial nor his colleagues think the issue is settled. Indeed, Chiappe points out, “I imagine people will continue to argue about the origin of bird flight for a long time.”

11. UNIVERSITY-INDUSTRY COLLABORATION

Last of the Big-Time Spenders?

1. Andrew Lawler

Much-debated university research deals backed by Amgen and Novartis appear on balance to have benefited the universities. But as they fade away, some observers predict that their kind will not be seen again

The Biotech Baking Brigade was waiting with a vegan pumpkin pie for Novartis chief executive officer Douglas Watson when he arrived at a press conference in Berkeley, California. Watson was there in November 1998 to announce that the Swiss pharmaceutical giant had made a pioneering deal with the University of California, pledging $25 million for broad access to Berkeley's plant and microbial research labs and the scientific discoveries coming from them.
The antibiotech activists, who opposed such close ties to industry, hit Watson but missed their second target, a Berkeley dean.

The scene was prophetic. Four years later, Basel, Switzerland-based Novartis and its successor, Syngenta, appear to have reaped more grief than glory from the investment, which granted them 5 years of unprecedented access to university research. All the while, plant and microbial research prospered at Berkeley. The agreement, which expires this year, is not likely to be renewed. Nor is another large-scale university-industry alliance that lapses next year—between the Massachusetts Institute of Technology (MIT) and the pharmaceutical firm Amgen.

The quiet dissolution of these two deals marks a retreat from a big-is-better approach to industrial collaboration that became fashionable in the 1990s—and sparked a heated debate about academic independence. But it isn't the critics who have done in large-scale agreements. The companies themselves, facing an anemic economy, are moving to old-fashioned contracts with individual scientists or making research deals with small companies.

Many big agreements are still in effect, however. MIT alone has partnerships valued at $173 million with seven major corporations, including Amgen. The university's dependence on industry funding, most of it from such deals, grew more than 50% between 1992 and 1999 (see table). And at many other institutions—from Atlanta's Georgia Institute of Technology to Seattle's University of Washington—the percentage of industry funds soared in recent years, often in the form of ambitious partnerships.

View this table:

Yet the goal of creating stable, long-term relationships between profit-oriented businesses and research-focused academic institutions remains elusive. “The dynamics of companies changed faster than the life of the partnerships,” says MIT engineer R. John Hansman. He serves on an MIT panel that later this month will release a study examining the effect of big umbrella agreements on faculty members. Berkeley is likewise evaluating the impact of the Novartis arrangement. Administrators and many faculty members at both universities say that the large deals proved to be a benefit for research without significantly affecting the independence of the academic researchers. But the boom appears to be over.

Tapping industry's cash

What drove these large alliances? Experts say that slow growth in funding by the federal government chased universities in the early 1990s into the arms of industry. There was a dawning realization, says MIT Provost Robert Brown, that “research universities needed to diversify to maintain their vitality.” One source of R&D cash was the high-tech sector. Companies eager to create new products had long funded basic research at certain universities. Monsanto Corp., now part of Pharmacia, had a relationship with Washington University in St. Louis, for example, and in the 1980s the German chemical company Hoechst poured $70 million into genetics research at Boston's Massachusetts General Hospital.
But until the mid-1990s, industry scientists worked mainly with individual investigators at major research universities.

That began to change when MIT actively sought an industrial partner. Its president, Charles Vest, was looking to diversify, and Gordon Binder, head of the pharmaceutical firm Amgen, based in Thousand Oaks, California, was looking for a major university to rev up its research endeavors. The result, a $30 million, decade-long deal unveiled in March 1994, was hailed by Binder as a “model for industry-academic partnerships.” Vest called the agreement “an essential element in the kind of future I see for MIT: a synergy of basic research efforts at universities and long-term commitments by industry.”

The company agreed to provide up to $3 million a year to MIT biology researchers; MIT agreed that patents resulting from work using Amgen funding would be held jointly and that up to four Amgen researchers would work in its labs as visiting scientists. The umbrella agreement replaced the myriad negotiations required for single-researcher projects. MIT administrators were pleased to have a single policy on matters such as intellectual property rights.

But the deal also stirred concern. Some MIT faculty members worried that the corporate influence would restrict the university's independence, that overburdened professors would neglect other duties, and that intellectual-property restrictions would limit what they could publish. If Amgen chose to file a patent, researchers would have to delay publication of findings made with Amgen money.

Brown and Chancellor Phillip Clay last year asked a group of MIT professors from different schools to examine the impact of the Amgen deal and another half-dozen large-scale agreements with companies ranging from Merck to Ford Motor Co. to DuPont. That report, due for release later this month, concludes that the chief concerns raised by faculty members never materialized. “For the faculty and students, it looks like there were a lot of benefits,” says study chair Glen Urban, an MIT management professor. “The [industry] money has freed up research and didn't add to the overload” of faculty work, he adds. Hansman, a self-professed skeptic of industry-university ties, “expected to see more problems,” such as faculty members neglecting other work or intellectual-property constraints. “I was an initial critic, but I came away with a more positive view.” The study concludes that such agreements are a good thing—but that MIT should limit their numbers.

Some researchers, such as MIT biologist Nancy Hopkins, have nothing but praise for the Amgen deal. “It was a phenomenal success for basic research,” says Hopkins, who benefited directly. “There was no attempt by either side to change the direction of our basic research. Amgen at that time wanted exactly what we were doing.” Hopkins says her zebrafish project to study vertebrate embryos led to “totally unexpected” discoveries about cancer and “would have been impossible without” Amgen's $1.2 million per year over 3 years.

But internal changes at Amgen doomed the agreement and chances for long-term synergy between Amgen and MIT. Binder left the company—as did the majority of its scientists—and the firm shifted its emphasis to marketing. As a result, Amgen's interest in the agreement flagged, although it continues to provide some funding. Binder is now managing director at a Los Angeles venture-capital firm.

So what did Amgen get for the $35 million it plowed into MIT from 1994 to 2002? Lita Nelson, director of MIT's technology licensing office, says that the deal resulted in “a few” patents, and that Amgen followed up on one. Perhaps more importantly, she says, the company's scientists got access to MIT researchers, postdocs, and graduate students. “You don't do this kind of deal for patents but to get to know the leaders in a field,” she adds. But once the company researchers leave, the business-to-university bonds are broken. “Virtually every person who worked on the Amgen-MIT agreement has left” the company, says Hopkins.

Current Amgen officials declined to comment. Binder defends the agreement as a wise investment, but he adds that such deals can fail unless the research—and the expectations—are carefully managed: “What doesn't work is to give a university a ton of money” and then sit back to wait for useful returns. Binder won't comment on Amgen's current direction, but one MIT scientist says the company was “kind of right to walk away” after changing its goals.

Berkeley boon

Four years after the MIT-Amgen deal was signed, Berkeley and Novartis formed a more intimate relationship. After highly secret negotiations, the company agreed in late 1998 to provide $25 million over 5 years to Berkeley's plant and microbial biology department. Honchoing the deal for industry was Steven Briggs, a respected former academic with close ties to the Berkeley faculty.

Unlike the MIT-Amgen deal, this one gave Novartis control over intellectual property from department research not conducted with company money. The company received many privileges, including the first right, based on a set formula, to negotiate licenses for inventions coming out of the department's labs.

Berkeley's Academic Senate leadership cried foul, warning of “an unhealthy narrowing of the nature and direction” of research. The outcry launched the Biotech Baking Brigade's pies and sharpened the national debate over industry-university relationships and academic freedom, including a critical 2000 cover story in The Atlantic Monthly.

Yet by 2000, agricultural biotech was in crisis, with opposition to genetically modified foods getting louder. Novartis, which had leapt into the field in the 1990s, was backing out, as were others. In 2000, Novartis and the London-based AstraZeneca combined their agricultural divisions to create a new entity called Syngenta, establishing a scientific center at the Torrey Mesa Research Institute in Southern California.

Since then, the prospects for corporate research have grown bleaker. Last month, Syngenta announced that it would close the Torrey Mesa facility, which Briggs ran, lay off dozens of staff, and transfer the remainder to other labs (Science, 13 December 2002, p. 2106). That move, say academic and industry officials, was the final nail in the coffin for the Berkeley deal.

Berkeley researchers who benefited from the agreement are sorry to see it end. “We hoped to have a longer term relationship,” says biologist Steven Lindow.
He says Syngenta helped him further research on bacteria that aggregate on and damage plants. Lindow wanted to explore signaling in the aggregates, but he didn't feel he had enough experimental data to win federal funding. “This was seed money you only dream of,” he adds. The company funding, covering one-third of his $250,000 research costs, made it possible to detect microbes on plants that interfere with bacterial signaling. Lindow hopes to develop transgenic plants that can fend off bacterial colonization, but he is still doing fundamental research. “We made a lot of headway” thanks to the industry funding, he says. “But it's a misapprehension that this was corporate driven.”

Academics on the outside envied the Syngenta deal. “What a boon for Berkeley!” exclaims Roger Beachy, president of the Donald Danforth Plant Science Center in St. Louis. Robert Price, Berkeley's associate vice chancellor for research, is just as enthusiastic. “This was the poster child for industry-academic partnerships,” he says. “I can't think of a better deal; it was a terrific deal. Would that we could get that kind of money again!”

That's unlikely. Many outside Novartis and Syngenta say the arrangement—despite criticism to the contrary—actually ended up favoring Berkeley (see sidebar). The approach has not been copied by other companies at major universities. “One could make the argument that the company made a bad deal,” says Lawrence Busch, a sociologist at Michigan State University in East Lansing who is studying the agreement.

Some suggest that corporate chiefs became dissatisfied: “Basel started asking, ‘Where are the goods?’” says one Berkeley official. But Briggs insists that it is too early to judge the investment. “No one expected that Novartis and then Syngenta would make money on the agreement before it was over,” he says. “At least one project appears to have great commercial promise,” he adds, although neither he nor Berkeley officials will be more specific. Perhaps product revenues will appear, he thinks, “4 or 5 years from now.”

Others say the returns could come eventually but will be hard to measure. “I don't think any blockbuster patents resulted,” says Carol Mimura, associate director of the Berkeley office of technology licensing. “There wasn't a huge [intellectual property] payoff of actual deliverables.” She, Briggs, and others agree, however, that the close contacts between industry and university scientists provided access that will pay off in the long term. “Novartis wanted to pick up the phone and call the Berkeley faculty” to find out what's hot in the field, notes Glenn Hicks, former research director for the South San Francisco biotech company Exelixis.

But that relationship depends in part on the continuity of staff at the company. And Syngenta, beset by financial and strategic uncertainties, might not reap those long-term benefits. “It was a risky venture” for the companies, Hicks says. “The odds were less than 10% that something very valuable would result.” Adds Beachy: “I wouldn't have invested in it.” The Berkeley deal, predicts Alan Jones, a biologist at the University of North Carolina, Chapel Hill, “is a deal that will never happen again.”

Taking stock

The heyday of the broad, Amgen- and Syngenta-style deal might be over, although some partnerships continue, such as those at MIT. On the West Coast, the Scripps Research Institute in La Jolla is just beginning its second 5-year agreement with Novartis, expecting to receive $20 million a year on top of the $25 million it got in the first 5 years.

But concern about the short-term payback of such deals—especially in a weak economy—is changing the picture. “There will be fewer companies knocking on the [academic] door and more emphasis on individual linkages rather than institutional ones,” says Beachy. “The Berkeley-Novartis deal was a high point,” adds Hicks. “Now, as industry contracts, agreements will be more targeted.”

That appears to be true for the drug business as well. Large pharmaceutical companies are trying to minimize research costs by contracting for work rather than conducting it in-house or through Amgen-like deals, says Binder. “Universities need to understand that the focus for pharmaceutical companies is moving from universities to biotech companies; you won't see giant pharmaceutical companies signing deals” with academic institutions, he adds. “Instead of signing with the big guys, they'll be signing with lots of little guys.”

Excluding clinical trial spending at medical schools, MIT leads U.S. universities in industry funding, with nearly 18% of its money coming from that source, according to data gathered by Tufts University urban and environmental policy researcher Sheldon Krimsky. But if industry money begins to dry up, universities such as MIT have the safety net of stable or increasing federal R&D spending to cushion any fall. The university also stands to benefit from smaller research contracts. “Given that many large industries are cutting back their own central R&D programs, they'll have to return to universities,” says MIT biologist Phillip Sharp. At Berkeley, industry funding looks minor only when compared with the state contribution. Losing the Novartis/Syngenta money “is a small matter compared to our state funding,” says Berkeley's Price.

If nothing else, the fading of large-scale alliances such as the Amgen and Novartis deals will give universities a chance to pause and take stock of the impact of industry funding on the nature and direction of academic research. MIT and Berkeley have started the process. The issues will be explored in depth in a major, 3-year research study funded by the U.S. Department of Agriculture and just begun by researchers at Portland State University in Oregon. “This is an opportunity,” says Beachy, “for universities to regain their third-party independence, regain the confidence of the public, and reevaluate whether they've gone too far on the commercial side.”

12. UNIVERSITY-INDUSTRY COLLABORATION

Berkeley Review Dismisses Critics' Fears

1. Andrew Lawler

The University of California, Berkeley, has concluded that its 1998 deal with Novartis was a smashing success. A recently completed internal review of the 5-year, $25 million agreement says that the university's academic soul was never for sale and that the only real drawback was the negative publicity generated by critics of the high-profile collaboration. But others say the study's focus is too narrow, and that negotiating the pact in secret was a big mistake that should never be repeated.

In exchange for support to its plant and microbial biology department, Berkeley gave Novartis the first right to negotiate licenses for about 30% of all patentable inventions made in the department's labs. All participating researchers agreed to submit their research reports 30 days prior to release and to sign confidentiality agreements if they used Novartis databases. Berkeley also gave company representatives seats on advisory committees managing the research.

Critics warned that the agreement would skew the direction of research, make the school too dependent on a single company, and hinder the free flow of information. But 4 years later, “virtually none of the anticipated adverse institutional consequences has been in evidence,” according to a review conducted by the office of Robert Price, associate vice chancellor for research, which helped negotiate the agreement.

Company money never contributed more than 27% of extramural funding, according to the report, and funding from other sources increased significantly during the agreement period. The advisory committees never rejected or even recommended modifications to any research proposal, Price says. His study also found that scientists didn't shift to a more applied focus during the period. And only one graduate student was asked to alter a planned presentation to avoid disclosure of proprietary information, which the student did willingly.
Of 25 faculty members, grad students, and postdocs interviewed, none considered publication delay “a significant problem.” This “hands-off posture,” says the Price report, allowed faculty members “to pursue more novel and innovative lines of inquiry.” The only cloud was “the negative publicity” surrounding the agreement, which the report dismisses as based on “misunderstandings, misperceptions, and erroneous predictions.”

But communication on the issue within Berkeley remains poor; several faculty members involved in the debate who were contacted by Science weren't aware of the report's existence. After reading it, some say its focus is too narrow and that it is self-serving.

The faculty member overseeing an external review of the Novartis agreement says that the furor over secrecy left bitter feelings on the Berkeley campus that should not be played down. “In the future, the university should be firmer about negotiating these agreements in a more transparent way,” says Anne McLachlan, a higher education researcher who is coordinating an external review led by sociologist Lawrence Busch of Michigan State University in East Lansing. Busch says he has yet to find any evidence that faculty members altered their research as a result of the agreement. His report, which will take a broader look at the agreement's impact, will be completed this summer.

13. NEUROSCIENCE

Deconstructing Schizophrenia

1. Constance Holden

Large-scale family studies and new drug probes focus on cognitive deficits that may lie at the heart of the disease

Generations of researchers have struggled to get at the core of schizophrenia, but with little success: No one knows the cause of this dread disease, many efforts to unravel the genetics behind it have ended in frustration, and no cure is in sight. But in recent years, scientists have made significant gains—not by tackling the disease head-on but by picking apart its components, especially those involved in cognition.
And this has given them hope that they might finally be able to unlock some of schizophrenia's intractable secrets.

The disease's most infamous feature is psychosis, which is characterized by delusions and hallucinations. But there are many other facets to schizophrenia, including flattened emotions and disordered thinking. As drugs have successfully controlled the psychosis, they have laid bare the persisting cognitive problems, leading researchers to view them as the symptoms closest to the heart of the disease.

Several large studies, one of which will begin this month, are probing these symptoms in both patients and their relatives. Schizophrenia has a strong genetic component; for example, a child of someone with the disease is 10 times as likely as the average person to develop it. The new studies might yield solid clues about the genetic causes of the disease and guide ways to treat it.

The introduction of antipsychotic drugs in the 1950s made possible the massive deinstitutionalization of schizophrenia patients, but there's been “no real change in the outcome of the illness,” says Philip Harvey of Mount Sinai School of Medicine in New York City. A new generation of drugs introduced in the early 1990s lacks many of the distressing side effects of earlier antipsychotic drugs, such as extreme agitation and the Parkinson's disease-like movement disorder tardive dyskinesia. But even these treatments “don't normalize patients,” says Michael Green of the University of California, Los Angeles (UCLA). “Fewer than 10% of schizophrenia patients ever get a regular job or live independently.” In fact, he says, outcomes aren't much better now than in 1895, when the treatment was fresh air and water.

Research on the brains of schizophrenia patients helps explain why the disease can be so intractable.
The frontal and temporal lobes are shrunken, neurons may be mispacked in some regions, and neural circuits based on the neurotransmitter dopamine go haywire, among other problems (see sidebar, p. 334). Some of these defects arise as early as the second trimester of fetal development. But schizophrenia is still “a disease whose mechanism is totally unknown,” says Carol Tamminga of the Maryland Psychiatric Research Center in Baltimore. Dozens of genetic studies have failed to reveal more than a tiny contribution from any single gene. And despite some common brain changes, anatomical studies are ambiguous: There is “no marked signature like plaques and tangles in Alzheimer's,” says Patricia Goldman-Rakic of Yale University.

In recent years, however, prospects for understanding the disease have brightened somewhat. Researchers are armed with new imaging technologies and the human genome sequence. But equally important is a shift in approach: Instead of comparing people with schizophrenia to those without, scientists are “deconstructing” the disease, says Stephen Hyman, former director of the National Institute of Mental Health (NIMH), attempting to unravel it by looking at its characteristic features in both the sick and the well.

The components that scientists are most eager to get a grasp on are the cognitive disruptions that affect short-term memory, attention, and so-called executive functions needed for planning and problem solving. “It's become increasingly clear that the driver of disability—and the reason patients never reintegrate [into the community]—is the cognitive deficits of the disease,” says Kenneth Davis of Mount Sinai School of Medicine. Researchers view psychosis as a secondary symptom; Harvard's Ming Tsuang compares it to fever: an acute response to other insults rather than a primary pathology. “If you can't think clearly, then you have delusions and hallucinations and thought disorganization,” says Monte Buchsbaum of Mount Sinai.
“Before, it was thought to be the other way around.”

Even by focusing on cognition, however, researchers don't expect to find simple, independent risk factors. Rather, says Goldman-Rakic, there are “multiple subtle deficiencies” that can accumulate to cross the threshold into schizophrenia. People are starting to think that “it's the particular combination [of disease components] and their magnitude” that tip a person over the edge into the disease, says Irving Gottesman, a schizophrenia researcher for 40 years.

Mind modules

The keys to unlocking schizophrenia may therefore lie not in the patients but in their relatives. According to Tsuang, somewhere between 20% and 50% of first-degree relatives exhibit some of the disease's features, such as social withdrawal, or more subtle symptoms, such as difficulty in visually tracking a moving object. Instead of looking for catch-all genes, scientists now hope to develop a picture of schizophrenia by tracking genes behind these behavior modules, which they call endophenotypes.

Unlike usually transient psychotic symptoms, traits that qualify as endophenotypes are stable over time. They often predate the onset of the disease and are little affected by antipsychotic medication. Looking at unaffected family members allows researchers to see features of the disease in unfettered form—uncontaminated by drugs, psychosis, or factors such as prenatal exposure to viruses that may have triggered the full-blown disease in their schizophrenic kin. One oft-cited analogy is with colon cancer: The disease itself is not genetic, but it is secondary to an endophenotype that is: a tendency to form polyps.

Studying relatives of schizophrenia patients is not a new approach, says Steven Moldin of NIMH, but previous work has been inconclusive, marred by small sample sizes and difficulties in standardizing measurements. Now three big new studies are joining the hunt for genes that contribute to the disease.
Researchers haven't entirely given up on finding individual genes that dramatically increase one's risk of developing schizophrenia. But they suspect that, as in other so-called complex diseases such as type II diabetes or heart disease, most genetic risk factors will exert subtle effects that are difficult to discern in small samples. The new studies will attempt to identify such genes, known as quantitative trait loci, by screening the entire genome sequence of each individual. “For schizophrenia now, you have weak linkage signals scattered all over the genome,” says Bernard Devlin of the University of Pittsburgh. “We want to come up with much stronger ones in specific locations.”

The studies represent “a tremendous conceptual leap” because of their size and the number of traits and genes they're examining, says Moldin, who believes that this “may very well make all the difference between finding genes and not.”

Two of the studies were launched in 2002 from the University of Pennsylvania in Philadelphia, headed by Penn's Raquel Gur in cooperation with the University of Pittsburgh. One will involve up to 150 families consisting of about 1000 people of European descent; each family has at least two affected members. The other will be by far the largest study ever undertaken of black schizophrenia patients. “African-American schizophrenia patients have been notoriously understudied,” says Moldin. Small studies have hinted that some genetic linkage patterns are different in the two races. Now, the Project Among African Americans to Explore Risks for Schizophrenia, launched in November, “will be the definitive study to resolve the genetic differences between African-American and Caucasian schizophrenia patients,” he predicts. This study will ultimately involve 5000 people and include a comparison of 400 pairs of siblings who both have schizophrenia.
Both studies will test participants' attention, working memory, and executive functions, including organization, problem-solving, and decision-making. They'll follow up on previous studies, such as those showing that when asked to learn a list of 16 words, most people will group the words into categories and remember them well. But people with schizophrenia and some of their relatives don't make categories, reflecting the difficulty they have in organizing information. They also have trouble retaining the memory of a target image after it has been “masked” by a second stimulus. And a test requiring the viewer to discriminate between different facial emotional expressions often stumps patients. Tsuang says the genetic components of these studies “will allow us to determine whether [different] liabilities” are caused by the same or different sets of genes. The third major study is a seven-center affair, the Consortium on the Genetics of Schizophrenia, headed by David Braff of the University of California, San Diego. The 5-year study, which will start this month, eventually will include 2200 people, all schizophrenia patients or first-degree family members. In addition to cognitive tests like those in the Penn studies, it will include three neurophysiological tests aimed at defining the defects in neural circuitry underlying the attention problems that characterize the disease. The researchers' working hypothesis is that patients' brains can't focus on a stimulus because they can't inhibit, or “gate,” irrelevant material. One test measures the suppression or inhibition of an electric signal called P50 that is normally elicited by a novel stimulus. If a novel sound is quickly repeated, a nonschizophrenic brain will suppress the P50 response by 80%. The brains of most schizophrenia patients, however, respond as though the second tone were as novel as the first. Braff calls this endophenotype “the genetically most advanced” of any yet studied.
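The P50 gating measure lends itself to a simple calculation. As a minimal sketch (the amplitudes and the exact formula here are illustrative assumptions, not the consortium's protocol), suppression can be expressed as the fraction of the first response that disappears on the repeat:

```python
# Illustrative sketch with hypothetical numbers: P50 gating expressed as the
# fraction of the first response that is suppressed on the repeated stimulus.
# A healthy brain suppresses the second response by roughly 80%.

def p50_suppression(first_uV: float, second_uV: float) -> float:
    """Fraction of the first P50 amplitude suppressed on the repeated stimulus."""
    return 1.0 - second_uV / first_uV

# Hypothetical amplitudes in microvolts:
control = p50_suppression(first_uV=5.0, second_uV=1.0)   # strong gating
patient = p50_suppression(first_uV=5.0, second_uV=4.5)   # second tone treated as novel

print(f"control suppression: {control:.0%}")   # → 80%
print(f"patient suppression: {patient:.0%}")   # → 10%
```

The ratio form is what makes the measure comparable across people whose raw evoked responses differ in size.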
The problem has been tracked down to a defect in a gene for a nicotinic receptor that has been linked to attention. In another measure of gating, a target appears on one side of a computer screen, and a person is supposed to look in the opposite direction—a task that requires suppressing the impulse to look at the flashed image. Braff is optimistic that some breakthroughs are on the horizon. “I think we're in the first phase of really being able to parse the complex genetic architecture” of the disease, he says.

Drugs for thought

The shift in focus toward cognitive dysfunction is apparent in the search for new drugs to treat the disease. “The drugs we have today are based on elaborations of drugs discovered serendipitously over 50 years ago,” says Wayne Fenton of NIMH. And although some help with the problems of disordered thinking, their main job is still to fight psychosis, he says. But recent research suggests a variety of targets directly related to cognitive functioning. One top candidate is the dopamine system. Current antipsychotic drugs damp dopamine signaling in regions such as the striatum, where the neurotransmitter is overactive. But the disease also reduces the amount of dopamine released in the prefrontal cortex, and Goldman-Rakic has found that such a dopamine shortage impedes primates' short-term memory. Nicotinic receptors offer another tantalizing target. Nicotine transiently normalizes the P50 gating response, leading some researchers to suspect that the high incidence of smoking among schizophrenia patients reflects unconscious efforts to self-medicate. Another candidate of interest is glutamate, the brain's main excitatory neurotransmitter. A shortage of it has been linked to both psychotic symptoms and cognitive impairments. Some novel compounds are already being tested on patients. Researchers at NIMH, for example, are experimenting with a drug targeted at an enzyme named COMT that breaks down dopamine in the prefrontal cortex.
Genetic studies have shown that certain alleles of the COMT gene run in families with a high incidence of schizophrenia. The suspect versions of COMT result in less dopamine in the prefrontal cortex, a region necessary for executive functions. Michael Egan and Daniel Weinberger of NIMH are conducting a study of COMT inhibitors in schizophrenia patients and controls with and without the high-risk combination of alleles. Although COMT may account for only a tiny fraction of cases, Egan says, “we hope to detect an effect with [functional magnetic resonance imaging] in the first 20 subjects … [and] get some preliminary results in a year.” With such studies, says Braff, ultimately, “we will be able to take gene and neurological profiles of individuals and give them drugs that are more tailored, much like oncology with cancer.” Achieving that vision will require new standards for evaluating cognitive-enhancing drugs. NIMH recently entered into a contract with UCLA to recommend regulations covering possible new drug candidates aimed at systems such as dopamine and nicotinic receptors. UCLA's Green says that at present, companies “don't want to invest too much till they know what [the U.S. Food and Drug Administration] will require, [and] FDA doesn't know what to require” because it has no definitions or standards for evaluating cognitive enhancement. A series of meetings, the first of which will be held in Washington, D.C., in April, will explore standards for assessing cognition in schizophrenia, targets for intervention, and appropriate experimental designs. But ultimately, says Tsuang, “the future is toward prevention.” Although some neural irregularities are laid down during fetal development, the disease emerges in late adolescence. Psychiatrist Larry Siever of Mount Sinai and the Bronx Veterans Administration Hospital explains that only in adolescence have the brain's executive areas matured and myelination finally been completed.
“If we could intervene before this vulnerable time, we might head off the negative spiral of cognitive impairment, psychosis, and social isolation,” he says. There is already evidence that endophenotypes can help predict the disease: A 30-year longitudinal study of 324 children of people with schizophrenia, conducted by Niki Erlenmeyer-Kimling of Columbia University, has shown that children with deficits in three parameters—attention, verbal memory, and a hopping test of gross motor skills—had a 50% likelihood of developing the disease. Some evidence suggests that preventive treatment can stave off the disease, says Tsuang. He has gotten good results in a very small study: Of six “at risk” family members with mild impairments who were treated with low doses of risperidone, a second-generation antipsychotic, five showed improvements. But drugs targeting the cognitive and emotional symptoms of schizophrenia are even more important, says Braff. He says that with a combination of knowledge of endophenotypes and a sophisticated new array of drug targets—and eventually gene therapy—the prospect of protecting vulnerable people from schizophrenia might become very real. Schizophrenia prevention might still be a long way off, admits Braff. But, he says, researchers are finally taking it “out of the realm of the ineffable” and putting it “into the realm of other genetically mediated complex diseases.”

14. NEUROSCIENCE

White Matter's the Matter

1. Constance Holden

Scientists have long known that connections somehow go awry in the brains of people with schizophrenia. Now advances in imaging and gene technology are allowing them to trace the axons that link neuron to neuron and make up the brain's white matter. “White matter is a very new focus,” says brain researcher Monte Buchsbaum of Mount Sinai School of Medicine in New York City.
Kenneth Davis of Mount Sinai uses microarray technology to compare the expression of thousands of genes in brain tissue from schizophrenia patients and controls. His team has identified a half-dozen malfunctioning genes for oligodendrocytes—the cells that make up the myelin sheath covering axons. Buchsbaum has corroborating data from an imaging technique, called diffusion tensor imaging, showing that the alignment of axons is askew in the frontal lobes of patients with schizophrenia. The new data comport with earlier observations from postmortem brains indicating that even though schizophrenia patients aren't short on brain cells, connecting fibers are sparse. Davis says this new tack is “very interesting, because other demyelinating diseases also have cognitive abnormalities.” Indeed, a genetic disease called metachromatic leukodystrophy often produces psychosis “indistinguishable from schizophrenic psychosis,” and the cognitive profiles are similar, indicating that the same brain areas are affected. Myelin defects might help scientists understand the cortical shrinkage seen in many cases of schizophrenia. But the finding doesn't settle the issue of causation, and it is just one piece of a sprawling puzzle. Says Yale University's Patricia Goldman-Rakic: “All of the things that can go wrong with a brain cell and its connections we can find evidence for in schizophrenia.”

15. AMERICAN GEOPHYSICAL UNION

A New Recipe for Cooking Up a 'Mini Solar System'

1. Richard A. Kerr

SAN FRANCISCO, CALIFORNIA—A record 9300 earth, ocean, atmospheric, and planetary scientists gathered here last month for the union's fall meeting.

The four large Galilean satellites of Jupiter look like a doll's version of our solar system. But planetary scientists now doubt that the two systems could have formed in the same way. According to the conventional view, the jovian system got its start when Jupiter was still in the midst of forming.
All the necessary ingredients were thrown into a seething ringlike disk orbiting the young planet, and the planetary bodies glommed together in a geologic instant. But although the solar system may have formed by that dump method, data from the Galileo spacecraft rule it out as an explanation for the jovian satellite system. Outermost Callisto, it turns out, is just too immature. At the meeting, planetary dynamicists Robin Canup and William Ward of the Southwest Research Institute in Boulder, Colorado, offered a new recipe for the jovian satellites: Slowly dribble the makings into a thin gruel of a disk orbiting a nearly complete Jupiter. The resulting gradual accumulation of satellites would keep the mix from overheating and solve a number of problems, including Callisto's immaturity. Galileo the spacecraft—named for the astronomer who lent his name to the large jovian satellites—flew close by each moon and measured the subtle variations in their gravitational pulls that reveal the nature of their interiors (Science, 13 June 1997, p. 1648). Innermost Io is all rock, Europa is mostly rock with a rind of ice and water, and Ganymede and Callisto are roughly half rock and half ice. Ganymede's rock separated from its accompanying ice to form a central core, but Callisto's rock never fully separated, remaining dispersed through its ice. That's all wrong for the dump method of satellite formation. A dense disk of gas and solid debris circling a nascent Jupiter would likely have hit 1000°C—too hot for new satellites to retain much water. And the Galilean satellites would have formed in 1000 years or less, so fast that heat would have built up from the fiery impacts of rock and ice feeding satellite growth. Such high temperatures drive the separation of rock and ice. To make matters worse, any disk of orbiting gas and debris will drag a body growing within it inward toward the central body. 
The more massive the disk, the faster the migration; a conventionally massive disk would have dragged Galilean-size satellites into Jupiter in just 100 years. A calmer, gentler satellite kitchen seemed in order after the Galileo mission's discoveries. Both planetary physicist David Stevenson of the California Institute of Technology in Pasadena and Canup and Ward concluded that the Galilean satellites were more likely to have formed slowly, late in the formation of Jupiter. After perhaps several generations of satellites had rapidly formed in a dense disk and been dragged to oblivion, Jupiter would have largely cleared away the gas and debris in its orbit that had been feeding the disk, although a trickle would have continued. Canup and Ward liken the process to water passing through a mineral-encrusted pipe: The water in the pipe at any one moment couldn't have delivered the mass of minerals, but given time, the deposition of minerals from slowly flowing water could. Canup and Ward have now done detailed calculations of the starved-disk conditions needed for the Galilean satellites to form. They find that the satellites could have grown as the last 1% or 2% of Jupiter's mass funneled through a disk orders of magnitude less massive than the earlier one. Satellite formation could then have taken 100,000 years or more rather than a millennium. Such a “slow-flow” process would have allowed time for a growing moon to lose heat, keeping temperatures low enough to form the icy satellites. Lower temperatures would also have allowed Callisto's rock and ice to avoid complete separation. And a lightweight disk would have slowed the inward migration of nascent moons enough to preserve them until the disk finally dispersed. Canup and Ward's approach “leads to satellites that match all the present observational characteristics of the Galilean satellites,” says planetary dynamicist Stanton Peale of the University of California, Santa Barbara.
“I don't see any caveats to be raised.” What the starved-disk model could use is more tests against observational constraints. Those could come next year, when the Cassini spacecraft arrives at Saturn and its lone large satellite, Titan.

16. AMERICAN GEOPHYSICAL UNION

Volcanic Blasts Favor El Niño Warmings

1. Richard A. Kerr

What could a fiery erupting volcano have to do with El Niño's gentle ocean warmth? A good deal, according to a new statistical analysis presented at the meeting. The new comparison of the timing of large tropical eruptions relative to ocean warmings during the past 300 years shows that an eruption doesn't trigger an El Niño every time, as some had argued. It does, however, appear to double the chances of an El Niño in subsequent years. Aside from giving forecasters an occasional edge, the finding supports a controversial proposal that global warming will push the eastern tropical Pacific toward cooler, La Niña conditions, rather than making it more like El Niño. The idea that volcanoes could trigger El Niños came to the fore after El Chichón's 1982 “eruption of the century” in Mexico ushered in the “El Niño of the century.” Some speculated that the haze of volcanic debris thrown into the atmosphere triggered the tropical ocean warming by changing where and how much the sun heats the atmosphere or ocean. But the specific mechanisms offered were unconvincing, and there were too few possible eruption-El Niño pairs in the historical record to persuade anyone that it wasn't all just a coincidence.
Statistical climatologists Brad Adams and Michael Mann of the University of Virginia, Charlottesville, and Caspar Ammann of the National Center for Atmospheric Research in Boulder, Colorado, took up the challenge, drawing on 300-year records of El Niño and of tropical volcanic eruptions developed by others since the 1980s. Independent records of volcanic activity included volcanic dust preserved in glacial ice and geologic evidence of the explosive power of specific eruptions. The researchers found up to two dozen large tropical eruptions, depending on the definition of “large.” Mexican tree-ring widths and global paleoclimate patterns served as more or less independent proxies for the climate state of the tropical Pacific: warm and El Niño-like, or cooler and La Niña-like. Using the statistical technique called “superposed epoch analysis” to gauge the likelihood that an eruption-El Niño pair was more than mere coincidence, Adams and his colleagues found that “the probability of finding an El Niño in the first year after [a large tropical eruption] is about twice as large as it is overall,” says Mann. The odds go from a random 20% chance of an ocean warming in the next year—with all its associated climate effects around the globe—to a 40% chance following an eruption. This volcano-El Niño link is statistically robust, say Adams and his colleagues, and they bolster their argument with a plausible mechanism. El Niño modelers Mark Cane of Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York, and Amy Clement of the University of Miami have suggested that—contrary to most climate models—global warming would not push the tropical Pacific into a broadly warmer, more El Niño-like state, permanently skewing world climate toward El Niño-type patterns. Their counterintuitive argument is that more warmth over the tropical Pacific would lead instead to a permanent tendency toward a La Niña state—warm in the west and cold in the east.
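The logic of a superposed-epoch-style test can be sketched in a few lines (all year lists below are invented for illustration; the published analysis is far more careful about dating and proxies): count how often an El Niño follows an eruption within a year, then compare against a null distribution built by shuffling the eruption dates.

```python
import random

# Toy version of a superposed-epoch test (all year lists invented): does an
# El Niño follow a large tropical eruption in the next year more often than
# chance would predict?

def hit_rate(eruption_years, nino_years):
    """Fraction of eruptions followed by an El Niño within one year."""
    return sum((y + 1 in nino_years) for y in eruption_years) / len(eruption_years)

record = range(1700, 2000)                  # a 300-year record
nino_years = set(range(1700, 2000, 5))      # ~20% base rate, invented
eruptions = [1712, 1754, 1789, 1814, 1834, 1883, 1902, 1963, 1982]

observed = hit_rate(eruptions, nino_years)

# Null distribution: reassign the eruption dates at random, many times.
random.seed(0)
null = [hit_rate(random.sample(record, len(eruptions)), nino_years)
        for _ in range(10_000)]
p_value = sum(h >= observed for h in null) / len(null)
print(f"observed hit rate {observed:.2f}, p = {p_value:.3f}")
```

In this invented record the eruption-following hit rate comes out near double the 20% base rate, mirroring the size of the effect the article reports; the resampling step is what guards against declaring a coincidence significant.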
In the east, wind-driven upwelling of cold water from the depths stabilizes the temperature and resists the warming, they said. That forces the tropical Pacific to do all its warming in the west and increases, rather than decreases, the east-west temperature difference. Similarly, Mann notes, the general cooling induced in the lower atmosphere by high-altitude volcanic haze blocking sunlight should cool the stable eastern tropical Pacific less than the west. That would decrease the Pacific's east-west temperature difference so that it resembles the broad warming of an El Niño. The new analysis is a breath of fresh air for researchers pondering whether there's a link between the two phenomena. Climate researcher Alan Robock of Rutgers University, New Brunswick, says the new result “certainly seems plausible. They looked at a much longer time period, giving them a better chance to pick a weak signal out of the noise.” The new mechanism also seems reasonable, he says. But he's not yet won over: “Does the real climate system work that way?” Robock and others say the next step is linking a volcano's fiery eruption to a model that depicts the behavior of the tropical Pacific. 17. AMERICAN GEOPHYSICAL UNION Another Way to Take the Ocean's Pulse 1. Richard A. Kerr SAN FRANCISCO, CALIFORNIA—A record 9300 earth, ocean, atmospheric, and planetary scientists gathered here last month for the union's fall meeting. Climate change may be slowing the heart of the ocean's globe-girdling circulation system, says a new study, with possible unpleasant effects for some nations bordering the North Atlantic. Oceanographers have long had trouble gauging the ponderous flow of the ocean's “conveyor belt” that carries southern heat northward before sinking into the depths beyond Iceland. If computer simulations of global warming are correct in suggesting that it might already be slowing, the result could be an ice age chill in this century for northern Europe (Science, 27 September 2002, p. 
2202). At the meeting, a group of oceanographers from the University of Bremen, Germany, proposed a sensitive means of gauging the conveyor's changing speed by simply measuring temperatures about a kilometer down in the tropical South Atlantic. A 50-year record of these temperatures hints that the conveyor belt may indeed be slowing. Carsten Rühlemann and his Bremen colleagues looked to the south because that's where the conveyor's upper-ocean currents pick up heat as they head north and join the Gulf Stream. They eventually lose their heat and sink in the far North Atlantic before heading back south as deep, cold currents. Physical oceanographers have occasionally measured the southward flow of conveyor water through the deep passages on either side of Iceland with current meters, or indirectly by measuring the distribution of density that drives deep currents. In fact, the strand of the conveyor to the east of Iceland does seem to have slowed in recent decades, but there are several strands, and year-to-year and decade-to-decade fluctuations in flow occur that may have nothing to do with changes driven by global warming. Sorting out those fluctuations would require frequent, comprehensive surveys, which would be expensive. As a supplement to North Atlantic observations, Rühlemann and colleagues suggest monitoring the buildup of heat in the south. A slowed conveyor would carry less heat away into the North Atlantic, the way a slowed flow to your shower would draw less heat from your water heater. The upper South Atlantic could then warm, the way the water heater could get hotter without the extra drain on its heat. A model ocean behaved just that way when its conveyor was slowed, the group reported. And the real ocean did, too, during Earth's last deglaciation.
Isotopic temperature records preserved in sediments bathed in northbound currents on either side of the tropical South Atlantic show “rapid and intense” warming at the same times that the conveyor slowed abruptly 16,000 and 12,000 years ago, they reported. Looking at modern records, the Bremen group found an irregular warming of 0.1°C to 0.2°C during the past 50 years in the waters of the northbound conveyor. “That might be a first sign that the [conveyor] water is responding to climate change,” says Stefan Mulitza of the Bremen group. Physical oceanographer Robert Dickson of the Fisheries Laboratory in Lowestoft, U.K., and other physical oceanographers hope to find more than a temperature shift. They want to see a trend in the density of conveyor waters, which determines how much water sinks at the far North Atlantic turnaround. Density (a combination of salinity and temperature) has been measured only haphazardly over the years. Researchers are gearing up to measure oceanwide flow using clever spot checking on either side of the North Atlantic, but decades must pass before a clear trend can emerge. In the meantime, perhaps southern temperatures can help.

18. RENEWABLE ENERGY

Norway Goes With the Flow To Light Up Its Nights

1. Richard Stone

Three European teams are racing to be the first in the world to harness a new source of power: underwater coastal currents driven by the tides.

CAMBRIDGE, U.K.—Unlikely as it may seem, a century ago an obscure town high above the Arctic Circle became the first city in northern Europe to install electric street lamps. Living where the sun doesn't rise for 2 months in midwinter, the 9000 residents of Hammerfest, Norway, have always been alert to new ways to brighten their lives. In early March, the town expects to chalk up another illumination milestone, when engineers flip the switch on a sea-floor generator that, for the first time, will feed electricity tapped from underwater tidal streams into a power grid.
The idea of harnessing the tides as an inexhaustible source of energy is not new. Since 1966, a 240-megawatt power plant alongside a dam on the River Rance near St. Malo, France, has trapped water at high tide and funneled it over turbines as the tide runs out. Canada, China, and Russia have built variations on this hydropower theme. And intense efforts are under way around the world to design generators, floating or situated on shore, that convert the punishing kinetic energy of waves into electricity. But dams are out of fashion in many countries, and the colossal energy of waves has remained tantalizingly beyond reach. “Tens of millions of dollars have been spent on offshore wave power, and not a watt has come out of it,” claims Tony Trapp, a mechanical engineer at Engineering Business Ltd., a firm in Northumberland, U.K., that's developing both tidal and wave generators. Engineering Business burst onto the tidal-stream scene last year with a generator it calls Stingray. The key feature of the 180-ton underwater machine is a hydrofoil attached to the end of a cantilever-like arm. The strong current forces the hydrofoil to oscillate like a whale's tail fin. This kinetic energy works a hydraulic pump at the base of Stingray; the hydraulic pressure drives the electricity-producing generator. In January 2002, the firm won a £1.7 million grant from the U.K. government's sustainable-energy program to take the project from rough blueprints to full-scale prototype. A barge lowered Stingray to the bottom of Shetland's Yell Sound on 13 September, where in 2 weeks of testing it averaged 90 kilowatts of output in currents of roughly 1.5 meters per second. “We've proved we can get a lot of power out,” says Trapp.
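A back-of-envelope check shows why modest currents can yield tens of kilowatts: the kinetic power flowing through an area A at speed v is P = ½ρAv³. The intercepted area and conversion efficiency below are assumptions for illustration, not Stingray's actual specifications.

```python
# Back-of-envelope tidal-stream power: P = 0.5 * rho * A * v**3, with rho the
# density of seawater, A the intercepted area, and v the current speed.

RHO_SEA = 1025.0   # kg/m^3, typical seawater density

def stream_power_kw(area_m2: float, speed_ms: float, efficiency: float) -> float:
    """Electrical power in kW extracted from a current of the given speed."""
    return 0.5 * RHO_SEA * area_m2 * speed_ms**3 * efficiency / 1000.0

# With an assumed intercepted area of ~100 m^2 and ~50% conversion efficiency,
# a 1.5 m/s current lands in the range of Stingray's reported 90 kW:
print(f"{stream_power_kw(area_m2=100.0, speed_ms=1.5, efficiency=0.5):.0f} kW")  # → 86 kW
```

The cubic dependence on speed is why developers hunt for the swiftest straits: a current twice as fast carries eight times the power through the same area.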

The next step, he says, is to redeploy Stingray this summer and start raising funds—at least $15 million—to build a 5-megawatt demonstration tidal-power station. Engineering Business hopes to install and connect it to the power grid in 2004. One drawback: Trapp expects production costs per kilowatt hour to be several times those of mainstream energy sources. “There's a lot of development to be done,” he concedes. “But then costs will fall.”

The origins of the Norwegian project date to the early 1990s, when Ole Martin Rønning, a Hammerfest businessperson and diving enthusiast, began to wonder whether the swift currents in the Kvalsundet Strait between mainland Norway and Kvaløy Island could be exploited for energy. An electricity shortage a few years ago, which forced Scandinavian grids to import power, led to the formation of a consortium that includes the Norwegian government, the petroleum giant Statoil, and the Foundation for Scientific and Industrial Research in Trondheim.

“At first, we thought it would be too expensive to be done,” says project leader Bjørn Bekken, a mechanical engineer at Statoil. But a fresh look yielded a workable design. The resulting company, Hammerfest Strøm, began installing its tidal mill at the bottom of the Kvalsundet on 25 September.

The prototype, designed to deliver 300 kilowatts of power, consists of three 10-meter-long fiberglass-reinforced blades fixed like a propeller to a massive steel tripodlike base. A big design challenge, Bekken says, was to cope with a potential power outage on shore, which would remove magnetic resistance and send the turbine into free spin. That, in turn, would place a “quite tremendous” drag on the blades as they sliced through the water.

Bekken estimates that the prototype will produce 0.7 gigawatt hour per year, a fraction of the 21 gigawatt hours consumed by the 1091 residents of Kvalsund County on the Norwegian mainland across from Hammerfest. (Kvalsund will also draw electricity from the prototype.) The last piece of the generator, a cylinder in which the kinetic energy of the blades will get converted into electricity, should be lowered into place next month. If all goes well, Bekken says that the next step will be a feasibility study for a commercial plant: 20 souped-up tidal mills that would generate 32 gigawatt hours a year.
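The prototype's numbers can be cross-checked with standard capacity-factor arithmetic (the figures are from the article; the calculation itself is generic):

```python
# Sanity-check the prototype's figures: a 300 kW machine running flat out all
# year would produce 300 kW * 8760 h = 2.628 GWh; the predicted 0.7 GWh
# implies the tides drive it at only a fraction of rated power on average.

HOURS_PER_YEAR = 8760

rated_kw = 300.0
annual_gwh = 0.7       # predicted annual output
county_gwh = 21.0      # Kvalsund's annual consumption

max_gwh = rated_kw * HOURS_PER_YEAR / 1e6       # theoretical maximum: 2.628 GWh
capacity_factor = annual_gwh / max_gwh          # tides flow only part of the day
share_of_county = annual_gwh / county_gwh       # slice of local demand covered

print(f"capacity factor: {capacity_factor:.0%}, county share: {share_of_county:.0%}")
```

A capacity factor around 27% reflects that tidal currents peak only a few times a day, and the 0.7 gigawatt hours would cover only a few percent of Kvalsund's 21-gigawatt-hour demand, which is why the commercial plan calls for 20 souped-up machines.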

A second major propeller-style tidal mill is emerging from the R&D pipeline. Marine Current Turbines Ltd. in Chineham, U.K., plans to test a prototype 300-kilowatt generator this summer off Lynmouth in southwestern England. Its twin rotors are mounted on a steel monopile sunk into the seabed that, in contrast to the Norwegian machine, also juts above the water's surface—allowing for easier maintenance of crucial components. “For a viable commercial technology, you need to be able to get at it easily and simply,” says managing director Martin Wright.

According to Stephen Salter, a mechanical engineer at the University of Edinburgh, U.K., tidal-stream technology is poised for a commercial breakthrough. His team has its own blueprints for a “futuristic” 20-megawatt tidal-stream generator that Salter hopes will draw investors if the British and Norwegian models are successful. “We're hoping they don't make any mistakes,” he says.