News this Week

Science  26 Jul 2002:
Vol. 297, Issue 5581, pp. 492
  1. EPIDEMIOLOGY

    Despite Safety Concerns, U.K. Hormone Study to Proceed

    1. Martin Enserink

    When a huge U.S. study of hormone drugs came to an abrupt standstill 2 weeks ago, women's health experts cast a curious glance across the Atlantic. Would leaders of an even bigger trial funded by the U.K. government and the Medical Research Council be shaken by the damning evidence of risk from the Women's Health Initiative (WHI)? Would they halt their study and advise participants to stop taking their pills? Not at all, is the surprising answer.

    Uncertainty.

    The range of likely risk from hormone therapy is wider when the data are adjusted for multiple sampling.

    SOURCE: JAMA

    Meeting last week, an independent safety panel for the $32 million Women's International Study of long Duration Oestrogen after Menopause (WISDOM) unanimously concluded that WHI's evidence that hormone replacement therapy raises the risk of heart disease is not convincing. The trial's steering committee, equally skeptical, decided to forge ahead with WISDOM. The U.S. researchers “have not determined the size of the risks reliably,” says the chair of the steering committee, Oxford epidemiologist Rory Collins. This split, according to many observers, reflects a difference between cultures as much as a disagreement over the science.

    The British researchers argue that WHI's ambiguous results make WISDOM even more important. Proponents of hormone therapy also take heart in the prospect that WISDOM's fresh data might offset the recent findings. But the decision is drawing criticism from some U.S. researchers, who say the study puts women at unnecessary risk. “It may be good for science,” says epidemiologist Curt Furberg of Wake Forest University in Winston-Salem, North Carolina, “but patients will pay the price.” WHI director Jacques Rossouw calls the British misgivings about his study “bogus.”

    Both WHI and WISDOM were designed to test a combination of estrogen and progestin. These hormone substitutes are taken by millions of women to counter short-term menopausal symptoms such as hot flashes and insomnia, or to prevent age-related diseases over the long term, such as osteoporosis and heart attacks (Science, 19 July, p. 325). More than 16,000 U.S. women participated in WHI; WISDOM has enrolled 5000 British women so far but aims to recruit 17,000 more in the United Kingdom, Australia, and New Zealand.

    The U.S. trial ended when an independent data and safety monitoring board (DSMB) concluded 31 May that the increased risk of breast cancer from hormones had crossed a previously set line. Typically, DSMBs study the accumulating data several times a year and recommend quick action if early results show that a drug is clearly effective—making it unethical to continue participants on placebo—or clearly harmful. Statistics can help clarify the choices, but “in the end, it's often a difficult decision,” says statistician Janet Wittes, president of the Statistics Collaborative in Washington, D.C., and chair of the panel that has monitored WHI since 1997. In the case of WHI, the data failed to convince WISDOM's counterpart review panel, chaired by statistician Richard Gray of the University of Birmingham, U.K.
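
    That “previously set line” is a pre-specified statistical stopping boundary. As a rough illustration of how such monitoring works (not the WHI board's actual rule, which is defined in the trial's protocol), here is a minimal Python sketch of an O'Brien-Fleming-style boundary, a common choice that demands overwhelming evidence at early looks and relaxes toward the final analysis. The constant is a textbook value for five equally spaced looks; the z-statistics are invented.

    ```python
    # O'Brien-Fleming-style stopping boundary of the kind a DSMB might
    # pre-set: very strict at early interim looks, relaxing toward the
    # final analysis. C = 2.04 is the textbook constant for K = 5 looks
    # at two-sided alpha = 0.05; the observed z-statistics are invented.
    import math

    C, K = 2.04, 5
    for k, z_obs in enumerate([0.8, 1.3, 1.9, 2.6, 2.2], start=1):
        boundary = C * math.sqrt(K / k)
        action = "STOP" if abs(z_obs) >= boundary else "continue"
        print(f"look {k}: |z| = {abs(z_obs):.2f} vs boundary {boundary:.2f} -> {action}")
    ```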

    The two teams look at risk differently. In their paper in the 17 July issue of the Journal of the American Medical Association (JAMA), the U.S. researchers present the average risks for each adverse outcome along with a “confidence interval”—a range of values between which the true value lies with 95% certainty. But the approach has a problem, says Gray: Analyzing data multiple times and testing for several outcomes increases the chances of getting a spurious result.

    The WHI team also presented “adjusted confidence intervals,” statistically corrected for multiple sampling, which show a different picture. For each of the three main adverse outcomes, the adjusted range includes the possibility of no increased risk at all or even a decreased risk (see graph). That leaves room for doubt about hormones' effect on heart disease, says Gray—especially because the increased risk conflicts with previous data. (His panel does not question the findings on stroke and breast cancer.)
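
    The effect of such an adjustment is easy to demonstrate. The minimal Python sketch below uses a Bonferroni-style correction as a simple stand-in for the WHI statisticians' own adjustment scheme; the hazard ratio, standard error, and number of outcomes are invented for illustration. An adjusted interval that stretches below 1.0 is consistent with no increased risk at all, which is precisely the ambiguity Gray's panel points to.

    ```python
    # How a 95% confidence interval for a hazard ratio widens when
    # corrected for testing several outcomes. Bonferroni is a simple
    # stand-in for the WHI team's actual adjustment; hr and se are
    # hypothetical numbers, not figures from the JAMA paper.
    import math
    from statistics import NormalDist

    def hazard_ratio_ci(hr, se_log_hr, alpha=0.05, n_tests=1):
        z = NormalDist().inv_cdf(1 - (alpha / n_tests) / 2)
        log_hr = math.log(hr)
        return math.exp(log_hr - z * se_log_hr), math.exp(log_hr + z * se_log_hr)

    hr, se = 1.29, 0.12
    print(hazard_ratio_ci(hr, se))             # nominal: ~(1.02, 1.63), excludes 1.0
    print(hazard_ratio_ci(hr, se, n_tests=7))  # adjusted: ~(0.93, 1.78), includes 1.0
    ```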

    To Rossouw, this is a quibble. He says the adjusted intervals were put in the JAMA paper mainly as a service to “aficionados,” adding that most trial reports never even mention them. He thinks the WISDOM researchers are looking for “an excuse to dismiss the results. … They have their own trial to protect.”

    The rift highlights how ethical frameworks differ on opposite sides of the Atlantic, others say. Whereas U.S. DSMBs tend to emphasize individual patients' safety, “the Brits are much more comfortable proceeding with a trial in order to get statistically significant results,” says Wittes. Collins points out, however, that stopping a trial early is not always the best way to protect patients. He recalls that several U.S. trials of AZT, the first AIDS drug, were cut short because it seemed to prolong life. But despite pressure to halt, a French-British collaborative study, called Concorde, continued and eventually showed that AZT alone had no overall effect on mortality.

    What remains unclear in this technical debate is how women in the U.K., Australia, and New Zealand will respond. Will they want to enroll in WISDOM now that American participants have been advised to stop taking the pills? All current and future WISDOM participants will be told about the U.S. results, says Collins, and some might opt out. But it's possible that the dropouts will be offset by newly motivated recruits. In the past, some women were so convinced about the benefits of therapy that they refused to participate, he says. Some might now enroll in WISDOM to support a second look at that question.

  2. NIH BUDGET

    Senate Panel Adds 16% to Complete Doubling

    1. Jocelyn Kaiser

    The last leg of the biomedical community's campaign to double the budget of the National Institutes of Health (NIH) over 5 years got a boost last week from a key Senate spending panel. The Senate Appropriations Committee approved $27.2 billion for NIH for the year beginning 1 October, a 16% increase over 2002 and twice the agency's 1998 level.

    Senators Tom Harkin (D-IA) and Arlen Specter (R-PA), leaders of the subcommittee that oversees NIH's budget, celebrated the doubling victory a day earlier at a jubilant press conference. Joining them were new NIH chief Elias Zerhouni, his deputy Ruth Kirschstein, and six directors of the larger NIH institutes, who donned spanking new white lab coats with their names sewn on them for the occasion. “This is a red-letter day. It's a milestone,” said Harkin.

    Well covered.

    Senators Tom Harkin, at podium, and Arlen Specter celebrate their proposed boost in the NIH budget with NIH top brass.

    CREDIT: JONATHAN FISHBURN

    The amount approved by the Senate panel is consistent with what President George W. Bush had proposed, but the panel didn't follow the Bush plan to the letter. Instead, it trimmed $263 million from the National Institute of Allergy and Infectious Diseases (NIAID)—which had been slated for a 57% boost, mainly thanks to $1.5 billion for bioterrorism research—and spread it around. That put many institutes that had been slated for 8.4% increases in the president's budget closer to 9%, a committee staffer says. “All these other problems—cancer, Alzheimer's, Parkinson's—did not stop being problems on September 11,” explained the aide. The bill gives NIAID a free hand in applying the cuts.

    The lawmakers also ignored the president's request to designate a total of $5.1 billion for cancer-related research because “we would have considered that to be earmarking for a disease,” the staffer says. Even so, they matched his proposed 12% boost for the National Cancer Institute, to $4.6 billion.

    Although elated by the Senate mark, biomedical lobbyists note that the figure might be lower in the House, where the corresponding spending panel has a smaller overall total to work with. The House is expected to take up the spending bill in September after a monthlong recess; the two versions must then be reconciled.

  3. GLOBAL CHANGE RESEARCH

    Senate Puts the Heat on Science Nominees

    1. Jeffrey Mervis

    A Senate panel turned a routine confirmation hearing last week into a withering, bipartisan assault on the Bush Administration's climate change policy. The targets—nominees for two senior White House science posts, one of whom would coordinate the Administration's research agenda on climate change—were left speechless and politically wounded by the criticism.

    Heated words.

    White House nominees Richard Russell (left) and Kathie Olsen field sharp questions about climate change at Senate hearing.

    CREDITS: NASA/BILL INGALLS

    The unsuspecting victims were Kathie Olsen and Richard Russell, in line to be the principal deputies under John Marburger, the president's science adviser and head of the Office of Science and Technology Policy (OSTP). Olsen would handle science policy, including the Administration's $1.7 billion global change research program, and Russell would coordinate technology policy. Russell holds a bachelor's degree in biology and spent 10 years as a House staffer before joining OSTP last year as chief of staff; Olsen, a Ph.D. neuroscientist, worked at the National Science Foundation for 15 years before becoming NASA's chief scientist in 1999.

    Normally such hearings are innocuous affairs that showcase a nominee's expertise. But Senator John McCain (R-AZ), the committee's ranking member, was already steamed about comments Marburger had made to the same panel a week earlier in response to questions about the Administration's climate change policy. Stressing the uncertainties, Marburger had explained that projections of global warming rest on assumptions about possible future levels of greenhouse gases and should not be confused with predictions derived from known facts about current emissions. McCain, claiming that Marburger's testimony had “no credibility,” offered Olsen and Russell a litmus test.

    McCain first read a description of how “warming in the 21st century will be significantly larger than in the 20th century … and temperatures in the U.S. will rise by about 5°-9°F (3°-5°C) on average in the next 100 years.” Without identifying the source—a recent White House report that President George W. Bush has dismissed as mere speculation—McCain then asked each nominee whether he or she agreed with the statement.

    Olsen and Russell initially refused to answer the question. Olsen, despite NASA's dominant role in the global change initiative, later said that she “was nervous … [and] didn't understand the paragraph,” adding that “I don't know if we have enough data to make that statement.” Foiled in his attempt to solicit the nominees' views on climate change, McCain declared that “I will oppose your nominations until I get an answer” and stalked out of the hearing.

    Senator Ron Wyden (D-OR), who chaired the hearing of the Senate Commerce, Science, and Transportation Committee, tried to mollify the stunned witnesses by assuring them that he supported their nominations. Indeed, this week the panel approved both nominees, with McCain the sole dissenter. Still, Wyden echoed McCain in expressing his “disappointment” with their grasp of the issue. “The science behind climate change is no longer in question,” he said pointedly. Olsen, noting that her background is in neuroendocrinology, promised to “become more knowledgeable” on the subject in the coming months.

  4. PALEONTOLOGY

    Fossil Bird From China Turns Tail, Spills Guts

    1. Erik Stokstad

    If paleontologists could take a field trip back in time, many would head straight for the ancient lakebeds of what is now northeastern China. Back in the early Cretaceous, some 120 million to 125 million years ago, these shores buzzed with strange life that has come to light only in the past few years: feathered dinosaurs, odd mammals with hindlimbs like those of reptiles, and primitive flowering plants (Science, 12 January 2001, p. 232). Birds teemed too, as more than a dozen unearthed species and hundreds of specimens attest.

    Stuffed.

    Remains of Jeholornis include fossilized seeds (right) from the bird's last meal.

    CREDITS: IVPP

    Now the record of early avian life has gotten even richer. In the 25 July issue of Nature, two Chinese paleontologists describe one of the most primitive birds ever discovered, Jeholornis. The bird's peculiar tail underscores the now-common theme of kinship with dinosaurs. “This is a critical specimen that combines features that are typical of dromaeosaurs with things that are typical of more advanced birds,” says Luis Chiappe of the Natural History Museum of Los Angeles County. So well preserved is the turkey-sized specimen that even its last meal is plain to see.

    Prodded by memories of Archaeoraptor, a birdlike dinosaur from the same region that was shown to be a fake assembled from two creatures (Science, 14 April 2000, p. 238), paleontologists Zhonghe Zhou and Fucheng Zhang of the Institute of Vertebrate Paleontology and Paleoanthropology in Beijing took care to make sure that the new specimen was genuine. After examining how bones matched up between the several slabs, they concluded that “the possibility of a composite specimen … can be ruled out.” Zhou and Zhang note that, unlike Archaeoraptor, the new specimen was completely prepared in the lab.

    What makes Jeholornis unique among birds from the early Cretaceous is its tail. Birds usually have short tails tipped by a few vertebrae fused into a rodlike pygostyle. In contrast, the 42-centimeter tail of Jeholornis consists of at least 22 individual bones, just like a dinosaur's tail. This kind of tail also adorns the end of the most famous fossil bird, the 145-million-year-old Archaeopteryx from Germany, as well as that of Rahonavis from the late Cretaceous in Madagascar. Finding a third example in China shows how far-flung such long-tailed early birds were, says Cathy Forster of the State University of New York, Stony Brook.

    Despite its antiquated tail, Jeholornis sported an advanced shoulder girdle capable of powering flight. That fits with the notion that early birds evolved their front limbs first, Chiappe says, and only later modernized their tails into part of the flight gear. A computer analysis of 201 anatomical features placed Jeholornis with Rahonavis as the closest relatives of Archaeopteryx on the bird family tree.

    Jeholornis also has something to say about what fueled its airtime. Inside its chest cavity lie the fossils of more than 50 undigested seeds the size of watermelon seeds. That's a new item on the menu of Cretaceous birds. Specimens in Spain contain crustaceans, and North American fossils dined on fish, but Jeholornis is the first proven seed eater. “The main importance is that it increases our knowledge of the ecological diversity of early Cretaceous birds,” says paleontologist Tom Holtz of the University of Maryland, College Park. Adds would-be time traveler Chiappe: “Bird watching back then would have been a lot of fun.”

  5. INTELLECTUAL PROPERTY

    U.S. Asks for Delay in Harvard Theft Case

    1. Andrew Lawler

    BOSTON—Government prosecutors surprised a court last week by seeking a 6-month delay in proceedings against two biologists accused of stealing research secrets from Harvard University. Observers say the move, which left the judge flabbergasted, suggests that the government is not yet ready to take its case to trial.

    Jiangyu Zhu and Kayoko Kimbara were jailed last month after an FBI complaint alleged that they conspired to steal research materials and secrets when the two, who are married, left the lab of cell biologist Frank McKeon at Harvard Medical School in 1999. Released on bail, they appeared on 17 July at a 10-minute hearing here in U.S. District Court. The prosecutors and defendants have both agreed to a 180-day continuance, which prosecutor Robert Wallace says will allow the two sides “to discuss the case and see if [there is] any resolution.” He declined further explanation of why the government supports a delay.

    “This is an extraordinarily unusual request,” said presiding Judge Robert Collings, adding that he had not encountered anything similar in 20 years on the bench. The government normally has 30 days after an arrest to seek an indictment.

    The FBI complaint charges the couple with shipping material of significant commercial value to Japan and mailing reagents and other materials from Harvard to Zhu at the University of Texas, San Antonio, where he had a brief postdoctoral appointment. Their intent, says the complaint, was to use Harvard trade secrets for their own profit (Science, 28 June, p. 2310). Last week, Nagoya, Japan-based Medical and Biological Laboratories Co. revealed that Zhu sent the company genetic material and asked it to make an antibody that might hinder rejection of transplanted organs.

    In a statement, the company said it created the antibody and sent it to Zhu's Texas lab in February 2000. The company, which employs a colleague of Zhu's, says it was not contacted by the U.S. Department of Justice, which declined comment. But Harvard spokesperson Donna Burtanger praised the lab's conduct, saying that “they cooperated fully and sent everything back” to Harvard. The company also said that it has tightened up oversight of interactions with outside researchers.

    In a statement released by Zhu and Kimbara's lawyers on their behalf, the researchers say that “the government did not have the benefit of hearing our position—that no crime was committed—before it acted.” Spokespeople for both sides declined to say if the FBI ever interviewed the two scientists. The researchers, the statement adds, “have no interest whatsoever in wrongfully commercializing their work.”

    Zhu, a Chinese citizen, and Kimbara, a Japanese national, are restricted in their movements and are on administrative leave from their jobs pending investigations by their current employers, Zhu at the University of California, San Diego (UCSD), and Kimbara at Scripps Research Institute in La Jolla, California. UCSD spokesperson Kim McDonald says “nothing [Zhu] has done here leads us to question his work,” and Scripps spokesperson Robin Clark says that Kimbara “will be given the opportunity” to return if the investigation finds no wrongdoing.

    Zhu and Kimbara were mobbed by Japanese television reporters and photographers on entering and leaving the courthouse. A similar case last year involving researchers at the Cleveland Clinic Foundation in Ohio (Science, 18 May 2001, p. 1274) also received intensive coverage in Japan. The two cases have raised concern in Japan that Japanese researchers are ill prepared to cope with stricter U.S. laws on intellectual property.

  6. NUCLEAR FUSION

    Chemistry Casts Doubt on Bubble Reactions

    1. Charles Seife

    A controversial claim that scientists had detected signs of fusion in a rapidly collapsing bubble may have further imploded this week. A new experiment that measures the energy budget of a collapsing bubble for the first time indicates that so-called bubble fusion is highly unlikely to occur.

    No nukes?

    Sonoluminescent bubbles look bad for fusion.

    CREDIT: K. SUSLICK/UNIVERSITY OF ILLINOIS, URBANA-CHAMPAIGN

    The controversy involves a peculiar phenomenon known as sonoluminescence. If you zap a tub of liquid with sound waves in the right manner, you can “crack” the liquid, creating bubbles that contract so violently that they glow with light. Earlier this year, Science published a paper by engineer Rusi Taleyarkhan of Oak Ridge National Laboratory in Tennessee and five colleagues reporting the detection of neutrons that the researchers claimed were generated by a fusion reaction inside a sonoluminescing bubble in deuterated acetone. The claim was greeted with skepticism even before it was published (Science, 8 March, pp. 1808 and 1868).

    The chemistry that goes on inside the bubbles is a mystery—particularly in single-bubble sonoluminescence, in which a solitary bubble is trapped by the acoustic waves. Single-bubble reactions are so small that “it's very tough” to detect the products that the bubbles create, says Lawrence Crum, an acoustician at the University of Washington, Seattle, and one of the first people to study single-bubble sonoluminescence.

    But Kenneth Suslick and Yuri Didenko, sonochemists at the University of Illinois, Urbana-Champaign, report in the current issue of Nature that they have used fluorescent dyes to measure the amounts of hydroxyl radicals (OH) and nitrite ions (NO2) produced during single-bubble sonoluminescence in water—the products expected when the extreme conditions in the bubble shred water, nitrogen, and oxygen molecules. From the concentrations of these molecules created by different single-bubble sonoluminescence experiments, Suslick and Didenko were able to figure out the conditions that led to those reactions—and where the energy goes when the bubble collapses. “You get the first measure of the energy balance, a feeling for how much energy goes in and where it comes out,” says Suslick. “Chemical reactions are a major mechanism for the dissipation of energy,” he adds, noting that the experiment roughly matches theorists' expectations.

    Suslick's work implies that—especially in a volatile liquid such as acetone—much of the energy of the collapsing bubble is dissipated by tearing molecules apart, so such an experiment is highly unlikely to initiate fusion. “They're saying, ‘We understand what's going on inside the bubble,’ and if this is what you believe the science is, you should be suspicious of the Taleyarkhan paper,” says Crum. The authors of that paper could not be reached for comment. But for now, at least, skeptical voices rule the acoustic waves.

  7. CLIMATE PREDICTION

    Signs of Success in Forecasting El Niño

    1. Richard A. Kerr

    El Niño, the sleeping giant of climate, awakened earlier this month, according to government scientists. Six months ago, those same researchers went out on a limb when they recognized stirrings around the Pacific as likely signs that El Niño's warming of tropical waters would soon return after a 4-year absence. Their early-January forecast appears to be holding up; if the warming trend continues, El Niño's often disastrous weather shifts around the globe should crest around the end of the year. A full year's warning of El Niño's peak would be much better than forecasters achieved last time. But soon will come the next test: Will this El Niño develop into a weak-to-moderate warming this winter, as now predicted, or another barnburner like last time?

    Good call.

    Early this year, government scientists correctly predicted an El Niño warming while the tropical Pacific was still near normal.

    CREDIT: CLIMATE PREDICTION CENTER/NCEP

    El Niño forecasting has a long and checkered history. In the 1960s and even early '70s, monitoring of the tropical Pacific was so spotty that full-blown El Niños could pop up around Christmastime without warning. A simple model of tropical Pacific winds and currents successfully predicted a Pacific warming for the first time in 1986, but forecasters' optimism was short-lived. It was only with the 1997 super-El Niño that human and computer forecasters had some measure of success, and even then they were criticized for a months-late alert and gross underestimation of the event's huge scale (Science, 13 October 2000, p. 257).

    This time around, some forecasters seem to have gotten the onset of El Niño right. At the National Weather Service's Climate Prediction Center (CPC) in Camp Springs, Maryland, meteorologist Vernon Kousky and a half-dozen colleagues put out a monthly “diagnostic discussion.” Their Web site report sorts through observations from ships, islands, satellites, and buoys across the Pacific and evaluates forecasts from more than a dozen models run by CPC and others. Last fall, while most of the tropical Pacific was at near-normal temperatures, the CPC group started talking about a warming trend that would likely continue into 2002, although as a group the models dithered from month to month between calling for normal and somewhat warmer conditions.

    On 9 January, while the crucial central tropical Pacific was still normal, the CPC group came out with its first solid prediction: “It seems most likely that warm-episode conditions will develop in the tropical Pacific during the next 3–6 months.” That didn't contradict the majority of model forecasts, and it fit what CPC researchers had been seeing in the changing circulation of atmosphere and ocean.

    Not everyone agreed, however. “A lot of us felt they were too quick” to call for an El Niño, says meteorologist Anthony Barnston of Columbia University's International Research Institute for Climate Prediction (IRI) in Palisades, New York. On its public Web site, the IRI group started in January with a 60% probability of an El Niño developing and built more or less steadily to a June forecast with a 90% chance. At the same time, the models were developing a consensus for an El Niño peaking at weak-to-moderate warmth next winter. “Overall, the models are doing better this time around,” says Barnston, who maintains a summary plot of model forecasts. “Quite a few anticipated the onset, and we had only a few real losers.”

    The computer models might be doing better, but they haven't won the day yet. Kousky attributes his group's success to “a combination of more experience [watching El Niños develop], 2 decades of research, and the observation network [the National Oceanic and Atmospheric Administration] and NASA have invested in. We look to the models to confirm what we're seeing in our monitoring.” What they're seeing now is enough warmth in the central tropical Pacific (a 3-month sea-surface temperature average 0.5°C greater than normal) to justify declaring the early stages of an El Niño. They also see only enough heat stored around the Pacific to anticipate weak-to-moderate warmth this winter. Time will tell.
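
    The declaration criterion itself is simple arithmetic. Here is a minimal Python sketch of the check described above, flagging El Niño conditions when a 3-month running mean of central tropical Pacific sea-surface temperature anomalies reaches +0.5°C; the monthly anomaly values are invented.

    ```python
    # Flag months in which a 3-month running mean of sea-surface
    # temperature anomalies (degrees C above normal) reaches the +0.5 C
    # threshold described above. The monthly values are invented.
    anomalies = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8]

    for i in range(len(anomalies) - 2):
        mean3 = sum(anomalies[i:i + 3]) / 3
        flag = "El Nino conditions" if mean3 >= 0.5 else "near normal"
        print(f"months {i + 1}-{i + 3}: 3-month mean {mean3:+.2f} C -> {flag}")
    ```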

  8. SUPERCONDUCTIVITY

    Stripes Theory Beset by Quantum Waves

    1. Adrian Cho*
    * Adrian Cho is a freelance writer in Grosse Pointe Park, Michigan.

    A new explanation of how high-temperature superconductors get their stripes could upend a controversial theory of how some of the materials conduct electricity without resistance. According to the “stripes” theory, electric charges within copper-and-oxygen-based superconductors collect in long lines, and pairs of charges then glide along these stripes unhindered. The scenario recently got a boost when experiments tripled the number of so-called cuprate superconductors sporting these apparent stripes (Science, 15 March, p. 1992). But now physicists at the University of California (UC), Berkeley, suggest that cuprate stripes might be an illusion. In a paper published online this week by Science (www.sciencexpress.org/cgi/content/abstract/1072640), they show how the stripes in one material might be a subtle effect of overlapping quantum waves of electric charge.

    Peaking through.

    Bumps on the right signal stripes—if they're there.

    CREDIT: J. E. HOFFMAN ET AL.

    The material is bismuth strontium calcium copper oxide, abbreviated BSCCO and nicknamed bisco. The new results do not rule out charge stripes in BSCCO, but they show that stripes are not needed to explain the undulations earlier researchers spotted on the surface of the material, says Juan-Carlos Campuzano, a physicist at the University of Illinois, Chicago. “These experiments were touted as the proof for stripes,” Campuzano says. “Well, that proof will have to come from somewhere else. One does not require stripes to explain the patterns seen in these experiments.” Proponents of stripes, however, say the UC Berkeley team's data contain evidence of stripes that the experimenters themselves have overlooked.

    On one issue all agree: Those data give the best look so far at BSCCO's electrical properties. To get them, J. C. Séamus Davis and his UC Berkeley colleagues studied the surface of a single crystal of BSCCO with a tiny, fingerlike electrical probe called a scanning tunneling microscope (STM). As they slowly moved the probe across the surface, the current between the tip and the material rose and fell in ripples—a pattern that other researchers had ascribed to stripes of charge lying just beneath the surface.

    Going beyond the earlier studies, the UC Berkeley researchers found that as they varied the voltage between tip and surface, the spacing of the ripples changed. That shouldn't happen if stripes were causing the ripples, they argue. Instead, the spacing would stay the same regardless of the voltage—much as the spacing of the pickets in a fence feels the same no matter how hard you press when running your finger over them. Moreover, Davis and colleagues found that the spacing varied just as it would if the ripples sprang from a quantum wave of electrical charge, reflecting off an imperfection in the crystal and then interfering with itself to produce peaks and valleys. They compared their results with data Campuzano and colleagues had taken by another technique, which showed just which wavelengths and energies such waves might have in BSCCO. The two types of data agree nearly perfectly, Davis says.
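
    The logic of that test can be illustrated with a toy calculation. In the Python sketch below, a simple parabolic band stands in for BSCCO's real, far more complicated electronic structure (purely an assumption for illustration): waves scattering off a defect interfere at wavevector q = 2k(E), so the ripple spacing 2π/q shifts with the bias energy, whereas a static stripe would show one fixed spacing at every voltage.

    ```python
    # Toy model of dispersing ripples: for a parabolic band
    # E = (hbar * k)^2 / (2 m), interference between incoming and
    # defect-reflected quantum waves peaks at q = 2k(E), so the ripple
    # spacing 2*pi/q changes with STM bias energy; a stripe would not.
    import math

    HBAR = 1.0546e-34   # J*s
    M_E = 9.109e-31     # free-electron mass, kg (effective mass assumed = 1)

    def ripple_spacing_nm(energy_mev):
        E = energy_mev * 1.602e-22          # meV -> joules
        k = math.sqrt(2 * M_E * E) / HBAR   # wavenumber, 1/m
        return 2 * math.pi / (2 * k) * 1e9  # spacing 2*pi/q, in nm

    for mev in (5, 10, 20, 40):
        print(f"bias energy {mev:2d} meV: ripple spacing ~ {ripple_spacing_nm(mev):.1f} nm")
    ```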

    Davis acknowledges that stripes might still appear under other conditions—if BSCCO were exposed to a magnetic field, say, or if its composition were changed. But quantum waves explain all the ripples the researchers see in this experiment, he says: “Ockham's razor says that this is all that's necessary.”

    Yet some researchers argue that the UC Berkeley group shaved away the stripes in its own data. The STM readings also show small signs of ripples whose spacing does not change with voltage, says Aharon Kapitulnik, a physicist at Stanford University in Palo Alto, California, who previously studied BSCCO with an STM and says he saw stripes. In plots where Davis and colleagues see only one peak that moves as the voltage changes, he sees a second that doesn't (see figure). “It's in their own data,” Kapitulnik says.

    Steven Kivelson, a physicist at the University of California, Los Angeles, thinks both waves and stripes might be playing a role. Contrary to what Campuzano's data predict, Kivelson says, the spacing of Davis's ripples appears to stop changing as the voltage between tip and sample decreases toward zero. That suggests the wave phenomenon fades and stripes emerge at very low voltages. If such subtle signs of stripes are there, Davis and colleagues should find them as they push forward with their technique.

    Meanwhile, a fresh challenge to the stripes model comes from a different direction. On page 581, Peter Abbamonte and colleagues at the University of Groningen in the Netherlands and Oxxel GmbH in Bremen, Germany, report a new x-ray scattering technique that should be especially sensitive to charge stripes. The researchers zapped a film of superconducting lanthanum copper oxide (LCO) with x-rays tuned so they'd be absorbed in regions where the LCO contained extra electrical charge. That would cause the x-rays to reflect in a way that revealed the presence of charge stripes. The researchers saw no evidence of such an effect.

    Unfortunately, the samples Abbamonte and colleagues examined are not quite like the other lanthanum-based materials that have already shown their stripes, says John Tranquada of Brookhaven National Laboratory in Upton, New York. Thus it's hard to know whether there should have been stripes for the Dutch-German researchers to see. “Because it's a new technique, it would be useful to calibrate it on one of the exact same materials where we have evidence for stripes,” he says. Abbamonte, now at Cornell University, says he's currently trying to do just that by studying samples in which Tranquada spotted stripes in 1995.

    Even if all cuprate superconductors have stripes, physicists still have to determine whether they really work the way the stripes theory predicts, Tranquada says. That could be an even tougher job than reconciling contradictory results and interpretations of the data, and it might keep physicists reading between the lines for some time to come.

  9. NEUROSCIENCE

    Versatile Cells Against Intractable Diseases

    1. Constance Holden

    The attention given to embryonic stem cells has kindled excitement about the possibilities for using an array of cells to rebuild damaged nerve tissue

    Virtually unknown to the public until a few years ago, “stem cells” now have a magical ring to them—thanks in large part to the vociferous debate in the U.S. Congress and elsewhere about the ethics of research using human embryos, the source of some of these cells.

    If popular accounts are to be believed, these versatile cells hold cures for a variety of ailments, chief among them neurological disorders such as Parkinson's and Alzheimer's diseases. With such publicity, it's not surprising that patients are clamoring for treatments that are as yet barely conceptualized in the lab. “The expectation of my patients is that this will be ready tomorrow or in a year,” says Jeffrey Rothstein, who does research on amyotrophic lateral sclerosis (ALS) at Johns Hopkins University in Baltimore. In reality, he and other scientists say, even without political obstacles, the closest treatments are years away, and some will take decades.

    Nonetheless, two recent developments—the cultivation of human embryonic stem (ES) cells and the discovery of hitherto unsuspected plasticity in the human nervous system—have sparked a plethora of new investigations into the possibility of using stem or stemlike cells to treat these devastating conditions.

    Nerve networks

    Until the last decade or so, scientists didn't even know that stem cells—that is, immature cells capable of differentiating into a variety of cell types—are widely distributed throughout the brain. The nervous system, unlike other tissues, has little or no capacity for self-repair; indeed, only two places in the mammalian brain (the hippocampus and the olfactory bulb) generate significant numbers of new nerve cells.

    The nervous system itself is a morass of complexity, comprising a spectrum of cell types. First are the glial cells (primarily oligodendrocytes and astrocytes), which provide network support, housekeeping, and insulation functions for neurons. These are the cells affected in multiple sclerosis (MS). Then come the “transmitter-defined neurons”—that is, cells whose main job is releasing a particular brain chemical in a particular location, such as the dopamine-producing cells that die in Parkinson's disease. Then there are neurons that must link up correctly with specific target cells, such as the movement-related neurons affected in Huntington's disease. Finally come the neurons involved in highly complex functions such as problem solving or language—those often affected by dementia or stroke. These now appear to be irreplaceable. “Having cells go where they're supposed to go, connect up, and become functional … is a bigger problem in the nervous system than anywhere else,” says Mark Mattson of the National Institute of Neurological Disorders and Stroke (NINDS) in Bethesda, Maryland.

    Even so, the newfound plasticity of the nervous system has inspired researchers, for the first time, to devise therapies that would replace defective neurons with newly minted cells. ES cells are the latest—and many believe the most promising but by no means the only—weapon in their armamentarium, which also includes fetal cells, adult stem cells, stem cells cultivated from fetal germ cells, cells cultivated from a type of tumor, and fetal cells from pigs and other animals. Some are true stem cells, defined as cells that can divide into both an identical daughter cell and a specialized cell. Others, variously called progenitor or precursor cells, are somewhere along the road to terminal differentiation. Their options are more circumscribed than those of pluripotent ES cells, but they can travel greater distances than mature cells, and their final identities are shaped by their surroundings.

    Parkinson's disease

    Because Parkinson's is in some respects a simple disease—affecting one type of dopamine-producing neuron in one place in the brain—most neuroscientists regard it as the best near-term candidate for stem cell therapy.

    A decade ago, neuroscientists demonstrated that when immature dopamine-producing neurons (obtained from the brains of aborted fetuses) are implanted in the brains of Parkinson's patients, many patients experience long-term improvement. That means that whatever the disease process is, it does not rapidly destroy implanted tissue. But this approach poses considerable ethical and logistical obstacles: Six to eight 2-month-old fetuses are needed to treat one patient. What's more, fetal dopamine cells have a low survival rate and are not as malleable as ES cells.

    Hoping to create a self-replenishing source, researchers are looking for ways to transform ES cells into dopamine producers. The promise and peril of that undertaking have been shown by Harvard researcher Ole Isacson, who this year reported transplanting mouse ES cells directly into the brains of rats with a version of Parkinson's disease (Science, 11 January, p. 254). The cells successfully differentiated into dopamine-producing neurons as well as other types of cells and alleviated some symptoms in more than half the animals. But these potent, hyperactive cells also produced noncancerous “overgrowths” in 20% of the rats.

    To avoid that risk, ES cells would have to be set on a clear path of differentiation before being implanted in humans. A step in that direction was reported online in Nature on 20 June by Ronald McKay and colleagues at NINDS. They cultivated dopamine neurons from mouse ES cells in vitro and then transplanted them into rats with Parkinson-like brain damage, showing that they could improve motor function. The next step will be to test human dopamine-producing cells in a rodent Parkinson's model, says Ted Dawson of the Parkinson's Disease Research Center at Johns Hopkins University. Then would come safety trials in monkeys. Says Isacson: “If somebody's lucky, we could be moving toward the clinic in 3 to 4 years.”

    Huntington's disease

    Like Parkinson's, Huntington's disease entails degeneration of one kind of cell, the medium spiny neurons of a subcortical structure called the striatum. But, says Rosemary Fricker-Gates of the Brain Repair Group at Cardiff University, U.K., devising a therapy is more complicated, because it would entail not just getting new cells accepted into the brain but “reconnecting often complex circuitry.” Compounding the difficulty, she explains, Huntington's is more “diffuse” than Parkinson's, with afflicted cells scattered throughout the striatum.

    Resurfacing.

    Stem cells might be directed to grow new myelin sheaths along neuronal axons, as pictured above, to treat nerve-destroying diseases such as multiple sclerosis.

    ILLUSTRATION: C. SLAYDEN

    Although it's a long shot, researchers are attempting the same implantation strategy that holds promise for Parkinson's treatment. These attempts have had decidedly mixed results. In a pilot study several years ago, researchers at the University of South Florida, Tampa, injected striatal neurons into the brains of seven Huntington's patients, using 10 fetuses for each operation. When Isacson and his team, working with the Florida group, examined one man who died 18 months later, they found that many of the new neurons had integrated into his brain (Proceedings of the National Academy of Sciences, 5 December 2000)—but there was no evidence that his symptoms had been alleviated. The same month, however, a group at the French biomedical research agency INSERM reported in The Lancet that transplantation of fetal striatal cells into the brains of five Huntington's patients seemed to help three of them. Investigator Marc Peschanski says that now, 5 years later, improvement has persisted. Over the next 4 years, 100 Huntington's patients are expected to participate in a multicenter European trial of this procedure.

    Amyotrophic lateral sclerosis

    ALS, or Lou Gehrig's disease, also entails damage to one type of neuron, motor neurons. The damage occurs not just in the brain but at all levels of the spinal cord. Complicating matters, says Rothstein of Johns Hopkins, researchers don't know why cells die in ALS. Neurons might die “autonomously” or in reaction to malfunctions of surrounding support cells such as astrocytes. “Sometimes the earliest changes are in astrocytes,” he says. If that's true, then unless support cells were replaced as well, newly introduced neurons would be quickly poisoned.

    Because ALS causes damage throughout the spinal cord, stem cell therapy would require implanting neuronal precursors at a fairly primitive stage of development, when they're able to migrate long distances. Figuring out the optimal stage is extremely tricky, says Rothstein. With completely undifferentiated cells, there's no way of knowing how to get them to turn into the right kind and produce the right numbers of cells, he says. On the other hand, using more differentiated motor neurons that are programmed for a specific location could be problematic: “If they only go to legs, that's no good [either].”

    At Project ALS—a privately funded venture at Harvard, Cornell, Columbia, and Johns Hopkins universities—collaborators are trying to find out which types of cells would be best. The Johns Hopkins group is focusing on the potential of a line of pluripotent stem cells, called embryoid body-derived (EBD) cells, derived from the germ cells of aborted fetuses. Johns Hopkins researchers generated considerable excitement last year when they showed that EBD cells were able to restore some mobility to rats with paralyzed hindlimbs. Unlike ES cells, these do not appear to trigger tumor formation, says Rothstein. What's more, says Dawson, ethical problems are minimal: “You would probably only need one fetus to treat tens of thousands of patients.”

    At Harvard, Evan Snyder of Children's Hospital and Beth Israel Deaconess Medical Center and Robert H. Brown of Massachusetts General Hospital in Boston are comparing not only EBD cells but also fetal spinal progenitor cells, umbilical cord blood cells, human adult stem cells, a line of stem cells derived from fetal brains, mouse neural stem cells, fetal pig neurons (which resemble human neurons), and even skin cells (fibroblasts). These cells, representing varying stages of development, are being injected in an ALS mouse model to see which type does best in producing or rescuing motor neurons.

    Although no treatment is yet in sight, the Johns Hopkins researchers are already testing the safety of EBD-derived neurons in 28 African green monkeys. “We meet constantly with FDA [the Food and Drug Administration]” to be sure all the rules are being followed, says Rothstein, so researchers can move quickly into clinical trials if a promising treatment emerges.

    Multiple sclerosis

    Stem cells might ultimately provide some benefit, albeit limited, to MS patients. The problems are multiple. First, MS is both an autoimmune and a neurological disorder. Even so, finding the right type of precursor cell for therapy might be relatively simple, because the disease attacks not the neurons but the tiny cells called oligodendrocytes that make the protective coating of myelin on axons, the long fibers that conduct impulses from cell bodies. Because these cells are homogeneous and easy to identify, Steven Goldman of Cornell University believes they will be one of the first neural cell types to be used in stem-cell therapy.

    But damage sometimes extends beyond the myelin sheaths to the underlying neurons. Furthermore, “adding cells may be adding fuel to the fire, unless the underlying inflammatory process is approached,” says Goldman. For this reason, congenital myelin diseases might be easier targets, he says.

    An unorthodox approach to remyelination research is being taken at Yale University, where Jeffery Kocsis and colleagues are experimenting with several different kinds of stem or stemlike cells: bone marrow stem cells, adult stem cells from tissue taken from brain surgery patients, and two types of myelin-forming cells taken from the olfactory bulb and from peripheral nerves of pigs and humans.

    These peripheral cells, called Schwann cells, function like oligodendrocytes in the central nervous system. The Yale group is currently conducting a clinical trial with five MS patients to test the safety of injecting a patient's own Schwann cells directly into brain lesions. Because it's difficult to get enough cells from peripheral nerves, Kocsis hopes to cultivate myelin-producing cells from stemlike cells taken from brains or bone marrow.

    Goldman says, however, that oligodendrocytes, which are native to the central nervous system, are much better at remyelination. Oliver Brüstle of the University of Bonn Medical Center in Germany and Ian Duncan of the University of Wisconsin, Madison, have shown that oligodendrocyte precursor cells made from mouse ES cells generate myelin and wrap around axons when implanted in spinal cords of myelin-deficient rats (Science, 30 July 1999, p. 754). The next goal, not yet achieved, is to demonstrate this result in mice using human ES cells.

    Spinal cord injury

    Remyelination also appears to be a very promising approach in treating spinal cord injuries in which axons lose myelin but are still intact. John McDonald of Washington University in St. Louis, Missouri, for example, is conducting the first safety trial with six patients, using neural stem cells from pigs (which can be engineered to evade the human immune response) injected directly into the spinal cord to see if they will remyelinate damaged axons.

    However, although stem cells are frequently touted as a potential cure for more devastating spinal cord injuries such as that suffered by actor Christopher Reeve, “no one has a clue at this point” about how such therapy might be used to grow back a transected spinal cord, says Goldman: “There's a tremendous amount of hype in the literature, but I haven't seen a single report yet that I've found convincing.”

    A big stumbling block is that neuronal connections have to reach from the spine up to the brain and down to the feet. Mattson of NINDS notes that spinal cord connections “are made during embryonic development, when the distances are very small. … In the adult, the distances are very large.” Axons would have to grow over a meter long to extend from the spinal column to the big toe. What's more, he says, the neurons in a developing embryo are egged on by growth factors, whereas in an adult new axons would encounter mechanisms to repel and prevent growth.

    Alzheimer's disease

    Last summer former U.S. First Lady Nancy Reagan sent a letter to President George W. Bush pleading for ES cell research that might help patients with Alzheimer's disease. But currently, that's a very distant hope. “No one's even close to figuring out a stem-cell therapy for Alzheimer's,” says Mattson.

    Alzheimer's disease involves a wholesale attack on neurons throughout the brain. “The disease is so widespread that you couldn't reasonably expect to replace lost cells,” says neurologist Steven DeKosky of the University of Pittsburgh. And restoring motor neurons is child's play compared to repairing those involved in higher cognitive functions. “How does a new neuron know that you went to Woodrow Wilson High School?” says DeKosky. We have “absolutely no idea how to reproduce the subtle changes in gene expression and synaptic structure that occur when a neuron encodes a memory.” As Irving Weissman of Stanford University points out, scientists still don't know at what level the pathology develops: “We don't know if it's gene products of neurons or the environment of neurons that makes the problem.” So there's no way to know whether fresh cells would succumb to the disease.

    Future therapies

    Replacing damaged neurons with specific neural progenitors, as has already been shown with Parkinson's, is just the first step, says Duncan. Later, he says, scientists hope to be able to “seed” patients’ brains with neural stem cells that can supply a “resident population” of cells able to respond to ongoing disease with whatever cell types are needed.

    Supplementing this vision is a yet more far-reaching goal: figuring out how to stimulate a patient's own stem cells to repair an injury or tackle a disease. Some early research suggests that scientists might be able to activate dormant stem cells. Goldman, for instance, reported last month at the meeting of the American Society for Gene Therapy in Boston that his group has been able to stimulate progenitor cells in the striatum of mice with the Huntington's disease gene by injecting them with a virus containing a gene for a neurotrophic factor. Jonas Frisén of the Karolinska Institute in Stockholm has been injuring mouse brains with a toxin specific to dopamine neurons in an effort to stimulate stem cells to increase the supply of dopamine neurons. So far, the experiment (not yet published) has resulted in the creation of 20 new dopamine neurons a day per mouse—not much, but a doubling of the usual rate, he says. Other scientists, however, are skeptical that these new neurons are actually functional. A third group, at Tokyo University, reportedly has had success in activating stem cells, leading to regeneration of tissue in the hippocampus of rats—an important area affected in Alzheimer's disease in humans.

    But the brain, unlike other body parts, doesn't naturally jump in to fix itself, notes Harvard's Isacson, who predicts that endogenous brain repair is “decades from application.”

    Safety

    Even if scientists can conquer the complexities and create replacement neurons from stem cells, huge regulatory hurdles remain. ES cells are the most daunting: Scientists must devise ways to cultivate them that don't involve contact with potentially contaminating mouse cells. Geron Corp. in Menlo Park, California, which is working to create dopamine-producing cells, says it has already developed a culture medium free of mouse cells.

    More difficult will be ensuring that the cells are safe, because a single remaining undifferentiated cell could lead to tumor formation in the patient. Dawson of Johns Hopkins speculates, for example, that the FDA “may want us to engineer in a kill switch” so introduced cells could be turned off if they started misbehaving after transplantation. Then there is the problem of overcoming a patient's immune response to foreign cells, although researchers say this is not a major obstacle with the brain, which has been called “immune privileged.”

    In short, there's a long road ahead. Nonetheless, says Isacson, “I am still surprised at how much remodeling is going on in the brain.” He is confident that someday researchers can learn how to tap into that plasticity.

  10. PROFILE: A. P. J. ABDUL KALAM

    The Political Ascent of an Indian Missile Man

    1. Pallava Bagla

    Indian scientists are pleased that for the first time the country's next president will be one of them

    NEW DELHI—He's an aeronautical engineer, a devout Muslim, and a workaholic who shuns the limelight. But however one describes India's new president, Avul Pakir Jainulabdeen Abdul Kalam hardly fits the mold of his predecessors, career politicians rewarded for decades of faithful service. Indeed, his scientific colleagues hope that Kalam's election will send a new message to the country's 1 billion citizens: Technology can take you to the top.

    Coming together.

    Muslim A. P. J. Abdul Kalam extends a Hindu greeting of peace to well-wishers familiar with his key role in developing Indian missiles.

    CREDIT: KAMAL KISHORE/REUTERS

    “If somebody had told me 30 years ago that Abdul Kalam will one day be a Bharat Ratna [the country's highest civilian honor, literally ‘India's Jewel’], I would have said, ‘What's new?’” says Vasant Gowariker, one of Kalam's first colleagues at the Indian Space Research Organisation (ISRO) and subsequently his boss. “But if the forecaster had told me that Kalam would one day be president of India, that would have sent me in orbit round the moon!”

    The metaphor is an apt one. Kalam's election last week caps a 42-year career in India's space and defense establishment. The first person with a technical background to hold the office, the 71-year-old engineer has parlayed talent, persistence, and exemplary administrative skills into a venerated status as the father of the country's missile program. “What makes Kalam stand out is that he is not afraid of taking risks,” says V. S. Ramamurthy, a nuclear physicist and secretary of the Department of Science and Technology.

    The Indian presidency is largely ceremonial, with the prime minister holding all executive power. Still, the president is supreme commander of the country's armed forces, principal trustee of the constitution, and prime mover in the selection of the prime minister if the election yields no clear majority. He serves a 5-year term.

    Kalam declined repeated requests from Science for a personal interview, but he has discussed his upbringing and worldview in speeches and two recent books. Kalam's father was a struggling boatmaker in the small seaside town of Rameshwaram in southern India, and his sister pawned her jewelry to pay for his engineering training. One of Kalam's first jobs was designing hovercraft at a little-known defense laboratory in Bangalore, where he caught the eye of the well-known Indian physicist M. G. K. Menon.

    With Menon's support, Kalam's career began to take off. At ISRO he spent almost 20 years mastering India's first-generation launch vehicle. He was project leader of the satellite launch vehicle in the Department of Space, where he worked with Satish Dhawan, one of India's top rocket scientists. In 1982 he moved to the Defence Research and Development Organisation (DRDO), becoming director-general in 1992. There he spearheaded India's indigenous guided missile program, including the Agni ballistic missile, which has a range of more than 2000 kilometers and can carry a nuclear payload. He also played a pivotal role in the country's 1998 nuclear tests (Science, 22 May 1998, p. 1189).

    Kalam strongly defends the need for India's nuclear weapons program. But he veers from the official government line in arguing that it, not conventional weapons, has prevented recent crises with Pakistan from escalating into full-scale war. “War did not take place because we had nuclear weapons,” he asserts.

    At DRDO, the projects he worked on personally had a mixed record. Of the five missiles that his team set out to design and build, only two have moved into production and only one has been delivered to the armed forces. There also have been chronic delays in ambitious projects such as the Light Combat Aircraft and the Main Battle Tank.

    Despite these problems, Kalam's prominent role in these high-profile projects appears to have boosted his standing among government leaders and the general public. His personal asceticism appeals to India's majority Hindu populace, which has warmed to the image of a devout Muslim who is a strict vegetarian and a teetotaler. Likewise, his contribution to making India more self-reliant in several high-tech fields draws praise from the country's intelligentsia and helps explain his 30 honorary doctorates from Indian universities.

    “He is extremely hard-working,” recalls V. S. Arunachalam, a metallurgist at Carnegie Mellon University in Pittsburgh and former DRDO head. “He is totally committed to his job and lives a simple, almost spartan life.” Hubert Curien, president of the French Academy of Sciences and former head of the French space agency, recalls Kalam's “matter-of-fact manner” and close attention to detail during a collaborative project in the 1970s to develop a new civilian rocket. “He was very qualified” to lead the project, adds Curien, one of the few foreign scientists to have interacted with him.

    Not everybody holds Kalam in such high regard, however. A close aide who has shared the same corridors of power called Kalam's move into the presidency “akin to making a mason the chief architect of a building.” To be sure, Kalam is cloaked in the sort of scientific invisibility that is not uncommon among government science managers: He has never studied overseas, never published a peer-reviewed research paper, and never been awarded a patent in his own name. This spring the prestigious Indian Institute of Science in Bangalore spurned an effort to name him a visiting professor, purportedly because he did not hold an advanced degree. And in the late 1980s Kalam was repeatedly denied election to the Indian National Science Academy in New Delhi.

    Last year, after spending 2 years as principal scientific adviser to the government, Kalam returned to his alma mater, Anna University in Chennai (formerly Madras) in southern India. He's trying to “ignite the minds” of students toward a career in science, part of a campaign to transform India by 2020 into what he calls a “developed country … using technology as one of the tools.” It's the sort of grand challenge that Kalam has always relished, say Ramamurthy and others, and one for which the presidency should offer the perfect bully pulpit.

  11. PHYSIOLOGY

    Setting the Human Clock: Technique Challenged

    1. Marcia Barinaga

    Skeptics fault a study claiming that people's sleep-wake cycles can be altered by shining light on the backs of their knees

    Imagine getting over jet lag simply by strapping a light to your leg and leaving it on while you sleep or read in your airline seat. This scenario was suggested by a report published in Science 4 years ago describing a new way to reset the human internal clock. The discovery captured the imagination of scientists and entrepreneurs as well as the public. But researchers at Harvard Medical School in Boston now have cast doubt on the finding—and on prospects for commercializing this patented form of light therapy.

    In the original study, Scott Campbell and Patricia Murphy of Cornell University Medical College in New York state reported that by shining light on the backs of the knees of human subjects, they could shift the so-called circadian clock that governs sleep-wake cycles (Science, 16 January 1998, p. 396). The scientists chose the area behind the knees because it is laced with blood vessels close to the skin, and they believed it might be possible to transmit a timing signal through blood. But many researchers in the field were skeptical. Now circadian clock researchers Kenneth Wright and Charles Czeisler of Harvard Medical School report on page 571 that they have refuted the earlier study.

    Many clock researchers praise the new work: “I am convinced that this shows that it is very difficult to replicate the [original] finding,” says circadian physiologist Derk-Jan Dijk of the University of Surrey, U.K. “This paper will have a high impact,” adds clock researcher Shin Yamazaki of the University of Virginia in Charlottesville. “The experiment … is really well controlled.” Campbell counters, however, that the original study was properly controlled and has been replicated in his lab.

    Before 1998, researchers had known that exposing animals or humans to bright light during the night could reset, or “entrain,” their biological clocks. But Campbell and Murphy's report was shocking, says clock researcher Russell Foster of the Imperial College of Science, Technology and Medicine in London, because it “completely contradicted” a large body of research showing that in all mammals tested, including humans, light can reach the clock only through the eyes. “There was absolutely no evidence that individuals without eyes can entrain to light,” says Foster.

    Night-light.

    BiliBlanket pads like this one were used in the study, carried out in total darkness, to shine light on the backs of knees.

    CREDIT: KENNETH WRIGHT/HARVARD MEDICAL SCHOOL

    But iconoclastic findings have been right before, and if this one were, it might mean more than a cure for jet lag. Light shone on the skin might replace the unpleasant regimen for seasonal affective disorder, a type of depression brought on by the short days of winter, in which people's eyes are exposed to bright light for hours each day. And shift workers might use the therapy to avoid drowsiness in dimly lit conditions. “It would have been terrific,” says Foster, “because it was a way of selectively hitting the circadian system without overwhelming the visual system.” The Cornell group patented the approach, but it has not been licensed.

    Based on its promise, other researchers tried to build on the finding but did not get the expected results. For example, if light to the knees resets the clock, it should suppress the nighttime hormone melatonin, but several labs found that it did not.

    Until recently, no one had duplicated the conditions Campbell and Murphy used, and that is what Wright and Czeisler decided to do. They used the same light source, a fiber-optic lamp called the BiliBlanket, designed for treating premature infants with light. Like Campbell and Murphy, they wrapped the blanket around the subjects' knees, shielded it so no light reached the eyes, and treated their subjects for 3 hours.

    But Wright and Czeisler were dissatisfied with some aspects of Campbell and Murphy's experiment. Subjects in the earlier trial had been seated in a dimly lit room for treatment. Dim light was believed at the time not to affect the clock, but Wright and Czeisler have since shown that even lower light levels can shift it, so they kept their subjects in total darkness before and during the treatment. What's more, changing position can alter physiological indicators of a subject's circadian phase, such as body temperature or melatonin levels. So the Harvard subjects remained lying down.

    Campbell and Murphy had used body temperature as the indicator of circadian phase for all their subjects. But Elizabeth Klerman of Harvard Medical School reported in 1999 that body temperature can be off by as much as 5 hours from circadian phase in subjects who are allowed to move around. So Wright and Czeisler used melatonin levels, thought to be a more precise measure. Unlike the New York team, they gave their subjects several days to adjust to the laboratory routine before beginning the experiment, and neither researchers nor subjects knew who got the treatment and who didn't.

    The result: Light to the back of the knees did not shift the subjects' clocks.

    “We still don't know for sure why there are these differences between the results,” says Dijk. But Dijk and others suspect that Campbell and Murphy's study could have been influenced by a statistical phenomenon called regression to the mean. A person's clock can be either delayed or advanced, depending on the time of light treatment. The subjects presented in Campbell and Murphy's report who underwent a delay had clocks that ran earlier than average to begin with, and those who underwent an advance had clocks that ran later than average. Statistically, Dijk says it was likely that even without any treatment, these subjects' cycles would be closer to the mean on the next measurement.
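
    To see how regression to the mean alone could mimic a phase shift, consider the following minimal simulation. It is an illustrative sketch, not a reconstruction of the study; every number in it is invented:

        import random

        # Each subject's circadian phase is "measured" twice, with no
        # treatment at all; both readings carry measurement noise.
        # All numbers are invented for illustration.
        random.seed(1)

        TRUE_SD = 1.0    # subject-to-subject spread in phase (hours)
        NOISE_SD = 1.0   # measurement noise on any single reading

        subjects = [random.gauss(0.0, TRUE_SD) for _ in range(10000)]
        first = [s + random.gauss(0.0, NOISE_SD) for s in subjects]
        second = [s + random.gauss(0.0, NOISE_SD) for s in subjects]

        # Keep only subjects whose first reading was extreme, as if
        # their clocks "ran early" before treatment.
        chosen = [(f, s) for f, s in zip(first, second) if f < -1.5]

        mean1 = sum(f for f, _ in chosen) / len(chosen)
        mean2 = sum(s for _, s in chosen) / len(chosen)
        print(f"first reading:  {mean1:+.2f} h")  # far below zero
        print(f"second reading: {mean2:+.2f} h")  # closer to zero

    Because the group was selected for an extreme first reading, its second reading lands closer to the mean on average, with no intervention at all, which is exactly the artifact Dijk describes.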

    “Regression to the mean is always a legitimate concern in any experiment,” says Campbell, but he believes his original results are valid. “I totally understand why there has been skepticism” about the work, he adds, given that there is no known mechanism by which the signal could travel from knee to brain. His team is currently searching for the answer. “If we can show physiological changes in which mechanism can be inferred,” he says, “it would add a lot of credibility to the study.”

  12. RESEARCH CENTERS

    Science With an Agenda: NSF Expands Centers Program

    1. Jeffrey Mervis

    A 15-year-old program of collaborative academic-based research centers at NSF is thriving after a controversial start. What makes it tick?

    This summer, six U.S.-based university consortia will cross the finish line as the sole survivors of a grueling competition that began with 143 entrants. Barring last-minute hitches, the six will sign contracts with the National Science Foundation (NSF) to establish science and technology centers (STCs) in fields ranging from space weather modeling and biophotonics to water purification and Earth-surface dynamics (see page 507).

    Sensitive.

    Deborah Estrin hopes to use vast networks of tiny sensors to monitor changing environments.

    CREDIT: REED HUTCHINSON/UCLA PHOTOGRAPHIC SERVICES

    The winners have run a 2-year gantlet of proposal development, reviews, endless rewrites, site visits, and intense interactions with NSF officials. Their reward: up to $20 million over the next 5 years, with the likelihood of another 5 years' support if they make it through an exhausting midterm review.

    “What's not to like?” says psychiatrist Thomas Insel, director of the Center for Behavioral Neuroscience at Emory University in Atlanta, one of five winners of a similar STC competition in 2000. “It's a lot of funding, for a long time. It's really a fantastic opportunity.”

    But although winning an STC sounds like the fulfillment of every scientist's dream, it's actually the start of a longer, and even more difficult, race. Conducting world-class research is only part of the centers' mission. They are also expected to improve education in local schools; strengthen undergraduate and graduate training; improve minority representation in the sciences; and link up with other academic institutions, industry, and the community. It's a tall order, and one for which many academic scientists have scant training.

    There's also nowhere to hide: NSF officials engage in a hands-on style of supervision that some would call micromanagement. “There are no rules,” says Nathaniel Pitts, head of the Office of Integrative Activities, which oversees the STC program. “But we expect a big return for our investment.”

    Welcome to NSF's flagship program to support long-term interdisciplinary and cooperative research centers, begun amid controversy in 1987 by then-NSF director Erich Bloch. Many scientists worried that the proposed centers would drain funds from NSF's traditional support for individual investigators or, even worse, that they would promote applied research at the expense of basic science. But Bloch persisted, bolstered by a bevy of review panels that endorsed the concept.

    In the end, the critics' fears proved groundless. With an annual budget of $45 million, the program this year will consume a mere 1.1% of NSF's overall research budget. Industrial participation is voluntary, and many centers focus on the most basic of scientific pursuits.

    The six new centers will bring to 36 the number funded since the first group was chosen in 1989 (see map). Their predecessors have helped spawn a population explosion of academic research centers in the United States and around the world: NSF alone will spend $360 million this year on 18 different types of centers, in fields ranging from early childhood education to earthquake engineering. Agency officials say that more than a dozen countries have sought NSF's advice in creating similar programs.

    Centers of attention.

    Regional disparities are reflected in the pool of 36 centers that NSF has funded.

    CREDIT: SOURCE: NSF

    The STCs are in a class by themselves, however. Whereas most academic research centers focus on a single discipline or field, be it mathematics or nanoscience, STCs are open to researchers working in any area that NSF typically supports. Each center reflects a unique combination of participants and goals, and every one is a work in progress. Insel says he wanted to create what he calls a “collaboratory,” but “with 10 to 20 [principal investigators] at the table, we didn't have a clue how to do it. It took us 2 years, and lots of people dropping out and coming in, before we really figured it out.”

    Computer scientist Deborah Estrin, director-designate for the new Center for Embedded Networked Systems (CENS) at the University of California, Los Angeles, spent more than a year signing up collaborators at five universities for the center, which aims to develop algorithms for wireless and distributed sensing systems. For example, researchers will develop technology to monitor tree canopies, wildlife corridors, and other systems in the James Reserve in Southern California, as well as to instrument structures on shaker tables that simulate damage from an earthquake. Their goal is to learn the best way to collect, analyze, and disseminate vast amounts of data. “The Internet was built on that same idea” of linking independent nodes, says Estrin. “It's a powerful model when it works.”

    Indeed, figuring out what works has been a challenge for all the centers—and for their NSF overlords. In fact, NSF pulled the plug on one center in each of the first two classes after deciding that each had fallen short of its goals (see page 508). To get a sense of what the new centers can expect as they try to live up to NSF's expectations, Science asked current and former center directors and their colleagues about their experiences in managing these strange beasts. Here are five attributes that emerged from these discussions:

    · Be a big-league manager.

    “The management plan is the key to success,” says NSF's Douglas James, who oversees two centers in the geosciences. “Can they run it properly? Do they have the facilities and the people to pull off what they have promised?”

    ILLUSTRATION: TIM SMITH

    For most scientists, managing a large enterprise is a lot less appealing than working at the bench or in the field. Chemist Anne Myers Kelley, for example, says she enjoyed being part of the Center for Photoinduced Charge Transfer at the University of Rochester in New York but hated her 2-year stint as its director. “I had a miserable time trying to serve a million masters. You have to be a first-rate politician, and I'm not,” says Kelley, now a professor at Kansas State University in Manhattan.

    One challenge that Kelley's cohort faced was a budget crunch that forced NSF to slice in half the budgets of its 1991 class of 12 centers. “We had to decide what was most important,” says Gérard Mourou, director of the Center for Ultrafast Optical Science (CUOS) at the University of Michigan, Ann Arbor, which ended its 11-year run in January but continues with funding from another NSF program within the physics division. “You have to make decisions, and they're not always popular.” Ophthalmologist Ronald Kurtz, a former colleague, likes the fact that Mourou ran the center “like a corporate CEO. … There was a shared interest in the success of the enterprise rather than just the fate of individual projects.”

    A center director is also likely to see his or her scientific productivity slump, notes biophysicist Barbara Baird, director of the 2-year-old National Center for Nanobiotechnology at Cornell University in Ithaca, New York. “My research has definitely slowed down,” she says, “and that's what I need to maintain my [National Institutes of Health] grants. I'm concerned that the publications won't be there when my grants come up for renewal.”

    · Avoid a school daze.

    Without exception, center directors consider research, not education, to be their strong suit. “We at the university are not in the business of K-12 education,” says Mourou. Yet, since the early 1990s, NSF has required each center to put a high priority on improving precollege education. “We wanted to support graduate students,” recalls Kelley, “but suddenly NSF was telling us to get involved in the public schools and to hire a full-time educational coordinator.”

    ILLUSTRATION: TIM SMITH

    Although all the centers fell in line, some did little more than graft an education component—offering summer workshops for teachers, for example, or sending scientists into the classroom—onto existing research activities. But that's not what NSF has in mind. “We've tried to move away from feel-good educational activities and toward something much more rigorous,” says Pitts. Rather than tallying the number of students served or the number of parents who have gotten involved in an activity, NSF wants centers to show how they have helped districts boost test scores or increase the number of upper-level science and math courses. “Don't talk about outreach,” says Pitts's deputy, Dragana Brzakovic. “We want to see the impact you're having.”

    Recognizing that some faculty members might resist spending more time working with local schools, some centers have tapped a research scientist to run their education programs. “It's a mistake to put an educator in charge,” argues physicist Ramon Lopez, who will handle those chores for the soon-to-be Center for Integrated Space Weather Modeling based at Boston University. “You need someone who can go head-to-head with the scientists, someone whose body of work they respect.” Lopez is a member of both camps: He did a stint running education programs for the American Physical Society while continuing to work on space-plasma physics at the Applied Physics Laboratory in Laurel, Maryland.

    · Take advantage of commercial opportunities.

    Although NSF has never required centers to work with industry, such links are a mark of success for researchers working in commercializable fields. In addition to attracting major corporate support, successful centers foster an entrepreneurial culture that encourages researchers to cash in on their results.

    ILLUSTRATION: TIM SMITH

    It's no accident that CUOS has created four spinoff companies, says Kurtz, medical director and corporate vice president for one of them, IntraLase of San Diego. “The people who are going to take a technology out of the university often aren't your traditional faculty,” notes Kurtz, who discovered the center while he was an ophthalmologist on call at the university hospital. Working as a center investigator, Kurtz realized that the center's ultrafast laser pulses could be applied to human tissue as well as glass. The result was IntraLase, founded in 1997 to apply the center's femtosecond laser technology to clinical eye care.

    “The neat thing is that CUOS wasn't designed to do laser medicine,” Kurtz says about his company, whose Food and Drug Administration-approved product uses a laser instead of the traditional metal blade to form a corneal flap at the beginning of corrective eye surgery. “But it happened because [Mourou] was interested in finding collaborators with good ideas.” The center also became a training ground for budding scientist-entrepreneurs in ultrafast lasers, meeting NSF's wish that centers help stock emerging fields.

    · Broaden the talent base.

    Despite NSF's explicit charge to each center to foster diversity, directing a center is by and large a white male prerogative. Although NSF officials say that gender and race play no role in selecting leaders, women have been principal investigators on only two of the 36 initial awards. Nor have any of the centers been led by an underrepresented minority (an African-American, Hispanic, or Native American) scientist. “I think the problem [that accounts for the lack of women directors] is at the point of entry,” says Estrin of CENS, who says she's received “wonderful support in pursuing leadership positions. … We just don't have enough young girls who want to invest in this intellectual pursuit.”

    ILLUSTRATION: TIM SMITH

    NSF expects centers to find creative ways to expand the traditional scientific talent pool. “It takes somebody to go out there and get [minority undergraduates] excited from their first year, and there wasn't anybody doing that,” says Emory's Insel, describing a center program that involves the five historically black institutions in Atlanta. “Neuroscience has a really crappy record [of attracting minorities].”

    Training more minority scientists is not enough, say educators; centers must also help funnel talent where it is most needed. Minnesota's new center on Earth-surface dynamics, for example, has teamed up with the Fond du Lac Tribal and Community College in Cloquet, Minnesota, in hopes of attracting more Native Americans into upper-level and graduate science and engineering programs. But Fond du Lac's Andy Wold says that the college is there “to educate the local population and have them remain in the community,” not to prepare them for careers at an urban center or major research university.

    · Plan your next step.

    A successful STC can provide scientists with a huge springboard for their next project. The Southern California Earthquake Center (SCEC) received $41 million from NSF over an 11-year span that ended in January. Led by seismologist Thomas Henyey of the University of Southern California (USC) in Los Angeles, the center helped establish a 250-station global positioning system network in Southern California to monitor the buildup and release of crustal strain and coordinated analyses of the Landers, Northridge, and Hector Mine earthquakes. But NSF rejected a bid from center scientists to become part of the third round of STCs funded in 2000, deciding its proposal was too derivative.

    ILLUSTRATION: TIM SMITH

    That rejection didn't end their plans, however, or even their ties to NSF. This winter, the new entity, dubbed SCEC-2, was awarded $12.5 million over 5 years from NSF's geosciences directorate and $5.5 million from the U.S. Geological Survey, which also funded its predecessor. That's on top of $10 million handed out last year by NSF's Information Technology research program for an online collaboration of seismic researchers and $650,000 from NSF's National Science Digital Library (NSDL) project to expand a pilot version of the Web-based Electronic Encyclopedia of Earthquakes.

    “SCEC reinvented itself,” says seismologist Thomas Jordan, who collaborated with the original center and came back to USC in 1999 to lead the SCEC-2 effort. “The STC funding was ending, but the community felt the need for a continuing analysis of these issues.” The new award, he says, will extend “a solid team that took years to build, thanks to enlightened management and a little blood on the floor.”

    Paying heed to all these lessons doesn't guarantee success if a center doesn't also have an exciting research agenda. “We're looking to fund the best science, with the best people,” says NSF's Pitts. But, as governments around the world look to maximize the return on their scientific investment, it's a good bet that this approach to basic research will remain a center of attention.

  13. RESEARCH CENTERS

    Power of the Purse: Down-to-the-Wire Talks Shape a New NSF Center

    1. Jeffrey Mervis

    MINNEAPOLIS, MINNESOTA—Civil engineer Gary Parker was nearing the end of a grueling 16-hour day this spring preparing his senior management team for a visit from National Science Foundation (NSF) officials. The stakes were high. Parker and his colleagues had spent much of the past 2 years shepherding a proposal for one of NSF's coveted Science and Technology Center awards through a complex review process. They already knew that their plan for a National Center for Earth-Surface Dynamics (NCED) at the University of Minnesota had made the final cut. Now, if they could reach agreement with NSF on a final strategic plan, a 5-year, $18.8 million grant would be theirs.

    But first, they had to deal with a casual remark by NSF program manager Doug James early in the 3-day workshop about how to describe their intended research. His comment had forced the center's three teams to abandon their takeout pizza slices and scurry back to laptops and blackboards for a long night of revisions aimed at pleasing their funders. Environmental conditions only added to the tension: The lecture hall at the St. Anthony Falls Laboratory (SAFL), a dank, bare-bones facility built in the 1930s on an island alongside the only major waterfall on the Mississippi River, was stifling from an April heat wave, and the hard wooden chairs were uncomfortable.

    Making a silk purse …

    The University of Minnesota's Gary Parker heads a multidisciplinary team taking an integrated look at Earth-surface dynamics.

    CREDITS: (TOP TO BOTTOM) NCED; GRAHAM RAMSAY

    Each of the new centers-to-be carried out a similar drill this spring as NSF tried to make sure that its money would be well spent. And the Minnesota partnership had no shortage of ideas about how to use the anticipated award. The goal, the teams had already decided, was to create “a complete set of tools for interpreting the past and predicting the future evolution of the Earth's surface.” It was a once-in-a-career opportunity to create a new field—surface process science—that could provide new insights into everything from mitigating the impending subsidence of New Orleans and the surrounding Mississippi River delta to explaining erosion on the surface of Mars.

    But NSF officials seemed to want more. “We don't care what you call the science,” Parker said an NSF official had told him. “We want to know what this center will do for people.”

    The next day, James and his NSF colleagues reinforced that message. In addition to a portfolio of world-class research that would benefit society, they said, the center needed a plan to improve science and math in the public schools and the community, broaden participation by underrepresented groups, and strengthen training for undergraduate and graduate students weighing careers in science. No detail was too small to attract NSF's attention, from whether a summer program should serve children in foster homes to the allocation of graduate students.

    The elements of the proposed center had remained surprisingly fluid throughout the review process. NCED began as a joint effort of Minnesota scientists—in particular, Parker and his co-directors, sedimentary geologist Christopher Paola and SAFL's director, Efi Foufoula-Georgiou—and colleagues from the University of California, Berkeley; Princeton University; and the Science Museum of Minneapolis, which offered land adjacent to its new $100 million facility in downtown St. Paul for outdoor exhibits on the changing Earth.

    David Mohrig, a sedimentary geologist at the Massachusetts Institute of Technology, was added to the mix after his work on submarine channels seemed a good fit with planned studies of terrestrial changes. And a suggestion from NSF to link up with a minority institution led the team to Fond du Lac Tribal and Community College, 2 hours north of the Twin Cities in Cloquet. Fond du Lac had just received a $2.5 million NSF grant to strengthen its programs, and the college already runs several environmental monitoring projects.

    NSF officials also thought that the center would benefit from studies of how humans change the landscape, and the foundation added money to the center's proposed budget for that purpose. The first step is likely to be a workshop, at which social scientists will be asked to draw up a research agenda. “Even though we didn't include a social science component in the original proposal, we have gotten quite excited about the possibilities,” says Paola.

    The door swings both ways, however. Midway through the workshop, the group agreed to part company with one proposed collaborator, U.S. Geological Survey hydrologist Dick Iverson of the Cascades Volcano Observatory in Vancouver, Washington. The decision would also mean doing without the observatory's 87-meter-long flume, with a 31-degree slope, that is used to study landslides and other sudden shifts in the landscape. “[Iverson is] very protective of his debris-flow facility,” Parker told the group, “and he's clearly uncomfortable with the idea of attaching his name to a paper that might contain concepts and ideas with which he strongly disagrees.”

    Iverson, who did not attend the workshop, said later that his decision to bow out was based mostly on concern that “I couldn't serve two masters. … The center is focused on the big picture, while we have a mission to develop new tools for hazard prevention.”

    One unexpected conference guest didn't appear until just after the full NSF delegation was seated. That's when Berkeley field ecologist Mary Power handed Parker a gift-wrapped box. Inside was a toy pig. The audience broke into applause as the pig, hanging on a cord attached to a light fixture, floated about the room on its battery-powered wings. “The pig can fly,” Parker proclaimed. The unspoken message to NSF was clear: And so can NCED.

  14. RESEARCH CENTERS

    The Geometry Center, 1991-1998. RIP.

    1. Jeffrey Mervis

    Was the cause professional jealousy, loss of key personnel, shifting priorities, the lack of community support, or a breakdown in communication with National Science Foundation (NSF) managers? Observers disagree on what killed the first Science and Technology Center (STC) at the University of Minnesota, created in 1991. But its demise provides a cautionary tale. Only one other center in the program's history has been terminated early, and that death was due to technical problems in trying to apply magnetic resonance technology to basic biology.

    From the start, the Geometry Center faced long odds. Even its mission was controversial. “Its attempt to introduce computer graphics and visualization into pure mathematics and geometry was novel, and the community wasn't very welcoming,” says Princeton University's David Dobkin, who chaired the center's governing board. “It wanted to change the field, but people weren't ready for that.”

    The center's enviable budget of $2 million a year was another red flag to a community in which investigators typically receive $25,000 grants and work alone or with a single student. “We were immediately a target for people who said we didn't deserve all that money,” recalls its former director, Richard McGehee. Shifting leadership was also an issue: McGehee took over after 2 years from Albert Marden, the center's founding director, although McGehee and others say the center's intellectual father was William Thurston, a Fields Medal winner who had little involvement in the center once it was established.

    NSF's first site visit team identified numerous problems—in particular, too narrow a scientific focus. That led to an abrupt shift toward developing educational products for students and teachers, as well as exploring possible applications for the fledgling World Wide Web. “We had one of the first 100 Web sites,” recalls McGehee, “but there wasn't any way to put math on the Web.”

    Given those concerns, NSF boosted its oversight, holding a second site visit in 1994 and a third in late 1995. The scrutiny left investigators wondering what was expected of them. “They were like a deer caught in the headlights,” recalls applied mathematician Rosemary Chang, a member of the third site visit team, which recommended renewed funding. But the reviewers also pleaded with NSF to delay the next site visit “to give them a chance to get back on track.”

    Instead, Donald Lewis, who had recently arrived as head of NSF's mathematics division, impaneled three outside reviewers in June 1996 to tell him if the center was likely to pass its scheduled 6-year review in 1997. The team concluded that the center would fail and recommended that it be phased out. “We didn't feel that the money was providing the same yield” as other math centers funded outside the STC program, says Richard Herman, provost of the University of Illinois, Urbana-Champaign, and one of the reviewers.

    Robert Miner, a former researcher at the center who is now with Design Science Inc. of Long Beach, California, thinks the center “was right all along” to focus on Web-based communications. His company recently struck a deal to incorporate the center's software, now called Equation Editor, into Microsoft Word. And McGehee says that the center's involvement in education and training “are exactly what the division is now emphasizing.”

    Lewis, now back at the University of Michigan, Ann Arbor, doesn't dispute those accomplishments. But he believes that the center never lived up to its promise. “I didn't see any progress,” he says, “so I pulled the plug.”

  15. Debate Surges Over the Origins of Genomic Defects in Cancer

    1. Jean Marx

    Cancer cells are chock-full of mutations and chromosomal abnormalities, but researchers can't agree on when and how they come into play

    Inside a cancer cell is a veritable gallery of horrors: inactivated genes, extra or missing chromosomes, and a host of other genetic abnormalities, large and small. Most researchers agree that the great majority of cancers are triggered by accumulation of several such changes—some wiping out tumor suppressor genes and others activating growth-promoting oncogenes, for instance. But there's no agreement on how incipient cancer cells acquire so many mutations and chromosomal abnormalities. Increasingly, the debate is focusing on the role of genomic instability: some kind of inherent defect that makes the cancer cell genome more susceptible than a normal cell's to developing the various abnormalities.

    Some researchers maintain that genomic instability is needed early on to set cells on the path to cancer. Even within this camp, however, the members disagree about what kind of genomic instability comes into play. Some think that cancer cells have what is sometimes called a “mutator phenotype” that makes them more prone to acquiring small mutations, such as simple base changes or insertions or deletions of short DNA segments, which can cause trouble if they happen to strike oncogenes or tumor suppressors. Others within the genome-instability camp think that much bigger changes are needed, such as gains or losses of whole chromosomes, or shuffling of large segments either within or between chromosomes.

    On the opposite side are researchers who assert that cancer cells start out no more prone to genomic instability than normal cells. The cells mutate at a normal rate, they say, but because they divide more often than normal cells do, they have more opportunities to accumulate mutations. At most, these researchers maintain, genomic instability might arise late in the development of a tumor and might contribute to its ability to spread in the body. But it's not necessary for a cancer to occur.

    “The field, even though it's been going on for a while, isn't mature yet. There are a lot of divergent views,” says Garth Anderson of Roswell Park Cancer Institute in Buffalo, New York. But that leaves a large gap in researchers' picture of carcinogenesis.

    “If you want to understand cancer, you need to know the answers” to the many questions about the role genome instability plays in cancer, says Bert Vogelstein of Johns Hopkins University School of Medicine in Baltimore, whose own lab's work supports the view that cancer cells are especially susceptible to large chromosomal changes. Less clear is whether having those answers will aid in the development of better cancer therapies, although researchers favoring the different views can come up with various scenarios where it might. “That's speculative,” Vogelstein says. “But so much of what has been learned about cancer has been so surprising that these ideas shouldn't be dismissed.”

    Mutations great and small

    Researchers have been speculating that genomic instability might be involved in cancer development for roughly a century. The earliest work on genomic aberrations in cancer cells focused on big changes easily detected by peering at cells through a microscope. In 1914, for example, German biologist Theodor Boveri postulated that cancer cells might be aneuploid: having an abnormal chromosome number. They might possess extra copies of some chromosomes while lacking others altogether. That suggestion has stood the test of time, particularly for the common tumors of the colon, lung, and breast. “It's extremely difficult to find a cancer cell with a normal karyotype [chromosome composition],” Vogelstein says. Conversely, he adds, “you can't find a normal cell with an abnormal karyotype.”

    The idea that increased genomic instability might lead to cancer by producing a much less extensive type of DNA damage—mutations of individual genes—got a big boost in the mid-1990s. Several teams, including Vogelstein's, showed that a hereditary form of colon cancer known as HNPCC (for hereditary nonpolyposis colon cancer) is caused by mutations in so-called mismatch-repair genes needed to repair a certain kind of DNA damage that occurs when the DNA is copied incorrectly prior to cell division. The inability to repair this damage leaves the cells vulnerable to other mutations that can, by hitting genes involved in growth control, lead to HNPCC and related cancers. Other hereditary cancers, including breast cancer and some skin cancers, have also been linked to defects in the cell's DNA repair machinery (see also the Report on p. 606 and the Perspective on p. 534).

    Scrambled.

    In contrast to the chromosome complement of a normal cell (top), that of a cancer cell (bottom) is highly abnormal, with extra copies of some chromosomes, lost copies of others, and chromosomes made of fused pieces of other chromosomes.

    CREDIT: RUHONG LI/UNIVERSITY OF CALIFORNIA, BERKELEY

    But the particular mutations that cause HNPCC and breast cancer are relatively minor causes of their respective cancers, each accounting for approximately 5% of the total cases. A big question still remains about whether various types of genomic instability contribute to the much more common forms of these cancers, which are apparently not inherited, or to other forms of cancer. “Since tumor cells have these [genome] changes, it's reasonable to assume that there is genetic instability behind it, but that's not necessarily so,” says Felix Mitelman of University Hospital in Lund, Sweden, who has been cataloging the larger chromosomal abnormalities associated with human cancers.

    To determine whether cancer cells are in fact genetically unstable, researchers have to know the rate at which DNA alterations accumulate—generally expressed as the number of mutations per cell division—in both normal and cancer cells. But as Mitelman points out, we “don't know how much instability goes on normally.” As a result, researchers have a hard time establishing whether the mutation rate is elevated in cancer cells or not.

    That leaves those who favor the idea that genomic instability is a cause of cancer—as well as those who don't—plenty of room to argue their cases. One long-time proponent of the idea is Lawrence Loeb of the University of Washington, Seattle. Some 25 years ago, he calculated that the normal mutation rate, estimated at 2 × 10^−7 mutations per gene per cell division, isn't fast enough for cancer cells to acquire the mutations needed to become malignant and grow out of control. That led Loeb to suggest that cancer cells acquire some kind of early mutation, possibly in their DNA-repair machinery or in the polymerase enzymes that copy their DNA, that gives the cells a mutator phenotype predisposing them to the accumulation of additional mutations.

    But those calculations have been questioned by other researchers, including Ian Tomlinson and Peter Sasieni of the Imperial Cancer Research Fund in London and Walter Bodmer of John Radcliffe Hospital in Oxford, both in the United Kingdom. In an analysis reported in the March issue of the American Journal of Pathology, they started with an assumption that it takes many more cell divisions to produce a cancerous tumor than Loeb and colleagues had estimated. Given that, the researchers concluded that even normal mutation rates could account for very high numbers of mutations in the cells. “My view is that there has been a bit too much emphasis on genomic instability as a driving force [in cancer development],” Tomlinson says.
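
    The disagreement is ultimately arithmetic: the expected number of mutations scales linearly with the assumed number of cell divisions. The back-of-the-envelope sketch below illustrates this; only the 2 × 10^−7 rate comes from the text, while the division counts and the gene tally are invented for illustration:

        # How the assumed number of cell divisions drives the expected
        # mutation count. The rate is the one quoted above; the other
        # figures are placeholders, not estimates from either paper.
        MUTATION_RATE = 2e-7   # mutations per gene per cell division
        GENES = 30000          # rough size of the human gene complement

        for divisions in (100, 1000, 10000):
            per_gene = MUTATION_RATE * divisions   # expected hits per gene
            per_cell = per_gene * GENES            # expected mutated genes
            print(f"{divisions:>6} divisions: {per_gene:.1e} per gene, "
                  f"~{per_cell:.1f} mutated genes per lineage")

        # With few divisions, chance hits on several specific cancer
        # genes look implausible (Loeb's case for a mutator phenotype);
        # with many more divisions, the same normal rate yields ample
        # mutations (the Tomlinson-Sasieni-Bodmer rejoinder).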

    Tallying the damage

    To assess genomic instability experimentally as opposed to mathematically, researchers have often turned to colon cancers, because they have a known progression from benign polyps and adenomas to full-fledged invasive cancers, and tissue samples from all these stages are available. This work, focusing mainly on nonhereditary cancers, has produced equally discordant results. Complicating matters, the researchers who implicate early genomic instability in cancer and those who don't often look at different types of mutations.

    For example, experiments performed a few years ago by Anderson, Daniel Stoler, also at Roswell Park, and their colleagues point to a high degree of genomic instability in colon cancer cells. The researchers modified a standard technique for amplifying DNA, enabling them to detect insertions and deletions in the genome. When they applied the method to human colon cancer cells, they found that the cells contain roughly 11,000 such mutations, most of them small. “When you actively look within the chromosomes, you can pick up a lot of [mutational] events,” Anderson says. Cells from colon polyps had almost as many of these mutations. In contrast, Anderson says, “we see none of this variation” in normal colon cells. These results indicate, he concludes, that colon cancer cells acquire genetic instability very early in cancer development, well before they become malignant.

    Vogelstein and his colleagues disagree, at least with regard to such small mutations. In the 5 March issue of the Proceedings of the National Academy of Sciences, they report sequencing more than 3 million bases of DNA, including coding sequences from 470 genes, from human colon cancer cells that were grown either in culture or in mice. After eliminating harmless, natural DNA variations, they found that the sequences carried only three true mutations, two involving base changes and the other a 14-base pair deletion. Based on this finding plus the estimated number of cell divisions needed to form a tumor, the Vogelstein team concluded that colon cancer cells have no more of these small mutations than would be produced by normal mutation rates.

    Loeb counters that the Johns Hopkins workers seriously underestimated the actual number of mutations in cancer cells because their analytic technique only detects mutations that were “clonal,” or present in all the cells analyzed. Vogelstein concedes that their sequencing methods wouldn't pick up mutations present in a small minority of cells, but he contends that the issue is not the total number of mutations in all the cells of a tumor but the rate at which they appeared, which the team took into account. Other researchers who used different methods have come to similar conclusions for colon and other cancers, Vogelstein adds.

    Many ways to go wrong. The cell cycle provides numerous opportunities for defects to arise that can lead to mistakes in chromosome sorting. DNA-repair machinery can also malfunction and lead to genome instability. Genes linked to such defects are indicated in red.

    ADAPTED FROM CHRISTOPH LENGAUER

    Major chromosomal upheavals

    Mutation rate aside, other researchers are trying to see whether the cells are more prone to higher order derangements—at the chromosome level. In the 25 years or so since researchers began discovering first oncogenes and then tumor-suppressor genes, aneuploidy has generally taken a back seat to the smaller mutations affecting these genes. Many researchers still think that aneuploidy develops late—an effect, not a cause, of cancer development. Virologist Peter Duesberg of the University of California, Berkeley, and the University of Heidelberg at Mannheim, Germany, is a notable exception.

    Duesberg, who is perhaps best known for his contrarian views on AIDS—he maintains that it's not caused by HIV (Science, 9 December 1994, p. 1642)—is also something of an iconoclast about genes and cancer. “The prevailing gene mutation hypothesis is, to say the least, defective,” he says. Among other criticisms, Duesberg points out that some known carcinogens, including asbestos and arsenic, do not cause mutations. In addition, he notes, researchers have had a tough time making human cells cancerous through the introduction of human oncogenes. Although others attribute this to the likelihood that several mutations are needed to produce human cancers, Duesberg concluded, he says, that “something else has to happen [to make cells cancerous], and there's a good chance that it's aneuploidy.” Gain or loss of whole chromosomes can easily upset the checks and balances that maintain normal cell growth patterns, he points out.

    In support of his idea, Duesberg cites experiments in which he and his colleagues exposed cells in culture to chemical carcinogens. The treated cells became aneuploid long before they began showing signs of becoming cancerous. Duesberg's model also gains support from recent results from Vogelstein and his colleagues. Although the researchers didn't see an increased rate of small mutations in colon cancer cells, Christoph Lengauer of the Johns Hopkins group found that the most common type of colon cancer cells is genomically unstable when it comes to gross chromosomal changes.

    In these experiments, Lengauer compared the accumulation of gross chromosomal changes in two types of colon cancer cells: those with a mismatch repair defect like that in HNPCC and those without. Unlike most colon cancer cells, HNPCC cells have normal or nearly normal chromosome numbers, and the mismatch-repair-defective cells didn't acquire many chromosome-level oddities over the course of many cell divisions.

    In contrast, cells that had been spurred to become cancerous by some other means showed a high rate of chromosome loss or gain. Such cells acquired chromosomal changes 10 to 100 times faster than the mismatch-repair-defective cells. This shows that the usual type of colon cancer cell has an unstable genome, Lengauer says. Vogelstein adds that the experiments also show that aneuploidy is not just an aftereffect of cells becoming cancerous. If it were, colon cancer cells with the mismatch-repair defect “should also have a high rate of the chromosomal abnormalities,” Vogelstein says.

    In more recent work published in the February 2001 issue of Cancer Research, Lengauer, Vogelstein, and their colleagues detected major chromosome abnormalities even in very small, precancerous colon adenomas removed from patients not known to have an HNPCC defect. “You may need the instability to ever get to a cancer,” Vogelstein concludes.

    Evidence for the other argument—that chromosomal abnormalities are not needed for cancer development—comes from William Hahn of Harvard Medical School in Boston, Robert Weinberg of the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts, and their colleagues. In the past few years they have shown that, contrary to Duesberg's assertion, a variety of normal human cell types can be made cancerous by introducing the right combination of genes, including the ras oncogene. There's no need to summon up aneuploidy to explain carcinogenesis, they say.

    But Duesberg responds that he and his colleagues have looked at cells transformed by the Weinberg team and found that they are highly aneuploid. That, rather than the specific gene changes, is what makes the cells cancerous, Duesberg maintains. Weinberg responds that when his team grows the cells in culture, they see no sign of aneuploidy or other widespread genomic instability, and he suggests that Duesberg's culture conditions might have been too harsh.

    Other studies have cast further doubt on the centrality of chromosomal abnormalities, and even of the kind of repair defect seen in HNPCC, in early development of the common forms of colon cancers. William Dove of the University of Wisconsin, Madison, and his colleagues used a mutant mouse to track cells destined to become cancerous. The mice have a defect in the APC tumor-suppressor gene that causes them to develop numerous intestinal cancers. In humans, loss or inactivation of this gene causes a hereditary condition called familial adenomatous polyposis with similar symptoms. In work published online on 13 June by the Proceedings of the National Academy of Sciences, Dove and his colleagues looked for both the small mismatch-repair type of defect and larger chromosomal abnormalities in premalignant adenomas from the mutant mice and from humans. They found none, leading them to conclude that colon cancer can develop without either kind of defect, although that does not eliminate the role of mismatch-repair mutations in HNPCC.

    Tomlinson, Bodmer, and their colleagues have also looked for signs of genomic instability in early human tumors and have come to the same conclusion as Dove. “When a tumor starts to grow, genomic instability is not a big factor,” Tomlinson concludes, although it might come into play later.

    Research into how chromosomal derangements might arise is also lending support to the idea of an early role for genomic instability in cancer. Some cancers are associated with defects in the centrosomes, small cellular structures that help form the mitotic spindle and are thus necessary for normal separation of the chromosomes during cell division (Science, 20 April 2001, p. 426). Malfunction of the telomeres, the protective caps on the ends of chromosomes, could contribute as well. For example, if the telomeres are lost, two chromosomes might fuse end to end and be missorted during cell division (see Maser and DePinho Review on p. 565).

    Indeed, researchers estimate that hundreds of genes, many of which control so-called “checkpoints” that keep cells from dividing if the DNA is damaged or there are other problems with the chromosomes, are potential targets for aneuploidy-causing mutations.

    At the moment there is no end in sight to the numerous debates on the role of genomic instability in cancer development. But one thing is certain: Cancer researchers don't have to worry about running out of ideas—or work—as they try to get a better understanding of how cancer arises.
