News this Week

Science  03 Dec 1999:
Vol. 286, Issue 5446, pp. 1822
  1. ASTRONOMY

    Shadow and Shine Offer Glimpses of Otherworldly Jupiters

    1. Mark Sincell*
    *Mark Sincell is a science writer in Houston.

    In a week of announcements colored by rivalry and bruised feelings, astronomers have assembled their sharpest picture yet of planets around other stars. Early last week, one group of observers said it had twice seen a giant exoplanet cross the face of its parent star. The finding confirmed a less certain claim by another group 3 weeks ago (Science, 19 November, p. 1451) and gave a precise fix on the planet's mass, size, and density. Only a small minority of exoplanets are likely to reveal themselves by making a transit across their parent star, however, whereas every exoplanet should reflect light. So astronomers were even more intrigued when another group posted a paper on the Web announcing the discovery of starlight reflected from the Jupiter-sized exoplanet orbiting the star Tau Boötis, affectionately known as Tau Boo.

    The reflected light has yet to be confirmed. But if it is real, it agrees with data from the transits in showing that these exoplanets are gaseous giants like Jupiter rather than dense rocky bodies; the transiting planet's density works out to only about one-third that of water. The transits “establish unquestionably that this planet, and by extension, probably all the extrasolar giant planets detected to date, are like Jupiter in composition and structure,” says planetary scientist William Hubbard of the University of Arizona, Tucson.

    Before this flurry of observations, no one had ever seen an exoplanet. Roughly 28 are known—six were announced just last week—but in every case their presence was inferred from the wobbling of their parent stars induced by the gravitational tug of the orbiting planet. Although virtually every astronomer accepts the wobbles as good evidence of unseen planets, the nature of these planets has been the subject of heated debate. A dozen exoplanets weighing about as much as Jupiter orbit their parent stars at less than one-tenth of an astronomical unit (1 AU equals Earth's orbital radius), where the star's heat might either burn these “hot Jupiters” to dense rocky cinders or inflate them into extended gas giants. With no way to estimate the radius of these planets, astronomers could not tell.

    But a planet whose orbit crosses the line of sight to its parent star should dim the star's light, by an amount that is a clue to its size. For planets orbiting close to their star, “there is about a 1 in 10 chance of getting the necessary alignment,” estimates Harvard University astronomer Dave Charbonneau. Charbonneau and Tim Brown of the High Altitude Observatory in Boulder, Colorado, got lucky.
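
    For a sense of where that 1-in-10 figure comes from: for a randomly oriented circular orbit, the chance of a transit is roughly the stellar radius divided by the orbital radius. The Sun-sized star and 0.05-AU orbit in this sketch are illustrative assumptions, not values quoted in the article.

    ```python
    # Rough geometric transit probability for a close-in "hot Jupiter".
    # A randomly inclined circular orbit transits with probability ~ R_star / a.
    # The Sun-sized star and 0.05 AU orbit below are illustrative assumptions.

    R_SUN_KM = 6.96e5   # solar radius in kilometers
    AU_KM = 1.496e8     # astronomical unit in kilometers

    def transit_probability(r_star_km: float, a_km: float) -> float:
        """Chance that a randomly inclined circular orbit crosses the stellar disk."""
        return r_star_km / a_km

    print(f"transit probability ~ {transit_probability(R_SUN_KM, 0.05 * AU_KM):.2f}")
    # ~0.09, i.e. roughly 1 chance in 10
    ```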

    Observations in 1997 and 1998 had uncovered a slight wobble in the star HD209458 that suggested the presence of a planet orbiting the star once every 3.5 days. Brown and Charbonneau calculated that if the planet's orbit carried it in front of the star, it would transit on the nights of 8 and 15 September. (The passage on 11 September would take place in daylight.) The 1% dimming produced by a planet the size of Jupiter should be relatively easy to spot with a small telescope, they thought. “We used a 4-inch telescope with a CCD [charge-coupled device] camera that Tim Brown literally built in his garage,” says Charbonneau.
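
    The ephemeris arithmetic behind those predicted nights is simple: successive transits recur one orbital period apart. The reference mid-transit time in the sketch below is hypothetical; the real epoch came from the radial-velocity solution.

    ```python
    # Predicting transit windows from a 3.5-day orbital period.
    # T0 is a hypothetical reference mid-transit time chosen for illustration;
    # the actual epoch was derived from the radial-velocity (wobble) data.
    from datetime import datetime, timedelta

    PERIOD = timedelta(days=3.5)
    T0 = datetime(1999, 9, 8, 4, 0)   # assumed mid-transit (UT), illustrative only

    def upcoming_transits(t0, period, n):
        """Return the next n predicted mid-transit times starting at t0."""
        return [t0 + k * period for k in range(n)]

    for t in upcoming_transits(T0, PERIOD, 3):
        print(t)
    # Successive events land 3.5 days apart, so only alternate ones fall at night
    # for a given observing site.
    ```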

    Brown, Charbonneau, and their collaborators found that on the predicted nights, the luminosity of HD209458 dipped sharply, by slightly more than 1%, just when the planet should have been passing in front of the star, and remained nearly constant for several hours before climbing back up as the planet passed beyond the edge of the star. From the duration of the dimming, they could work out how the orbit was oriented relative to the line of sight—and thus how massive the planet had to be to produce the observed wobble of HD209458: 0.63 times the mass of Jupiter. And from the amount of dimming, they pegged its radius at 1.27 times that of Jupiter.
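
    As a rough check of how those numbers hang together, the reported mass and radius can be folded into a transit depth and a bulk density. The Sun-sized stellar radius assumed below is not given in the article; a slightly larger star brings the depth down to the observed level.

    ```python
    # Back-of-the-envelope check of the transit numbers, assuming a roughly
    # Sun-sized star for HD209458 (the stellar radius is not quoted in the article).
    import math

    R_SUN = 6.96e8       # m
    R_JUP = 7.15e7       # m
    M_JUP = 1.90e27      # kg
    RHO_WATER = 1000.0   # kg/m^3

    r_planet = 1.27 * R_JUP
    m_planet = 0.63 * M_JUP

    depth = (r_planet / R_SUN) ** 2                         # fractional dimming
    density = m_planet / (4 / 3 * math.pi * r_planet ** 3)  # bulk density

    print(f"transit depth ~ {depth * 100:.1f}%")                  # ~1.7% for a Sun-sized star
    print(f"bulk density  ~ {density / RHO_WATER:.2f} x water")   # ~0.38, about one-third of water
    ```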

    That's somewhat smaller than the 1.6 times Jupiter's diameter that Geoff Marcy of the University of California, Berkeley, Greg Henry of Tennessee State University in Nashville, and their colleagues had calculated for the same planet based on the partial transit they observed on 7 November. But the scientific disagreement was a minor element in the furor that erupted on the Internet when Charbonneau and Brown announced their findings on 23 November, after their paper had been refereed and accepted by Astrophysical Journal Letters. Along with their announcement, the astronomers circulated an e-mail message complaining about Marcy and his collaborators. Charbonneau says he and Brown believed that group had unfairly scooped their discovery and in the process violated scientific ethical standards by announcing their unrefereed results in a press release more than 10 days earlier.

    Marcy acknowledges that when he approved the press release, he was aware that Brown and Charbonneau had searched HD209458 for transits in August and September, although he did not know what they had found. But after their complaints, he quickly admitted his error in proceeding with the release without waiting. “I believe this constituted a breach of collegial standards on my part and an ethical mistake,” Marcy said in another e-mail. Over the Thanksgiving weekend, Brown and Charbonneau accepted Marcy's apology and declared the matter closed.

    The week's second major planet discovery also had its share of controversy, although in this case it was purely scientific. A team led by Andrew Cameron of the University of St. Andrews in Scotland reported that in observations at the 4.2-meter William Herschel Telescope in the Canary Islands, they detected a glimmer of starlight reflected by the planet thought to be orbiting Tau Boo. In their paper, released last week on the Los Alamos National Laboratory's preprint server (xxx.lanl.gov; see astro-ph/9911314), Cameron's team reported that the amount of light reflected by the planet indicates that it must be about twice the size of Jupiter. They also teased from the signal the planet's orbital inclination, and thus its mass: eight times that of Jupiter. (Their posting indicated that the paper was under embargo by Nature, where it had been accepted for publication, but the embargo did not last long; stories about the find appeared on several Web sites, including that of the British Broadcasting Corp.)

    Charbonneau, for one, was surprised to hear the news. Several months earlier, he and his collaborators had observed Tau Boo at the 10-meter Keck Telescope on Mauna Kea in Hawaii and failed to see any reflected light. “Something just doesn't jive between our two results,” says Lick Observatory astronomer Steven Vogt, a member of Charbonneau's team. But no one is crying foul in this controversy, mostly because identifying reflected light from the glare of a star is so challenging that success or failure can turn on the most minute of assumptions.

    The object of the search is a faint ghost of the parent star's spectrum that appears to jiggle back and forth, from longer to shorter wavelengths, in time with the star's orbital period—3.3 days, in the case of Tau Boo. The ghost is the small portion of the star's light reflected from the planet, and the jiggle is the result of the Doppler shift—the motion-induced wavelength change that makes the pitch of a car horn rise and fall as the car approaches and then recedes. Why only Cameron's team saw this telltale ghost, no one is quite sure.
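
    The expected size of that jiggle follows from Kepler's third law: the orbital period fixes the orbit's radius and hence the planet's speed, and the speed sets the Doppler shift. The roughly solar mass assumed for Tau Boo below is not given in the article.

    ```python
    # Size of the Doppler "jiggle" of the reflected spectrum for a 3.3-day orbit.
    # The stellar mass is an assumption (roughly solar); the article gives only the period.
    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # kg
    C = 2.998e8          # speed of light, m/s

    period = 3.3 * 86400.0   # orbital period, s
    m_star = 1.0 * M_SUN     # assumed stellar mass

    a = (G * m_star * period ** 2 / (4 * math.pi ** 2)) ** (1 / 3)  # Kepler's third law
    v_orbital = 2 * math.pi * a / period                            # planet's orbital speed

    print(f"orbital speed ~ {v_orbital / 1e3:.0f} km/s")            # ~140 km/s
    print(f"swing at 500 nm ~ +/- {v_orbital / C * 500:.2f} nm")    # ~0.24 nm back and forth
    ```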

    “Charbonneau did everything correctly, but Cameron's result is pretty compelling,” says University of California, Berkeley, astronomer Debra Fischer, the leader of the Lick Observatory planet search team. “It is a very suggestive result,” agrees Charbonneau, “but by no means conclusive.” Charbonneau says he can't tell from the paper exactly how Cameron's team analyzed its data, “and it is really the nitty-gritty that sets the level of confidence.”

    Cameron declined interview requests, citing Nature's embargo policy. But even Charbonneau is confident that conclusive evidence for reflected light from the Tau Boo planet will be found shortly. “We just need more telescope time,” he says.

  2. MOLECULAR BIOLOGY

    Member States Buoy Up Beleaguered EMBL

    1. Michael Balter

    A financial crisis facing the European Molecular Biology Laboratory (EMBL) in Heidelberg, Germany, and one of its key outstations has edged closer to resolution. Last week, EMBL's governing council, made up of delegates from the lab's 16 member countries, agreed in principle to meet the costs of a multimillion-dollar pay claim by staff members dating back to 1995. The council also tentatively resolved to cover a shortfall next year in the infrastructure budget of the European Bioinformatics Institute (EBI) near Cambridge, U.K., caused by a recent decision by the European Union to stop funding its share of the infrastructure costs for the EBI and several other European research facilities (Science, 5 November, p. 1058). Britain's Medical Research Council (MRC) has also come to EBI's aid with an offer to loan the center stopgap funds.

    EMBL and EBI are far from being home free, however. Last week's resolutions—which will not be implemented before the council's next meeting in March 2000, so that delegates can see if their own governments are willing to allot the additional EMBL funding needed—leave some key issues unresolved. EMBL must make the retroactive payments because the administrative tribunal of the Geneva-based International Labour Organization (ILO) recently ruled that the lab had violated its own staff guidelines by setting 1995 salaries too low. But the ILO judgment leaves ambiguous exactly how much money is due in back payments. One interpretation would require EMBL to boost 1995 salary levels by an average of 8%. When back pay and the 10% annual interest awarded by the tribunal are factored in, this would amount to an immediate payment equivalent to a quarter of EMBL's annual core operating budget of about $43 million. (Cases concerning 1996 and 1997 salary levels are still pending before the ILO and could cost the lab even more.)

    The other interpretation, which EMBL's council and management are fervently hoping will win out, would require an average boost in 1995 levels of only 2.1%. At its meeting, the council agreed to make funds available to cover this less costly scenario while asking the ILO to clarify its ruling, a process that will take at least 6 months. Cell biologist Julio Celis, chair of the council and head of the Center for Human Genome Research in Århus, Denmark, told Science that the council's main concern was “to keep the morale of the staff high,” but it is at this point only prepared to pay the 2.1% figure and has directed EMBL director-general Fotis Kafatos to prepare a contingency plan for its March meeting in the event the ILO tribunal says it must pay 8%.

    Concerns over the impact of such payouts on EMBL's scientific program have prompted many staff members to accept the 2.1% figure. “This would provide a fair solution to the problem,” says molecular biologist Matthias Hentze, who was one of the original complainants before the ILO. On the other hand, Hentze says, he understands the dilemma of many EMBL staffers—particularly nonscientific workers—who are trying to cope with Heidelberg's high cost of living on relatively low salaries. But even if all of the present staff could be persuaded to accept a compromise, any one of a large number of former EMBL employees could still challenge the deal before the ILO. “The basic principle here is the rule of law,” says one former EMBL scientist who asked not to be identified. “EMBL should honor its contract” with the staff.

    But at the moment, a slim majority of current staff is in favor of compromising. In a vote conducted by EMBL's staff association last month, 54% of the staff said they would be willing to accept the 2.1% figure, while 46% insisted upon the 8% interpretation. In a 23 November letter to the council delegates, which Science has obtained, the staff association warned that despite this slim majority in favor of the less costly interpretation, “individual members of staff would continue the case” by appealing to the ILO, and went on to urge the council to “consider implementing the 8% salary adjustment.” Such an outcome “will be a substantial financial challenge to the laboratory,” Kafatos told Science. But he says that he will argue “forcefully” that EMBL's scientific program must go ahead despite the costs. “The focus has to be on science.”

    That scientific program will be put under more pressure next year by the need to make up for the withdrawal of the European Union as a funding partner for EBI. Until the council can get government approval to increase its funding to EBI next March, the MRC has offered to loan EMBL enough money to keep the center running. “EBI is not out of the woods yet,” says Graham Cameron, co-head of the institute. Cameron adds that although the council “has expressed a clear intention to insure that the 2000 budget will be up to the 1999 level … our [$8.3 million annual] budget is still less than half that of our peers in the United States”—namely the National Center for Biotechnology Information (NCBI) in Bethesda, Maryland, whose yearly budget is about $19 million. Catching up with the NCBI is a key component of EMBL's 5-year plan for 2001–05, a draft of which Kafatos presented at the council meeting.

    Despite these uncertainties, many EMBL scientists expressed satisfaction that the council had acted quickly to deal with the crisis. “The council has taken the high road, and that is very good for EMBL,” Cameron says.

  3. BIOMEDICINE

    Cholesterol-Lowering Drugs May Boost Bones

    1. Gretchen Vogel

    Most drug side effects are unwanted, but a newly discovered “side effect” of the statins, drugs taken by tens of millions of people to lower their cholesterol levels and presumably their risk of heart disease, may in fact be beneficial. On page 1946, a team led by endocrinologist Greg Mundy of the biotech company OsteoScreen and the University of Texas Health Science Center in San Antonio shows that statins trigger bone growth in tissue culture and in rats and mice. If they have the same effect in humans, statins could be the first drugs able to increase bone growth in patients with osteoporosis, the bone-weakening condition that often afflicts postmenopausal women.

    The observation could be “a real breakthrough” in osteoporosis treatment, says Lawrence Riggs, an endocrinologist at the Mayo Clinic in Rochester, Minnesota. “If you can thicken remaining bone, you could theoretically bring bone mass back to normal” in patients, he says. “We have not had effective treatment for that.” Drugs available today can slow ongoing bone loss but cannot fully repair weakened bones.

    Statins lower blood cholesterol concentrations by blocking an enzyme called HMG-CoA reductase, which the body uses to synthesize the lipid. But there were already hints that the drugs might have broader effects. A meta-analysis published in the journal Circulation last year, for example, showed that people taking the drugs in large clinical trials had lower death rates from all causes, not just heart disease. Even so, Mundy says, finding an effect of statins on bone came as a “total surprise.”

    He and his team had been screening a library of 30,000 natural compounds to find potential bone-strengthening drugs. They tested the molecules in cultured mouse bone cells, looking for any that could increase the production of bone morphogenetic protein-2 (BMP-2), which stimulates bone growth. Only one compound had the desired effect: lovastatin, a molecule derived from a strain of the fungus Aspergillus terreus and sold by Merck in the United States under the brand name Mevacor.

    To find out if lovastatin's ability to stimulate BMP-2 production by cultured cells translated into increased bone formation in live animals, the team injected the drug into the tissue above the skullcap bones of young mice. After injecting the animals three times a day for 5 days, the researchers found that treated bone was nearly 50% larger than that in mice injected with a salt solution.

    Another statin, called simvastatin (trade name Zocor), also had promising effects, this time in female rats whose ovaries had been removed to mimic the hormonal changes of menopause, when many women start to lose bone density. In rats that received oral doses of the statin for 35 days, the leg bones and vertebrae were nearly twice as dense as in rats that received a placebo.

    Mundy and his colleagues don't know how the statins encourage bone growth. But cardiologist James Liao of Brigham and Women's Hospital in Boston, who has studied the molecular effects of statins on the cells that line blood vessels, suggests one possibility. He notes that by blocking HMG-CoA reductase, the statins also block the production of other lipids that attach to signaling proteins in the cell and allow those proteins to function properly. Disrupting these proteins might somehow trigger the cells to make BMP-2, he says.

    It's also far from clear what the findings mean for people who take statins, which can cost hundreds of dollars a year. A few scientists who have conducted clinical trials on statins have searched their databases for signs that the drugs improved bone density. They saw some intriguing hints: Clinical researcher Steven Cummings of the University of California, San Francisco, for example, says patients taking statins seemed to have lower risk for bone fractures. But the numbers were too small to produce statistically significant results.

    Indeed, the doses used to lower cholesterol levels may be too low to have much effect on bone density. Mundy and his colleagues gave their rats doses about 10 times higher than those typically taken by patients. The high doses may be needed, Mundy says, because the statins currently on pharmacy shelves were chosen for their ability to target the liver, the body's main site of cholesterol synthesis, rather than the bones. He and Cummings both think that similar compounds chosen for their ability to target bone would likely be more effective. “My guess is that the statins given for lipid lowering are not necessarily going to be ideal” for treating osteoporosis, Mundy says. But they might point to similar molecules that could encourage bone formation more effectively, he says—perhaps with the side effect of lowering high cholesterol levels.

  4. STEM CELLS

    Rat Spinal Cord Function Partially Restored

    1. Ingrid Wickelgren

    Behind the controversy over research on primordial cells from early human embryos is a dream: using these versatile cells to repair a wide range of injured tissues in adults. Researchers at Washington University in St. Louis have now brought this dream a step closer to reality for the spinal cord.

    In the December issue of Nature Medicine, neurologists Dennis Choi, John McDonald, and their colleagues report that when they injected immature nerve cells derived from mouse embryonic stem cells into rats whose hindlimbs had been paralyzed by blows to their spinal cords, the animals regained some mobility. What's more, because the animals were treated 9 days after they were injured, the results suggest that stem-cell therapies might someday lead to treatments for the hundreds of thousands of patients worldwide living with spinal cord injuries sustained long ago.

    Oswald Steward, a spinal cord researcher at the University of California, Irvine, College of Medicine, calls the work “compelling” and “an obligatory first step toward a transplantation therapy for spinal cord injury” based on embryonic stem cells. Still, he and others caution that no such therapy is anywhere near the clinic. The Washington University researchers do not yet understand how the transplants worked, and until they do, it will be hard to improve upon the results.

    Choi and McDonald started their stem-cell experiments back in 1996, upon hearing that a colleague at Washington University, neurobiologist David Gottlieb, had chemically coaxed mouse embryonic stem cells to become nerve cells in a lab dish. Initially, Choi and McDonald simply wanted to test whether Gottlieb's mouse embryonic stem cells would survive in the rat nervous system, as a first step toward a workable therapy. After Gottlieb coaxed the cells to develop into precursors of nervous tissue, Choi's team injected the cells into the spinal cords of 22 adult rats with 9-day-old spinal cord injuries. Several weeks later, the researchers examined the animals to see what had become of the transplants.

    By using fluorescent antibodies that home in on mouse tissue, the researchers could see that many of the implanted cells had survived and spread throughout the injured spinal cord area. Using antibodies that stick to specific cell types, they also detected clear signs that those cells had matured to form both nerve cells and support cells known as oligodendrocytes and astrocytes. “We're confident that the cells survive and differentiate,” Choi says.

    Meanwhile, the researchers checked the rats for any behavioral benefits of the transplants, not expecting to find anything dramatic. After all, no one had ever seen any improvement in locomotion from an attempt to repair damage to the spinal cord more than 24 hours after an injury. But within a month of performing the transplants, the Washington University team noticed that the rats could lift their rear ends and step awkwardly with their hindlimbs. By contrast, rats that had received sham injections simply dragged their behinds wherever they went. “We didn't believe the behavioral recovery when we first saw it,” McDonald recalls. But after seeing exactly the same result with a second group of rats, the scientists knew it was real.

    Exactly what accounts for the improvement is still unclear, however. One possibility is that the new mouse neurons made functional connections with rat neurons, thus partially restoring the spinal cord's ability to transmit nerve signals between the brain and the rear legs. Another is that the mouse-derived oligodendrocytes rebuilt the insulating myelin sheaths around battered spinal cord nerves, enabling them to conduct impulses again. And a third hypothesis is that the implanted cells simply secreted chemicals that acted on damaged cells in the rat spinal cord, either preventing them from dying or restoring their ability to function.

    Choi's team is now examining the various possibilities so that they can determine how to get better results. “We've got to figure this out,” Choi says. “Otherwise it's a random walk.” They would also like to extend the delay before treatment from 9 days to a month or two, which would be a better test of prospects for fixing human spinal cord injuries that are years to decades old. Nevertheless, Choi is thrilled to have taken this first step. “We're breaking new ground,” he says.

  5. PLANETARY SCIENCE

    Another 'Ocean' for a Jovian Satellite?

    1. Richard A. Kerr

    Oceans seem to be popping up everywhere among the satellites of Jupiter. First it was Europa's 100-kilometer-deep, ice-encrusted ocean, which might even harbor some life; then Ganymede and Callisto's deep waters turned up, buried deeper than Europa's. Observations from the ground and the Galileo spacecraft now suggest that it may be fiery Io's turn. But there are no tantalizing prospects for life in Io's proposed ocean. At something like 2000 K, the ocean seething beneath Io's volcanoes and lava lakes would vaporize the hardiest creature, for this ocean would consist of molten rock.

    If Io's magma ocean is really there, it may be fueling geologic “processes we don't see on Earth and that haven't been seen in billions of years,” notes geophysicist Susan Kieffer of Kieffer & Woo Inc., in Palgrave, Canada. The magma ocean that roiled Earth in the earliest days of the solar system left no geologic record, but Io could be a living example of how an infant planetary body shapes itself.

    When the Voyager spacecraft returned the first closeup images of Io in 1979, planetary scientists learned that it is outrageously active. More recent observations from Earth and now from the Galileo spacecraft have shown just how extreme its volcanism is. Io's huge calderas dribble lava onto the surface at temperatures exceeding 1500 K, whereas the hottest terrestrial lavas today are hundreds of degrees cooler (Science, 17 April 1998, p. 381). Such high temperatures implied compositions with high proportions of magnesium and iron, called ultramafic, which would raise the melting point of the rock. Hot ultramafic lavas were common billions of years ago when Earth itself was hotter but have been scarce since.

    In the October issue of Icarus, planetary geologists Laszlo Keszthelyi and Alfred McEwen of the University of Arizona, Tucson, and Jeffrey Taylor of the University of Hawaii, Honolulu, consider what Io's surface might be saying about the satellite's interior. Jupiter's gravity kneads Io, generating heat throughout its interior. Keszthelyi, McEwen, and Taylor calculated that if Io is solid down to its liquid iron core, as Earth is today, Io should have thoroughly extracted silica-rich magmas from its rock to form a thick, silica-rich crust. In this scheme, the crust would now melt from place to place to produce lavas with a low melting point, just the opposite of what is seen.

    So Keszthelyi and his colleagues assume that Io never managed to extract much silica-enriched magma from its interior. In their preferred model, beneath a 100-kilometer-thick crust built of silica-poor lavas churns an 800-kilometer-thick magma chamber that melts away the bottom of the crust as fast as surface lavas build it. Their calculations suggest that the magma would be heavy with mineral crystals—more so toward the bottom as increasing pressure encourages their growth.

    This mushy magma ocean must be global, the researchers conclude, to feed the volcanic hot spots that seem to be uniformly distributed over Io's surface—Galileo observations have yielded 100 of these so far and counting. Io's mountains, which range up to 10 kilometers high, are also evenly distributed, so they may be blocks of crust tilting as they founder into the magma ocean below. The early Earth or moon may have looked this way, says McEwen, before it cooled enough to solidify.

    “The evidence for globally distributed magma plumbing is very good,” says Galileo project scientist Torrence Johnson of the Jet Propulsion Laboratory in Pasadena, California. “That implies a global source.” But short of dropping seismic stations onto the surface, he says, proof may be hard to come by. Still, McEwen suggests at least two ways Galileo might help. During its last scheduled flyby of Io, made on Thanksgiving Day, the spacecraft recorded magnetic observations that may show whether Io generates a magnetic field. The heat of a magma ocean would frustrate the generation of a magnetic field in Io's molten iron core by erasing the temperature gradient that drives a dynamo. And Io's gravitational signature, which Galileo returned during its close passage, could reveal the unusually low density of a magma ocean. But everyone agrees that the clearest answer would come from an Io orbiter, which could probe for a soft, molten interior by measuring the rhythmic kneading of the satellite by Jupiter. Then a unique, albeit lifeless, ocean might join the club.

  6. CLIMATE CHANGE

    Will the Arctic Ocean Lose All Its Ice?

    1. Richard A. Kerr

    Miners have their canaries to warn of looming dangers, and climate change researchers have their arctic ice. The sea ice floating at the top of the world—enough to cover the United States—is highly sensitive to changes in the air above and the ocean below, and for several years Arctic watchers have been detecting what looked like slow shrinkage—a change they've read as a suggestive sign of global warming. But now they see warnings that their “canary” is in deep trouble and could expire in a matter of decades.

    “Suddenly, all these different, relatively weak indicators [of arctic change] are making a coherent story that looks really intriguing,” says polar climate researcher Douglas Martinson of the Lamont-Doherty Earth Observatory in Palisades, New York. The story, as updated in reports in this issue of Science and the 15 December issue of Geophysical Research Letters, tells of an arctic ice pack that is not only shrinking in area but rapidly thinning as well. The big question now is what's causing the shrinkage: natural polar climate fluctuations or global warming due to increasing levels of greenhouse gases. If it's all natural, the loss of arctic ice should eventually reverse, but if global warming is at fault, the entire ice pack will eventually disappear, with drastic climate implications for the Northern Hemisphere.

    The decline of arctic ice had seemed real enough, though not yet alarming. By combining satellite observations and historical records of ice extent, various groups had found that the area covered by the ice in the summer has been decreasing by about 3% per decade during recent decades. At that rate, notes Martinson, it would take another 350 years for the Arctic Ocean to be ice-free in summer. The shrinking caught everyone's attention but remained tantalizing.

    But polar researcher Ola Johannessen of the Nansen Environmental and Remote Sensing Center in Bergen, Norway, and his colleagues report on page 1937 that the arctic ice is undergoing a much more rapid change. Like all materials above absolute zero, ice emits microwave radiation, whose exact spectrum depends on whether the ice is newly frozen or has thickened over a number of years. By compiling and analyzing satellite observations of these microwave emissions made from 1978 to 1998, Johannessen and his colleagues found that the area of multiyear ice had declined by 7% per decade during the 20-year period—twice the rate at which the total ice area has been shrinking.

    Another change in arctic ice appears to be progressing at an even faster rate, according to the Geophysical Research Letters report. In that paper, polar oceanographers Andrew Rothrock, Yanling Yu, and Gary Maykut of the University of Washington, Seattle, compared two sets of measurements of polar ice thickness taken by U.S. Navy nuclear submarines. The first were made from 1958 to 1976 during military patrols. The second were made in 1993, 1996, and 1997 during the Scientific Ice Expeditions program. Upward-looking acoustic sounders mapped the ice depth the way depth sounders map the sea floor.

    Overall, the Seattle team found, the ice over the deep-water Arctic thinned from an average thickness of about 3.1 meters to about 1.8 meters, or about 15% per decade. That's five times faster than the ice area has been shrinking. What's more, the ice thinned at every one of the 26 sites for which the researchers compared data. Overall, says Rothrock, the arctic ice has lost 40% of its volume in less than 3 decades.
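
    The quoted rates can be checked with simple arithmetic. The sketch below treats the two submarine data sets as roughly three decades apart, an assumption the article does not state precisely.

    ```python
    # Sanity check of the reported arctic ice thinning rates. The two submarine
    # data sets are treated as roughly three decades apart (an assumption).

    OLD_THICKNESS_M = 3.1   # average thickness, 1958-1976 patrols
    NEW_THICKNESS_M = 1.8   # average thickness, 1993-1997 expeditions
    DECADES_APART = 3.0     # assumed separation of the two data sets
    AREA_RATE = 0.03        # ~3% per decade decline in summer ice area

    total_loss = (OLD_THICKNESS_M - NEW_THICKNESS_M) / OLD_THICKNESS_M
    thickness_rate = total_loss / DECADES_APART

    print(f"total thickness loss ~ {total_loss * 100:.0f}%")       # ~42%, the "40% of its volume" figure
    print(f"thinning per decade  ~ {thickness_rate * 100:.0f}%")   # ~14%, i.e. about 15% per decade
    print(f"ratio to area shrinkage ~ {thickness_rate / AREA_RATE:.0f}x")  # ~5 times faster
    ```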

    “The current evidence is pointing to something going on” with arctic ice, says polar researcher John Walsh of the University of Illinois, Urbana-Champaign. If thinning continues at this rate, he notes, “there really are only a few decades before ice thickness reaches zero.” That would convert the Arctic Ocean from a brilliantly white reflector sending 80% of solar energy back into space into a heat collector absorbing 80% of incident sunlight, with effects on ocean and atmospheric circulation extending into mid-latitudes. These could include shifts in storm tracks, says Walsh, with changes in precipitation.

    However, adds Walsh, “there's quite a bit of debate about why” the ice is thinning and therefore whether it will continue to do so. Most fingers are pointing at the Arctic Oscillation (AO), an erratic atmospheric seesaw that alternately raises and lowers atmospheric pressure over the North Pole while lowering and raising it in a ring around the edge of the polar region.

    Through its changes of pressure, the AO can change wind patterns and thus affect ice thickness, notes Rothrock. In its positive phase, to which it shifted abruptly around 1989, the AO pumps more warm air into the Arctic, tends to warm the water entering the Arctic from the North Atlantic, and blows more thick, multiyear ice out of the Arctic—all changes that should thin the ice. Although the AO, like El Niño, is a natural oscillation, some climate modeling has recently suggested that building greenhouse gases may have driven the AO to its current extreme (Science, 9 April, p. 241). “I'm going to wait and see,” says Rothrock. “I would lean toward the view that this is a fairly extreme state [of the AO], and it will likely come back toward more normal conditions.”

    However, on page 1934, climate researcher Konstantin Vinnikov of the University of Maryland, College Park, and his colleagues suggest that increasing levels of greenhouse gases may be the prime mover behind the shrinking arctic sea ice. The Maryland team compared the observed ice loss and the ice loss in two climate models simulating the strengthening greenhouse. The chances that the losses seen since 1953 are just an extreme of a natural cycle and will swing back toward normal are less than 0.1%, they say. Whatever your faith in such model studies, says Walsh, “we've got a region to watch now.”

  7. PALEONTOLOGY

    Fossil Opens Window on Early Animal History

    1. Martin Enserink

    A fossil site in southern China that has captivated paleontologists for a decade keeps yielding new treasures. Only 4 weeks ago, a Chinese-British team reported the oldest known vertebrates at the site, two fishlike creatures that lived 530 million years ago (Science, 5 November, p. 1064). Now, a rival team presents hundreds of astonishingly well-preserved fossils from the same site, which may represent some of the earliest chordates—a broad group that comprises not only vertebrates but also more primitive invertebrates such as sea squirts and lancelets.

    The researchers, led by Junyuan Chen from the Nanjing Institute of Geology and Palaeontology, think this animal, too, may have been an early vertebrate—it has a relatively large brain and perhaps eyes—but there's still some doubt, because they didn't find anything resembling a skull. Either way, however, the new fossils give researchers another eagerly awaited peek at the animals that set the stage for the evolution of the backbone, an important transition in the animal body plan. And other researchers add that the sheer quality of the specimens, reported in this week's issue of Nature, may eclipse last month's findings. “I think this is going to be an icon that we'll see in the textbooks for many years,” says zoologist Nicholas Holland of the University of California, San Diego. “They're almost like a photograph of the anatomy of the animals,” adds paleontologist Philippe Janvier of the Muséum National d'Histoire Naturelle in Paris.

    Both discoveries were made at a site called Chengjiang near the city of Kunming, where 530-million-year-old fine-grained rocks have preserved even soft animal tissue in exquisite detail. After finding a few intriguing specimens in April, Chen's group stepped up excavations. “We knew we had found something very important,” says Chen, “and we started working really hard.” A month later, the group had collected and described 305 specimens, 30 of them complete, of what they have christened Haikouella lanceolata, after the nearby town of Haikou.

    Thanks to the stunning preservation, the researchers could not only discern a heart and a circulatory system in these 3-centimeter fossils, but also some of the hallmarks of chordates, such as a nerve cord and a notochord, a rod of stiff tissue that provides support along the back of the body and is present today in most embryonic vertebrates and adult chordates. Haikouella also has a puffed-up back that seems to contain segmented muscles—another key chordate feature. What's more, the animal seems to have a relatively large brain, and what appear to be two eyes, suggesting that it may be a very early member of the vertebrates—which would put it somewhere on the first steps of the long road to humans. Haikouella also clearly resembles a specimen Chen and colleagues found 4 years ago at the same site, called Yunnanozoon lividum, which also seemed to have a notochord and a nerve cord. Chen considered Yunnanozoon to be an ancient chordate, too, and says that the much better preserved Haikouella now confirms this.

    But others contested Yunnanozoon's right to a place within the chordates and seem likely to be skeptical about Haikouella, too. “I think they're trying to force too much advanced morphology into the animal,” says paleontologist Simon Conway Morris of Cambridge University in the United Kingdom, a co-author of the early vertebrate paper published last month. For one, Conway Morris isn't convinced that the bulging back does indeed contain chordatelike muscles. In his view, Haikouella may be even further down the evolutionary tree than Yunnanozoon—a progenitor to chordates and to other invertebrates such as the echinoderms, which include starfish and sea urchins. If so, the species might have been a kind of living fossil in its own time, offering a glimpse into an earlier phase of animal evolution about which even less is known. “In a paradoxical way it could be more interesting than [Chen's team] indicated,” Conway Morris says.

    But Chen and other researchers reject that idea. “There's no question that these things are chordates,” says Holland. Janvier agrees. “This puts an end to the discussion about Yunnanozoon,” he says. Everyone agrees, however, that more fossils from the Chengjiang site will likely be the key to the definitive history of the chordates. “It's quite astonishing how many new things we find there,” says Conway Morris. “It's inevitable that there's going to be a whole lot of surprises.”

  8. NUCLEAR FUSION

    Europe, Japan Finalizing Reduced ITER Design

    1. Judy Redfearn*
    *Judy Redfearn writes from Bristol, U.K. With reporting by Dennis Normile and David Malakoff.

    Munich, Germany—European and Japanese fusion researchers have drawn up what they hope will be a winning design for a scaled-down version of the International Thermonuclear Experimental Reactor (ITER). Last week ITER scientists described key elements of the smaller and cheaper design at a seminar here for policy-makers, industrialists, and journalists. Details will be revealed next month, in time to influence political decisions to be made starting next summer in Europe and Japan, the two major ITER partners. “We have been preparing for a long time and we are ready,” ITER director Robert Aymar told the meeting attendees.

    ITER began in 1986 as a joint project of Europe, Japan, the United States, and the Soviet Union. But its future was thrown into doubt in July 1998 when concerns about the $6.8 billion price tag led the partners to extend the design phase to July 2001 and to investigate smaller, cheaper alternatives. The United States later withdrew its support, and Russia's economy precludes it from providing more than intellectual and political support.

    Despite those setbacks, a special working group was set up to look for cheaper ways of achieving a “next step” toward a prototype fusion reactor. It considered a series of smaller, cheaper experiments rather than one large one before coming down in favor of a single machine, similar to the original ITER, but with reduced technical objectives. Teams in Garching, Germany, and Nara, Japan, are now putting the finishing touches on the new design, which would have a radius of 6 to 6.5 meters instead of the original 8 meters and a price tag of about $3 billion.

    The reduction in size and cost was achieved largely by compromising on the machine's main scientific objective, ignition in a burning plasma. Fusion reactors such as ITER use magnetic fields to confine a deuterium or deuterium-tritium plasma within a toroidal vessel called a tokamak. When the plasma is heated to temperatures of about 100 million degrees Celsius, nuclei fuse and give off neutrons (whose energy is harvested) and alpha particles, which reheat the plasma. Ignition occurs when alpha particle heating is sufficient to sustain the fusion reaction indefinitely without further input of energy.

    Rather than achieving ignition, the new design will aim for a burning plasma, in which alpha particles provide at least 50% of the plasma heating. It will produce at least 10 times as much energy as it consumes, generating 400 megawatts of power in bursts of 400 seconds rather than the originally specified 1.5 gigawatts in 1000-second bursts. In a burning plasma, “alpha particles become the dominant source of plasma heating and the determinant of plasma behavior,” says Aymar. “These conditions cannot be reached by present machines or by upgrades, nor satisfactorily simulated.”
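
    Translating the quoted figures into per-pulse energies gives a sense of the downsizing; the sketch below also interprets the factor of 10 as fusion power divided by external heating power, a reading the article does not spell out.

    ```python
    # Per-pulse fusion energy of the reduced ITER design versus the original
    # specification, and the external heating implied by a gain of at least 10
    # (interpreting "10 times as much energy as it consumes" as fusion power
    # divided by external heating power).

    ORIGINAL_POWER_W = 1.5e9    # 1.5 gigawatts
    ORIGINAL_PULSE_S = 1000.0
    REDUCED_POWER_W = 400e6     # 400 megawatts
    REDUCED_PULSE_S = 400.0
    GAIN = 10.0

    original_energy_j = ORIGINAL_POWER_W * ORIGINAL_PULSE_S
    reduced_energy_j = REDUCED_POWER_W * REDUCED_PULSE_S
    heating_power_w = REDUCED_POWER_W / GAIN

    print(f"original design: {original_energy_j / 1e9:.0f} GJ per pulse")   # 1500 GJ
    print(f"reduced design:  {reduced_energy_j / 1e9:.0f} GJ per pulse")    # 160 GJ
    print(f"external heating at gain 10: {heating_power_w / 1e6:.0f} MW")   # 40 MW
    ```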

    Now that the design is in place, the ITER team must work quickly to convince the politicians. European funding comes from the European Union's Framework research program, a 5-year cycle that begins a sixth term in 2003. “The first strategy paper on the contents of the sixth Framework program will be issued in about June 2000,” Klaus Pinkau, co-chair of the special working group, told the Munich meeting. Hiroshi Kishimoto, executive director of the Japan Atomic Energy Research Institute (JAERI), says the agency has a similar timeframe to secure funding but details of the financial package must be resolved before any final agreement.

    The other big decision concerns choosing a site for the reactor. “JAERI has a strong interest in [bringing] ITER to Japan,” says Kishimoto, adding that Japan might be willing to pay “more than 50%” of the total project costs. While such an arrangement would ease the financial burden on Europe, several European officials suggested that they are more likely to back a bid by Canada, an associate ITER member. One big reason is that a North American site would appeal to the United States, should it wish to rejoin the project. Nuclear engineer Charles Baker of the University of California, San Diego, who led a U.S. ITER planning team, doesn't see that happening “anytime in the near term.” But he says that a formal decision by Japan or Europe to build the device “might create an opportunity for the U.S. to play a limited and modest role.”

    Taking an optimistic view, Aymar hopes that a site will be chosen in 2001 and a construction agreement signed in 2002. With another 2 years to prepare the site and sign licensing agreements, ITER could come on line around 2013.

  9. MICROSCOPY

    Helium Beam Shows the Gentle, Sensitive Touch

    1. Andrew Watson*
    *Andrew Watson writes from Norwich, U.K.

    A microscope with unprecedented sensitivity, based on a beam of atoms rather than light or electrons, is a step closer to reality thanks to a German collaboration that has coaxed helium atoms into an intense, needle-fine beam. Atoms can play the role of light in a microscope because, according to quantum mechanics, they too exist as waves, albeit with wavelengths thousands of times shorter than those of light. Because the crispness of a microscope image is governed by wavelength, these matter waves, also known as de Broglie waves, could offer very high resolution. And in contrast to energetic probes such as x-rays or electrons, helium atoms bounce lightly off the target surface without damaging it, explains Peter Toennies of the Max Planck Institute for Fluid Dynamics in Göttingen, Germany. “With helium atoms you see the flesh, whereas with all other probe particles you see the bones,” he says.
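
    For a sense of scale, the wavelength of a helium atom in a typical room-temperature supersonic beam can be estimated from the de Broglie relation; the beam speed below is an assumption, since the article quotes no beam parameters.

    ```python
    # De Broglie wavelength of a helium atom in a supersonic beam.
    # The beam speed (~1,800 m/s, typical of a room-temperature helium expansion)
    # is an assumption; the article does not quote beam parameters.

    PLANCK_H = 6.626e-34   # Planck constant, J s
    M_HELIUM = 6.646e-27   # mass of a helium-4 atom, kg

    v_beam = 1.8e3                                 # assumed beam speed, m/s
    wavelength = PLANCK_H / (M_HELIUM * v_beam)    # de Broglie relation: lambda = h / (m v)

    print(f"de Broglie wavelength ~ {wavelength * 1e9:.3f} nm")    # ~0.055 nm
    print(f"ratio to 500 nm light ~ 1/{500e-9 / wavelength:.0f}")  # roughly ten thousand times shorter
    ```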

    An atom-beam microscope would be a unique instrument for examining surface structures nonintrusively, such as watching the buzzing vibrations on the surface of the minute crystals in metals, says Toennies. “Neutral atoms interact with matter in fundamentally different ways from other microscopic agents,” says Jabez McClelland of the National Institute of Standards and Technology in Gaithersburg, Maryland. The Göttingen work “is really significant” as a demonstration of how to manipulate these truly inert atoms, says Jürgen Mlynek of the University of Konstanz in Germany, who led an earlier atom-focusing effort.

    The starting point in the Göttingen experiment is a jet of helium atoms, spurting out of a fine nozzle. To trim down the spread of this jet to a fine pencil beam, the researchers pass the helium atoms through a “skimmer,” a drawn-out glass micropipette with a tip just 1 micrometer across. “Think of it as a funnel, and we shoot the beam in through the narrow end,” says Toennies. Atoms too far off the beam axis are guided away by the curving outer wall of the funnel.

    A meter farther on, a type of lens called a Fresnel zone plate focuses the beam. Whereas conventional lenses bend light to a focus using refraction (the deflection that occurs on entering or leaving a denser material), zone plates rely on diffraction (the spreading of waves emerging from tiny apertures). Beams pass through a set of concentric opaque and clear rings, sized so that light diffracting from neighboring clear rings combines. Wave-peak adds to wave-peak to reinforce the light along the beam axis, focusing the beam, while peaks and troughs meet to cancel out the light just off-axis.

    For the ultrashort wavelength of helium atoms, Günter Schmahl of Göttingen University's Institute of X-ray Physics, who co-leads the collaboration with Toennies, used electron beam lithography to create a zone plate in which the finest rings are just 50 nanometers wide and the entire zone plate is just half a millimeter across. Without a zone plate, the atoms would illuminate an area 400 micrometers across. Using a zone plate, the researchers managed to create a spot just 2 micrometers wide (Physical Review Letters, 22 November, p. 4229)—10 times smaller and 100 million times brighter than in earlier atom-focusing efforts, says Toennies. What's more, unlike earlier efforts, the Göttingen group's atoms are in their lowest energy state, which is crucial for an atom microscope because they scatter from the surface more predictably. “What's new here is the use of ground-state helium atoms, which really have a pure ‘billiard-ball’ interaction with the surface,” says McClelland.
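
    The quoted zone-plate dimensions hang together with the standard relation between a zone plate's outermost zone width, the wavelength, and the focal length; the sketch below reuses the helium wavelength assumed in the earlier estimate.

    ```python
    # Focal length implied by the zone plate's quoted dimensions, using the standard
    # relation for the outermost zone width of a Fresnel zone plate:
    #   delta_r ~ lambda * f / (2 * r_outer)  =>  f ~ 2 * r_outer * delta_r / lambda
    # The helium wavelength is the same assumption as in the previous sketch.

    WAVELENGTH_M = 5.5e-11   # assumed de Broglie wavelength of the beam (~0.055 nm)
    DELTA_R_M = 50e-9        # finest ring width quoted in the article
    R_OUTER_M = 0.25e-3      # zone plate radius (the plate is half a millimeter across)

    focal_length = 2 * R_OUTER_M * DELTA_R_M / WAVELENGTH_M
    print(f"implied focal length ~ {focal_length:.2f} m")   # a few tenths of a meter
    ```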

    Now the Göttingen group is trying to turn its beam into a full-fledged microscope. “The next step now is to detect the particles which have struck the surface within this narrow spot and then been deflected from the surface,” says Toennies. “And that is what we are tooling up to do.”

  10. EPIDEMIOLOGY

    No Meeting of Minds on Childhood Cancer

    1. Jocelyn Kaiser

    Although two U.S. agencies disagree on whether an apparent rise in childhood cancer is real or due to better diagnosis, their dispute may end up aiding the fight against this terrible killer

    When Richard Klausner, director of the National Cancer Institute (NCI), picked up The New York Times one morning 2 years ago, he was thunderstruck. Fueled perhaps by a “growing exposure to new chemicals in the environment,” claimed a front-page article in the 29 September 1997 issue, “the rate of cancer among American children has been rising for decades.” Klausner had assumed that the rate of new childhood cancer cases was stable.

    Klausner huddled with his institute's own experts, who persuaded him that his assumptions were sound—and the article's message, therefore, was off base. The alarming news had originated from a conference earlier that month, sponsored by the Environmental Protection Agency (EPA), on “preventable causes of cancer in children,” an event that Klausner says his office was never consulted on. He picked up the phone and tried to reach EPA Administrator Carol Browner, whom the Times had quoted calling for new research on air and water pollutants and pesticides “and their effects on children,” as well as “new testing guidelines” to confront what she described at the conference as a “dramatic rise in the overall number of kids who get cancer.” “I was concerned about an injudicious description of the trends,” says Klausner, who believed that EPA's “one-sided view” could mislead people into thinking the United States was in the midst of an epidemic of childhood cancers spurred by some environmental scourge. It would be weeks, however, before Browner got back to him.

    In the 2 years since the conference, EPA scientists and outside advisers have co-authored a research plan for childhood cancer that appeared in the journal Environmental Health Perspectives, while the agency itself has begun to tighten its regulations of chemicals to take into account the vulnerability of children to toxic effects (see sidebar on p. 1834). Spurring the agency on have been environmental groups and some scientists concerned that pesticides and other synthetic substances could be driving up childhood cancer rates.

    Although EPA's critics do not necessarily disagree with the agency's focus on reducing risks to children, a debate continues to rage behind the scenes on the numbers underlying those thrusts. NCI sped up the pace of an already-planned review of childhood cancer rates, whose just-published conclusion is that there has been no dramatic rise in cancer among children. The authors attribute an uptick in the 1980s—seized upon by EPA as evidence of a problem—to better methods of detecting and classifying tumors rather than to a phantom environmental menace. “It's an easy and attractive hypothesis, but there is very little evidence that environmental risks are causing the majority of cancers,” says Freda Alexander, a statistician at the University of Edinburgh in the United Kingdom. EPA scientists reject that conclusion. “I've spoken to many experts in environmental cancer and epidemiology about the NCI concept that there really is no increase in childhood cancer. They just don't buy it,” says physician Steve Galson, former science director of EPA's children's health initiative and now in the agency's pesticides office.

    Stalking the young

    If there's one indisputable fact in the debate, it's that too many children still succumb to cancer. Despite huge strides in the last few decades in raising the odds that any particular cancer-stricken child will survive into adulthood, this devastating disease remains the second leading cause of death for children after accidents. Concerns about an accomplice lurking in the environment rose in the late 1980s, when studies pointed to a possible link between childhood leukemia and exposure to electromagnetic fields generated by power lines and home wiring. Overall, childhood cancer rates appeared to be creeping upward, driven by a 35% rise in pediatric brain cancers from 1973 to 1994. “It was really brain cancer that everybody was freaking out about,” says Jim Gurney, an epidemiologist at the University of Minnesota, Minneapolis. But the fresh leads on potential killers in the environment grew stale as study after study came up empty (see sidebar).

    What had been a low-profile debate burst into the public arena in 1997. That September, EPA's new Office of Children's Health Protection sponsored what it billed as the first-ever conference on children's cancer and the environment, where participants would hammer out “a blueprint for childhood cancer research for the next decade.” According to a conference brochure, “the occurrence of new cancer cases continues to rise, and we don't know why. One potential cause is environmental toxins.” Galson says EPA based this statement on data from NCI, as well as work by epidemiologist Les Robison of the University of Minnesota, Minneapolis, who co-authored a report in the journal Cancer in 1996 that found that childhood cancer rates had risen by 1% a year since 1974.

    Some attendees, however, say that although the conference stirred a lot of productive scientific discussion, it was clear where the blueprint was headed from the outset. Activists, parents of cancer victims, and journalists made up a large portion of the 240 participants. The scarcity of scientists in an effort meant to guide a research course made for “a very odd conference,” says attendee Seymour Grufferman, an epidemiologist at the University of Pittsburgh School of Medicine. But if arousing public concern over childhood cancer was a goal, the conference triumphed: It made newspapers coast to coast.

    In the wake of that publicity, Klausner asked NCI epidemiologist Martha Linet, who has tracked childhood cancer rates for 10 years, to explain the data to the National Cancer Advisory Board at its December 1997 meeting. Linet told the group that although health officials had indeed reported an overall rise in childhood cancer incidence since the early 1970s, recent data show that rates for most childhood cancers have been stable since the mid-1980s and that new diagnostic techniques could explain some earlier increases. The presentation left advisory board chair J. Michael Bishop, a Nobel Prize-winning oncogene researcher at the University of California, San Francisco, scratching his head. “How can federal agencies within the same city reach such diametrically opposed conclusions?” he asked. Klausner offered to sum up NCI's findings and disseminate them widely.

    With that in mind, Klausner set in motion an extensive analysis of the data, including the brain cancer results, which NCI pediatric oncologist Malcolm Smith had already begun to examine. The EPA conference “made these data an issue,” says Smith, whose team analyzed the surge in reported brain cancer cases around 1985, when hospitals were switching from computed tomography (CT) scanners to magnetic resonance imaging (MRI) machines as the main tool for finding brain tumors. In the September 1998 issue of the Journal of the NCI (JNCI), Smith argued that the switch to MRI—along with reporting changes in which some slow-growing tumors, previously classified as benign, were now counted as malignant—could explain much of the 35% rise between 1973 and 1994.

    In the meantime, Lynn Ries and colleagues at NCI finished a pediatric cancer monograph they had begun before the EPA conference. The work is an analysis of data from NCI's Surveillance, Epidemiology, and End Results (SEER) program, which tracks cancers in 14% of the U.S. population. Published last month, the monograph reports slight increases since 1975 in some very rare childhood cancers, such as testicular cancer and retinoblastoma. But overall, conclude Ries, Linet, and others in a report published in the June issue of JNCI, there has been “no substantial change in incidence for the major pediatric cancers, and rates have remained relatively stable since the mid-1980s.” The NCI team argues that the increases in the mid-1980s likely “reflected diagnostic improvements or reporting changes … rather than the effects of environmental influences.”

    Several outside experts consulted by Science say the two NCI teams together make a compelling case. “They're both superb” papers, says Susan Preston-Martin, an epidemiologist at the University of Southern California in Los Angeles. Although the findings don't rule out a long-standing mysterious cause of childhood cancer, she says, they show “there's nothing new in the environment that we need to scramble to discover.”

    To Klausner, the case is closed. NCI has issued a series of fact sheets and has brought EPA scientists to Bethesda, Maryland, to allow NCI epidemiologists to explain their methods. “I think they [EPA] fully agree with us,” Klausner says.

    Worlds apart?

    That's hardly the message coming from EPA scientists and colleagues outside the agency who have helped shape its childhood health program. Philip Landrigan of Mount Sinai Medical Center in New York City takes issue with Smith's brain cancer paper in particular. “I'm a pediatrician. I see children with brain cancer. It's inconceivable to me to imagine that 25 years ago we were missing one-third of children with this disease,” Landrigan says. A colleague at Mount Sinai, Clyde Schechter, argues that if MRIs pick up tumors once too small to detect, then the rates should have ebbed after the new technology had flagged all the cancers that would have been caught eventually by the previous technique. Smith counters that some of the nervous system tumors the MRI scans catch neither grow nor cause symptoms readily traced to the tumors—thus they would never have been detected by CT scans, so the rate should not necessarily recede.

    “One could conclude that [NCI] is trying hard to explain away the increased childhood cancer incidence demonstrated by their own data,” says Galson. “The increase has been going on over such a long period of time that it is just stretching the bounds of believability a little bit to say [the rise] is absolutely all the result of these 10 things [new diagnostics, etc.] that have happened and you really don't have to worry about it.” And as for NCI's conclusion that childhood brain cancers are not on the rise: “That's their opinion,” says an official in EPA's Office of Children's Health Protection. (Administrator Browner declined to be interviewed for this article.) Epidemiologist Devra Davis of the World Resources Institute in Washington, D.C., also questions Smith's results, noting that a 1992 Canadian study in which a neurologist did a blind review of hospital records found that in only about 20% of cases did doctors rely on MRI or CT scans to detect tumors.

    Despite their differences in “world view,” as Galson puts it, he and others at both agencies say the conflict has spurred some constructive engagement. This is happening mainly through a children's environmental health task force chaired by Browner and Health and Human Services Secretary Donna Shalala. The panel has compiled a database of ongoing children's health research (www.epa.gov:6710/chehsir/owa/chehsir.page) and is laying plans for a cancer registry that would pool data collected by clinics. By expanding the number of cases far beyond the 14% of U.S. cases now studied by SEER, the registry could greatly increase the statistical power of population studies. The registry is part of EPA's research agenda, which also recommends toxicology tests using young animals, molecular biomarkers to identify susceptible subpopulations, and better exposure measurements.

    To some researchers, these fruits make the scuffle over cancer rates worthwhile. “Children have been ignored and neglected,” asserts University of California, Berkeley, epidemiologist Martyn Smith. It's “great” that EPA and NCI are ratcheting up efforts to understand the causes behind childhood cancers, adds Minnesota's Gurney. There may be no love lost between the two agencies, he says, but “I couldn't be happier.”

  11. EPIDEMIOLOGY

    The Elusive Causes of Childhood Cancer

    1. Jocelyn Kaiser

    Researchers who probe whether environmental hazards cause cancer in children have an advantage over colleagues who study adults: It should be simpler to track what children have been exposed to in their brief lifetimes than to sift through decades of exposures. But making an unequivocal connection between tumor and toxicant has proved to be anything but easy.

    Experts caution that each of the dozens of subtypes of childhood cancers must be grappled with on its own, as each takes root in different cell types at different ages—and thus may spring from a variety of causes. For the most common childhood cancer, acute lymphoblastic leukemia (ALL), the only confirmed risk factor is ionizing radiation—from x-rays of pregnant mothers, for instance, but apparently not from radon. Most experts believe that recent results from the largest ever ALL study in the United States, involving about 2000 cases, have eliminated two suspects: a mother's smoking during pregnancy or electromagnetic fields from power lines. Pesticide studies have been inconsistent: “We've found quite a few suggested associations,” for example with no-pest strips in homes, “but we're underwhelmed by the evidence,” says Jonathan Buckley, an epidemiologist at the University of Southern California (USC) in Los Angeles.

    Researchers haven't given up the hunt, however. Studies have shown that newborns later diagnosed with ALL often carry a particular gene rearrangement, called MLL-AF4. The genetic shuffle is common in infants whose mothers were treated during pregnancy with chemotherapy drugs that inhibit a DNA replication enzyme called topoisomerase II. That has fueled speculation that other chemicals that inhibit topoisomerase—such as benzene breakdown products, certain antibiotics, and flavonoids in foods—might also trigger the mutation.

    Some older children with ALL are born with a mutation in another gene, TEL-AML-1. A fraction of children who do not contract ALL also have this mutation, so study chief Melvyn Greaves of the Institute for Cancer Research in London thinks something in the environment may trigger a mutation in an unidentified gene that, combined with the TEL-AML-1 mutation, leads to cancer. Greaves thinks weakened immune responses in infants may be partly to blame. Childhood ALL, it turns out, is more common in families with higher income levels in developed countries, where children experience fewer infections—challenges that help gird the immune system (Science, 19 June 1992, p. 1633). Greaves speculates that, upon entering school and the attendant milieu of germs, a child with a relatively untested immune system might be no match for a pathogen that damages the DNA of the immune system's white blood cells, causing them to proliferate. The culprit might be a specific virus or a bacterium, or it could be a general response to any number of agents acting on a frail immune system.


    The evidence for either scenario is “equivocal,” says statistician Freda Alexander of the University of Edinburgh in the United Kingdom. Studies in several countries that surveyed parents about their children's infections and immunizations as well as proxies for infections—such as when a child began day care—have not always found that early infections protected against leukemia. On the other hand, U.S. researchers reported in the Journal of the National Cancer Institute in October that breast feeding appears to reduce the risk of childhood leukemia, which supports the immune system idea: Breast feeding is well known to protect against infections, apparently by passing antibodies to infants via the milk. Results from the largest study yet to probe the infections hypothesis, which looked at 1000 ALL cases in the United Kingdom, are due out next year.

    The idea that a toxicant may be to blame for childhood brain cancers also has little solid support. At a workshop at the University of Minnesota, Minneapolis, in July, researchers mainly discussed three possible culprits: N-nitrosopyrrolidine and related compounds in cured meats; polyomaviruses; and folate (a B vitamin) deficiency or defective folate metabolism. Some studies have indicated a twofold higher risk in children whose mothers ate a lot of cured meat during pregnancy, and the notion that hot dogs can cause brain cancer is “one of the most compelling still,” says USC epidemiologist Susan Preston-Martin. Polyomaviruses that are passed from mother to fetus, such as the JC virus, have come under suspicion because they can cause DNA mutations, while infants whose mothers take prenatal vitamins with folic acid—needed to repair and synthesize DNA—may have a lower chance of brain tumors.

    Still, childhood cancer experts say they have been seeking answers in studies ill equipped to provide them. Studies “haven't been able to disentangle exposure that well,” says molecular epidemiologist Federica Perera of Columbia University. Any major new efforts should depart from earlier ones in two key ways, Perera and others say: Instead of relying primarily on parents' memories of foods or chemicals they or their children were exposed to, researchers should collect direct evidence of exposure—for instance, molecular changes that occur when carcinogens latch onto DNA. And they should look for inherited variations in genes that may predispose children to cancer, say, by poorly metabolizing folic acid or carcinogenic phenols found in foods.

  12. EPIDEMIOLOGY

    A Broader Push on Childhood Health

    1. Jocelyn Kaiser

    The Environmental Protection Agency's (EPA's) interest in childhood cancer is part of a new focus on children begun 4 years ago by Administrator Carol Browner and President Clinton, who in 1997 ordered that all new federal safety standards take into account children's health. Although some observers charge that the drive is more politics than science, many researchers say it's an idea whose time has come.

    A large share of the credit for heightening interest in children's health belongs to a report, “Pesticides in the Diets of Infants and Children,” issued by the National Academy of Sciences in 1993. Prepared by a panel chaired by pediatrician Philip Landrigan of Mount Sinai Medical Center in New York City, the report argues that regulatory standards for pesticides and other chemicals may not sufficiently protect children. Babies and young children are more vulnerable because their bodies are developing, it says, and they are exposed to more toxicants relative to their body weight, partly because of behaviors such as crawling and eating more fruit.

    Congress picked up on this theme, passing two laws in 1996 calling on EPA to take into account the risk to children when revising water and food safety regulations. EPA's initiatives include an Office of Children's Health Protection opened in 1997 and funding, with support from the National Institutes of Health, for eight university-based children's health centers that study asthma, exposure to pesticides on farms, and other environmental health issues pertaining to children. Among major regulatory steps in the works, EPA is expected to reduce exposure 10-fold for some pesticides and to work with manufacturers on a testing program for chemicals posing particular risks to children.

    Some observers have harshly criticized these initiatives. The academy report cites no studies showing that children are being harmed, points out Kenneth Chilton of the Center for the Study of American Business at Washington University in St. Louis. He argues that EPA is focusing on “very tiny” environmental risks: “Putting bicycle helmets on children is going to have way more impact.” But EPA has backing from many top scientists. “It's not all politics,” says Bernie Goldstein, director of the Environmental and Occupational Health Sciences Institute in Piscataway, New Jersey. He notes that ferreting out an individual's susceptibility to disease is the wave of the future as scientists exploit genomic findings. Focusing on children, he says, “is in a sense a part of it.”

  13. PLANT PATHOLOGY

    Geminiviruses Emerge as Serious Crop Threat

    1. Anne Simon Moffat

    Geminiviruses, once thought to cause only limited crop damage, are spreading worldwide, thus ratcheting up their potential impact on farmers

    About 15 years ago, when plant scientists first seriously studied geminiviruses, they thought they might have found some new friends. The viruses, so called because they consist of two twinned particles, were known to be pathogens, but their range seemed to be limited, and they looked like promising vehicles to carry genes for desirable new traits into plants. As it happened, though, the geminiviruses lack the enzymes needed for that role, and now, far from being friends, they are emerging as serious enemies that are devastating crops worldwide.

    “In the past, not much attention was given to geminiviruses because they hunkered down in remote places,” says Claude Fauquet, director of the International Laboratory for Tropical Agricultural Biotechnology at the Donald Danforth Plant Science Center in St. Louis. But global transportation networks have aided the spread of the virus and its carrier, the white fly, and new geminivirus strains have emerged thanks to the viruses' penchant for swapping genetic material. “Now, it's not unusual to see destruction of whole crops, such as tomato, cotton, and cassava, with the viruses causing serious plant disease in at least 39 nations,” says Fauquet.

    Year after year in the early 1990s, for example, geminiviruses destroyed up to 95% of the tomato harvest in the Dominican Republic, and in just the 1991–92 growing season in Florida, they caused $140 million in damage to the tomato crop. With the virus continuing to spread and other control measures, such as insecticides, falling short, plant genetic engineers are struggling to create resistant plants. So far they have had no lasting successes, although they do have promising leads.

    Robert Goodman, now at the University of Wisconsin, Madison, and Brian Harrison and colleagues of the Scottish Crop Research Institute in Dundee first described geminiviruses in the mid-1970s, but records from Africa suggest they probably caused diseases of maize and cotton there in the 1930s or earlier. They've also been attacking bean fields in the tropics of the Western Hemisphere for decades. But geminiviruses didn't draw much scientific attention until early in this decade, when serious outbreaks flared up in Africa, India, Pakistan, southern Europe, South and Central America, and elsewhere.

    The viruses also struck in the Caribbean in the early 1990s, and the first serious geminivirus infections appeared in Florida and Texas in the United States at about the same time. Since then, they have been spotted, in 1997, as far north as Virginia, South Carolina, and Tennessee. “The more you look for geminiviruses, the more you find them,” says John Stanley of the John Innes Centre in Norwich, U.K.

    Geminiviruses are versatile, too, infecting everything from monocots such as maize to dicots such as cassava and tomato. The infections can produce leaf mottling that interferes with photosynthesis, decreasing yields of starchy foods such as cassava, and they also disrupt flower and fruit formation in crops such as tomato, pepper, and cotton.

    The transportation networks that are spreading the virus from tropical to temperate regions have also brought different species—there are more than 66 of them—into close contact. This may have set the stage for the viruses to swap DNA, a phenomenon first noted in 1997 by Fauquet and Roger Beachy, also of the Danforth Center. So far, the researchers have identified more than 1000 recombinant strains, in some cases involving species whose DNA sequences indicate that they are not closely related. “This was not expected,” says Fauquet. “It is like having a recombinant between a human and a chimpanzee.”

    And while the virus generates new, virulent strains, measures to control it are falling short. Plowing under plant debris after harvest limits the food supply for the white flies and the virus, but the measure has limited effectiveness, as do insecticides. White flies are developing resistance to commonly used chemicals.

    So researchers are searching for resistant cultivars, created either by classical crossbreeding or through genetic engineering. Crossbreeding has produced more resistant cassava, beans, and tomatoes, although the new resistant varieties have such drawbacks as small fruit or poor taste. And although modern gene transfer techniques are faster and more precise than classical plant breeding, the diverse, changeable viruses have already demonstrated that they can outwit the genetic engineers.

    To create virus-resistant varieties, researchers equip the plants with the genes for certain viral proteins—a modification that, for unknown reasons, protects the plant cells from the same virus. In 1992, in the wake of the devastation of the Florida tomato crop, virologist Jane Polston of the University of Florida, Bradenton, and Ernest Hiebert of the University of Florida, Gainesville, put a gene for geminivirus replicase—the enzyme that copies the viral DNA—into tomatoes. The resulting plants had good horticultural qualities and were resistant to the virus, but before the new tomato variety could be commercialized, a new gemini variant that had no problem infecting the engineered plants moved in on Florida tomatoes.

    So researchers are also attempting to endow plants with broad-scale resistance to the rapidly expanding family of gemini pathogens. One strategy Fauquet and his colleagues are exploring is to equip plants with a protein from a bacterial virus that binds viral DNA and prevents movement of geminiviruses within plants, thereby blocking infection. They report good lab results and plan to test their transgenic plants in greenhouses in the spring of 2000. Researchers are also hoping to arm susceptible plants with a protein that attacks a structural element common to the replicase proteins of many geminis.

    Whether growers will accept plants genetically engineered to resist geminiviruses, given the current controversy over genetically modified crops, is an open question (Science, 26 November, p. 1662). Fauquet, for one, believes that they will: “Growers, worldwide, will love it,” he maintains. “When you have plenty of food on your plate, you may choose against transgenic techniques. But when you lose 35% of your income to crop losses, you don't care whether disease resistance comes from transgenic means.”

  14. 2000 BUDGET

    Thanks to NIH, R&D Ends Up With 5% Boost

    1. David Malakoff

    After raising some hackles early on, Congress wound up delivering a hefty increase in federal R&D spending. But some worry that NIH's increases are skewing the balance

    Like a television hero who overcomes certain death midway through each week's episode, the federal R&D budget survived another harrowing adventure this year to emerge with just a few bruises. Saving the biggest for last, legislators bestowed a record spending increase on biomedical research to accompany a hefty boost for military science and raises to most other major science programs granted earlier this fall. Overall, federal R&D spending in the fiscal year that began 1 October will rise by 5%, to $83.3 billion, according to an analysis by the American Association for the Advancement of Science (AAAS, publisher of Science).*

    But although many in the science establishment are pleased by the outcome, this year's appropriations have further tilted the balance in federal science spending toward biomedical research. For the first time, the National Institutes of Health (NIH) will control more than 50% of the government's basic research pot, now at $19.1 billion, according to AAAS estimates. “It is a problem when biomedical fields get big increases and other disciplines don't keep up,” says Representative John Porter (R-IL), a key player in NIH's record $2.3 billion boost, to $17.9 billion (Science, 26 November, p. 1654). Adds Rita Colwell, director of the National Science Foundation (NSF), which received a $240 million increase to $3.91 billion, “It's a little disconcerting to see the share of federal funding for the natural sciences, engineering, and math drop from over 50% to 30% in a generation.”


    Some agencies, however, were happy to survive this year without suffering major cuts. In August, for instance, the House sent a chill through the space science community when it cut NASA's budget by nearly $1 billion. That vote, along with moves to hold down requested increases for the Department of Energy (DOE) and NSF, prompted D. Allan Bromley—a Yale University engineer and science adviser to former President George Bush—to proclaim in The Washington Post that “this year's federal budget for science is a disaster.” By last week, however, Bromley had changed his tune. “The final product was much better than I had feared,” Bromley said about compromises that reversed the NASA cuts and produced better-than-inflation increases for science programs at NSF and DOE.

    Boosters of defense-related research were also able to overcome early threats. In February, the White House submitted a request that shrank the Department of Defense's (DOD's) $4.3 billion basic and applied research accounts by 5%. The cuts would have pushed DOD research spending to its lowest level in 35 years when adjusted for inflation (Science, 11 June, p. 1749). But Congress rejected that plan and approved a 6% increase that boosted military R&D to nearly $4.6 billion and reversed years of decline. The White House plan “was dead on arrival,” says one Republican House aide. “They knew we were going to hear people screaming about the cuts—and we did.” In particular, he notes, universities that depend on the military for a majority of their math and engineering research dollars were especially vocal. Overall, defense-related R&D will come to $42.5 billion, or 51% of total federal R&D spending, according to AAAS.

    Republican leaders in Congress were generally successful in attacking programs seen as advancing the presidential hopes of Vice President Al Gore. A slew of Gore-backed environmental research initiatives within the Department of Interior were killed outright, for example, while NASA was told to suspend plans for Triana, a $75 million Earth-viewing satellite Gore backed. Republicans also trimmed $130 million from a $366 million information technology initiative that Gore championed, and approved just half of the $4 million that the National Oceanic and Atmospheric Administration (NOAA) had requested for a Gore-endorsed project to scatter data-collecting buoys across the world's oceans. Such projects, one Democratic House aide said, “might have fared better if the White House had been hands-off.”

  15. ACADEMIC EARMARKS

    Pork Takes a Bite Out of NASA's Science Budget

    1. Andrew Lawler

    Kentucky senator boosts state university's astronomy program—at the expense of NASA's regular research activities

    Western Kentucky University is not a center for cutting-edge research in physics and astronomy, and it did not rate a listing in the National Research Council's review of graduate research departments in 1995. But the Bowling Green school has bold plans to operate four telescopes scattered around the globe, and eventually one in space, and it has an important benefactor: Congress. Thanks to some legislative language inserted by Senator Mitch McConnell (R-KY), NASA will spend $1 million for the second straight year to help the university begin fulfilling its dreams of joining the elite ranks of U.S. physics and astronomy departments. Welcome to the world of academic pork.

    Across the country, dozens of universities, museums, and research organizations scored a windfall like Western Kentucky's in NASA's 2000 budget. It arrived in the form of unrequested spending, called earmarks, inserted by Congress. Although academic pork projects are nothing new in the federal budget, the dollar amounts at NASA are increasing dramatically, from $65 million in 1997 to $375 million in 2000. And space science's share of that figure has risen even more steeply—from $16 million in 1998 to $73 million in 2000. “Two years ago, you had to look hard to find an earmark in the space science budget,” says an agency source. The specially designated projects are also beginning to pose a significant threat to established agency research programs and peer-reviewed science, say NASA officials. At a time when the space agency has trimmed its workforce substantially, “each earmark is labor intensive and high maintenance,” says one agency manager.

    The nature of the earmarks began to change last year, when lawmakers included a whopping $250 million in unrequested funding in NASA's overall $13.5 billion budget. Previously, the typical earmark merely added funding to an existing NASA program. But starting in the 1999 budget, a large chunk began going to non-space-related research—and it came directly from NASA's already tight research budget rather than from other areas. “Congress used to be hands-off about science,” says one NASA official. “These earmarks started out in the NASA education budget, but that was too small to handle all of them, so they've been passed to other programs.” Those other programs are the core of NASA's research mission—space science, earth science, life and microgravity sciences, and advanced technology. “It's absolutely a threat” to peer-reviewed research, says Kevin Marvel, a spokesperson at the American Astronomical Society in Washington, D.C.

    For NASA space science chief Ed Weiler, the ballooning earmarks mean he will have to cut funding for research and analysis—money that primarily goes to outside scientists to study space science data—by about 5%, or $11 million. Data analysis for specific missions such as the Hubble Space Telescope and Ulysses will be cut a total of $8 million to $10 million, while work on a solar probe will be slowed to save $2 million, he says. And advanced technology will be trimmed by $16 million, affecting NASA's long-term plans to visit Europa and Pluto.

    Instead, Weiler will spend $2.5 million on the Bishop Museum in Honolulu, Hawaii, a favorite of Senator Daniel Inouye (D-HI); $4 million, courtesy of Representative Alan Mollohan (D-WV), on a museum attached to the Green Bank Radio Telescope Observatory in his home state; and $4 million for the Sci-Quest hands-on science center in Huntsville, Alabama, thanks to Representative Bud Cramer (D-AL). Another half-dozen museums will each receive from $500,000 to $3 million.

    In contrast to these brick-and-mortar pork projects, Western Kentucky's earmark is easier for NASA officials to swallow. “At least it is related to space science,” says one agency source, although it involves ground-based astronomy, which is the traditional responsibility of the National Science Foundation (NSF). The small school also has a respected faculty. “They have some good people” who do high-quality work, says Richard Green, director of the Kitt Peak National Observatory in Tucson, Arizona. A NASA official describes physics and astronomy chair Charles McGruder as “very bright and very impressive.”

    McGruder wants to operate a network of automated ground-based telescopes that will search for extrasolar planets and active galactic nuclei that may be powered by enormous black holes. The two projects demand continuous observation of the skies. Last year Western Kentucky received $1 million, compliments of McConnell's position on the Senate Appropriations Committee, to refurbish the university's 24-inch (61-centimeter) telescope and to plan how to operate three additional instruments. Separately, Western Kentucky is leading a consortium of universities negotiating with NSF to operate Kitt Peak's 1.3-meter telescope, which became available after the current manager, the National Optical Astronomy Observatories, decided to focus resources on larger instruments. The NASA money will help refurbish and operate that instrument in the coming year. In addition, McGruder intends to build a 24-inch telescope at a still-undetermined site in Hawaii, and a 24-inch telescope at the Wise Observatory in Israel, seeded with a bit of NASA money.

    The Kitt Peak consortium, which includes the University of California, Berkeley, and South Carolina State University in Orangeburg, will also kick in funding for these projects. McGruder says there are no firm figures for how much the total effort will cost, but he's definitely thinking big. “This is the first stage in a long-term project to lead us into space,” he says. “We want our own space mission.”

    Indeed, McConnell's 17 September press release touting the senator's help in securing the funds states that the $1 million will go “for the operation of an advanced satellite telescope.” Once the Kentucky and Kitt Peak telescopes are complete, “NASA will construct and launch the accompanying satellite,” according to the statement. That's news to NASA officials, who say they have no idea what the senator is referring to—and that orbiting telescopes cost at least hundreds of millions of dollars. McGruder says that the senator's statement “is not quite correct,” and that the funding is really for the ground-based observatories. But NASA officials worry that the earmark could turn into a down payment on a mortgage the space agency does not want to hold.

    For NASA, such earmarks require more than just taking the money out of existing programs, mailing a check, and writing off the loss as the cost of doing business with Congress. “We have to make a silk purse out of a sow's ear,” says another space agency source, explaining that each beneficiary must write a specific proposal that must be reviewed—and often sent back for revisions. For universities lacking the experience of working within the complex federal guidelines, says one agency official, “there's a lot of handholding. And it comes at a time when NASA's budget is shrinking.” Western Kentucky's initial proposal, for example, was “embarrassing” in the vague way it defined how the $1 million would be spent, recalls one official. “We didn't know what to write,” McGruder admits. “We didn't have guidelines.” Later versions proved better, but the agency will have to go through the same process before disbursing the $1 million allocated in 2000.

    Politicians brush aside criticism of earmarking. “We're not spreading the wealth like we should,” says Senator Conrad Burns (R-MT), who succeeded in earmarking $2 million in the space science budget for Montana State University in Bozeman to study life in extreme environments. “The MITs of the country have a great lobby compared to Montana State, and we're just leveling the playing field. So maybe earmarks can do some good.” McGruder agrees. “There's no question a lot of people don't like [earmarking],” he says. “But overall it is a positive thing.”

    NASA officials concede that smaller universities are at a disadvantage, particularly in astronomy, where major research institutions dominate the discipline's landscape. “It's hard for a new group to develop observing capability and get into the game,” says one. “But the ultimate question is whether this is the right way to do it.”

  16. NEUTRINO PHYSICS

    Squabbling Kills Novel Korean Telescope Project

    1. Michael Baker*
    1. Michael Baker is a writer in Seoul.

    HANUL was Korea's ticket to global collaboration in basic research. But that teamwork remains as elusive as the neutrinos it hoped to detect

    Seoul, South Korea—The Korean government has ended funding for a novel aboveground neutrino telescope that was also meant to serve as the country's coming-out party for international research collaborations. Last summer's decision to kill the HANUL project has come to light only recently, however, amid a flurry of vituperative exchanges among the participants.

    Personal animosity between the project's creator and its principal investigator (PI), a rigid bureaucracy, a slipping schedule, and doubts about its technical feasibility all seem to have played a role in the project's demise. But researchers also point to what they call the “Korean disease,” a zero-sum mentality Koreans blame for everything from unyielding cab drivers to brawling lawmakers. “We have to learn to collaborate and respect each other's opinion,” says Chungwook Kim, president of the Korea Institute for Advanced Studies and chair of the government's review of the canceled project. “Real cooperation in experimental high-energy physics in Korea appears to be very difficult.”

    The idea for HANUL, which stands for High-Energy Astrophysics Neutrino Laboratory but also means “sky” in Korean, came from Wonyong Lee, a Korean-born physicist at Columbia University. The telescope would have employed an unusual aboveground detector smart enough to pick out, from the background noise, neutrinos—chargeless and nearly massless particles that can flow through matter like sunlight through glass—produced by exploding stars, black holes, and other sources in the distant universe. In addition to complementing existing experiments in Japan and under the South Pole, HANUL was seen as a way to draw international scientists to Korea and expose local students to cutting-edge research. “The idea was to do something in Korea,” says Lee.

    But Lee, who came to the United States in 1953 as a college student and has been at Columbia since 1962, is a U.S. citizen, and Korean rules bar foreigners from working as PIs unless they are employed by a Korean institution. So he and his colleagues invited Jin Sop Song, a physicist at Gyeongsang National University in Chinju, to be the PI and recruited other Korean scientists. In late 1997 the team won a $350,000, 2-year grant from the Korea Science and Engineering Foundation (KOSEF) to develop a prototype detector (Science, 6 February 1998, p. 802).

    The arranged marriage quickly turned sour, however. Observers say there was considerable friction between Lee, who spent a sabbatical year at Seoul National University overseeing the scientific collaborations and the experimental apparatus for a prototype, and Song, who was responsible for overall project management, including finances. Lee says that he chose Song not principally for his knowledge of physics but because he was a Korean citizen with few enemies, and observers say that at meetings Lee assumed the role of project leader. Song and others, according to observers, believed that Lee was acting “too American” and that his blunt style of speaking antagonized people. Song showed his displeasure in small ways, for example, by convening a meeting while Lee was in the United States and sticking to rules that blocked the flow of money to Seoul National, where Lee was working on detector components.

    Outside forces were also at work. Government policies hindered both the transfer of money to U.S.-based groups working on parts of the prototype and a visit from a team of Russian scientists. “I couldn't build a home base in Korea,” says Lee. “Even setting up an apartment was extremely difficult.”

    Scientific disputes about the design of HANUL further clouded the picture. HANUL was expected to observe rare collisions between neutrinos and atoms in small tanks of water. Such interactions spawn charged particles called muons, which generate flashes of light as they dart through the water. HANUL would measure the energy of the muons by seeing how much a strong magnet could force them off course. The energy levels, in turn, would let the investigators distinguish muons coming from nearby background radiation from muons spawned by far-traveling neutrinos.
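
    As a rough sketch of the physics involved—standard charged-particle kinematics, not numbers from the HANUL design—a muon of charge q and momentum p crossing a magnetic field B at right angles bends along an arc of radius

        $$ r = \frac{p}{qB}, \qquad\text{so}\qquad p = qBr \quad\text{and, for relativistic muons,}\quad E \approx pc. $$

    The stiffer the track—that is, the less it bends—the more energetic the muon, which is how the detector would have separated energetic, neutrino-induced muons from the softer local background.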

    But Haeshim Lee, an astrophysicist at Chungnam University in Taejon and head of the telescope's theoretical division, doubted that HANUL would work, and some experimentalists worried that the magnets would be too costly to build. Haeshim Lee aired his doubts in an angry letter circulated among scientists and government officials in June 1998 and later quit the project. (A toned-down version of his critique was published in the Korean Physical Society's monthly journal in September 1999, prompting coverage of the affair last month in an online newsletter, Korean American Science and Technology News, published by Moo Young Han, a physicist at Duke University.)

    Worried about the status of the project, KOSEF called an emergency meeting in April 1999. Lee and other division heads urged agency officials to replace Song and to give the project greater flexibility. KOSEF declined to act on Song's status, with one official explaining that “we manage the research budget, not the team itself.” But in June it cut off funding for HANUL, noting that the scientists had not chosen where to assemble the prototype and could not meet an August deadline for its completion.

    “I don't know what to say. I'm just so disappointed,” says Jewan Kim, a physicist at Seoul National University who had helped build support for HANUL. “We had many meetings, but people just don't agree. There's nothing you can do about it.” Song blames the project's failure on disagreements over physics and cost, on stifling bureaucratic requirements, and on a “lack of warm personalities.”

    Others regret the loss of a chance to explore neutrino energies in a range between those covered by two other major experiments, the massive Super-Kamiokande underground water detector in Japan and the larger but less acute AMANDA project in Antarctica, which monitors a huge volume of ice. “The HANUL project was trying to make a bridge between these two techniques. It certainly was worthwhile,” says Francis Halzen, a physicist at the University of Wisconsin, Madison, and a co-PI for AMANDA.

    Five months after losing funding, Lee still hopes to resurrect HANUL elsewhere and somehow include Korea in it. He says that the experience leaves him eager to find international collaborations that can improve Korea's academic environment: “Korea needs more pure research projects so that young people can learn to think for themselves. I thought that, in a small way, I could accomplish that. But I guess the project came a little too early.”

  17. COMPUTER SCIENCE

    Physicists and Astronomers Prepare for a Data Flood

    1. Mark Sincell*
    1. Mark Sincell is a science writer in Houston.

    New accelerators and sky surveys that will spew data by the terabyte are spurring a search for new ways to store and disseminate the flow

    The end of a millennium is a time for warnings, and some scientists are joining in: They are predicting a flood. But unlike most millennial doomsayers, the scientists are looking forward to being inundated. Their flood is a torrent of data from new physics and astronomy experiments, and they hope it will sweep some long-awaited treasures within reach, such as the Higgs boson, a hypothetical particle that endows everything else with mass, and a glimpse of life-supporting planets in other solar systems. The greater the torrent of data, the better the chance that scientists will pull these and other prizes from it—providing they can find ways to store and channel the flow.

    The quantities of data expected in the next decade will be staggering. Planned experiments at the Large Hadron Collider (LHC), a giant particle accelerator due to be up and running in 2005 at CERN, the European particle physics center near Geneva, “will write data to a disk-based database at a rate of 100 megabytes per second,” says Julian Bunn of the California Institute of Technology's (Caltech's) Center for Advanced Computing Research, “and we expect these experiments to run for 10 to 15 years.” That is over 100 petabytes of data, roughly the equivalent of 10 million personal computer hard disks. (A petabyte is 10^15 bytes.) RHIC, an accelerator at Brookhaven National Laboratory in Upton, New York, that collides heavy nuclei to create a primordial state of matter called quark-gluon plasma, is already spewing out data at a rate of nearly a petabyte a year—about 1000 times the volume of data in the largest biological databases.
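
    To make that scale concrete, here is a minimal back-of-the-envelope sketch in Python. The 100-megabyte-per-second rate and the 10-to-15-year run come from Bunn's figures above; continuous running and the existence of several LHC experiments writing in parallel are simplifying assumptions, not CERN specifications.

        # Back-of-the-envelope only: rate and run length from the article;
        # continuous running and multiple experiments are assumptions.
        MB = 1e6                        # bytes
        PB = 1e15                       # bytes, the petabyte defined in the text
        rate = 100 * MB                 # bytes per second from one experiment
        seconds_per_year = 3600 * 24 * 365

        for years in (10, 15):
            total_bytes = rate * seconds_per_year * years
            print(f"{years} years at 100 MB/s ~ {total_bytes / PB:.0f} PB per experiment")

        # Roughly 32-47 PB per experiment; several experiments running in
        # parallel push the combined LHC archive past 100 PB.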

    Astronomy is contributing to the torrent as well. Johns Hopkins University astrophysicist Alex Szalay expects the Sloan Digital Sky Survey (SDSS), which aims to image 200 million galaxies and measure distances to a million of them, to produce about 40 terabytes of information. (A terabyte is 10^12 bytes.) Several planned sky surveys at other wavelengths, such as radio and infrared, will contribute tens of terabytes more.

    Organizing the data and making them available to the global community of scientists without swamping computers or networks will require rethinking the ways data are stored and disseminated. Researchers at institutions including Johns Hopkins, the Fermi National Accelerator Laboratory (Fermilab), Caltech, and Microsoft Corp. are now doing just that. By sorting the data as they flood in and dynamically reorganizing the database to reflect demand, they hope to provide prompt, universal access to the full data archives. “The volume and complexity of the data are unprecedented,” says Caltech particle physicist Harvey Newman. “We need a worldwide effort to get the computing capacity.”

    Two trends have converged to create the database challenge. New particle detectors and telescopes are starting to rake in data at an unprecedented rate. And the experiments themselves have ever larger numbers of far-flung, data-hungry collaborators. The full data sets will have to be stored in central repositories because of their volume: It could take months to transmit a full copy of a petabyte-sized data set over the fastest affordable Internet connection. But to make a mammoth central reservoir usable, says Szalay, “we need a double paradigm shift,” encompassing both data storage and dissemination.
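
    The “months” figure follows from simple arithmetic once a link speed is assumed; 1 gigabit per second is an illustrative guess at a fast, affordable connection of the time, not a number from the article:

        $$ 1\ \mathrm{PB} = 8\times10^{15}\ \mathrm{bits}, \qquad \frac{8\times10^{15}\ \mathrm{bits}}{10^{9}\ \mathrm{bits/s}} = 8\times10^{6}\ \mathrm{s} \approx 93\ \mathrm{days}. $$

    Slower links, protocol overhead, and competing traffic only stretch that figure further, which is why the full archives stay put and the network ships subsets instead.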

    He and colleagues at Johns Hopkins, Caltech, and Fermilab are tackling the first step—organizing databases to make them easier to search. Current scientific databases often store data sequentially, as they are churned out by the experiment, which makes retrieving a specific subset of data (all blue galaxies in the SDSS, for example) very slow. Says Szalay, “We have to divide and conquer the data.”

    His team has designed software for the Sloan Survey that automatically presorts the stars and galaxies in each new image from the survey's telescope in New Mexico, putting them into separate buckets called objects. Electronic “labels”—“blue galaxies” or “11th magnitude stars”—indicate the contents of the buckets, and the database software will link each bucket to the other buckets to form what's called an object-oriented database. The SDSS data spigot is already open, and Szalay's software is hard at work distributing data into the appropriate buckets.

    Presorting the data in this way dramatically decreases the computer time needed to find relevant information. “It's like the difference between picking a song from a cassette tape and one from a compact disc,” says physicist Bruce Allen of the University of Wisconsin, Madison. To find a song on a tape, you have to fast-forward through all the other songs, but a CD player can skip over “Twinkle, Twinkle, Little Star” and go directly to “Blue Moon.” Similarly, when the whole SDSS object-oriented database is complete, the computer will be able to respond to a request for blue galaxies by going straight to the appropriate bucket.
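
    A toy Python sketch of the presort-then-lookup idea—the labels, the classification rule, and the dictionary of “buckets” are illustrative stand-ins, not the SDSS schema or the commercial database interfaces discussed below:

        # Illustrative only: classify each object once, as it arrives, into a
        # labeled bucket; later queries jump straight to the right bucket
        # instead of scanning the whole sequential archive.
        from collections import defaultdict

        def label(obj):
            """Toy classification rule standing in for the survey pipeline."""
            if obj["type"] == "galaxy" and obj["color"] == "blue":
                return "blue galaxies"
            if obj["type"] == "star" and obj["magnitude"] >= 11:
                return "faint stars"
            return "other"

        def presort(stream):
            buckets = defaultdict(list)
            for obj in stream:                      # one pass over the data flood
                buckets[label(obj)].append(obj)
            return buckets

        observations = [
            {"type": "galaxy", "color": "blue", "magnitude": 17},
            {"type": "star",   "color": "red",  "magnitude": 12},
            {"type": "galaxy", "color": "red",  "magnitude": 18},
        ]
        buckets = presort(observations)
        print(buckets["blue galaxies"])             # touches one bucket, not every record

    A real object-oriented database adds persistence, indexing, and links between buckets, but the payoff is the same: the cost of a query scales with the size of the relevant bucket rather than with the whole archive.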

    Data from particle physics experiments will be sorted and stored in much the same way. Collision events could be sorted by “the curvature of a particle track in a magnetic field or the energy collected from an electron shower,” suggests Bunn. Ongoing projects such as the Particle Physics Data Grid and the Globally Interconnected Object Databases—two Caltech-based projects—are already testing object-oriented database technologies for particle physics.

    The scientists are hoping that they can link and search the database objects with inexpensive, off-the-shelf software, such as Oracle or Objectivity. Whereas a high-end, custom-built database costs about a dollar per megabyte of stored information, says Microsoft database expert Jim Gray, a commercial database costs only a penny per megabyte. Preliminary work he and his colleagues did with 100-gigabyte Oracle databases stored on an off-the-shelf PC network looks promising, he says: “We think this is a design for the future.”
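
    Scaled to a petabyte-class archive, the per-megabyte gap Gray cites becomes dramatic. Taking the quoted prices at face value (a petabyte is 10^9 megabytes):

        $$ 10^{9}\ \mathrm{MB} \times \$1/\mathrm{MB} = \$1\ \text{billion} \qquad\text{versus}\qquad 10^{9}\ \mathrm{MB} \times \$0.01/\mathrm{MB} = \$10\ \text{million}. $$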

    Szalay's second paradigm shift would affect not the database itself but the computer network, transforming it from the traditional client-server computer network architecture to a hierarchical computer grid. In the client-server model, the database is stored at a single location. The client requests information directly from the server computer, and the server sends it back. But even if the data are presorted, as in an object-oriented database, millions of data requests from thousands of scientists in every corner of the world could quickly bring even the most powerful supercomputer to a screeching halt. “A single computer would be swamped,” says Bunn. “We are obliged to do something radically different.”

    What's more, even with a projected 1000-fold increase in network bandwidth in the next few years, network access “will remain a scarce resource,” says Newman. So a collaboration of particle physicists and astronomers funded by the National Science Foundation's Knowledge Discovery Initiative and headed by Szalay is drawing up plans to distribute both the SDSS and LHC databases over multiple computers, arranged in a hierarchy. At the highest level, one complete copy of the presorted object-oriented database will be split into pieces and distributed among a handful of “Tier-0” centers. To protect against any errors that may creep into the presorted data, a copy of the raw data will also be stored at the Tier-0 centers.

    Below the Tier-0 centers will be a series of three increasingly specialized layers of data-storing centers. Every Tier-0 center will be electronically linked to several Tier-1 regional centers, serving a particular region or country. The Tier-1 regional centers in turn will be connected to local universities (Tier-2) and finally to individual researchers (Tier-3). Each tier will house a copy of a progressively smaller piece, or cache, of the total database.

    The exact contents of a given cache will change over time. Initially, “we will assess how people might use the system,” says Newman, and then load each cache with the information most likely to be used by researchers connected to that branch of the hierarchy. For example, universities with large cosmology groups might choose to store a list of the sky coordinates of all the galaxies observed by the SDSS but ignore all the stars in the data set. Then, when a researcher queries the database for the locations of galaxies, his computer only has to go as far as the next tier for the information. On the other hand, another astronomer at the same university who wants the colors of nearby stars might have to search all the way up to a Tier-0 center. But because “caching will be triggered by access patterns,” as Newman puts it, data will constantly be redistributed among the centers to make the system as efficient as possible.
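
    The tier-hopping and caching behavior can be sketched in a few lines of Python; the tier names, the cache policy, and the example data are made up for illustration and are not taken from the Knowledge Discovery Initiative plans.

        # Illustrative only: a request climbs the hierarchy until some tier
        # holds the data; results are cached on the way back down so repeat
        # requests are answered closer to the user.
        class Tier:
            def __init__(self, name, parent=None, holdings=None):
                self.name = name
                self.parent = parent                # next tier up; None at Tier-0
                self.cache = dict(holdings or {})   # data stored at this tier

            def fetch(self, key):
                if key in self.cache:
                    return self.cache[key], self.name
                if self.parent is None:
                    raise KeyError(key)             # Tier-0 holds the full archive
                value, source = self.parent.fetch(key)
                self.cache[key] = value             # access pattern triggers caching
                return value, source

        tier0 = Tier("Tier-0", holdings={"galaxy positions": "...", "star colors": "..."})
        tier1 = Tier("Tier-1 regional center", parent=tier0)
        tier2 = Tier("Tier-2 university", parent=tier1)

        print(tier2.fetch("galaxy positions"))      # first request climbs to Tier-0
        print(tier2.fetch("galaxy positions"))      # repeat request is served locally

    In the planned grids, rarely requested data would also be evicted as demand shifts—the constant redistribution Newman describes—which this toy version leaves out.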

    So will it work? Yes, says computer scientist Krystof Sliwa of Tufts University in Medford, Massachusetts. He and his collaborators have constructed a detailed computer program to simulate the behavior of the proposed grids. “It's a classic optimization problem,” says Sliwa. “We look at the cost and time required to do a set of jobs.” Sliwa's models indicate that the existing Internet could handle the data requests to a layered computer network housing a petabyte-scale database. Newman cautions, however, that bottlenecks might develop as many users try to access the grid at once.

    If these schemes succeed in making the giant databases of the future accessible and flexible, many scientists believe that querying will itself become a new research mode, opening a new era of computer-aided discovery. “These databases will be so information-rich, they will enable science that their creators never envisioned,” says Caltech astronomer George Djorgovski. One approach is to model the properties of a new object and then go look for it in the database, says Djorgovski, “but that builds in prejudices.” He prefers the idea of unleashing software that automatically searches the database for entries with common properties, revealing unsuspected new classes of phenomena. “You will rediscover the old stuff,” but now and then, he says, you'll pull something completely new from the floodwaters.
