News this Week

Science  09 Jan 2004:
Vol. 303, Issue 5655, pp. 150



    A Tale of Two Landings, One Orbiting

    1. Richard A. Kerr

    One ended in silence, the other in ecstatic shouts, yet both recent efforts to land on Mars shared a common design concept. If the final attempts to contact the European Beagle 2 lander over the next few weeks fail, the question will be why the far less massive instrument package, bound for perhaps the safest spot on the planet, vanished without a trace while the more massive U.S. spacecraft, Spirit, performed exactly as scripted, bouncing and rolling to a safe stop on target in blustery weather in Gusev crater. The air bag-encased lander even came to rest upright, in the midst of a scientifically promising, if visually monotonous, part of the crater floor.

    All ended well with Spirit, but it gave Harry “Hap” McSween a moment of extreme anxiety last Saturday night. Simple tones confirming completion of each crucial step toward landing had been coming down from the spacecraft with comforting regularity. NASA had mandated such telemetry after Mars Polar Lander—designed, like traditional landers, to settle gently to the surface on three legs—met its end in silence in 1999. If something was going to go wrong with Spirit's landing system—a Rube Goldberg contraption successfully tried out just once before by Mars Pathfinder in 1997—NASA intended to know what it was.

    Ready, set.

    Spirit (glimpsed at bottom of panorama, above) is poised to explore the abundant, but not too abundant, rocks of Gusev crater.


    There were a number of possible hazards. Mars Pathfinder had, without a hitch, slowed from 19,000 kilometers per hour to a dead stop some meters above the surface in just 6 minutes by blazing through the atmosphere like a meteor, deploying a parachute at 1500 kilometers per hour, and firing retrorockets only at the last second. Then the lander, encased in air bags, bounced for several minutes across Mars before coming to a stop. But Spirit was another matter. The Pathfinder lander and its bread box-size rover totaled 360 kilograms, while the golf cart-size Spirit and its landing platform hit 533 kilograms. That made bouncing a whole new ball game—especially in the high and turbulent winds common in Gusev during the mandatory midafternoon landing time.

    So when, at the end of the 6 minutes, intermittent signals came back suggestive of a bouncing lander on the surface, celebration broke out in the control room at NASA's Jet Propulsion Laboratory in Pasadena, California. Then there was silence from Spirit. Science team member McSween of the University of Tennessee, Knoxville, was thinking the worst: “Oh my gosh, we bounced on a real sharp rock, and this thing is toast.” But it wasn't. Ten minutes after touchdown, right on schedule, wild exuberance broke out again when an announcer's voice cut through the tension: “We have a very strong signal; we have confirmation” that Spirit is resting on the surface and is able to communicate.

    Success at Gusev crater was no mere stroke of luck. Engineers thoroughly reworked the Pathfinder concept that had emerged from the “faster, cheaper, better” days of the 1990s at NASA. “Nobody [on the mission] can say they were shortchanged,” said Edward Weiler, NASA associate administrator for space science, before the landing. “They got all the money they needed.” Pathfinder costs ran to $250 million. Mission costs for Spirit and its sister rover Opportunity, which is scheduled to land on the opposite side of Mars on 25 January, started at $700 million, but Weiler eventually chipped in another $120 million. “We tested these things as much as we could,” he said. “We've done everything humanly possible.” That included reinforcing the air bags and adding a system of small laterally firing rockets to compensate for any excessive winds. In the end, those rockets did indeed have to fire.

    If the Beagle 2 lander has failed, no one may ever know for sure why. Like Mars Polar Lander, it lacked the costlier and weightier communication system needed to maintain contact during descent to the surface. A British-led effort under the European Space Agency's version of faster, cheaper, better, the Beagle 2 lander itself was funded at “more than” $42 million—“a lot cheaper” than either of the U.S. landers—according to the lander's principal investigator, Colin Pillinger of the Open University in Milton Keynes, U.K. Deciding whether it was too cheap would require an intensive investigation that must await the final word on Beagle 2's fate.

    In the meantime, both Spirit scientists and European researchers checking out their instruments on the Mars Express orbiter that delivered Beagle 2 are contemplating the new science awaiting them. There are enough rocks around Spirit to satisfy scientists looking for samples of the presumed ancient lakebed that they think they landed on. At the same time, there are not so many rocks as to obstruct roving. By midmonth, Spirit should be stretching its legs and checking out its first rock. Mars Express scientists will be preparing a full range of remote-sensing instruments, including the first-ever radar flown to another planet that is capable of finding subsurface water. Two out of three may not look so bad in a while.


    A Surprisingly Ancient Cometary Visage

    1. Richard A. Kerr

    The Stardust spacecraft performed “miraculously” last week during its hypervelocity plunge through comet Wild 2's cloud of dust and gas, retrieving what could become the first solar system sample returned to Earth in 30 years and the first sample ever returned from deep space. Stardust plans to drop off the sample when it swings by Earth in January 2006, and cosmochemists will then take on the daunting challenge of dissecting nanoscale materials that hold clues to the solar system's formation and possibly to life's starting materials. “Each 0.1-micrometer component of a [cometary dust] particle has a story to tell,” says Stardust principal investigator Donald Brownlee of the University of Washington, Seattle. So if Brownlee and his colleagues retrieve the 500 dust particles large enough for study that they're hoping for, that's at least 500 million stories.

    The first result from the flyby came from a startling image returned by Stardust's camera. “This is an amazing object,” says planetary geologist Daniel Britt of the University of Central Florida in Orlando, who has studied flyby images of comet Borrelly. Wild 2 is “completely different from Borrelly” and from comet Halley, the only other comet nuclei imaged up close so far. Unlike those two comets, the roughly 5-kilometer nucleus of Wild 2 (pronounced “Vilt 2”) has spent all but a few decades of its billions of years in the outermost solar system, whose cold calm seems to have preserved much of the comet's original, impact-cratered surface. It presents a geologic record never seen before.

    Comets such as Halley and Borrelly, which have made many forays inside the orbit of Mars, are worn-down, rotten-looking remnants of the balls of ice, rock, and organic matter that formed in the outer solar system 4.5 billion years ago. Once a “dirty snowball” is gravitationally nudged into the inner solar system, the strong sunlight there vaporizes its surface ice, driving off its organics-blackened rocky dust. That's the material Stardust snagged as it zipped through Wild 2's tail last week.


    Comet Wild 2 still retains impact craters (center) on its ancient surface.


    Repeated sublimation leaves a typically oblong remnant of a nucleus resembling a dirty snowbank after a few warm rains. But Wild 2 hasn't yet reached that state, says Britt. Formed in the Kuiper belt beyond the orbit of Neptune (Science, 28 November 2003, p. 1491), it orbited in the chill beyond Jupiter until a close encounter with the giant planet in 1974 swung the former Kuiper belt object into warmer climes as close in as Mars. “It looks like a very old surface just beginning to sublimate,” says Britt. “We're seeing it at the early stages of erosion.” What Brownlee and his team members (none of whom are geologists) at first took to be mostly pits created by sublimation, Britt sees as mostly impact craters: “You see a lot of things that look like fairly large impact craters [smoothing] and eroding as the object loses mass; you have raised topography like hills; and you have stuff I don't know what it is.”

    Given the old-looking surface and the apparent spherical shape of the nucleus, says Britt, Wild 2 probably formed directly from the dust and gas of the presolar disk rather than being chipped off a larger body by an impact. That's just the sort of object the Stardust team was after, he notes.

    Cosmochemists should soon have at least 500 bits of comet, each roughly 15 micrometers across, to make sense of. It won't be easy. “We expect tremendous complexity at the finest scale,” says Brownlee. The complexities began with the interstellar dust and gas that came together to form the presolar nebula, the solar system's precursor. That dust originally formed around a variety of red giant stars and supernovae, to judge from its most stubborn minerals that managed to survive in primitive meteorites from asteroids. To get some handle on this starting material, Stardust has already made a separate collection of interstellar dust now streaming through the solar system.

    Things didn't get any simpler after the interstellar material arrived at the nebula. It could have vaporized or partially melted as it fell into the growing disk, and cooked material from the hot inner disk might have circulated out to the disk's edges, where future comets would form. Then, “there's a long list of complicated things that happens to these objects along the way,” says planetary scientist Alan Stern of the Southwest Research Institute in Boulder, Colorado. They range from impacts to internal heating to cosmic-ray irradiation.

    Researchers hope to see the signatures of some of these events once the dust arrives on Earth. “There are going to be some great discoveries,” says Stern. Cosmochemist John Bradley of Lawrence Livermore National Laboratory agrees: “Some of us will be embarrassed, and some of us will be thrilled.”


    Double Pulsar Gives Astrophysicists Many-Faceted Thrills

    1. Robert Irion

    Pulsars, the whirling neutron stars that sweep the galaxy with tight cones of light, have grown increasingly exotic since their discovery in 1967. But the latest find, reported online by Science this week, has delighted pulsar aficionados as never before: two pulsars in a tight orbital embrace, blasting each other with radiation as they spiral toward a mutual doom.

    The system will provide the most stringent probe of how gravity behaves in the weird regime near compact stars, its discoverers say. Moreover, theorists think that watching how the two objects interact will produce deep insights into how pulsars—the ultradense remnants of some supernova explosions—generate their mysterious beams of energy. Already, early observations of the high-powered duel are “stunning and breathtaking,” says astronomer Donald Backer of the University of California, Berkeley.

    Named PSR J0737-3039A for its sky coordinates in the southern constellation Puppis, the brighter pulsar was detected by the 64-meter Parkes radio telescope in New South Wales, Australia. Astronomers reported in the 4 December issue of Nature that the pulsar spins 44 times each second as it loops around what was then thought to be an unseen, nonpulsing neutron star. With an eccentric orbit lasting just 2.4 hours, the pair was touted as the tightest of a half-dozen known pulsar-neutron star binaries. According to Einstein's general theory of relativity, the stellar corpses will converge by 7 millimeters per day—a rate the team plans to measure within months—and will crash in about 85 million years. That merger will unleash a galaxy-shaking burst of gravitational waves.
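    As a rough plausibility check (not the discoverers' full general-relativity calculation), the quoted numbers hang together: Kepler's third law gives the current separation from the 2.4-hour period, and because gravitational-wave decay accelerates as the orbit shrinks (roughly da/dt ∝ 1/a³), integrating forward from today's 7 millimeters per day lands near the quoted merger time. The assumed total mass of 2.6 solar masses is typical of double neutron star binaries and is not taken from the article:

```python
import math

# Back-of-envelope inspiral estimate; all inputs hedged as noted.
G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
M = 2.6 * 1.989e30             # ASSUMED total mass (~2.6 solar masses), kg
T = 2.4 * 3600                 # orbital period from the article, s

# Kepler's third law: a^3 = G*M*T^2 / (4*pi^2) gives the separation.
a0 = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)   # roughly 9e8 m

# If da/dt = -K/a^3, integrating a^3 da from a0 to 0 gives
# t_merge = a0 / (4 * |da/dt today|).
decay = 7e-3 / 86400                                 # 7 mm/day in m/s
t_merge_years = a0 / (4 * decay) / (365.25 * 86400)
# comes out within a few percent of the ~85 million years quoted
```

Under these assumptions the naive integration reproduces the article's figure to within a few percent, which is as close as a sketch like this can claim.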

    Spin twins.

    Two newly found pulsars are close enough to affect each other's beams, shown in this artist's conception.


    The new report describes a rapid follow-up study, which revealed that the neutron-star companion also pulses. The “B” pulsar is a relative slowpoke, rotating just once every 2.8 seconds. But spotting its light makes the system “our greatest discovery,” says radio astronomer Andrew Lyne, director of Jodrell Bank Observatory in Macclesfield, U.K. In particular, the team has measured the shifting orbits of both objects as they tug each other to and fro. That yields the ratio of their two masses, a vital quantity not directly measurable for other binaries. “This is a very important constraint on relativistic theories of gravity that we haven't had before,” Lyne says.

    What's more, the pulsars orbit in a plane nearly edge-on to our line of sight—a “dead lucky” perspective, says radio astronomer Richard Manchester of the Australia Telescope National Facility in Epping, New South Wales. The alignment creates a regular eclipse as the speedy pulsar waltzes behind its slow companion, vanishing for 20 to 30 seconds each time. Further, the slow pulsar endures relentless trips through the fierce particles and radiation spewed by its energetic twin. Those blasts give the slow pulsar multiple personalities: It splits into double blips, dims, and nearly flickers out before regenerating.

    “It's fascinating to me that the pattern can be modified without being extinguished,” says astrophysicist David Nice of Princeton University in New Jersey. Nice and Backer both believe that the interactions will shed light on the origins of the pulsar's beam and the physics of its magnetosphere—a complex plasma akin to Earth's magnetic shield, which fends off the solar wind.

    Because the system is nearby—just 2000 light-years away—the discovery team suspects that many similar objects lurk in the galaxy's depths. “The Milky Way may contain hundreds or even thousands of these, but they are just jolly hard to detect,” Manchester says. But for now, pulsar experts are studying the dynamic duo with every available telescope. Scientists will even give up an afternoon of skiing at the Aspen Center for Physics in Colorado on 14 January for a hastily convened session on the pair.


    Salmon Survey Stokes Debate About Farmed Fish

    1. Erik Stokstad

    Salmon's popularity has boomed in the past 2 decades as aquaculture has made salmon available year-round at low cost. The fish is a good source of protein, vitamin D, and heart-friendly fats. But fish farms have boosted less welcome ingredients: The largest survey yet of pollutants in salmon, reported on page 226, has found that farmed fish have higher levels of polychlorinated biphenyls (PCBs) and other organochlorine compounds than do wild-caught salmon. The source, as many researchers suspected, is the feed.

    “This is a definitive study,” says nutritionist and toxicologist Miriam Jacobs of the University of Surrey, U.K., and Royal Veterinary College, London. “Further action has to be taken to reduce the contaminant levels in feed.” The authors argue that consuming more than one meal of farmed salmon per month may hike the risk of cancer. “The punch line is that eating the wrong kind of fish has real dangers,” says team member David Carpenter of the State University of New York, Albany, in Rensselaer.

    Other experts say the risk is outweighed by the benefits of eating farmed salmon. Avoiding the fish would mean giving up its nutritional benefits, including protection against heart attacks. What's more, they say, the contaminant levels aren't high enough to pose real dangers. “In my view, the study says we should be eating more farmed salmon,” says toxicologist Charles Santerre of Purdue University in West Lafayette, Indiana.

    While nutritionists debate the study's implications, consumers can use its data to try to select the cleanest fish possible. “I think we can begin to make informed choices about what kind of fish to eat,” says toxicologist Linda Birnbaum of the U.S. Environmental Protection Agency (EPA).

    The massive study, funded by the Pew Charitable Trusts' Environment program and conducted by six scientists, sampled about 700 salmon from around the world and analyzed them for more than 50 contaminants. The greatest difference between farmed and wild salmon was in organochlorine compounds. For 13 of the 14 chemicals tested, farmed salmon were more contaminated than wild ones. Farmed salmon in Europe had the highest levels, followed by those from North America, whereas Chilean salmon were the cleanest. The researchers also tested the oil and meal fed to salmon and found a similar pattern. Feeding salmon with fish meal boosts their growth and nutritive value, but it also concentrates contaminants.

    Seeing red.

    Farmed salmon has more PCBs than wild salmon, but scientists don't agree on how much one should eat.


    The team took a closer look at PCBs and at two persistent pesticides, dieldrin and toxaphene; all three compounds have been correlated with risk of liver and other cancers. The researchers used EPA guidelines to calculate the maximum amount of salmon that can be eaten before boosting cancer risk by at least 1 case in 100,000. For the most contaminated fish—from farms in Scotland and the Faroe Islands—the limit came to 55 grams of salmon (uncooked weight) every month, or a quarter of a serving. One half-serving a month of farmed salmon from Canada or Maine adds no significant risk, they say; and double that is acceptable for fish from Chile or the U.S. state of Washington. Some types of wild salmon from Alaska or British Columbia are safe to eat eight times a month.
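    The shape of the EPA-style screening arithmetic behind such limits can be sketched as follows. The formula is the agency's standard consumption-limit form, but every numeric input below is an illustrative assumption, not a value taken from the study:

```python
# Sketch of an EPA-style fish-consumption screening calculation.
# The formula shape is standard; the sample numbers are assumptions.

def monthly_limit_grams(risk_level, body_weight_kg, slope_factor, conc_mg_per_kg):
    """Maximum fish intake (g/month) keeping lifetime excess cancer risk
    at or below `risk_level`, following the standard screening form:
        CR = (ARL * BW) / (CSF * Cm)   [kg fish per day]
    where ARL is the acceptable risk level, BW body weight, CSF the
    cancer slope factor, and Cm the contaminant concentration in fish.
    """
    daily_kg = (risk_level * body_weight_kg) / (slope_factor * conc_mg_per_kg)
    return daily_kg * 1000 * 30.4   # kg/day -> g/month (30.4 days/month)

# Illustrative call: 1-in-100,000 risk, 70-kg adult, and an assumed
# combined slope factor and concentration chosen only to show how the
# limit scales; halving the concentration doubles the allowed intake.
limit = monthly_limit_grams(1e-5, 70.0, 2.0, 0.05)
```

The inverse dependence on concentration is why the study's limits spread from a quarter-serving per month for the most contaminated farmed fish to eight servings for the cleanest wild ones.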

    Although no U.S. government agency has said how much fish one should eat, the American Heart Association recommends 168 to 336 grams per week. Consumption of the omega-3 fatty acids found in fatty fish reduces the risk of sudden cardiac death after a heart attack. For people with cardiovascular disease, that benefit outweighs any added cancer risk, Carpenter says.

    How much salmon people without cardiovascular disease ought to eat is less clear. Advice for pregnant women is the source of the most heated debate. Organochlorines can damage the developing endocrine system, immune system, and brain. The compounds build up in body fat and linger there for decades—where they can be passed to a woman's fetus during pregnancy or excreted in breast milk. “For a woman before menopause, especially for a young girl, [farmed salmon] really is not good to consume,” Carpenter says.

    Others disagree. “I think it's unconscionable to direct pregnant women away from farmed salmon,” says Santerre, who consults for the industry group Salmon of the Americas in Princeton, New Jersey. Omega-3 fatty acids are important for brain development, and they may reduce the risk of preterm births and slightly increase a child's cognitive abilities.

    Santerre says that 226 grams of farmed salmon per week is safe for anyone, and he points out that the PCB levels are all below the level determined by the U.S. Food and Drug Administration (FDA) to be safe for sale in supermarkets. That cutoff is 40 times higher than what EPA has determined is safe in recreationally caught fish, in part because FDA considers safety and nutrition whereas EPA looks solely at health risks.

    Fish farms are already working on the problem, such as by sourcing fish meal with low contaminant levels, says Alex Trent of Salmon of the Americas. Researchers are also experimenting with ways to replace the fish oil in feed with vegetable oil. One variety of transgenic canola, for example, contains a precursor to the omega-3 fatty acids.


    Once Again, Muons Defy Reigning Theory

    1. Charles Seife

    The mu's screws are loose, and that's good news for scientists hoping for a glimpse of new physics. At a seminar at Brookhaven National Laboratory in Upton, New York, this week, physicists were expected to announce that Brookhaven's so-called muon g-2 experiment strengthens indications that muons flout the Standard Model of particle physics.

    The Standard Model describes, to a few parts in a billion or so, how a muon twists in a magnetic field. But in 2001, when experimentalists at Brookhaven first measured that twisting—governed by the muon's magnetic moment—with comparable precision, the results were higher than the model predicted (Science, 9 February 2001, p. 958). Despite an embarrassing round of backpedaling after theorists made an arithmetic error that exaggerated the clash between observation and theory, the discrepancy persisted as the experiment continued (Science, 9 August 2002, p. 916).

    Now, similar measurements of negatively charged muons, known as mu-minus particles, have supplied independent evidence that something odd is afoot. “We had to reverse every magnet in the experiment,” says Boston University physicist Lee Roberts, co-spokesperson for the g-2 (pronounced “g minus two”) experiment. The resulting mu-minus data bear out earlier measurements but disagree even more sharply with the Standard Model, Roberts says. Taken all together, the numbers show that the muon's twistiness is nearly 3 standard deviations, or sigma, away from what theory predicts—a serious, although not conclusive, suggestion that the Standard Model has failed.

    “Getting close to 3 sigma in this kind of experiment is really quite strong,” says Gordon Kane, a theorist at the University of Michigan, Ann Arbor, who suspects that the discrepancy is due to the subtle influence of as-yet-undetected particles required by an extension of the Standard Model known as supersymmetry. The g-2 experiment finished its last run in 2001, but Roberts is hoping that Japanese labs will make even more precise measurements of the muon's magnetic moment later in the decade. Kane, meanwhile, hopes that experiments with other particles will finally propel physics beyond the Standard Model.


    Political Tussle Looms Over Private R&D Center

    1. Vladimir Pokrovsky,
    2. Andrey Allakhverdov*
    1. Pokrovsky and Allakhverdov are writers in Moscow.

    MOSCOW—Russia's leading oil company, Yukos, took a bold step a year and a half ago when it announced that it would build the country's first private academic research institute. But when the Yukos R&D Center opened in Moscow last month, on time to the day—no mean feat in Russia—dark clouds hung over the company and its shiny new facility. The head of Yukos, billionaire business leader Mikhail Khodorkovsky, was in jail charged with tax evasion and fraud, and Yukos's business plans were uncertain. So far, however, Russian authorities have not sought to meddle with the new center, which will focus on chemistry.

    Khodorkovsky, who is thought to be Russia's richest person, built up an oil empire through controversial privatizations in the 1990s. But the planned merger of Yukos with the oil giant Sibneft—run by another of Russia's young tycoons, Roman Abramovich, the new owner of London's Chelsea Football Club—was blocked in November at the 11th hour. Khodorkovsky, who has been openly critical of President Vladimir Putin's administration and has funded opposition politicians, was arrested the same month; judicial authorities decided in December to keep him in detention until March. Shortly after his arrest, Khodorkovsky resigned his position as head of Yukos, although he remains its biggest shareholder.

    Doing time.

    Mikhail Khodorkovsky, supporter of science and Russia's wealthiest prisoner.


    Much to everyone's surprise, says Stanislav Kaminsky, executive director of the new center, it stayed on schedule in spite of these upheavals. The whole project cost Khodorkovsky $15 million, of which about $5 million was invested in state-of-the-art equipment. Yukos plans to spend about $10 million a year on running and developing the center, which will employ about 200 researchers. The center's new director, chemist Boris Rogachev, told Science that he is amazed at the standard of the facilities: “When I worked in an ordinary Soviet laboratory, I could not even dream of this.”

    The new center will develop and commercialize new chemical processes, model reaction kinetics, and study organic synthesis. Rogachev says he is still recruiting staff members, who will include Russians and foreign experts. The center's corridors already echo with conversations in English and a multitude of other languages. A major achievement, says Rogachev, is a reverse brain drain: Many Russian émigrés have been drawn back to Moscow, because the high-quality facilities and salaries are competitive with the West's. Kaminsky adds that the center has a wider significance: “It is the first sign that [Russian] industry feels a need for scientific research. This is a real breakthrough.”


    First U.S. Case of Mad Cow Sharpens Debate Over Testing

    1. Dennis Normile*
    1. With reporting by Eliot Marshall and Martin Enserink in Washington, D.C., and Gretchen Vogel in Berlin.

    TOKYO—What's the best way to ensure that beef is safe from “mad cow disease”? Until 23 December, the U.S. government thought it knew the answer: Keep tissue that harbors the disease's agent—the brain, the spinal cord, and the lower intestines—from contaminating meat products and the feed that cattle eat.

    But the country's first documented case of the disease, bovine spongiform encephalopathy (BSE), in Washington state, has created a groundswell for another, quite different approach already adopted by Japan: Test all cattle headed for slaughter. Stanley Prusiner, a University of California, San Francisco, researcher who won the 1997 Nobel Prize in medicine for his work on BSE and similar diseases—and who is the founder of a company that makes a BSE test (see table)—told The New York Times last month, for example, that the U.S. should test every cow that shows signs of illness and eventually every cow slaughtered for human consumption.

    View this table:

    Many scientists are skeptical of such an extreme approach, however. Because current tests cannot catch infected animals in the early stages of the disease, they believe testing should be primarily an epidemiological tool with little, if any, role in ensuring food safety. Even Markus Moser, CEO of BSE testmaker Prionics AG in Schlieren, Switzerland, acknowledges that ultimately the appropriate level of testing “is a cost-benefit question. … One complicating factor is that we don't know what the real risk is.”

    The U.S. Department of Agriculture (USDA) is now trying to figure out the complex equation of how much testing is enough. So far, it is steering a middle course: One week after the Washington case hit the headlines, it announced plans to expand its testing program as part of a series of steps to tighten its oversight of the food chain. Although details have not been worked out, officials hope the plan will be enough to ensure public health and restore confidence in the $50 billion U.S. beef industry.

    Up from downers. The United States is a latecomer in the global battle against BSE (see below), which emerged among cattle in the United Kingdom in the mid-1980s and was later linked to a form of a rare human brain-wasting illness called variant Creutzfeldt-Jakob disease (vCJD). At least 188,000 cattle have been confirmed with BSE worldwide. BSE and several similar brain-wasting diseases are believed to be caused by misfolded proteins called prions. These pathological agents spread among cattle as they eat meat-and-bone meal (MBM), which is ground-up slaughterhouse waste.

    Humans likely contract vCJD by eating contaminated meat products. The U.K. banned feeding MBM to cattle in 1988, and most of the rest of the world followed suit in the late 1990s. There have been about 150 human vCJD cases in Europe, two-thirds of them in the U.K., and some epidemiological models indicate that there could be thousands of additional cases, which can take decades to develop.

    The European Union now tests all animals over 24 months old that die on farms or are obviously ill, called downers, and all cows over 30 months old slaughtered for human consumption. Japan tests all cows slaughtered regardless of age. “This testing is not a public health measure,” says Danny Matthews, head of prion disease studies at the U.K.'s Veterinary Laboratories Agency in Weybridge. “But it does reassure consumers.”

    In the early 1990s, the Paris-based World Organisation for Animal Health (OIE, from its French name) started recommending that all countries each year begin testing to determine whether BSE had crossed their borders. The recommended number of tests was low, roughly 1 for every 10,000 head of adult cattle in the national herd. Alex Thiermann, who is president of the OIE council that writes the recommendations, says the levels were set to include even those countries, such as Argentina and Uruguay, at minimal risk because they feed their cattle grass and import little, if any, MBM. By the late 1990s, most governments had also started to require slaughterhouses to remove and dispose of the specified risk materials—the brain, the spinal column, and parts of the intestines—in which the prions are found.


    This Holstein calf in Chiba, Japan, may someday encounter the world's most stringent BSE-testing program.


    In 1999 Switzerland began testing other “at risk” populations, and it later started random sampling of healthy slaughter cattle. The results, says Dagmar Heim, a veterinarian who heads studies of prion diseases at the Swiss Federal Veterinary Office, yielded valuable epidemiological data. But officials concluded that sampling healthy cattle at the slaughterhouse was not worth the cost. Then in late 2000 several European countries simultaneously reported their first cases of BSE. “Consumers panicked,” Heim says, avoiding beef and demanding government action, which led to the current E.U. rules.

    Japanese consumers got a similar scare from Japan's first BSE case in September 2001. Demand for beef fell by half, with domestic beef taking the brunt of the blow, and recovered only after the Japanese government started testing all slaughtered animals regardless of age. “There was a perception among consumers that the Ministry of Agriculture wasn't handling the problem properly,” says Rieko Ozawa, who follows the issue for the Japanese Consumers Cooperative Union.

    Speed limits. Of course, governments can't order more tests at the slaughterhouse without the appropriate technology. The gold standard for BSE testing is immunohistochemical screening, in which a tissue sample is treated with a stain carrying antibodies that cling to the prions. A pathologist then looks for an accumulation of stained antibodies in the sample. Immunohistochemistry is considered reliable, and it can catch the disease before the characteristic vacuoles, or lesions, form in the brain. But the process can take 5 days or more.

    Several companies have recently developed rapid BSE tests that detect prions in brain or spinal tissue, making it possible to screen large numbers of samples and get results in a day or less. That speed made mass screening of animals bound for slaughter practical and allowed both the E.U. and Japan to dramatically expand testing. The new rapid tests are used for the initial screening. Positive tissue samples are typically sent to designated national labs for immunohistochemical confirmation. If the diagnosis is confirmed, the meat, which is held until results come back, is destroyed.

    View this table:

    Many Japanese consumers believe their country's test-all policy is proving its worth. Of the nine cases it has caught so far, the most recent occurred in cows 21 and 23 months old. Prion infections are difficult to detect in such young animals, before the prions become concentrated in neural tissue. William Hueston, director of the Center of Animal Health and Food Safety at the University of Minnesota, Twin Cities, speculates that these animals may have been exposed to large doses of the infectious protein, which is now known to cause early onset of the disease. He notes that Japan only officially banned feeding MBM to cattle around the time these cows were born.

    Even if Japan's BSE problem differs from that of the rest of the world, many scientists think that its testing practices go too far. Dean Cliver, a professor of food safety at the University of California, Davis, and a member of a U.S. Institute of Medicine panel that reported on prion disease research in November, believes that much of Japan's testing regime is “a wasted effort” because BSE isn't detectable in calves. Advocates of testing, however, see the extra surveillance as a form of insurance.

    Expanded testing in the United States faces at least one bottleneck: None of the five rapid tests currently used in Europe is approved for use on U.S. cattle. California-based Bio-Rad hasn't applied for U.S. approval for its ELISA test, says Brad Crutchfield, vice president of its life sciences division, because the USDA's Center for Veterinary Biologics “made it very clear to us that they were not accepting any applications.” USDA's Byron Rippke confirms that, until 23 December, the department felt that its current approach was “adequate” to handle the volume. “But my gut feeling is that it is going to change,” he added.

    The new rapid-response testing would cost about $60 per animal, estimates Minnesota's Hueston, although companies say the kit itself could cost less than $25. But the economic impact of such a policy would depend on how the test was used. The majority of the 35 million U.S. beef cattle slaughtered each year are killed before they are 24 months old. But some 6 million older dairy cows are also slaughtered, often for low-grade hamburger, when they can no longer produce sufficient milk. (That was the case with the 6-year-old Holstein dairy cow that tested positive for BSE in December.) So adopting an E.U.-style testing regimen for cows over 30 months would cost $360 million, Hueston says.
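    Hueston's figure follows directly from the numbers quoted. A quick back-of-envelope check, using only the article's estimates, can be written in a few lines of Python:

```python
# Back-of-envelope check of the cost estimate quoted in the article
cost_per_animal = 60             # dollars, Hueston's estimate per rapid test
older_cows_per_year = 6_000_000  # U.S. dairy cows over 30 months slaughtered yearly
total_cost = cost_per_animal * older_cows_per_year
print(f"${total_cost:,}")        # $360,000,000 — the $360 million Hueston cites
```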

    Prion hunters.

    German scientists prepare a sample from a cow's brain as part of a BSE testing program begun in 2000.


    But Hueston thinks that “would not be a wise use of resources” because there's no evidence at present that BSE is widespread. Most scientists think that greater enforcement of feed bans and proper slaughterhouse practices will do more to protect public health than will an expanded testing regime. The requirements “can't be just on paper, they have to be enforced,” says Heim.

    Hueston also questions the motives of some of the testing advocates. He says Prusiner's recommendations amount to “a conflict of interest” given his financial ties to InPro Biotechnology in South San Francisco, which he founded to market one of his tests. Prusiner did not respond to phone messages. Even some of InPro's competitors think a “test all” policy is over the top and recommend that the government take advice from impartial third parties. “It is more effective if an expert panel recommends [levels of] testing than if a company does,” says Prionics' Stephanie Musshafen. “A company always has a commercial interest.”

    The U.K.'s Matthews hopes the United States will settle on a pragmatic and rational program that might set an example for the rest of the world. But even he foresees the need for expanded testing. While declining to suggest a specific number, he says that any program should adequately assess the level of risk. “One case, even in an imported cow, likely indicates other cases have gone undetected,” he says.


    Microbes Made to Order

    1. Dan Ferber

    A new breed of bioengineers aims to create microbes from off-the-shelf parts. The parts are coming, but will researchers be able to put them together?

    In the dead of a New England winter, 16 students worked day and night for a month trying to make Escherichia coli blink like a lighthouse. No one really expected a blinking bacterium to be all that useful. Instead, the exercise was meant to teach students—and their instructors—how to make reprogramming bacterial behavior more routine. The first class of its kind, held last January at the Massachusetts Institute of Technology (MIT) in Cambridge, also marked the emergence of the hot new field of synthetic biology.

    Bacterial blinking circuits are just one element in the MIT researchers' “registry of standard biological parts,” which is akin to an inventory that electrical engineers or basement tinkerers might consult when they design a new device, says class co-instructor Drew Endy of MIT. Researchers at MIT and elsewhere are working on sensors and actuators, input and output devices, genetic circuits to control cells, and a microbial chassis in which to assemble these pieces. If they're successful, the registry will help them reach one of the goals of synthetic biology: to allow researchers to “go into the freezer, get a part, hook it up,” and have it work the first time, Endy says.

    The parts list is itself just one piece of a hugely ambitious plan: to engineer cells into tiny living devices. Some of the engineered devices these researchers envision will function as molecular-scale factories. Others will help detect chemical weapons, clean up environmental pollutants, make simple computations, diagnose disease, fix faulty genes, or make hydrogen from water and sunlight. “We're going to modify the whole behavior of the cell,” says bioengineer Ron Weiss of Princeton University in New Jersey. Synthetic biologists aim to build cells from the ground up rather than tinkering with a handful of genes or tweaking a metabolic pathway or two, as do today's genetic engineers.

    The fledgling field, which is attracting engineers and biologists in equal measure, means different things to different people. Engineers view it primarily as an engineering discipline, a way to fabricate useful microbes that do what no current technology can. But many biologists see it instead as a powerful new way to learn about cells. Unlike systems biologists, who analyze troves of data on the activity of thousands of genes and proteins (Science, 5 December 2003, p. 1646), synthetic biologists simplify and build. They create models of genetic circuits, build the circuits, see if they work, and adjust them if they don't—learning about biology in the process. “I view it as a reductionist approach to systems biology,” says biomedical engineer James Collins of Boston University.

    Blinkers on.

    A synthetic gene circuit that works like a clock turns on fluorescent proteins that make these E. coli flash on and off.

    CREDIT: M. ELOWITZ ET AL., SCIENCE 297, 1183 (2002)

    However it's defined, synthetic biology is catching on. A growing cadre is publishing in top journals. Researchers at Lawrence Berkeley National Laboratory (LBNL) in California established the world's first synthetic biology department last June. A European Commission program designed to support “unconventional and visionary research” has issued a request for synthetic biology research proposals. The inaugural synthetic biology conference (Synthetic Biology 1.0) is set for next June at MIT. “I think we're going to see some spectacular new science and engineering,” says Eric Eisenstadt, a program manager who oversees synthetic biology funding for the Defense Advanced Research Projects Agency (DARPA). J. Craig Venter, who heads the Institute for Biological Energy Alternatives in Rockville, Maryland, predicts that “engineered cells and life forms [will be] relatively common within a decade.”

    Rewiring the cell

    Nothing is more basic for a parts list than reengineered genetic circuits that direct the behavior of made-to-order microbes. Along with parts, genetic-circuit designers need simple principles to guide their work, just as engineers use Ohm's law of resistance or Kirchhoff's rule on conservation of charge at a junction to guide the design of electric circuits. But biologists are just beginning to grasp the rules.

    The library of such principles was inaugurated decades ago when microbiologists François Jacob and Jacques Monod of the Pasteur Institute discovered the first gene circuit—a set of genes that help E. coli digest lactose. A regulatory gene called a repressor is normally on, keeping the lactose-digestion circuit inactive. When lactose is present, however, the bacterium turns the repressor off. Such gene circuits can be diagramed with nodes representing genes and arrows indicating which other genes they regulate. “If you squint hard enough, it begins to look like [an electrical] circuit diagram,” says bioengineer Jeff Hasty of the University of California (UC), San Diego.

    The analogy falls down on the details, however. Electronics engineers know exactly how resistors and capacitors are wired to each other because they installed the wiring. But biologists often don't have a complete picture. They may not know which of thousands of genes and proteins are interacting at a given moment, making it hard to predict how circuits will behave inside cells.

    To simplify the problem, physicists Michael Elowitz of the California Institute of Technology (Caltech) in Pasadena and Stanislas Leibler of Rockefeller University in New York City built a genetic clock from scratch—the original blinking bacterium that last winter's MIT students were trying to improve upon. The two, then at Princeton University, designed a circuit of three repressor genes (call them genes A, B, and C), which they dubbed the “repressilator.” It worked like the game “Rock, Paper, Scissors”: Gene A turned off gene B, gene B turned off gene C, and gene C turned off gene A. Gene C also turned on a jellyfish gene that turned the cell green. In physicists' terms, the device was a limit-cycle oscillator: an oscillator that reestablishes the same behavior after it's perturbed. When they put the circuit into E. coli, the cells blinked. The work, reported in Nature in 2000, is “the high-water mark of a synthetic genetic circuit that does something,” Endy says.
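    The ring's logic is easy to sketch. Below is a minimal Boolean caricature in Python (the real repressilator is a continuous, noisy biochemical system, so this captures only the wiring, not the dynamics): each gene is ON at the next step exactly when its repressor was OFF, and from an asymmetric start the circuit cycles through six states forever, producing the blink.

```python
def repressilator_step(state):
    """One synchronous Boolean update of the three-gene ring:
    C represses A, A represses B, B represses C (as described above)."""
    a, b, c = state
    return (int(not c), int(not a), int(not b))

def run(state, steps):
    """Iterate the ring, recording every state visited."""
    history = [state]
    for _ in range(steps):
        state = repressilator_step(state)
        history.append(state)
    return history

# From an asymmetric start the ring visits six distinct states and repeats;
# reading out gene C (which drives the fluorescent reporter) gives the blink.
history = run((1, 0, 0), 12)
```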

    More recently, Michael Savageau of UC Davis, Alexander Ninfa of the University of Michigan, Ann Arbor, and their colleagues rewired some well-studied bacterial gene circuits to make a less noisy oscillator. With minor modifications, the oscillator also functioned as a toggle switch, turning a circuit on or off. The results, published in Cell in April, show that as researchers understand the design principles of genetic circuits, they learn to control their behavior, Savageau says.

    So far, most designers have made gene circuits that mimic simple physical devices used routinely by engineers, including toggle switches, oscillators, and feedback loops. But evolution “might have come up with new designs that engineers never thought of,” says Savageau. So they have begun drawing design principles from biology. By examining patterns of gene expression in E. coli, for example, physicist-turned-biologist Uri Alon of the Weizmann Institute of Science in Rehovot, Israel, and colleagues identified three widespread gene circuit designs. They synthesized a circuit using the most common design, called a feed-forward loop, and installed it in bacteria. As they reported in November in the Journal of Molecular Biology, it enables bacteria to turn genes on slowly but off quickly—a property that seems to help the cells filter out molecular noise and activate genes only when they're needed.
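    The "on slowly, off quickly" behavior of a coherent feed-forward loop can be sketched with a toy model (a hypothetical continuous version invented for illustration, not Alon's actual system): input X activates intermediate Y, and output Z fires only when both X and Y exceed a threshold, so Z turns on with a delay while Y accumulates but shuts off the moment X disappears.

```python
def simulate_ffl(t_off=3.0, t_end=5.0, dt=0.01, threshold=0.5):
    """Toy coherent feed-forward loop with AND logic:
    X -> Y, and Z is ON only when both X and Y exceed the threshold."""
    y = 0.0
    trace = []
    for i in range(int(t_end / dt)):
        t = i * dt
        x = 1.0 if t < t_off else 0.0    # input pulse, on from t=0 to t_off
        z = 1.0 if (x > threshold and y > threshold) else 0.0
        trace.append((t, x, y, z))
        y += dt * (x - y)                # Y accumulates while X is on
    return trace

# Z turns on only after Y has accumulated (~0.7 time units here), but it
# switches off in the same step that X does: slow on, fast off.
trace = simulate_ffl()
```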

    Biosensors to go.

    Homme Hellinga and colleagues retooled bacterial sensor proteins, like the one on the computer monitor, to bind desired chemicals.


    Biodesign will truly resemble engineering when researchers can construct models that accurately predict how a gene circuit will behave inside cells, says engineer-turned-biologist Harley McAdams of Stanford University. To do that, Collins, biomedical engineer Timothy Gardner, and their Boston University colleagues used a mathematical method from a branch of engineering called system identification to infer the design of a network—in this case, the SOS pathway that turns on genes in response to DNA damage—by monitoring parts of the network. Given messenger RNA levels produced by some genes, the algorithm correctly determined the entire circuit's wiring. The algorithm, reported in Science in July (4 July 2003, p. 102), also predicted which points in the network were blocked by a DNA-damaging drug. Pharmaceutical companies may be able to adapt the method to see if candidate drugs affect parts of the cell aside from their intended target, Collins says.
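    The idea behind such system-identification methods can be illustrated with a toy linear model (a drastic simplification invented here for illustration; Gardner and Collins's algorithm works on real expression data): assume each gene's rate of change is a weighted sum of the others' levels, perturb the system, record the trajectories, and recover the weight matrix — the "wiring" — by least squares.

```python
import numpy as np

# Ground-truth "wiring": entry [i, j] is the influence of gene j on gene i
A_true = np.array([[-1.0,  0.0,  0.8],
                   [ 0.9, -1.0,  0.0],
                   [ 0.0,  0.7, -1.0]])

rng = np.random.default_rng(0)
dt = 0.01
X_rows, dX_rows = [], []
for _ in range(5):                       # several perturbation "experiments"
    x = rng.normal(size=3)               # a random initial perturbation
    for _ in range(50):
        dx = A_true @ x                  # true rates for this toy model
        X_rows.append(x.copy())
        dX_rows.append(dx)
        x = x + dt * dx                  # Euler step to the next measurement
X, dX = np.array(X_rows), np.array(dX_rows)

# Least squares: find A such that dX ~ X @ A.T, i.e. recover the wiring
A_hat = np.linalg.lstsq(X, dX, rcond=None)[0].T
# A_hat matches A_true: the network diagram is recovered from data alone
```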

    Despite the recent successes, it will take years for systems biologists to fully understand the logic of gene circuits, in part because there are so many of them. Engineers such as Endy, meanwhile, are happy to get information from any source about genetic modules that can direct the behavior of engineered microbes. They don't plan to wait around for the systems biologists, according to Endy: “Synthetic biology says, ‘Screw it. You want modules? We'll build modules.’”

    Like LEGO bricks

    In the eighth-floor playroom of MIT's Artificial Intelligence Laboratory, teams of students in the synthetic biology class designed circuits to improve upon the repressilator, the circuit driving Elowitz and Leibler's original blinking bacteria. One group drew up plans to add a logic gate that would let a chemical switch the blinking on or off. Another designed a system to flash more often. A third group planned a “synchronator” that would make all the cells blink in concert.

    Each module was composed of parts from MIT's standard synthetic biology parts list, a data book dubbed “BioBricks.” It's based on a parts list called the transistor-transistor logic data book, from which electronic circuit designers select compatible, LEGO-like modules for complex circuits. Each BioBrick is a piece of DNA; each can be spliced to any other BioBrick. Each either makes up or encodes a functional element familiar to any molecular biologist: promoters and terminators to start and stop transcription, antisense RNAs to block gene expression, ribosome-binding sites that spur cells to make protein from messenger RNA, and reporter genes that make cells glow green.

    As often happens when engineers test electronic circuit designs, the MIT students were forced to improvise when their grand plans collided with reality. Each group was allowed to play with a budget of 5000 base pairs of DNA; all of them ran over budget. They learned to economize and share parts. Future synthetic biologists, like engineers, will also have to learn to specialize, says course co-instructor Gerald Sussman, with some designing circuits, others fabricating them, and still others making the larger components. “Eventually we'll be able to design and build in silico and go out and have things synthesized,” says Jay Keasling, head of LBNL's new synthetic biology department.

    Circuit logic.

    To control synthetic microbes, scientists are reengineering genetic circuits like this one.


    Synthetic biologists eventually aim to make bacteria into tiny programmable computers. Like electronic computers, the live ones would use both analog circuits and digital logic circuits that perform simple computations. Rudimentary components are already taking shape. Princeton's Weiss, MIT's Tom Knight, and their colleagues made an amplifier and other analog circuits. They also made a set of genetic on-off switches that can perform basic Boolean computations and used them to fashion eight genetic circuits that work as logic gates, including a NOT gate and an AND gate. And chemist Milan Stojanovic of Columbia University and computer scientist Darko Stefanovic of the University of New Mexico in Albuquerque created a digital logic circuit made of DNA that's unbeatable at ticktacktoe. Such computers would never rival the raw computing power of their electronic cousins, Weiss says, but they'd be able to direct the operation of engineered cells.
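    Genetic gates compose just as electronic ones do. A minimal sketch (hypothetical threshold abstractions, not the teams' actual constructs): model a NOT gate as a gene silenced by its input repressor and an AND gate as a promoter needing two activators, then wire them into a NAND — and from NAND, any Boolean function can in principle be built, as the XOR below shows.

```python
def gene_not(repressor_present):
    """A gene whose product is made only when its repressor is absent."""
    return 0 if repressor_present else 1

def gene_and(a, b):
    """A promoter that fires only when both activating inputs are present."""
    return 1 if (a and b) else 0

def gene_nand(a, b):
    """AND feeding a NOT: a universal gate for building larger circuits."""
    return gene_not(gene_and(a, b))

def gene_xor(a, b):
    """XOR wired entirely from NAND gates, as in electronic logic."""
    n = gene_nand(a, b)
    return gene_nand(gene_nand(a, n), gene_nand(b, n))
```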

    Peripheral components are being developed, too, such as engineered cells that can sense chemicals in their environment and respond with a signal. Bacteria are already good at sensing the molecules they care about, but biochemist Homme Hellinga of Duke University in Durham, North Carolina, and colleagues have devised an algorithm to direct natural biosensor proteins to bind whatever chemical the designers want. The method, reported in Nature in May, allowed the team to reengineer a single E. coli sugar-binding protein to bind the explosive TNT; a metabolite called lactate; or serotonin, a compound brain cells use to communicate. The team members plugged the redesigned protein into an engineered gene circuit, which they stuck into a bacterium to create a bug that glows green when it sees its target chemical. Similar microbial biosensors could detect underwater ordnance or environmental pollutants or be used in medical diagnostics. “We'd be the ones who give you different components for an electronic breadboard, beyond what nature offers,” Hellinga says.

    Designers are working on programmable cells that would need the equivalent of a keyboard to receive input. So far researchers have used chemicals to send signals to reengineered microbes, but the MIT team is exploring ways to use light as well. And to provide a readout on what's going on inside cells—the equivalent of a monitor—the researchers will need something beyond the fluorescent jellyfish protein that's been used until now. Ideally, engineered cells would communicate with one another, allowing them to act in concert, Weiss says. To do that, he and his colleagues are rigging E. coli with quorum-sensing proteins, which other bacteria use to send and receive signals.

    All together now

    Like a computer or a car, engineered microbes require a chassis. One would-be organism frame builder is Venter, who directs a high-profile effort to engineer synthetic microbes that can make hydrogen from sunlight and water (Science, 14 February 2003, p. 1006). His strategy is to install special components and a new genetic agenda into a microbe with a stripped-down genome.

    But getting installed functions to work reliably—and safely—will be a tremendous challenge (see sidebar on p. 159). For starters, devices put together in the lab may not work well in cells, where they'll sit in close quarters with hundreds or thousands of other biological parts, says DARPA's Eisenstadt. But Weiss, Caltech chemical engineer Frances Arnold, and their colleagues have come up with a possible way around that problem. Last year in the Proceedings of the National Academy of Sciences, they reported fine-tuning the connections of a genetic circuit using directed evolution, a test tube method by which researchers mutate microbial genes until the bugs perform the way the researchers want. The method allowed two previously incompatible parts of the circuit to work together.

    Command center.

    Students and instructors in MIT's first annual synthetic biology class labored to make microbes that do their bidding.


    Making the components compatible isn't enough, McAdams says. Synthetic biologists will have to test their prototypes for robustness—whether they work under a wide range of conditions. They'll need computer programs that predict how designed circuits would behave in the cell, he adds. And once circuits are installed, Savageau says, researchers will need the equivalent of a voltmeter to test how well the circuit is working. “What you'd like is some magic instrument where you could look at the concentration” of proteins, RNAs, and metabolites, he says. That doesn't exist, but with microarrays and other technology, “it's coming,” he says.

    Despite many early successes, synthetic biologists might be getting ahead of themselves. Much more needs to be known about the basics of cellular “device physics”— including where proteins are located, how fast they turn over, and what other proteins they talk to, says Eisenstadt. “We'd like to be building life forms from first principles,” says Venter, “but it's kind of hard when you don't know all the first principles.” And after all is said and done, researchers may never be able to make a synthetic cell at all, Venter says: “People should not accept as a fait accompli that this will work.”

    Back at MIT, it's still not clear whether last year's bacterial class projects will blink. The modules were made, and Endy and co-instructor Knight's teams are still installing them in E. coli to test them. But whether they work or not, the MIT engineers are pressing on. For the second annual synthetic biology class, which kicked off this week, they'll challenge the students to make bacteria communicate with their neighbors on a petri dish to turn genes on or off. The goal this time: genetically encoded polka dots.


    Time for a Synthetic Biology Asilomar?

    1. Dan Ferber

    Synthetic microbes might one day clean up pollutants, produce hydrogen for fuel, improve gene therapy, and more. But synthetic biology also raises some real dangers, say bioethicists such as David Magnus of Stanford University: “The greater control we have over bacteria, the greater potential we have for good but also for harm.”

    Like today's genetically engineered microbes, many synthetic microbes would be confined to laboratories or biotech factory vats. But some proposed uses of engineered microbes, such as sensing explosives or cleaning up pollutants, would require robust bugs that could survive outside the lab. New methods may be needed to keep them from spreading, Magnus says. “I don't think any engineered species made at this stage should be released, and if it's accidentally released it should … no longer survive,” says J. Craig Venter, head of the Institute for Biological Energy Alternatives in Rockville, Maryland.

    Bioengineers, like other engineers, should be drilled to put public “health, safety, and welfare” first, says engineering ethicist Aarne Vesilind of Bucknell University in Lewisburg, Pennsylvania. But the new field ups the ante. “A sleazy bioengineer could develop something … that affects the entire global ecosystem,” Vesilind says. He and others say that synthetic biologists and ethicists should hold a summit meeting to define the bioengineers' “responsibilities to society,” perhaps modeled on the 1975 Asilomar Conference, at which biologists defined safeguards needed to contain genetically engineered microbes. “These guys have got to get on it,” Vesilind concludes, “because otherwise it's going to get away from them.”


    Deep Repositories: Out of Sight, Out of Terrorists' Reach

    1. Richard Stone

    The threat of terrorism and shifting economics are spurring efforts to entomb nuclear wastes deep underground; Sweden is helping pave the way

    ÄSPÖ, SWEDEN—A thin stream of water trickles down the rough-hewn black granite of a tunnel deep beneath the Simpevarp Peninsula on the Baltic Sea. It's in this kind of crystalline bedrock that Swedish authorities intend to imprison the most pernicious isotopes of uranium, plutonium, and other radioactive elements, some of which will remain dangerously hot for 100,000 years. If local residents agree, thousands of tons of spent-fuel assemblies accumulated by the country's 11 civilian nuclear power plants will be loaded into copper canisters and entombed in perpetuity. Experiments here at the Äspö Hard Rock Laboratory are intended to show that the crypt will withstand everything from crushing pressures to the unremitting heat of the nuclear material to the most difficult problem of all: relentless attack by moisture.

    Äspö and other unusual labs of its ilk are prepping nuclear scientists around the world for some of the most important and costly engineering projects ever undertaken: the construction of geological repositories for spent nuclear fuel. For more than 2 decades, the prospect of high-level waste underfoot has sparked determined opposition from the general public. Indeed, as protests last year in Italy and South Korea show, repositories continue to be a hard sell. And in Europe, many countries are reluctant to be the first to open a repository for fear that they will end up taking waste from their neighbors.

    But the tide may be turning in favor of building repositories. The 11 September terrorist attacks have highlighted the potential vulnerability of aboveground storage of vast quantities of spent uranium fuel laced with plutonium and other radionuclides. “The risks we face from terrorism and nuclear proliferation are immediate,” contends Kenneth Brill, U.S. ambassador to the International Atomic Energy Agency (IAEA) and the Vienna office of the United Nations. Such concerns spurred the U.S. Congress in July 2002 to override the state of Nevada's objections and approve Yucca Mountain as the U.S. national repository. Across the globe, at least two dozen national efforts are now in motion.

    Deep heat.

    Sweden's Äspö Hard Rock Laboratory is testing a machine that deposits canisters of highly radioactive waste in deep boreholes.


    Another watershed is that specialists have embraced eternal entombment as the best option. “All experts in the world agree this is the safest solution,” claims Bernard Frois, director of energy, transport, environment, and natural resources at France's science ministry. Indeed, geological repositories are “the only sustainable solution achievable in the near term,” IAEA director-general Mohamed ElBaradei told a conference* in Stockholm last month. A recent report from Harvard University's Managing the Atom Project argues, moreover, that entombing spent-fuel rods is far more cost-effective than reprocessing them to extract fissile material such as plutonium—and that it will remain so for decades. Yet there remains a Catch-22, ElBaradei says: Although public skepticism hampers efforts to build repositories, one or more in successful operation would dramatically boost public confidence.

    When the first repositories open, they will become potent symbols with starkly contrasting meanings. In places such as the United States and Russia, a solution to the long-standing dilemma of what to do with highly radioactive waste could breathe new life into an industry suffocated by Three Mile Island and Chornobyl. “We believe there will be a second nuclear era,” says Thomas Sanders, manager of the Global Nuclear Future program at Sandia National Laboratories in Albuquerque, New Mexico. But in Sweden, Germany, and other countries that have decided to abandon nuclear power, the repositories may well be seen as laying to rest their nuclear past.

    Hothouse flowers

    Ever since the first nuclear power stations came online a half-century ago, engineers have struggled to find a way to close the uranium fuel cycle—that is, dispose of spent fuel. “This is the only step not yet put into practice,” says Christian Waeterloos, director of nuclear safety and safeguards at the European Commission (EC). Many countries possess the know-how to mill uranium ore into ceramic uranium dioxide fuel pellets, resembling small black rubber stoppers, that are stocked in 4-meter-long fuel assemblies. After 5 years of burning in a reactor, or when roughly 5% of the uranium has been split into fission products such as cesium and strontium or transmuted to plutonium and the efficiency of the chain reaction has eroded, the assemblies are removed. Some countries process the spent fuel to extract uranium for reuse, whereas others store it as waste. According to some estimates, more than 1000 tons of plutonium alone have piled up in spent-fuel facilities worldwide.

    In Sweden, spent-fuel assemblies are loaded into 78-ton casks and shipped on the Baltic, in a special boat, to Oskarshamn, a town near Äspö. Here at the Central Interim Storage Facility for Spent Fuel (CLAB) of the Swedish Nuclear Fuel and Waste Management Company SKB, the piping-hot assemblies are removed from the casks and plunged into an Olympic-sized pool of deionized water for cooling. Every day, several cubic meters of water evaporate and must be replaced, making the cavernous hall feel like a hothouse. In another 1000 years, the nuclear materials will have lost 99% of their radioactivity, but they will remain above the natural background for another 100 millennia. The time scale is mind-boggling: “We have to plan for 4000 generations,” says SKB's Brita Freudenthal.

    Not in my backyard.

    Plans for a repository in South Korea were greeted with protests last year.


    Partly because of the unpopularity of repositories, some countries have considered storing spent fuel in interim facilities such as CLAB for a century or longer. Such a prospect worries some experts. In the United States, for example, most cooling pools at the 77 spent-fuel storage sites in 33 states are so densely packed that the assemblies must be separated with neutron-absorbing boron panels to prevent the spent fuel from going critical. If for some reason one of these pools lost its water—because of sabotage, for instance—the cladding of those assemblies recently removed from a reactor could catch fire, releasing volatile fission products such as cesium-137, a group led by Robert Alvarez, senior scholar at the Institute for Policy Studies in Washington, D.C., warned last year in the journal Science and Global Security. If the fire spread to older spent fuel, the authors contend, contamination of the surrounding countryside “could be significantly worse” than that from the Chornobyl disaster in 1986.

    Another concern is that the longer spent fuel remains aboveground and accessible, the more susceptible it is to terrorism or societal meltdowns. “Human interactions are inherently more vulnerable to failure than passive physical barriers,” says ElBaradei. In this respect CLAB, constructed in the early 1980s, is ahead of its time. The storage pool, nearing its capacity of 5000 tons' worth of assemblies, is underground and thus extra-secure. “When we built this plant, we thought about human intrusion, terrorism, even war,” Freudenthal says. “People in the nuclear industry laughed; they thought, ‘Typical Swedes.’ They aren't laughing anymore.”

    Red-hot potatoes

    In various nooks off Äspö's main tunnel, which spirals down to a depth of 460 meters, a few dozen scientists have set up experiments to test all the facets of a repository: from the integrity of the canisters and the clayey buffer surrounding them to the ability of liquids to diffuse into the surrounding rock. The lab, which cost about $35 million to construct and requires about $20 million per year to run, is one of several worldwide that have helped build a technological case for repositories. “We cannot afford to do second-class science, or we'll lose credibility” with the public, says Piet Zuidema, science chief at the National Cooperative for the Disposal of Radioactive Waste in Wettingen, Switzerland. “We're now into minor improvements,” adds EC nuclear expert Derek Taylor. Building a repository, he says, “could be done tomorrow.”

    Indeed, Äspö's latest experiments may offer a way to defeat the archvillain of geological storage: moisture. The fear is that unstable oxygen compounds liberated from water could act like a drill, over decades corroding holes in the canisters that would allow radioisotopes to filter into the groundwater. But an unexpected ally might neutralize this menace. At various spots in the Äspö maze, the slick walls are streaked with orange and black, the residues of bacteria that reduce iron, sulfur, and manganese. Scientists are probing whether water-loving microbes will consume the oxygen before it overpowers the corrosion-resistant copper. “This is one of the most important experiments we're running. It's a scenario that's impossible to model away,” says mining engineer Christer Svemar, director of repository technology at Äspö. So far, the results are promising.

    The big challenge, though, “is to merge the technological rationale with the societal state of mind,” says Yves Le Bars, president of the French National Radioactive Waste Management Agency. Although not-in-my-backyard opposition to repositories is as strong as ever, some countries have pulled out security concerns as a trump card. The U.S. Department of Energy (DOE) has cited homeland security as an additional impetus for pressing ahead with Yucca Mountain, its choice for a repository that remains controversial because of the site's complex geology (Science, 28 June 2002, p. 2333). The terrorism and proliferation threats “have changed the debate over Yucca Mountain,” says Sanders. After more than 20 years, 36 million hours of labor, and $4 billion spent on studies of the suitability of the Nevada site, DOE anticipates applying for an operating license from the Nuclear Regulatory Commission by the end of 2004; the application is expected to undergo up to 18 months of review.

    A central facility such as Yucca Mountain would certainly be appealing to the European Union, but the EC “would in no way require any member state to accept waste from any other country,” says Waeterloos. Nevertheless, some countries that are advanced in their planning worry that their eventual repositories could be designated Europe-wide facilities. Such is the sentiment in Scandinavia. Finland's parliament in 2001 gave the green light for authorities to apply for a construction license for a repository at Olkiluoto. “We're afraid that local people will perceive that the facility will be compelled to accept waste from other countries,” says Jussi Manninen, deputy director-general of Finland's Ministry of Trade and Industry. SKB president Claes Thegerström agrees: “Each country should take care of its own waste.”

    Russia, on the other hand, is only too keen to open its planned repository to the world. “There are lots of places to put waste in Russia,” says Taylor. The problem, he says, is inadequate legislation and regulation governing the nuclear industry. Moreover, Russia has its own problems with securing spent fuel, especially leftovers from its decommissioned nuclear submarines. “If they can't manage that properly, why send them more?” asks Taylor.

    One fear of antinuclear campaigners is that some countries will use repositories to help resurrect their nuclear programs. Closing the fuel cycle would deprive critics of a potent argument: that it is irresponsible to build new nuclear plants until there is a solution to the problem of high-level waste. “Nuclear waste has been seen as the Achilles' heel of the industry,” Taylor says.

    China, meanwhile, is already forging ahead with an aggressive nuclear program. It has eight operating reactors and plans to bring another three online next year, which all told would generate 9000 megawatts, or 2.3% of China's generating capacity. The country is aiming for 32,000 megawatts by 2020, says Huazhu Zhang, chair of the Atomic Energy Authority in China. For the past few years, Chinese nuclear scientists have been probing a potential repository site near the Gobi Desert. They plan to build an underground lab like the one at Äspö in the next decade.

    “It's likely that more countries will look to nuclear power to meet rising energy demands,” Brill predicts. “We can either enable the safe and peaceful use of nuclear energy, or we can continue to worry about the ever-expanding problem of spent-fuel accumulation.” That's an alluring sales pitch, perhaps, unless a repository is being planned in your backyard.


    Wanted: One Good Cosmic Blast to Shake the Neighborhood

    1. Charles Seife

    As scientists wait for our galaxy's next supernova, experiments around the world are poised to capture short-lived streams of critical data that may still be decades away

    On 23 February 1987, sudden flashes of light in water tanks buried deep underground provided the first indication of a violent event in our cosmic neighborhood. Some 170,000 light-years away, in a cloud of stars that orbits the Milky Way, a star exploded. The cataclysm flung fragments of matter and energy across the universe. Even before astronomers could see the stellar pyre blaze forth, particle physicists knew that a star had died when they spotted the flashes of 20 subatomic particles, neutrinos, in detectors in Japan and the United States. For 13 crucial seconds, neutrino physicists turned into astronomers.

    That volley of neutrinos provided some key information on the workings of a supernova. But the next time a star explodes nearby, the scientific fallout could be revolutionary. Physicists now have a battery of instruments tuned and ready, waiting for such an event. Neutrino observatories around the world are primed to trap particles born in a supernova, and gravitational-wave detectors are listening for the shudder in space and time caused by the demise of a star and the birth of a black hole. And teams of scientists themselves are on alert around the world, their pagers set to go off when a supernova explodes. Yet it could be a long and frustrating wait. Their pagers may not ring for decades, and their careers are likely to end before a nearby supernova erupts. “One of the fundamental problems of a supernova watch is that the supernova frequency is smaller than the life span of the people who want to do the science,” says David Cline, a physicist at the University of California, Los Angeles.

    The Godot-like wait is due to the fact that neutrino hunters and gravitational-wave physicists are interested only in a relatively close-by supernova—one in our galaxy or its immediate environs. Too far away, and the gravitational waves would be too weak and the neutrinos too few for even the best detectors to spot. Although dozens of supernovae pop off somewhere in the universe each year, flaring and winking out in optical images, scientists think that a local supernova explodes about three times a century, on average. Thirty years is a long time to wait for 10 or 20 seconds of data, but those few seconds will contain a huge amount of information about supernovae—and about the subatomic world—that can't be acquired in any other way.
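The waiting-time odds quoted above can be sketched with a simple Poisson model. This is an illustrative back-of-envelope calculation, assuming galactic supernovae arrive independently at the article's quoted rate of about three per century:

```python
import math

def prob_at_least_one(rate_per_year, years):
    """P(at least one event) for a Poisson process: 1 - exp(-rate * t)."""
    return 1.0 - math.exp(-rate_per_year * years)

RATE = 3 / 100.0  # ~3 galactic supernovae per century (assumed Poisson)

print(f"10-year wait:   {prob_at_least_one(RATE, 10):.0%}")  # ~26%
print(f"30-year career: {prob_at_least_one(RATE, 30):.0%}")  # ~59%
```

On these assumptions, even a full 30-year career gives only somewhat-better-than-even odds of catching a local supernova, which is the gamble Cline describes.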

    Bursting with promise.

    Nearby supernova 1987A gave researchers a foretaste of things to come—someday.


    A supernova explodes at the end of a heavy star's life, when its nuclear fuel has run out. As the fusion reactions in its center die out, the core of a star collapses violently under its own gravity. This collapse causes an explosion that blows away the star's outer envelope and releases an enormous amount of energy—10⁴⁵ joules or so—and is visible halfway across the universe. In its wake is left a neutron star or a black hole. But although scientists have a good general understanding of supernovae, the details are devilishly elusive.

    “One of the biggest open questions is how supernovae explode at all. On computers, the explosions do not succeed,” says John Beacom, a theorist at Fermi National Accelerator Laboratory in Batavia, Illinois. “The supernova does not blow up. The shock wave goes out, pushes out the envelope, and then collapses back in. We're missing something.” And this something is probably not contained in the light from the supernova, because the light is emitted long after the explosion has happened.

    Only the central part of a star collapses in a supernova, and whatever light is emitted during the core collapse is trapped in the envelope of surrounding matter, at least until the shock wave rips the envelope apart. “It takes time for the light to get out,” says Beacom. On the other hand, neutrinos, which are created immediately by the raw energy of the collapse, barely interact with the envelope as they speed away. Even the dense matter of the collapsing core can't stop the neutrinos from scattering out and escaping. “They come out in about 10 seconds,” Beacom adds.

    This means that neutrinos give a much earlier snapshot of the collapsing core than light does—and because about 99% of the energy of the supernova comes out in neutrinos, there are plenty of them to go around. The trouble is detecting them.

    The same property that makes neutrinos such a great probe of early supernova dynamics makes them hard to spot. They stream through matter without interacting very much, and without interaction, there's no detection. Even though more than a billion billion neutrinos from supernova 1987A struck the two neutrino detectors, only a score announced their presence. Nowadays, the more sensitive modern detectors would see several hundred neutrinos from a similar event.
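The jump from a score of events to several hundred follows from simple scaling: the expected count grows linearly with detector mass and falls with the square of the distance. The reference numbers below (an ~11-event yield in a ~2-kiloton water detector at the ~50-kiloparsec distance of 1987A) are illustrative assumptions for the sketch, not figures from the article:

```python
def expected_events(n_ref, mass_ref_kt, dist_ref_kpc, mass_kt, dist_kpc):
    """Scale a reference neutrino event count by detector mass and 1/d^2."""
    return n_ref * (mass_kt / mass_ref_kt) * (dist_ref_kpc / dist_kpc) ** 2

# Assumed reference: ~11 events in a ~2 kt water detector at ~50 kpc (1987A-like)
n = expected_events(11, 2.0, 50.0, 32.0, 50.0)  # a ~32 kt modern detector, same distance
print(round(n))  # → 176, i.e., a couple hundred events
```

A supernova in our own galaxy, several times closer than 1987A, would push the count higher still through the inverse-square factor.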

    Nevertheless, according to Sanjay Reddy of Los Alamos National Laboratory in New Mexico, even the 20 neutrinos from 1987A were important; the mere fact that the neutrinos streamed out of the supernova “over a time scale of seconds rather than in a very short burst gave support to the idea that neutrinos play a key role” in supernova explosions, he says. As the neutrinos fly through the superdense matter of the collapsing star—matter whose properties are not well understood at all—and through its less dense envelope, they scatter, slow down, and deposit energy in the matter.

    Conversely, the energy that the neutrinos have when they escape and, even more important, the time it takes for neutrinos to escape the supernova, reflect the nature of the matter that they stream through. “The observation is so simple: How long do you see the neutrinos for?” says Reddy. “If you just focus on time structure, in a few seconds you learn about high-density matter” in a manner that a laboratory could never reproduce, he says. Pulsations or sharp spikes and dips in the neutrino signal would yield clues about the dynamics of core collapse. “It is going to tell you what matter is doing. If it's oscillating or it's very convective, you are going to see signatures of that,” says Reddy. But to him, the ultimate prize would be if the neutrino burst stopped abruptly: “It would be extraordinary. It would be the first time we would actually see a black hole form.”

    Data dearth.

    Computer models of supernova explosions are mathematically sophisticated but need more real-world observations.


    Just as exciting to neutrino physicists is the possibility that a nearby supernova would reveal some of the properties of neutrinos themselves. Supernovae produce all types of neutrinos in vast quantities, including the tau neutrino, which is extremely difficult to produce in the laboratory because its sister particle, the tau lepton—which plays a role in the neutrino's creation and detection—is a rare beast. As a result, scientists don't yet have a good handle on the properties of tau neutrinos—more specifically, how often tau neutrinos transform, or oscillate, into electron neutrinos, one of the key missing properties in the reigning model of particle physics. “What could be very exciting is that you could extract oscillation information from the supernova,” says Kate Scholberg, a physicist at the Massachusetts Institute of Technology (MIT). “The supernova physics and neutrino physics feed on each other. If you had a signal, it would be really important.”

    This is why Scholberg and colleagues have set up the Supernova Early Warning System (SNEWS)—an alert service to let them and other scientists know if neutrino detectors around the world pick up a sudden burst of neutrinos marking a supernova. The most capable neutrino detectors—the Super-Kamiokande Observatory in Japan and the Sudbury Neutrino Observatory in Canada—are part of a growing network of labs hooked up to the SNEWS computer at Brookhaven National Laboratory in Upton, New York. When a local supernova erupts, SNEWS will immediately inform physicists and astronomers. “We're putting ourselves on pagers so we'll know if an alarm happens,” she says. The neutrino physicists will quickly try to determine where the supernova is in the sky—perhaps yielding enough information to allow optical and high-energy astronomers to point their telescopes in the right direction before the light from the supernova begins to stream forth.

    Neutrino observatories are not the only physics facilities waiting for a nearby supernova. A new generation of gravitational-wave detectors, such as the Laser Interferometer Gravitational Wave Observatory (LIGO) in the United States and GEO 600 in Germany, may well spot a timequake when a star collapses and forms a black hole (Science, 18 July 2003, p. 293).

    Einstein's theory of relativity states that a massive object will radiate gravitational waves—ripples in the very fabric of space and time—if it collapses asymmetrically. And although scientists don't yet know how symmetrical a supernova collapse is, they have high hopes of spotting a galactic supernova when it collapses. “That's our goal,” says Erik Katsavounidis, an MIT physicist who works on the LIGO project. “It depends on several assumptions, but the ballpark number is that we'll be able to cover the galaxy.”

    According to Katsavounidis, it may be possible to use the gravitational-wave signature of a collapsing black hole to see how the core is rotating and how and when mass from the supernova is drawn in or expelled—matter including neutrinos. “Knowing how the power is distributed as a function of frequency can tell a lot about how neutrinos are transported” from an exploding star.

    And because gravitational waves plow through even very dense matter and travel at the speed of light, the signal would arrive even before the neutrinos—and would mark the very start of a supernova, the precise moment at which the core begins to collapse. “Neutrinos are relatively prompt, but there are uncertainties about how they are generated,” says Scholberg. “The time delay between neutrinos and gravitational waves would be very valuable.”
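One contribution to that delay is propagation itself: a neutrino with a small mass travels just under the speed of light, lagging a light-speed signal by roughly Δt ≈ (d/c) × (mc²/E)²/2. The mass and energy values below are illustrative assumptions (a ~1 eV mass scale and ~10 MeV supernova-neutrino energies), plugged in at the 170,000-light-year distance of 1987A:

```python
def neutrino_lag_seconds(distance_ly, mass_ev, energy_mev):
    """Lag of a relativistic massive neutrino behind light: (d/c) * (m/E)^2 / 2."""
    seconds_per_year = 3.156e7                 # approximate length of a year
    t_light = distance_ly * seconds_per_year   # light-travel time in seconds
    ratio = mass_ev / (energy_mev * 1e6)       # m*c^2 / E, both expressed in eV
    return 0.5 * t_light * ratio ** 2

# SN 1987A distance, with assumed m ~ 1 eV and E ~ 10 MeV
print(f"{neutrino_lag_seconds(170_000, 1.0, 10.0) * 1000:.0f} ms")  # → 27 ms
```

On these assumptions the propagation lag is tens of milliseconds over 170,000 light-years—tiny compared with the seconds-scale uncertainty in when the neutrinos are emitted, which is why the emission timing Scholberg mentions dominates the comparison with gravitational waves.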

    Of course, the crucial time delay might be the expected 30 years before the next nearby supernova. “I've spent 4 or 5 years working on it,” says Reddy. “If there are no supernovas at all in the next 5 to 10 years, I'll work on something else.” With luck, though, a nearby star will burn out before the supernova hunters do.