News this Week

Science  23 Nov 2001:
Vol. 294, Issue 5547, pp. 46


    Shock Wave May Have Knocked Out Japanese Neutrino Detector

    Dennis Normile

    TOKYO—The tank was three-quarters full on the morning of 12 November when the technicians in the control room of the $100 million Super-Kamiokande neutrino observatory heard a roar that lasted half a minute or more. When it was over, all the finely tuned light-detecting sensors below the water had imploded, and the observatory lay crippled. It may be 2007 before the facility is back at full capacity, and repairs could cost $15 million to $25 million, but scientists and government officials have vowed to resume some experiments within a year.

    “We will rebuild the detector, there is no question,” says Yoji Totsuka, a professor at the University of Tokyo's Institute for Cosmic Ray Research and director of the observatory, located 230 kilometers west of Tokyo deep in a mine. “It's inconceivable that it would not be rebuilt,” says Henry Sobel, a University of California, Irvine, physicist and spokesperson for the U.S. side of the collaboration.

    The underground lab made headlines worldwide in 1998 when it provided convincing evidence that neutrinos have mass. That finding, which many physicists believe is worthy of a Nobel Prize, ran counter to decades of theoretical predictions. The data strongly suggest that a certain type of neutrino from the atmosphere is “disappearing” by changing, or oscillating, into another type of neutrino the detector can't see. By the laws of quantum mechanics, only particles that have mass can oscillate.

    Neutrinos cannot be detected directly, so Super-Kamiokande contains 50,000 tons of highly purified water in a tank 39 meters in diameter and 41 meters high, lined with 11,146 photomultiplier tubes. The tubes watch for a characteristic glow, known as Cerenkov radiation, that results from the rare interaction of neutrinos with atomic particles in the water.
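    As a quick consistency check on the figures above, the tank's cylindrical volume works out to roughly the quoted water mass (1 cubic meter of water is about 1 tonne); a minimal sketch using only the dimensions given in the article:

    ```python
    import math

    # Tank dimensions quoted in the article
    diameter_m = 39.0
    height_m = 41.0

    # Volume of a cylinder: pi * r^2 * h
    volume_m3 = math.pi * (diameter_m / 2) ** 2 * height_m

    # Pure water has a density of ~1 tonne per cubic meter
    mass_tonnes = volume_m3 * 1.0

    print(round(mass_tonnes))  # roughly 49,000, consistent with the quoted 50,000 tons
    ```

    The small gap between ~49,000 tonnes and the quoted 50,000 tons is within the rounding of the published dimensions.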

    Observations were halted in July for maintenance, and workers drained the tank for the first time since the facility was completed in 1996 to replace some 100 burned-out tubes. The accident occurred as the tank was being refilled in preparation for the resumption of experiments in December. The water had reached the 41st of 51 rows of tubes when the implosion took place, destroying all 7000 tubes that were submerged.

    Officials have yet to determine what triggered the accident, although the most likely theory is that a tube on the floor of the tank burst and started a shock wave that was amplified in the water to set off a chain reaction of implosions. The shock also apparently cracked the tank. Totsuka says an existing tube could have been damaged by pressure from workers standing on thick Styrofoam pads placed atop the tubes during repairs, or perhaps one of the replacement tubes was defective. However, he also notes that a smaller version of the current facility ran for more than a decade without encountering any such problems.


    A view down into the Super-Kamiokande neutrino detector taken after the 12 November accident shows several rows of intact photomultiplier tubes at the top of the tank and shattered tubes below.


    While the investigation continues, scientists are already planning how to get back online. The facility's main governmental sponsor, the Ministry of Education, Culture, Sports, Science, and Technology, is prepared to help. “We want to provide not just financial support but whatever support we possibly can to restore the facility so it can restart observations as soon as possible,” says Minister Atsuko Toyama.

    A priority is restarting the K2K experiment, in which an accelerator at the High Energy Accelerator Research Organization (KEK) in Tsukuba will shoot a stream of neutrinos 250 kilometers to the underground tank. Results from this so-called long-baseline experiment are considered more reliable because scientists will know the precise number of neutrinos being aimed at the detector instead of having to infer them based on theory. The KEK physicists recently reported results showing that the probability of no oscillation is less than 3% (Science, 2 November, p. 987). They hope to restart this experiment before the accelerator is shut down in 2 to 3 years.

    Totsuka says that it may also be possible to resume experiments within a year with only half the full complement of tubes and then add the rest as they are manufactured, a process that could take several years. “This would give us reduced sensitivity, but we would still be able to search for atmospheric neutrinos in addition to running the K2K experiment,” he says. The facility would not have the sensitivity to detect most solar neutrinos, however.

    The long-term goal is to be at full strength by 2007, when Japan's planned $2.7 billion High Intensity Proton Accelerator Facility is scheduled to come online. The proton accelerator, a joint project of KEK and the Japan Atomic Energy Research Institute (JAERI), is under construction at JAERI's campus in Tokai, Ibaraki Prefecture. This accelerator has been designed to send Super-Kamiokande 20 times as many neutrinos as in the K2K experiment, allowing researchers to tease out more details of just how neutrinos oscillate and probe other neutrino properties. “We've barely scratched the surface of understanding neutrinos,” says John Learned, a Super-Kamiokande collaborator at the University of Hawaii, Manoa.

    Support for rebuilding Super-Kamiokande goes beyond the collaborators. “Super-Kamiokande has provided some of the most important results in physics and astronomy in the past decade and has the potential for continuing to make great contributions if restored to its full capacity,” says John Bahcall, a neutrino expert at the Institute for Advanced Study in Princeton, New Jersey. “If I had the necessary skills, I would [go] to Japan to help with the repairs.”

    Equally encouraging is the supportive stance taken by the education ministry. Akira Yoshikawa, head of the ministry's Research Institutes Division, says that “the minister well understands the importance of this facility.” Sobel says that he hopes to meet soon with officials at the Department of Energy, which funds the U.S. side of the collaboration, to see what support it might be able to provide.


    European Programs Face Another Squeeze

    Daniel Clery

    European space scientists got an unsettling sense of déjà vu last week. The European Space Agency (ESA) had asked its 15 member governments for a 4% annual increase for its much-praised science program, but instead, government ministers meeting in Edinburgh approved only 2.5%—barely enough to keep up with inflation. A similar setback occurred in 1999, which means that space science funds have been stagnant for 6 years.

    In contrast, Galileo, a program to build a European version of the U.S. Global Positioning System, and a plan to upgrade the Ariane 5 launcher received substantial boosts. “It's utterly unjust,” says physicist Hans Balsiger of the University of Bern in Switzerland, a former chair of ESA's Science Program Committee (SPC). “I can see no reason why we are treated worse than everyone else.”

    The delegates also sent a strong signal of disapproval to the U.S. government on moves to cut back the size of the international space station. They reluctantly approved funds to meet ESA's obligations to the project, but they froze some 60% of the money until NASA makes clear its funding plans for the station and the number of astronauts that will live and conduct research there. SPC vice chair Giovanni Bignami, director of space science at the Italian Space Agency, called this a “wise decision,” adding, “I would have made [the amount held back] bigger.”


    Some medium-term projects such as the Bepi-Colombo mission may be delayed.


    The SPC will meet early next month to decide how to carve up $1.65 billion for space science in 2002–06. Researchers contacted by Science believe that most missions planned for launch before 2010 are secure, but some later ones, such as the Bepi-Colombo mission to Mercury, may have to be delayed. Missions beyond that, still in their planning stages, are threatened. David Southwood, ESA's head of science, told those meeting in Edinburgh that Gaia, an astrometry mission, was the most likely casualty. “Something has to give,” he told Science.

    Apart from the science program, to which all ESA members must contribute, a new optional program to develop missions to look for signs of life in the solar system, called Aurora, also got shortchanged. ESA had asked for $35 million to plan a series of robotic missions to other planets, moons, asteroids, and comets but came away with just $12 million. This should be just enough to set the ball rolling, however. “We can create a plan,” says Paul Murdin, director of space science at the British National Space Centre. Italy had been one of the prime movers behind the Aurora project, but following a change of government last month the promised funds were not forthcoming.

    Although space science was out of favor at the meeting, Ariane 5—the latest in a line of rockets that now account for more than half of all commercial launches worldwide—got a warm endorsement. It will be upgraded to increase its payload capacity, at a cost of $620 million. And in a groundbreaking collaboration with the European Union, ESA will launch its own fleet of 21 navigation satellites to help planes, trucks, ships, and even hikers pinpoint their positions with centimeter accuracy. Member governments pledged $470 million to design and develop the system, more than ESA asked for.


    Insider Takes Over at NASA

    Andrew Lawler

    Just over a week ago, Sean O'Keefe was publicly criticizing NASA for cost overruns and poor management. Now those problems are his responsibility. President George W. Bush nominated O'Keefe, 45, currently the deputy director of the Office of Management and Budget (OMB) and an influential Washington insider, on 14 November to NASA's top job, vacated on 16 November by Dan Goldin.

    O'Keefe's assignment is clear. “He is being sent to NASA to ensure fiscal responsibility,” says one senior Administration official. “He will force things to be on time and on budget.” Another manager who has worked closely with O'Keefe calls him “the consummate dealmaker.” He has close connections to both Bush presidents and to Vice President Dick Cheney, having served in the first Bush Administration as Navy secretary and Defense Department comptroller. Senate confirmation is expected to be speedy.


    O'Keefe is plugged into the Bush White House.


    O'Keefe's immediate task likely will be to address the concerns of NASA's international space station partners, who are angry at moves, initiated by O'Keefe at OMB, to cut the station's crew from six astronauts to three. European ministers warned in a press conference on the day of O'Keefe's nomination that they are prepared to scale back their own support in protest (see previous story). Meanwhile, NASA will be hard-pressed to resolve station cost overruns even if it adheres to O'Keefe's more modest version.

    A second major crisis is brewing in the outer planet exploration program. Congress put $30 million into the 2002 budget for a flyby of Pluto, a program the White House had terminated. But an OMB official warned on 15 November that the White House is unlikely to support Pluto funding in 2003 and that there is barely enough money to fund a mission to Jupiter's moon Europa, slated for launch around 2008. “The bottom line is we have no good options for planetary science in '03,” OMB examiner Brant Sponberg told a National Research Council panel. As a result, he warned, “you guys are likely to lose money [for 2003 planetary efforts], not gain it.” That would put the Europa mission, already estimated to cost about $1.1 billion, in jeopardy.

    O'Keefe, who has been a defense appropriations aide in the Senate and more recently taught business and government at Syracuse University in New York state, has unusually strong connections to senior Administration officials for a NASA chief. “The Bush Administration clearly didn't want a space cadet,” says John Logsdon, a political science professor at George Washington University in Washington, D.C. And that, after 10 years of a strong visionary with limited political clout, could work to NASA's advantage.


    Dusty Young Star Gets New Birth Mates

    Robert Irion

    Astronomers craving their first image of a giant planet beyond our solar system now have fresh targets to explore: newly identified siblings of Beta Pictoris, the most famous dust-shrouded star in the sky. A survey of the motions of nearby stars suggests that more than two dozen stars were conceived in the same womb as Beta Pic, thus exposing the closest and youngest stellar group yet known. Their youth and proximity to Earth make these stars “fantastically suitable for direct searches for warm, newborn planets,” says astronomer Ray Jayawardhana of the University of California (UC), Berkeley.

    Beta Pic's fame dates to 1983, when the Infrared Astronomical Satellite photographed its vast cocoon of dust—the first glimpse of a suspected planetary system in the making. Theories predict that most stars arise in groups, so astronomers expected the apparently juvenile Beta Pic to have nearby companions hatched in a cluster from the same gaseous nursery. However, the first confirmed nest mates didn't turn up until 1999, when a team analyzed two youthful dwarf stars with trajectories that closely mimicked Beta Pic's path in space. The dwarfs raised hopes that more siblings were out there.

    The family has indeed grown, according to a report in the 20 November issue of Astrophysical Journal Letters. Astronomers Ben Zuckerman and Inseok Song of UC Los Angeles and their colleagues describe 17 single and multiple star systems moving through space with Beta Pic. The stars were known before, but the team determined that the three spatial components of their velocities all match Beta Pic's three-dimensional motion to within 2 kilometers per second—an expected rate of drift from a dispersing cluster. Each star also exhibits at least one hallmark of adolescence, such as copious x-rays, rapid spin, or a dusty disk of its own. “Used together, the velocities and ages are powerful tools,” says Song. “The chance that we have misidentified random stars as members of the Beta Pic group is extremely small.”
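    The membership test described above amounts to comparing each candidate star's three Galactic space-velocity components against Beta Pic's, with a 2 km/s tolerance for cluster drift. A minimal sketch of that criterion in Python; the function name and the velocity values are illustrative, not the team's published measurements:

    ```python
    # Moving-group membership test: a candidate qualifies only if every
    # space-velocity component (U, V, W) agrees with the reference star's
    # motion to within the tolerance. Values below are approximate and
    # for illustration only.

    BETA_PIC_UVW = (-10.8, -16.4, -9.0)  # km/s, approximate literature values
    TOLERANCE = 2.0                      # km/s, the drift allowed in the article

    def matches_group(candidate_uvw, reference_uvw=BETA_PIC_UVW, tol=TOLERANCE):
        """Return True if every velocity component agrees within tol km/s."""
        return all(abs(c - r) <= tol for c, r in zip(candidate_uvw, reference_uvw))

    # A star drifting 1 km/s in each component still qualifies...
    print(matches_group((-9.8, -15.4, -8.0)))    # True
    # ...but a 3.5 km/s offset in one component rules it out.
    print(matches_group((-10.8, -16.4, -12.5)))  # False
    ```

    In practice the velocity cut is combined with the youth indicators the article lists (x-ray emission, rapid spin, dusty disks), since velocity agreement alone could admit interlopers.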

    Nuclear family.

    Dust-shrouded Beta Pictoris is the most visible member of a newly identified group of nearby young stars.


    The clan may hold the most promise for studies of emerging planetary systems, because young planets should be warm enough to shine brightly at infrared wavelengths. Although the stars have wandered to span one-quarter of the sky in the Southern Hemisphere, their average distance from Earth is a mere 100 light-years—the next town over, in cosmic terms. At about 12 million years old, they are younger than stars in the 30-million-year-old Tucana-Horologium association, which, together with the Beta Pic group, is among the closest assemblages known. Another group, TW Hydrae, has the same age as Beta Pic's stars but is twice as far away.

    Moreover, Beta Pic's group has the widest cross section of stellar types, including massive stars, dwarfs, and many stars like our sun. “These are the optimal stars to watch the evolution of dusty disks and to look for forming planets, especially if we're trying to make an analogy to our own solar system,” Zuckerman says.

    Others agree that the group is a boon for attempts to see planets directly. Adaptive-optics systems at the Keck Observatory in Hawaii and the twin Gemini Observatory telescopes in Hawaii and Chile have a shot at resolving the pinprick infrared glows of newly coalesced gas giants in the outer parts of planetary systems around the Beta Pic stars, says astronomer Thomas Greene of NASA's Ames Research Center in Mountain View, California.

    Still, Greene maintains, the group's provenance needs more study. Uncertainties in the stars' positions and velocities make it hard to trace their motions back in time to determine whether they shared a birthplace. “If the original cloud was big enough, it could have formed several small clusters with age differences of 5 or 10 million years,” he says. It does seem clear that Beta Pic's cluster was a low-mass, loosely bound assemblage that scattered once the stars formed, unlike the tighter Pleiades cluster, Greene says. NASA's Full-sky Astrometric Mapping Explorer, tentatively scheduled for launch in 2004, should track the stars with enough precision to settle the issue, he adds.

    In the meantime, Zuckerman and Song believe that further scouring of nearby stars will turn up more relatives. Already, their list contains two stars more massive than the group's namesake. Beta Pic, it appears, is no longer the pick of its own litter.


    Questions Arise Over Second Japanese Site

    Dennis Normile

    TOKYO—A team of archaeologists has cast strong doubts on claims that a cave in western Japan contains evidence concerning the extent of early human habitation of the archipelago. The accuracy of the cave findings is also the subject of a suit filed this month by the family of the site's lead scientist, who killed himself after a Japanese news magazine reported that the findings might be bogus. It's the second time in a year that the veracity of an archaeological dig has made headlines in Japan (Science, 10 November 2000, p. 1083).

    Archaeologist Mitsuo Kagawa led excavations in 1961 and 1962 of the Hijiridaki Cave in Oita Prefecture, on Kyushu Island in western Japan. The digs produced human and animal bones and stone artifacts, some of which Kagawa and his colleagues concluded date back 10,000 years or more. Although the dating has always been controversial, Hijiridaki made its way into Japanese textbooks because it was the only site in Japan where stone tools and human bones had been found together. The cave site was revisited in December 1999 by a research team studying the origins of the Japanese people. After examining both previously and newly collected artifacts, the team issued a lengthy report in June that will be summarized this month in the Japanese journal Paleolithic Archaeology.

    The report concludes that the bones and charcoal found in the cave are no more than 600 to 700 years old, based on radiocarbon dating. The report does agree that some of the artifacts recovered both in the early 1960s and in the recent excavation are from the late Paleolithic period. But it points to several anomalies. Artifacts ranging from 2000 to 20,000 years old were found mixed together, and in a stratum above the one yielding material that is 600 to 700 years old. The artifacts are made of obsidian, almost certainly from a distant part of Kyushu Island and unlike the chert and rhyolite artifacts found in the area around the Hijiridaki Cave. “The results of the 1999 excavation,” the paper concludes, “indicate that the recovered artifacts were not part of the original cave, but were rather the result of a secondary intrusion.”

    Digging up dirt.

    A 1999 excavation at Hijiridaki Cave raises doubts about earlier findings at the site in western Japan.


    Hideji Harunari, an archaeologist at the National Museum of Japanese History in Sakura City, near Tokyo, and one of the organizers of the recent investigation, believes that “the best explanation for these conditions is that [the 1960s findings] are fake.” But Masanobu Tachibana, the team leader and an archaeologist at Beppu University, says he cannot rule out more benign explanations. It is clear, he says, that medieval people used the cave and could have brought in the collection of stone implements for their own purposes: “There are explanations other than Harunari's.”

    The paper does not speculate further on how the artifacts may have ended up in the cave. And even Harunari says he does not believe that Kagawa was at fault, noting that Kagawa has long held that the Hijiridaki findings needed to be reexamined.

    Speaking at an archaeological conference in August 2000, Harunari called the placement of the artifacts “very unnatural.” But his comments went unreported until newspaper reporters caught amateur archaeologist Shinichi Fujimura planting artifacts at a second, unrelated archaeological dig in northern Japan last November. Shukan Bunshun, a weekly news magazine, ran four articles between January and March of this year suggesting that Hijiridaki might be another example of archaeological fraud. Although the magazine did not identify a culprit, it said that Kagawa was the leader of the 1960s excavations. Kagawa hanged himself on 9 March, leaving a note saying that he was acting “to protest articles alleging our discoveries were faked.”

    On 1 November his family filed a suit in Oita District Court against Shukan Bunshun's publisher, editor, and the reporter who wrote the stories. The family is seeking $460,000 in compensation and a published apology, claiming that the articles defamed Kagawa and inflicted mental trauma. A written statement from the magazine expresses surprise. “We did not mention an individual's name or print anything defamatory [about Kagawa],” says Seigo Kimata, Shukan Bunshun's editor in chief.


    Sequences Reveal Borrowed Genes

    Elizabeth Pennisi

    New data emerging from microbial genome sequences are so perplexing that “we can no longer comfortably say what is a species anymore,” says Daniel Drell, who manages the Department of Energy's (DOE's) microbial genomes program. Two bugs in particular, described at a recent meeting,* seem to have nabbed enough genes from other organisms that they no longer resemble their supposedly closest relatives—raising fascinating questions about how and why they obtained these new traits.

    The genome data may have practical applications as well, notes Drell: Because both microbes also play key roles in geochemical cycles, they may suggest opportunities for cleaner energy sources, more effective pollution control, and better recycling of both natural and humanmade products—which is why DOE funds microbial genetics in the first place.

    One newly deciphered microbe is Methanosarcina mazei, a methane-generating archaeon. Unlike most of its brethren that live in thermal vents and other hot environments, M. mazei thrives in freshwater sediments worldwide. Versatile in other ways as well, it can harvest the carbon it needs from acetate and so-called methylamines—and not just carbon dioxide. That makes M. mazei a “really major player” in the production of methane, a greenhouse gas, says Gerhard Gottschalk, a microbiologist at the University of Göttingen in Germany.

    When Gottschalk and his colleagues started sequencing M. mazei's genome 3 years ago, they expected it to be a tidy 3 million bases or less, as are the genomes of two other methanogens sequenced to date. Instead, Gottschalk reported at the meeting, M. mazei's single circular chromosome proved to be 4.1 million bases long, roughly the size of the genome of the bacterium Escherichia coli. Its chromosome contains several sets of the same genes, apparently providing unanticipated redundancy for particular functions.

    But it's the source of many of its genes that has researchers excited. “The amazing thing is that there were so many eubacterial genes,” comments James Lake, an evolutionary biologist at the University of California, Los Angeles. Often microbial genomes reveal instances of horizontal gene transfer from one organism to another. “But it's never happened quite like this,” says Lake. Of the 3300 predicted genes, about 1100 look like they used to belong to bacteria, Gottschalk reported. No one understands why this happened, but these numbers drive home “how little we understand about species' definitions,” adds Judy Wall, a biochemist at the University of Missouri, Columbia.

    Genetic pack rats.

    M. mazei (top) and R. palustris both borrowed lots of genes from other organisms.


    The other microbe, Rhodopseudomonas palustris, a so-called purple nonsulfur bacterium, also comes with a panoply of unexpected genes. “The biggest surprise,” says Lake, “is that it carries circadian rhythm genes.” These genes were not thought to be part of the repertoire of bacteria or archaea, with the exception of an unusual group called the cyanobacteria, says Caroline Harwood, a microbiologist at the University of Iowa, Iowa City, who has been analyzing this microbe's gene content for the past year. Their presence suggests that these organisms are more sophisticated than microbiologists had suspected.

    Another surprise is that this bacterium's genome more closely resembles the genomes of rhizobium bacteria that fix nitrogen for plants than those of other purple nonsulfur bacteria, Harwood reported. In particular, it has an unusual cluster of photosynthetic genes that are very similar to those in a rhizobium that infects soybean stems. Either this microbe borrowed a lot of genes from the rhizobium, or else the two are closely related. Finally, its genome revealed a plethora of genes that enable this microbe to break down complex organic matter—something other purple nonsulfur bacteria don't do—as well as fix nitrogen and produce hydrogen gas. “It's just an amazing collection of pathways,” says Drell. But one would expect nothing less of a genetic pack rat.


    Coated Nanofibers Copy What's Bred in the Bone

    Robert F. Service

    If imitation is flattery, Sam Stupp has just paid nature a high compliment. On page 1684, Stupp, a materials scientist at Northwestern University in Evanston, Illinois, and his postdocs Jeffrey Hartgerink and Elia Beniash report creating a self-assembling material, made from organic molecules with a mineral coat, that closely mimics bone. The feat opens the door to making a synthetic replacement for bone. And because the chemistry of the self-assembling molecules is simple to change, it also gives researchers a general strategy for forming a wide array of organic-inorganic fibers.

    “It's a major step forward” for the field of self-assembled materials, says Ulrich Wiesner, a chemist at Cornell University in Ithaca, New York. The new work, Wiesner says, distills the essential lessons that have been learned about how bone forms and incorporates them into a synthetic molecule that is simple to produce. “It connects the synthesis of artificial materials with lessons from biology. That interface is very, very exciting,” Wiesner says.

    Researchers at the frontier of materials science have long looked to nature for inspiration in synthesizing complex materials. Bone has been among the most enticing to emulate because of its strength and structure. At its simplest level, bone is a composite made when proteins in collagen fibers coax calcium, phosphate, and hydroxide ions in solution to condense atop the fibers and grow into a rigid structure of tiny crystallites of hydroxyapatite all aligned in the same direction. Hydroxyapatite gives bone its toughness.

    Over the years, several research teams have induced hydroxyapatite crystallites to grow atop other materials such as polymers. But they've never managed to align the crystallites with any material other than collagen, the protein fibers that nature picked for the job. So Stupp and his colleagues decided to see if they could design purely synthetic molecules to carry out the task.

    Close copy.

    Synthetic molecules assemble into fibers that coax minerals into growing on top, a structure that mirrors bone.


    From previous work in his own lab and others, Stupp knew that synthetic molecules could at least carry out the first task, assembling themselves into fibers. The trick was to make two-part molecules, with a water-friendly group at one end linked to an oily hydrocarbon at the other. When placed in water, these spontaneously assemble into loosely connected fibers called micelles as the hydrocarbon tails pack tightly together to avoid associating with water.

    The Northwestern researchers designed two-part organic molecules called peptide-amphiphiles (PAs), in which the oily hydrocarbon chains were connected to a series of peptides, essentially short protein fragments. To carry out the second part of their task—growing the hydroxyapatite on top—the scientists had to design in a couple of other functions as well. First, they added peptides to their PAs that could form links with one another to lock the flimsy micelles into resilient fibers. Second, they added negatively charged peptides rich in phosphoserine groups, which previous biochemical studies had shown help collagen attract the positively charged ions that form hydroxyapatite crystallites.

    Much to the team's surprise, the PAs not only formed fibers and slipped on a coat of hydroxyapatite crystallites, but also got the crystallites to adopt the same crystallographic organization as in bone. Stupp says the team is still trying to understand this bit of good fortune. “The bottom line is, we don't know the exact mechanism why our crystals end up aligned just as in bone,” Stupp says. But it's clear the fibers play a vital role. “If we don't have the fibers [in solution], the crystals don't form,” Stupp says. He suspects that the fibers are so small that they allow the crystals to grow in only one direction, along the length of the fibers.

    Stupp says much more work is needed to understand the new bone-mimicking molecules. But his team is already looking beyond bone. In their paper, the researchers also describe how they added peptides with a trio of amino acids—arginine, glycine, and aspartic acid—which readily attract cells and encourage them to bind to a particular surface. Eventually, Stupp hopes, his team will be able to use PA fibers to repair damaged nerve tissue by coaxing neurons to attach and grow on the fibers. And by changing the peptides, Stupp believes, he and his colleagues will be able to assemble other types of crystals, metals, or even polymers to make everything from high-strength composites to nanowire circuitry for molecular-based computers. In that case, Stupp and his team may find themselves flattered with a little imitation of their own.


    Congress Clears Way for Rodent Rules

    David Malakoff

    Animal rights groups have won the latest round in their long-running fight to force the U.S. government to more tightly regulate the use of mice, rats, and birds in scientific research. Congress last week approved an agriculture spending bill that allows the U.S. Department of Agriculture (USDA) to start developing the new rules, which biomedical groups blocked last year in an 11th-hour lobbying victory.

    “Finally, we can get started [on regulations] that will be good for animals and for science,” says lobbyist Nancy Blaney of the Working Group to Preserve the Animal Welfare Act, a coalition of animal rights groups. But Tony Mazzaschi of the Association of American Medical Colleges, which opposes the idea, says the decision is “disappointing; all this would do is create costly paperwork for research institutions.”

    The controversy stems from a 30-year-old USDA policy that exempts mice, rats, and birds—which account for 95% of all experimental animals—from regulation under the Animal Welfare Act (AWA). Last year, after several court battles, USDA signed a pact with animal rights groups and agreed to draft caging and care rules. The deal outraged biomedical groups, which argued that USDA regulation would duplicate existing government and voluntary rules and drain millions of dollars from research accounts. They quickly convinced Senator Thad Cochran (R-MS) to block USDA action by adding a ban to the 2001 agricultural appropriations bill (Science, 13 October 2000, p. 243).

    This year, Congress seemed ready to continue the freeze after the House included the ban in its version of the agriculture measure. But the Senate balked after Democrats took control this spring. As a result of the switch, Senator Herb Kohl (D-WI) took over the subcommittee that oversees agriculture spending from Cochran. Lobbyists say Kohl—who is considered friendlier to animal rights groups—was instrumental in hammering out the compromise language included in this year's bill. It allows USDA to begin writing the regulations and seek public comment, but it bars the agency from finalizing any rules before 30 September 2002, when the annual measure expires. That deadline gives lawmakers a chance to revisit the issue next year.

    The lack of closure “won't have an effect on the process,” however, because getting new regulations approved routinely takes years, says John McArdle of the Alternatives Research and Development Foundation in Eden Prairie, Minnesota. He has long pushed for AWA regulation because the law requires researchers to consider alternatives before using animals for experiments.

    It's not clear how rapidly the USDA will push ahead, however. USDA officials were not available to comment, and Mazzaschi says the Bush Administration “isn't anxious to move forward.” But Blaney expects the agency to “start work as soon as the president's signature is dry” on the funding bill, which could be this week. If it doesn't, both sides agree, the matter could end up back in court.


    Safer and Virus-Free?

    1. Dan Ferber

    New vectors for gene therapy aim to match viral vectors' advantages without their dangerous drawbacks

    If fields of science go through life stages, then childhood ended abruptly for gene therapy on 17 September 1999, when a teenage volunteer named Jesse Gelsinger died in a gene therapy clinical trial at the University of Pennsylvania in Philadelphia. Sunny talks describing future therapies for genetic diseases were replaced by public scrutiny, congressional hearings, and new rules. Gelsinger's death was blamed on an out-of-control immune response to the virus physicians had used to ferry the useful gene into tissue, and it prompted a hard look at the safety record of so-called viral vectors. It also spurred renewed interest in nonviral methods to deliver genes, methods that have been quietly gathering steam for more than a decade.

    Today, no gene therapy using any type of vector has been approved for clinical use. But researchers are working doggedly to develop methods that will deliver useful genes safely, to the right spot, and turn them on and off at will. Originally envisioned as treatments for hereditary diseases, gene therapies are now being developed to prevent and treat infectious diseases, cancer, heart disease, and other ailments. All of them rely on a gene's ability to produce a key protein when and where it's needed.

    Viruses such as adenovirus and retrovirus are still the most popular vectors in lab studies and clinical trials. Viruses are well suited to gene delivery: They've evolved to home in on specific tissues, invade cells, and manipulate the cell's machinery to make viral proteins. But often they can be injected into a person only once or twice before the immune response they provoke poses a safety threat, as in Gelsinger's case. That response can also destroy the viral vector or the cells it infects, blocking production of the useful protein.

    A spate of recent work has suggested that genes can be delivered effectively without using viruses. Most nonviral vectors fly under the radar of the immune system, and they're cheaper and easier to manufacture than viral vectors. But most of them have not been as efficient as viruses in shuttling genes into cells, and the genes that were delivered didn't remain active for long. That has begun to change in the past few years.

    Muscular gene delivery.

    When naked DNA is injected into an artery that feeds the leg muscle of rhesus monkeys, up to 30% of the muscle fibers (blue) take up and activate the foreign gene.


    In the race to develop a reliable gene-delivery method, researchers are putting money on a wide array of vectors, and so far no single method has taken the lead. Gene therapist Malcolm Brenner of Baylor College of Medicine in Houston, who is president of the American Society of Gene Therapy (ASGT), suspects that both viral and nonviral gene-transfer methods will be needed, depending on the disease being treated. “Nobody has the perfect vector,” he says. “What we're looking for is horses for courses.”

    DNA, naked and otherwise

    Back in 1989, when human gene therapy was still a dream, dogma had it that viruses were the best and perhaps only way to ferry therapeutic genes into animal tissue. But Jon Wolff, a gene therapist at the University of Wisconsin, Madison, suspected otherwise. Philip Felgner, then at Vical, a San Diego biotechnology company, had just devised a way to shuttle genes into lab-grown animal cells by coating the DNA with positively charged lipids—basically, shrink-wrapping it. The charge helps the construct, called a lipoplex, stick to cell membranes and pop genes inside the cell. Wolff tested the method in animals, injecting mice with RNA lipoplexes and then checking their tissue for the presence of an enzyme encoded by the RNA. To his surprise, mice injected with lipid-coated RNA failed to activate the gene. But the control mice, which had been injected with uncoated, or “naked,” RNA, did crank out the enzyme. “I thought my technician screwed up and reversed the two samples,” Wolff recalls. “But he repeated it, and it kept getting better and better.”

    Wolff was equally surprised a few months later, when genes ferried into muscle cells by loops of DNA called plasmids were expressed for weeks at a time. Researchers had thought that the only way to get long-lasting gene expression in animal tissue was to use a virus whose DNA stitched itself into the chromosomes of the recipient cells. But somehow, the naked plasmid DNA stuck around inside muscle cells, and the genes turned on and stayed on. “Even now I'm amazed,” Wolff says. Naked DNA injections are still the simplest nonviral gene delivery method and so far one of the most successful.

    Following Wolff and Felgner's early report, researchers quickly applied the naked DNA approach to a practical problem: building better vaccines. The method entails injecting a plasmid that encodes a protein from the unwanted microbe; the protein then provokes an immune response that would stop an infection. So far, clinical tests have been promising. For example, Stephen Hoffman, then of the Naval Medical Research Institute in Bethesda, Maryland, and his colleagues reported in 1998 that injections of plasmid DNA encoding a protein from a malaria parasite provoke a strong immune response in humans (Science, 16 October 1998, p. 476). And earlier this year, Harriet Robinson of Emory University in Atlanta and colleagues reported that a naked DNA vaccine helps confer immunity in a monkey model of AIDS (Science, 6 April, p. 69).

    Naked DNA therapies are also being tested against heart disease, cancer, and other disorders. The late cardiologist Jeffrey Isner of Tufts University School of Medicine in Boston and his colleagues developed a gene therapy for patients with coronary artery disease. The team injects a gene called VEGF, which boosts blood vessel growth, directly into patients' heart muscles by threading a special catheter through the arteries much as a surgeon would during angioplasty. The treatment gave promising results in a small phase I trial: New arteries sprouted from existing arteries, detouring blood around blockages to supply the heart muscle, according to work presented last week at the annual meeting of the American Heart Association. The technique, which is owned by a company Isner founded, called Vascular Genetics Inc. in Durham, North Carolina, apparently eases the severe pain of heart disease and improves patients' ability to exercise on a treadmill. The treatment could one day offer an alternative to bypass surgery, Isner told Science shortly before his death on 31 October (see p. 1670), and a similar method could help save the legs of diabetes patients and others whose circulatory disease is so severe they are candidates for amputation.

    But naked DNA injections haven't worked well to deliver genes to tissues other than liver and muscle. To sneak genes into other tissues, researchers have tried coating the DNA with different combinations of lipids and polymers, which have been shown by trial and error to help cultured cells take up DNA.

    Straight to the heart.

    The late Jeffrey Isner injects naked DNA to treat a patient with coronary artery disease.


    Some such therapies are now being tested in the clinic. For example, Vical researchers have developed a lipid-coated plasmid that is injected directly into tumors to deliver the HLA-B7 gene; this gene encodes a protein that sparks an immune response against the tumor. In a phase II trial, the immune response shrank tumors and prolonged life in eight of 73 patients with aggressive melanoma who had failed to respond to other treatments, company collaborators reported in May at the annual meeting of the American Society of Clinical Oncology.

    A different gene-delivery strategy for head and neck cancer—a vector composed of the interleukin-2 gene coated with cholesterol and a synthetic lipid—also gave promising results in a phase II trial in patients with tumors that could not be surgically removed, a team from Valentis Inc. of Burlingame, California, reported at the ASGT annual meeting in June. The treatment kept cancer from spreading for more than 4 months when combined with traditional chemotherapy—38% longer than in patients receiving chemotherapy alone.

    Up-and-coming vectors

    The number of clinical trials using nonviral vectors for gene therapy is growing (see table below), but many diseases can't be treated using the nonviral gene delivery methods that are farthest along. That's because most methods have delivered only low levels of active genes for short periods of time. Researchers are currently hammering out other approaches in the lab. They're trying to improve upon current vectors by finding ways to penetrate a higher percentage of cells in target tissues and make imported genes last longer once inside the cells.

    Short-lived gene expression is fine for vaccines, cancer therapies, and angiogenesis. Indeed, Isner called it “a major-league safety advantage” for vascular gene therapy, because only temporary gene expression is needed to grow new vessels, and because insertion into the genome—the goal of some viral-based gene therapies—could disrupt other genes, possibly causing cancer. But to treat other diseases, therapeutic genes might have to pump out more protein for longer periods. Today's viral vectors still do this better than nonviral ones do, but lab experiments with new nonviral methods are closing the gap.

    For example, a method called electroporation, developed by immunologist Richard Heller's team at the University of South Florida in Tampa, transfers genes more than 80 times as efficiently as naked DNA injections. The team injects DNA into the target tissue—usually skin, muscle, or tumors—and uses a specially designed electrode to apply an electric field, which punches temporary holes in cell membranes that allow DNA into the cell. The method hasn't been tested in the clinic, but it's close: Gene therapist Lou Smith of Valentis and his colleagues recently used electroporation to transfer a blood-clotting gene to hemophiliac dogs, temporarily eliminating symptoms of the disease, according to work presented at a meeting in May sponsored by the National Hemophilia Foundation. And Heller's team reported at the June ASGT meeting that the method helped deliver a cancer-fighting gene called interleukin-12 into skin tumors, causing some of them to disappear in mice. They're now testing the method to see if it can provoke an immune response powerful enough to clear tumors in animals with melanoma.

    Another novel nonviral strategy, developed by geneticist Richard Selden's team at Transkaryotic Therapies in Cambridge, Massachusetts, also improves gene-transfer efficiency. Instead of ferrying genes into cells inside the body, the researchers remove cells, insert genes, grow lots of modified cells in the laboratory, and then inject the cells into the abdominal cavity. The researchers used the method to transfer a gene encoding a blood-clotting protein called factor VIII into skin cells taken from six hemophiliacs, they reported in the 7 June issue of The New England Journal of Medicine. When the cells were returned to the body, they produced the clotting protein. Four of the six patients needed less of their usual injected form of clotting protein and exhibited less bleeding for up to 10 months after the injection.

    Long-lived gene expression has proved elusive for most nonviral vectors, in part because none of them stitch the useful gene into the genome of the host cell. But Mark Kay's team at Stanford has recently devised the first nonviral vector that has this power. Two plasmids are simultaneously injected into the tail vein of a mouse. One plasmid includes a therapeutic gene connected to pieces of a transposon, or jumping gene. The second plasmid encodes an enzyme that helps the hybrid gene on the first plasmid jump into the chromosome. Together, the plasmids sewed a key blood-clotting gene into liver cells of hemophiliac mice, where it pumped out enough protein to allow blood to clot normally, the team reported in the May 2000 issue of Nature Genetics.

    Kay's team also happened upon a new way to achieve long-lived expression by delivering linear DNA fragments that don't insert themselves into the genome. These fragments persist in mouse liver cells for at least a year—about half the lifetime of a mouse, the team reported in the March issue of Molecular Therapy. “The persistence issue is being solved,” Kay says.

    To get these long-lived plasmids into the liver, Kay used a method called hydrodynamics, developed by Wolff's team and Dexi Liu's team at the University of Pittsburgh. The method involves quickly injecting the tail vein of a mouse with naked DNA in a huge volume of saline, roughly the entire blood volume of the animal. The pressure somehow forces DNA out of blood vessels in the liver, where many of the liver cells take up and express the foreign genes.

    No one proposes injecting people with a proportional amount—nearly 5 liters—of DNA-containing saline. But hydrostatic pressure could still help deliver genes to human tissue. For example, Wolff's team injected DNA into arteries that feed the arm and leg muscles of rhesus monkeys, using a blood-pressure cuff to temporarily increase blood pressure. As they reported in March in Human Gene Therapy, the method delivers a reporter gene to about 30% of the muscle cells—a level of efficiency that rivals that of viral vectors. Wolff's team and colleagues at a company he founded, Mirus Corp. (a subsidiary of PanVera Corp. of Madison, Wisconsin), and at Transgene of Strasbourg, France, are planning a small clinical trial next year to see whether the pressure-cuff method can replace a defective muscle gene in young adults with Duchenne muscular dystrophy.

    Surgically clamping blood vessels does the gene-delivery trick, too, and can reach muscles that are inaccessible to a pressure cuff. In the July issue of Molecular Therapy, Leaf Huang's team at the University of Pittsburgh reported inserting a key gene to repair the diaphragm muscle of mice with muscular dystrophy (MD)—a crucial target because many MD patients die of suffocation when their diaphragm muscles fail to pull air into the lungs. The researchers surgically clamped the outgoing blood vessel for a few seconds, raising the blood pressure enough to deliver the therapeutic gene; Huang suspects that similar clamping methods could help push therapeutic genes into other organs as well.


    Whether or not it can be adapted to the clinic, hydrodynamics proves that high-efficiency gene transfer is possible without viruses, Kay says. It's also the first method to rapidly pinpoint the best candidate genes for gene therapy. Researchers create small pools containing different genes, inject each pool into mice, and see quickly which contains a gene that helps treat the disease. With their candidates thus narrowed down, researchers can inject mice with each gene in the pool to identify which one helped. That's much quicker than cloning each candidate gene into a viral vector, and it could be important for diseases such as cancer, in which no one's sure which genes will prove effective. Liu says that the discovery of new therapeutic genes, together with more efficient delivery, “will make the field jump.”
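    For readers who think in code, the pool-then-single-gene screening Kay describes is essentially group testing, and its logic can be sketched in a few lines. This is purely illustrative: the gene names and the `is_effective` check below are hypothetical stand-ins for actual pooled injections and disease readouts in mice.

    ```python
    # Sketch of pooled candidate-gene screening (group testing).
    # Each "injection" of a pool is modeled as one call to is_effective.

    def screen_pools(pools, is_effective):
        """Find the first pool that shows a therapeutic effect,
        then test that pool's genes one at a time."""
        for pool in pools:
            if is_effective(pool):                      # one pooled injection
                return [g for g in pool if is_effective([g])]  # single-gene follow-up
        return []

    # Toy example: 9 candidates split into 3 pools; only "gene_f" works.
    pools = [["gene_a", "gene_b", "gene_c"],
             ["gene_d", "gene_e", "gene_f"],
             ["gene_g", "gene_h", "gene_i"]]
    is_effective = lambda genes: "gene_f" in genes

    hits = screen_pools(pools, is_effective)
    print(hits)  # ['gene_f']
    ```

    The point of the pooling step is economy: the effective gene is found with far fewer injections than testing all nine candidates individually, which is why the approach beats cloning each candidate into its own viral vector.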

    Vectors tailored to tissues

    When viral gene therapy vectors are injected into the bloodstream, the viruses protect their gene payload, home in on their target tissue, and deliver the genetic goods—as viruses have been doing for eons. Some researchers are devising complex nonviral vectors that act more like viruses, using tools developed by a generation of drug-delivery specialists. The long-term goal is to transfer genes to the correct tissue to produce the desired clinical effect, says drug-delivery specialist Sung Wan Kim of the University of Utah in Salt Lake City.

    Custom-designing vectors, Kim says, relies on several strategic decisions: whether to inject into the bloodstream or directly into the tissue; which combination of polymer, lipid, and other molecules to use for a particular tissue; and whether to attach another molecule to help target the complex to the correct cells. Despite the complexity, it's beginning to work: In the August issue of Gene Therapy, Kim's team reported a three-part system called TerplexDNA that delivers genes to rabbit heart tissue 20 to 100 times more efficiently than naked DNA. The vector includes DNA, a positively charged polymer to help protect DNA from enzymes that would chop it up, and a lipid that heart muscle cells recognize and take up.

    The team has also developed a way to deliver useful genes by injection into the bloodstream. The method uses DNA wrapped in a soluble, degradable polymer to target white blood cells. In the July issue of Gene Therapy, the team reported that one injection in mice helped deliver two genes to white blood cells throughout the body. They pumped out proteins that made their way to the pancreas and blocked the autoimmune reaction believed to cause juvenile diabetes.

    Gene therapist Leonard Seymour's team at the University of Birmingham, U.K., has developed another way to ferry genes through the bloodstream to target tissue: cloaking the genes in a two-part polymer shell and freeing them where they're needed. A polymer called polylysine packs the DNA into small particles, and a second polymer makes it slippery and able to evade immune proteins and cells. Once inside the target cell, the chemical environment causes the polylysine to break apart, liberating the DNA for expression. “It works amazingly well,” Seymour says. Eventually, the team would like to add guidance molecules—such as a specific antibody, peptide, or sugar—that are recognized and taken up only by particular tissues, making targeted delivery possible.

    Complex nonviral carriers are a long way from the clinic, but they may offer a glimpse of future gene therapies. Years from now, gene therapy vectors might be a sort of semisynthetic virus, combining the best of today's viral and nonviral carriers, ASGT president Brenner predicts. Such a vector would make precise and permanent fixes to genetic defects that underlie disease by homing in on a specific tissue and replacing or fixing a defective gene, while safely avoiding the potential dangers of viral vectors. But other experts see a different future, in which genes are given temporarily and produce a precise dose of protein for just as long as it's needed. In short, says Felgner, “the idea would be to inject genes like any other drug.”


    Repair Kits for Faulty Genes

    1. Dan Ferber

    A balky appliance forces a choice: repair or replace. Defective genes impose the same choice. Most gene therapists have gone the replacement route, providing intact genes to make up for the defective version nature provided. But a few researchers are developing molecular toolkits to correct mutations in the genome. These so-called molecular targeting approaches don't touch the stretches of DNA flanking the faulty gene that help regulate its expression, so after the gene is repaired, the cell can still properly control when and how much protein the gene produces. That differs from gene replacement approaches, which don't necessarily replace all the normal expression signals. This strategy could make the difference in treating diseases that require the right amount of therapeutic protein at the right time. So far, gene-repair methods have corrected mutations involving the insertion, deletion, or substitution of only a handful of nucleotides at a time, and only a few of the methods have been tested in animals. But the following techniques offer potential means to achieve a longtime dream of gene therapists: a lasting cure for genetic disease.

    Triplex-forming oligonucleotides (TFOs). These snippets of single-stranded DNA recognize double strands of DNA with identical or nearly identical sequences and nestle themselves into the double helix there to form a triple helix, or triplex. There are two versions of the method, one of which corrects mutations and the other of which purposely introduces mutations that stop production of a dangerous protein. To correct mutations, the TFO is linked to another snippet of DNA, this one double-stranded, that has the correct sequence of the defective gene. The double-stranded fragment shuffles itself into the genome near where the TFO has bound, replacing the misspelled portion of the gene.

    To stop production of a protein, a single-stranded TFO is used alone, without a linked fragment of DNA. It snuggles into the misspelled portion of the gene, forming a triplex. The cell's repair enzymes are attracted to the triplex but don't know how to fix it. Instead they make new mistakes, introducing random mutations into the target gene. One drawback: TFOs work only on the minority of genes that have DNA sequences capable of forming triple helices.

    Small fragment homologous replacement. This method takes advantage of the cell's ability to shuffle different copies of a gene by exchanging stretches of DNA between chromosomes, a process called homologous recombination. It uses a 400- to 800-base DNA fragment that's identical to part of the defective gene, except for the stretch that's to be repaired. The cell exchanges the fragment into one or both chromosomes. In the August issue of Gene Therapy, Dieter Gruenert's team at the University of Vermont in Burlington reported fixing a mutation that hampers breathing in mice with cystic fibrosis.

    Viral gene targeting. Gene therapists usually use adeno-associated virus to deliver intact genes to replace defective copies. But apparently the virus can also be used to repair defective genes in the chromosome. Part of a normal gene is stitched into the single-stranded viral DNA, and the cell's repair machinery uses it to correct the mistake in its own genome. So far, the technique has repaired a variety of mutations in cultured human cells, including nucleotide deletions, insertions, and substitutions.

    Chimeraplasty. Sickle cell anemia and many other genetic diseases are caused by misspellings of a single nucleotide in a single gene. In this approach, researchers create dumbbell-shaped hybrid molecules, part DNA and part RNA, that contain the correct spelling of the gene; the molecules seem to bind to the misspelled portion of the genomic DNA and fix the mistake. But this technique has met with hard questions since it was introduced in the mid-1990s. “It's fair to say there's been some controversy with regard to reproducibility,” says molecular biologist Peter Glazer of Yale University School of Medicine. A handful of researchers defend the method, but few are pursuing it.


    Viral Vectors Still Pack Surprises

    1. Eliot Marshall

    Viruses may be lowly parasites, but their power to invade cells has won them a big part in gene therapy. Stripped of disease-causing elements, they work as natural syringes to inject DNA into human cells. Such “viral vectors” now dominate gene therapy: Nearly three-quarters of all protocols use them. Even so, researchers view their parasitic past with suspicion and worry about unforeseen problems in the clinic. The tamest viruses have produced surprises, as researchers using adeno-associated virus (AAV) learned recently.

    In September, federal overseers asked Stanford University's Mark Kay to put “on hold” a clinical trial using an AAV vector to treat hemophilia B, an inherited blood disorder. The reason: Signs of AAV in the patient's semen raised a concern that gene therapy might have changed the man's inheritable DNA.

    It's not unusual to detect traces of a vector after gene therapy, Kay says. But in this case, the signal persisted “at a low level” for weeks before it cleared. Kay alerted the Recombinant DNA Advisory Committee (RAC), an oversight group at the National Institutes of Health (NIH), and the Food and Drug Administration (FDA). The FDA asked for a pause; the case will be discussed at the RAC's meeting on 5 December.

    The RAC forbids any gene therapy that changes the “germ line”—eggs or sperm—either inadvertently or for genetic enhancement, because germ line mutations could be passed on to future generations. Kay already takes steps to prevent inadvertent alterations. His team informs patients that there is a small risk of germ line changes and, before therapy, offers to bank the sperm of male patients and asks them to use barrier contraception until their semen is clear of vector signal.

    Kay doubts that germ line changes occurred in this hemophilia patient. Instead, he thinks the AAV signal probably came from typical “shedding” of vector seen in body fluids. But he hopes the RAC discussion will lead to a consensus on risk. “We're changing germ lines all the time in cancer therapy” with DNA-mutating chemotherapy—and that doesn't bother people, Kay notes. But he understands that gene therapy is “new territory.” He favors guidelines that would allow these safety trials to continue if the probability of germ line alteration remains low.

    Widely regarded as ultrasafe, AAV ran into another hurdle earlier this year. Although wild-type AAV infects many people, it doesn't seem to cause illness. But researchers got a scare last winter when mice that had been injected with an AAV vector developed liver tumors. This discovery prompted a short pause in two clinical trials using an AAV vector and an inquiry by U.S. health agencies in March. A joint review by FDA and RAC concluded that the AAV vector probably did not cause the mouse cancers. Clinical trials using AAV have resumed.

    Shifting focus.

    This sample illustrates the relative decline of retrovirus vectors (active only in dividing cells) and the rise of adenovirus and adeno-associated virus vectors (active in dividing and nondividing cells).


    The cancer scare arose when molecular biologist Mark Sands of Washington University in St. Louis, Missouri, was reviewing data on mice in a gene therapy test. Sands is developing an AAV vector to treat people with inherited enzyme deficiencies, concentrating on a fatal disorder called mucopolysaccharidosis type VII, in which the body fails to process waste in lysosomes. Sands created knockout mice with this disorder and successfully treated them with AAV-vector gene therapy. But during a routine pathology review last year, he discovered that three of five mice sacrificed late in life—at 18 months, the human equivalent of 55-year-olds—had massive liver or blood vessel tumors. “It scared me. I had never seen tumors like this,” says Sands, although he had used identical mice in many experiments—and this particular group of 59 had seemed tumor-free until the end of the study. On reexamination, three additional animals, the youngest sacrificed at 8 months, were found to have had tumors.

    Sands was concerned that the AAV vector might have inserted new genes into the mouse DNA in a way that triggered cancerous growth. After reviewing the data, experts at a joint FDA-RAC meeting in March ruled out “insertional mutagenesis” as a cause of cancer. Sands agrees. But that does not rule out other possible vector-induced changes, Sands notes.

    What actually caused the cancers remains unclear. Some panel members suggested that the knockout mice may have been prone to liver cancer. R. Jude Samulski of the University of North Carolina, Chapel Hill, a vector expert who took part in the RAC review, suggests that when these mice are cured of their inherited enzyme disorder, another genetic flaw may cause cancer in old age. But Sands hasn't seen evidence that the mice are prone to cancer. And it troubles him that other researchers have not allowed mice to live as long as he did for safety testing.

    Although the scientific puzzle remains unsolved, Mark Kay and Terence Flotte, a gene therapist at the University of Florida, Gainesville, are confident that AAV vector can be used safely in gene therapy. The NIH and FDA, meanwhile, have asked Sands to do another mouse study to see if he can repeat the results. The research will require “hundreds” of animals, he says, and “years” to complete.


    Terrorism, Money, Contacts Top Science Adviser's Agenda

    Long-awaited appointee arrives amidst new war on terrorism and ongoing battles over science funding and priorities

    John Marburger's job is to advise the president on science. But he isn't expecting extensive face time with George W. Bush. Rather, his experiences as a university president and director of a national laboratory have taught him the importance of chain of command. “I would regard having to talk with the president as an indication [that] something is very seriously wrong somewhere,” says the 63-year-old physicist, who on 23 October became director of the Office of Science and Technology Policy (OSTP) as well as assistant to the president for science and technology.

    Marburger steps into a job very different from what he expected when he was nominated in June. The events of 11 September have put terrorism at the top of his agenda, he says, adding duties as science adviser to the new White House Office of Homeland Security. In his first few weeks on the job, he has been busy meeting with groups and individuals inside and outside the government, and he has been “deeply involved” in preparing the 2003 budget request, which will be sent to Congress in January.

    Marburger is the 14th scientist to hold the White House post, created by President Dwight Eisenhower in 1957 to give top politicians easy access to technical advice. After earning a doctorate in applied physics from Stanford University in 1967, Marburger taught and conducted research at several universities. He spent 14 years as president of the State University of New York, Stony Brook, before taking over an embattled Brookhaven National Laboratory in 1997. He is credited with improving the Upton, New York, lab's relationship with its neighbors, who had forced the shutdown of an aging research reactor after revelations that it was leaking hazardous materials.

    Reaching out.

    One of John Marburger's (left) first tasks has been to forge links with science community stalwarts, such as House Science Committee chair Sherwood Boehlert (R-NY).


    In a 14 November interview with Science's news staff, Marburger emphasized that “my self-image is as a scientist rather than an administrator.” But he also showed plenty of bureaucratic savvy, offering some strong but softly voiced views on what he brings to the task of steering the government's $95 billion research enterprise. He also defended pending changes at OSTP that have drawn criticism from some scientists, and he rebutted persistent rumors that his job has been marginalized (see sidebar on p. 1645).

    The following excerpts were edited for brevity and clarity by Science.

    On why he took the job

    The president didn't give me any marching orders. We had a good conversation. I didn't put any conditions on him, and he didn't put any conditions on me. I did not insist on being able to pick up the telephone at any time and talk with him. But I've felt very reassured about accessibility, and I'm quite satisfied with my ability to get things done. I accepted the job after I met the people I would be working with. I liked and respected them. It's a very result-oriented, businesslike Administration, and from my point of view that's very positive for getting things done. If something isn't working, and you come up with a suggestion to make it work better, it tends to be accepted.

    On the delay in appointing a science adviser

    They did pretty well [without one]. I do think that during this period, the links between the White House and the science and higher education communities were weaker than they should have been, and I've been working to strengthen those links. So I've spent a lot of time at the National Academies and [in] meetings with scientific societies and higher education groups. I'm responding to virtually all requests to appear at events where I can speak directly to scientists. I've met less with the industrial community, but I'm reaching out to them and trying to understand their issues.

    Within the Administration, I've been meeting systematically with middle-level policy-makers to find out their attitudes toward OSTP and how they see the landscape and where I fit in. OSTP is a valued agency, and they are glad to see us back in business with a director. [Some people said] the office has not always been sufficiently responsive, mostly in terms of time; I'm interested in doing something about that.

    On OSTP's role

    My philosophy is that OSTP is primarily a broker between the federal government's policy-making and budgeting apparatus and the communities they serve: higher education, industry, and science. The primary source of expertise about the needs and opportunities in these sectors comes from them, not OSTP. The office is really there to bring the agencies together and to tap in quickly to talent and information needed to feed the policy-making process. OSTP shouldn't use its people and expertise to do a lot of in-house analysis and lengthy reports, [because that is what] the agencies, the National Academies, and others are prepared to do.

    On management

    I come from a strongly administrative background, from universities and national laboratories, where the talent is really on the faculty or at the bench. I've always viewed the administration as a tool, as a service to the faculty. What do university presidents do? They fund-raise. They are the marketing people. I'm carrying that philosophy into OSTP. [My senior staff will be] people who can effortlessly plug into the stakeholders—plug into the White House, Congress, and the agencies—and who have a lot [of] knowledge about how they work.


    On OSTP staffing

    I'm putting less emphasis on senior positions. I don't intend to fill all four of the Senate-confirmed associate director positions. In the previous Administration, there was one for science, one for technology, one for national security and international affairs, and one for environment. In my view, pulling those four categories out and doing something special with them was artificial. [It also] created [administrative] stovepipes under those directors, which made it harder to work on issues across the board. This is a relatively small office, and that was an inefficient way of organizing it.

    I plan to fill two [of the associate director slots], although I can fill the others later if necessary. I'm handling the environment and national security areas differently, and we'll just have to wait and see how it works. I and my two senior deputies [for science and technology], which I call the directorate, will work as a team. Beneath us will be seven departments, headed by assistant directors, that will cover the spectrum: science, technology development, environment, education and the behavioral sciences, space and aeronautics, information technology and telecommunications, and national security. I regard the departments as categories of expertise that are available to the directorate as we go about our business of linking scientific expertise with the government's decision-making apparatus. The directorate will also handle crosscutting international issues.

    I expect the staff to be the same size, or maybe even larger. We have 60 slots, with maybe 35 or 40 [of those] currently filled. We need more people, because there is lots of business. I don't see a budget problem [hindering expansion] unless we hire very expensive people.

    On talking to the president

    I would regard having to talk with the president as an indication [that] something is very seriously wrong somewhere. Under those circumstances, I believe the president would talk to me. For the immediate needs of the office, when it looks like something awful is about to happen to us, I'll usually call up [economic adviser] Larry Lindsey. But when there is some more difficult issue, it's [White House Chief of Staff] Andy Card or [his deputy] Josh Bolten.

    On terrorism

    We're very heavily involved in homeland security issues. Coming into the job, I expected to be spending a lot of time educating myself about missile defense, about which I know only what I read in the newspapers. That priority has been displaced by the war against terrorism. The most significant thing that has happened is the creation of the Office of Homeland Security under [former Pennsylvania Governor] Tom Ridge. He has asked OSTP to provide the technical support for his office, and we are doing so. There is a provision in the presidential directive for a senior director for R&D, and currently we are filling that function. [We are also] setting up a network within the agencies and with the National Academies to help us evaluate the thousands of proposals on how to win the war on terrorism that are coming in over the transom. I've met twice now with the senior science people at the agencies to talk about how to organize the agencies to [help] with this issue.

    On regulating visas for foreign students

    I've sent a strong and consistent message every time I've met with higher education organizations: Each campus really has to have a dialogue on these issues. Universities have a lot [of] assets to bring to the table in the war against terrorism, but there are also vulnerabilities. They are vulnerable to exploitation by terrorists, and they are also vulnerable to society's reaction, [such as] restrictive regulations. Universities have to be prepared to state what will work and what won't. I certainly carry my [academic] experience into the policy-making circles. And it's very nice to have [former Stanford University provost] Condoleezza Rice as the National Security Advisor; she understands universities and is very interested in this issue.

    The president understands the importance of freedom, not just for society at large but also for the functioning of higher education and research. Nobody wants to create conditions that will impair the research effectiveness of the United States [or] create conditions that will make it impossible to get a high-quality education.


    On next year's budget

    I arrived late, but yes, I'm deeply involved in the 2003 budget process. I've been in the budget meetings, and I'm providing advice. There is widespread recognition that strength in science is a highly important national objective. The president himself has made a commitment to increase the [National Institutes of Health] budget. He is very interested in math and science education. There is concern throughout the agencies and OMB [the White House Office of Management and Budget] about balance and making sure that the physical sciences don't get left behind. That is very much on my mind. Although it is obvious that the war on terrorism is expensive and there is a global economic downturn, there is a commitment to keep science strong. I don't see a lot of new initiatives, [but] I can't be much more specific.

    I'm not uncritical of how science tends to ask for money. I'd like to try to get some feedback about how to do that more effectively. In general I don't like arbitrary formulas, like “let's double or triple” an agency's budget. That doesn't work well in the long run. You've got to have a reason for doing that.

    On avoiding duplication

    If there are two agencies trying to do the same thing, I get them together and we work it out. It works much better than I expected. I've been an outsider looking in at a sort of impenetrable maze of working groups and crosscutting committees. Now I can see it from the other side, and these groups actually work and crosscut! One thing that helps is that I've been working very closely with [OMB, which sets budgets and oversees spending]. There is nothing like OMB to help straighten out turf issues between agencies.

    On research infrastructure

    There is an infrastructure problem. The problem of aging facilities is probably greater than the need for new facilities. But exactly how to do something about that is a rather complicated policy matter. There could be a facilities initiative of some kind, or it could be something involving strategy with facilities and administration cost recovery. That discussion is open.

    On judging his performance

    That's a hard question to answer, because so much of what I do is preventing disasters, fixing things, or gaining access. [Members of the community] need to see science being taken seriously by the agencies and the Administration. They need to see rational budget proposals. They need to see action, which includes [incorporating] science at the highest levels of national decision-making. I don't guarantee miracles.


    "Explain to Me What I'm Not Getting"

    1. David Malakoff

    In the 4 months between being chosen by President George W. Bush and being confirmed by the Senate, the Washington rumor mill filled with talk of how the White House had downgraded John Marburger's new job. Exhibit A was a change in his title, which dropped an “assistant to the president” tag held by his three immediate predecessors. The whispers also included a long list of perks that he supposedly had lost. But Marburger says it just ain't so.

    The altered title “hasn't made a difference,” he says. As for the rest, here's his blunt assessment of where things stand: “I have access to a limousine and a driver. I'm on the list for the White House mess [staff dining hall]. I'm going to senior staff meetings. I'm written into presidential directives. People return my phone calls. The things that I've asked for—and I've tried to be reasonable about my requests—I've gotten. I have White House stationery. I have my badge that gets me into the White House. What else do you need? If somebody could explain to me what I'm not getting, I'll be happy to ask for it.”


    After the Fall

    1. Robert Koenig

    Researchers have found some, but not all, of the answers about why the twin towers of the World Trade Center collapsed—and why they stood for as long as they did. Did floor trusses or internal columns fail first?

    NEW YORK CITY—Chief engineer Francis Lombardi was sitting in his New York Port Authority office on the 72nd floor of the North Tower of the World Trade Center at 8:48 a.m. on 11 September when he felt the building shudder. It must be an earthquake, he remembers thinking. But the smoke that began pouring from the floors above him—and the fireball that shot out of the South Tower 15 minutes later—soon made it clear that something worse, and more deadly, had struck both buildings. Two months later, scientists and engineers have gathered reams of data on what happened. But they still lack definite answers on why the 110-story towers collapsed almost vertically after standing long enough to allow more than 25,000 people to escape.

    Meeting last week at a forum at Columbia University,* engineers agreed that the impact of the hijacked planes themselves probably would not have brought down either building without the intense heat from the fires that followed. “I was flabbergasted” when the towers fell, concedes Charles Thornton, chair of the Thornton-Tomasetti Group, the structural design firm for the world's tallest buildings, the Petronas Towers in Malaysia. Thornton and others investigating the collapses feel they are making progress toward pinning down the structural cause of the initial failure that led to the floor-by-floor pancaking and subsequent collapse of each entire building. They are far from reaching agreement, however. Determining what triggered the deadly cascade is “the $64,000 question,” says Lombardi. “The answer is not yet clear to me.”

    Engineers and scientists are pursuing several lines of inquiry, from forensic analysis of the towers' twisted steel columns to computer modeling of the smoke plumes that followed the crashes. While the exact mechanism of the towers' collapse may never be proven, there are plenty of strong opinions. Some experts think that the first structural elements to fail in the intense heat and pressure were the steel floor trusses—which spanned the distance between each tower's inner core of 47 columns and its perimeter columns (59 across each side of the towers). But other engineers suspect that some inner columns damaged by hurtling pieces of the aircraft failed when high temperatures weakened the steel. Several experts also say the loss of fireproofing—which was stripped off trusses or columns by the impact of chunks of the aircraft during the collisions—may have been an important factor. It may take months to resolve which of these processes, or perhaps a deadly combination of all of them, brought the buildings down.

    Researchers are focusing in particular on the fires set off by the ignition of an estimated 75,000 liters of jet fuel splattered from each aircraft. The inferno may have exposed the columns and the floor supports to temperatures of 800 degrees C or higher, which would have further weakened the steel columns and support structures already damaged by the collisions. “Fire and fire protection is one of the most important issues we should examine as a result of this collapse,” says Matthys Levy, a partner of the civil and structural engineering firm Weidlinger Associates Inc. and author of Why Buildings Fall Down.

    Sifting for clues

    W. Gene Corley, senior vice president of Construction Technology Laboratories in Skokie, Illinois, leads a team of 21 engineers assembled by the American Society of Civil Engineers (ASCE) shortly after the tragedy. The team hopes to issue a preliminary report next spring on the mechanism of the twin towers' collapse. Corley says some members are now trying to identify and analyze thousands of steel columns found in the rubble, a process aided by markers stamped on each one when the towers were erected showing their exact location in the building. Of particular interest are columns struck by the aircraft or contorted by the intense heat. But other experts are dubious about the lessons to be learned from that exercise, because some of the damage may have been caused by the fires smoldering underneath the rubble after the collapse. Forensic steel specialists have been identifying and analyzing such columns both at the site and at landfills where the rubble has been taken.

    Taking another tack, some members of the ASCE team are analyzing numerous videos and detailed still photos of the towers between the time of the initial aircraft impacts and the collapses. “With the right photos, we can count every [exterior] column and determine whether it was damaged or destroyed by the crash itself,” Corley says. “And we can analyze windows to see how many were broken, allowing us to estimate how much air was coming into the fires in the buildings.” The computer modeling may help explain the way the fire burned and perhaps the approximate temperatures it reached, contributing to the separate collapses of the two towers.

    Many investigating engineers believe that the South Tower—which was struck 15 minutes later than the North Tower but fell 29 minutes before its twin—collapsed more quickly because the two planes slammed into the buildings at different places. The second crashed off-center and likely damaged more of the interior columns. It also hit lower, meaning that the weakened columns had to support the weight of 15 more floors above them. The collapses of the twin towers also took down several other buildings in the complex (see map), a process that engineers hope will teach them lessons about collateral damage.

    In the rubble.

    Engineers examine steel structures from ground zero.


    In fact, some engineers say that Buildings 5 and 7 may yield more valuable tips on safety than the twin towers themselves, because structures are far more likely to suffer damage from fires and projectiles than from an aircraft's collision itself. The 47-story Building 7, which collapsed in the evening after burning all day, offers investigators an unusual example of a protected steel structure brought down by a fire. The unusual projectile damage to Building 5 is also drawing scrutiny.

    The revelation that an intense fire could destroy a damaged steel-girdered structure has triggered several research projects on the dynamics of the blaze. The goal is to determine its hottest points and how that heat weakened the structural steel. According to one calculation, the amount of jet fuel in the aircraft would have burned out in about 10 minutes if spread evenly over a concrete surface the size of one level of the World Trade Center. A member of the ASCE team, Jonathan Barnett of Worcester Polytechnic Institute's Center for Firesafety Studies in Massachusetts, says that the extended burning indicates that the flammable contents of the plane and the buildings themselves were significant factors in stoking the fires until they were hot enough to weaken steel.

    Barnett plans to use a computer model to see how the buildings' design (and the broken windows and gaping holes left by the aircraft) contributed to the flow of oxygen to the fires. He also wants to interview firefighters and listen to 911 calls from people describing the fires from inside WTC offices. “It's possible that the structural failure occurred at lower temperatures [than currently assumed],” Barnett told Science. “If there are high stresses on the steel, you don't need extremely high temperatures to weaken it.”

    Another line of inquiry involves the plumes of smoke that emerged from the towers before their fall. William Grosshandler, chief of the fire research division of the National Institute of Standards and Technology's Building and Fire Research Laboratory in Gaithersburg, Maryland, says his lab hopes to use its smoke-plume analysis software to estimate the “rate of heat release” from the fires. “By examining the trajectory and intensity of the smoke plumes, we may be able to work backward to tell the structural people a bit more about the fires,” says Grosshandler. But that sort of analysis requires high-quality video and still photos of the smoke plume, which have been hard to come by. His lab did the same sort of heat-release analysis following the Kuwait oil fires lit by retreating Iraqi troops during the Gulf War.

    Collateral damage.

    Fire and projectiles from the collapse of the twin towers damaged several buildings in and around the World Trade Center complex. These structures may provide clues to the behavior of buildings subjected to less extreme forces than those that brought down the towers.


    James A. Milke, a member of the ASCE team and of the department of fire protection engineering at the University of Maryland, College Park, is looking at the interaction between the fire and the steel columns, using a computer model to recreate their rising temperatures. For that analysis, he needs data on the fire's intensity and the type of steel used in the towers' columns and trusses. The lingering fires in the rubble are making it harder to determine the initial temperature of the fires that attacked the steel and to pinpoint where the metal failure originated. Milke so far has found no previous examples of steel-frame buildings that collapsed after a fire. But he has documented several cases—in Los Angeles, London, and Philadelphia—in which tall steel-frame buildings remained standing in the midst of lengthy fires.

    Whereas some other engineers suspect that the failure of bolts connecting floor trusses to columns may have been the first fatal step to disaster after the collisions and fires, Levy thinks that the damage to the inner core of columns—both from the impact and heat—initiated the actual collapses. He suspects that the fireball blast from the burning jet “most likely took away the fire protection that the interior columns had.” The twin towers' structural engineer of record, Leslie Robertson of Leslie E. Robertson Associates in New York City, says that all the trusses and columns were “fully fireproofed.” But he told the symposium audience that the impact of the aircraft may have stripped the fireproofing from some of the steel.

    Will the demise of the World Trade Center towers dampen plans for future giant skyscrapers? Both Thornton and Levy expect to see efforts to make the next generation of big buildings more resistant to blasts and fires. But Levy says the decision to build a superskyscraper is “a political and economic question, not an engineering one.” Construction already has begun on a new building in Taiwan that may exceed the height of the Petronas Towers, notes Thornton. And he doesn't think the events of 11 September will alter the plans. “I think they'll keep building it,” he says.

    • * “The Technical Implications of the World Trade Center Collapse,” Columbia University, New York City, 12 November.