News this Week

Science  16 Apr 1999:
Vol. 284, Issue 5413, p. 406

    Drug Firms to Create Public Database of Genetic Mutations

    1. Eliot Marshall

    Private investment in genome research has changed the way biology is done in the 1990s and created some odd partnerships. But a remarkable venture, announced this week, could set a new standard for high-impact science by an unlikely set of collaborators: Ten large pharmaceutical companies and a British charity will spend $45 million to create an archive of human genetic variation—and give it away. These fiercely competitive companies will team up to bankroll work by a network of academic labs, and the collaboration will release proprietary data free of charge to all comers. In a field known for cutthroat competition and secrecy, this arrangement is, to say the least, an anomaly.

    The project's goal is straightforward: It will identify variable points in the human genetic code—known as single nucleotide polymorphisms, or SNPs—as quickly as possible. SNPs are single-base variations that can serve as physical landmarks along the 3 billion bases of the human genome. SNPs will be used as analytical tools, making it easier to trace inherited disease risks and abnormal responses to drugs. The nonprofit SNP Consortium, or TSC (its official name), will publish data every quarter on the Internet, organizers say, no strings attached.

    TSC isn't altruistic, though. The companies backing the enterprise—a Who's Who of the drug industry—expect that SNPs will enable them to develop and sell drugs more effectively. And by creating a public database, they will avoid having to buy multiple, private data collections from the half-dozen or so biotech firms that have been collecting SNPs since 1997, hoping to stake a proprietary claim on the data. In addition to the Wellcome Trust philanthropy of Britain, which is contributing $14 million to the project, the sponsors include Bayer Group AG, Bristol-Myers Squibb Co., Glaxo Wellcome PLC, Hoechst Marion Roussel AG, Monsanto Co., Novartis AG, Pfizer Inc., Roche Holding Ltd., SmithKline Beecham PLC, and Zeneca Group PLC. Each of the 10 companies is putting in $3 million.

    Arthur Holden, a former biotech executive who heads TSC, says the nonprofit labs that agreed to do the research have specific marching orders: They are to find 300,000 SNPs in 2 years. SNPs are believed to be evenly distributed along the human genome at a frequency of about one per 1000 bases. Once found, the SNPs will be tracked to positions on the genome. The goal is to have 150,000 mapped in this way by mid-2001.
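    Those targets follow from simple arithmetic. A short back-of-the-envelope check in Python (the figures come straight from the article; the code itself is purely illustrative):

    ```python
    # Rough numbers from the article: ~3 billion bases in the human
    # genome, and roughly one SNP per 1000 bases.
    GENOME_BASES = 3_000_000_000
    SNP_RATE = 1 / 1000

    expected_snps = int(GENOME_BASES * SNP_RATE)  # SNPs expected genome-wide
    tsc_target = 300_000                          # TSC's 2-year goal

    print(expected_snps)                # 3,000,000 variable sites expected in all
    print(tsc_target / expected_snps)   # the target covers about 10% of them
    print(GENOME_BASES // tsc_target)   # one catalogued SNP per ~10,000 bases
    ```

    So even the full 300,000-SNP catalog would sample only about one in ten of the genome's expected variable sites, spaced roughly every 10,000 bases on average.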

    Few SNPs are likely to be directly involved in disease, but a database of several hundred thousand, researchers hope, will make it easier to track smaller segments of the genome and identify patterns of inheritance that affect health. The SNP map, they anticipate, will make it possible to diagnose illnesses earlier and to avoid giving drugs, including drugs already on the market, to patients likely to experience side effects. This last possibility is why the companies are eager to get SNPs as soon as possible.

    Some scientists are so excited about the open nature of this effort that they're calling it “a new model” of public-private collaboration. “This is absolutely unique,” says Eric Lander, director of the genome center at the Massachusetts Institute of Technology's Whitehead Institute for Biomedical Research in Cambridge, Massachusetts, a partner in TSC. The companies have recognized, he says, that the SNP map ought to be “common infrastructure” and that “locking it up as private intellectual property didn't advantage anybody.” The private sector “wanted all these SNPs,” Lander says, but company executives “realized it didn't make sense to compete on them.” Instead, these companies will compete on developing drugs. Lander hopes this will inspire other companies to donate research tools to biology.

    Francis Collins, director of the National Human Genome Research Institute (NHGRI), also sees it as a “groundbreaking” event, adding that “I know of no previous example of this kind of collaboration” between companies and basic biologists. However, in its technical objectives, the project echoes a government-funded effort to create a database of 60,000 to 160,000 SNPs, which Collins and NHGRI launched last year (Science, 19 December 1997, p. 2046). Collins says the two efforts are “nicely complementary.” TSC will be collecting random data across the entire genome, he explains, while half of the government-funded investigators will be focusing SNP hunts on specific genes involved in disease. In any case, Collins adds, the more SNPs people find, the better: The new consortium “in no way implies that we should diminish our efforts. …”

    The TSC's effort is divided into several stages: hunting for SNPs, mapping and archiving them, and releasing them to a public database. The hunt will be carried out by three large human genome sequencing teams: Lander's group at the Whitehead, a group under David Bentley at the Sanger Centre near Cambridge, U.K., and a third team under Elaine Mardis at the Washington University genome center in St. Louis. According to staffers, all three will use newly developed biological tricks and software to identify SNP variations in randomly sequenced human DNA.

    After candidate SNPs have been found, they will be forwarded to an archive at the Cold Spring Harbor Laboratory in New York. There, bioinformatics leader Lincoln Stein will double-check the data and run a computerized scan against previously banked human genome sequence data, looking for matches. Wherever the computers find a “hit,” they will note the genomic location—a process Stein calls “mapping in silico.”
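    The core of that matching step can be illustrated with a toy sketch in Python. This is not Stein's actual pipeline, just the underlying idea: take the sequence flanking a candidate SNP and scan the banked genome sequence for it, recording the position of each hit. The contig names and sequences below are invented for illustration:

    ```python
    def map_in_silico(flank, contigs):
        """Toy 'in silico mapping': scan banked sequence (a dict of
        named contigs) for a SNP's flanking sequence and report every
        position where it occurs."""
        hits = []
        for name, seq in contigs.items():
            start = seq.find(flank)
            while start != -1:
                hits.append((name, start))
                start = seq.find(flank, start + 1)
        return hits

    # Hypothetical banked contigs and a candidate SNP's flanking bases
    contigs = {"chr7_frag": "AATTGGCCGATTACAGGC", "chr9_frag": "GGGCATTACAGT"}
    print(map_in_silico("ATTACAG", contigs))  # → [('chr7_frag', 9), ('chr9_frag', 4)]
    ```

    In practice a SNP can be placed this way only when its flanking sequence matches a single, unique genomic location, which is why the approach gains power as more of the genome is sequenced.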

    At present, only a fraction of the human genome is available for this kind of SNP mapping. As genome centers release more DNA sequence over the next year, Stein says, in silico mapping will become more powerful. Meanwhile, as insurance, researchers at Stanford University and the Sanger Centre will use a “radiation hybrid” marker system based on cloned fragments of the genome to fix the approximate location of SNPs until they can be located more precisely. Holden, TSC's chief executive, says the mapped SNPs will then be sent to a law firm, which will file patent applications but convert them to simple registrations of invention to prevent others from claiming priority. Every quarter, starting on 15 July, Cold Spring Harbor Lab will release the data on a public Web site. No one—not even key sponsors like Glaxo Wellcome (GW)—will get an early peek.

    Allen Roses, GW's worldwide director of genetic research, says the organizers didn't set out to create this public resource; it just grew naturally. Glaxo got interested in SNPs about 18 months ago, Roses says, when an in-house experiment demonstrated that they could be used to speed up the search for disease susceptibility genes. Glaxo suggested to other companies that they jointly fund a collection of SNPs and a map of the human genome, dubbed “Atlas.” The cost estimate was high, though: $150 million. “We didn't think that anybody's board would go along if it was a public sort of thing,” Roses says, so Glaxo considered creating a private venture that would not release all the data. But when the partners insisted on public release, all agreed.

    Last summer, according to Roses, there came “a critical point that made it seem like [a public consortium] was going to work” after all. The academic centers leading the human genome sequencing project expressed a willingness to collaborate. The Wellcome Trust helped in negotiations with academic centers. At the same time, Celera Genomics Inc. of Rockville, Maryland, and NHGRI announced that they were going to speed up the pace at which the human genome will be sequenced (Science, 18 September 1998, p. 1774). If Celera and NHGRI produced a whole genome, it would be easier to conduct in silico mapping of SNPs, the consortium leaders realized. The cost estimate for the SNP project dropped sharply—to about $40 million. By the end of 1998, the goals were set. And in January 1999, the Wellcome Trust funded SNP-hunting pilot projects at the three big sequencing centers. Soon, the project was on its way.

    Will it really be possible to find 300,000 SNPs and map half of them in 2 years? Perhaps so, if indications from the pilot projects hold up. “They have come in with more SNPs than we had anticipated finding,” says Lander. Bentley and Mardis echo his optimism. As new sequencing technology gets installed, Roses concludes, “I think we're going to exceed the goals.”


    New Virus Fingered in Malaysian Epidemic

    1. Martin Enserink

    Scientists have unmasked a killer responsible for the deaths of at least 95 people in Malaysia in the last 6 months, most of them pig farm workers. The culprit, named the Nipah virus after the small town where the strain was first identified, is a previously unknown virus that replicates in pigs and seems to be easily transmitted to humans. It is closely related to another notorious agent, the Hendra virus, which surfaced in Australia in 1994 and killed two people and more than a dozen horses. But the new virus spreads much more rapidly, making it “an emerging virus of grave concern,” says John Mackenzie, head of the department of microbiology and parasitology at the University of Queensland in Brisbane, Australia.

    The country's health authorities initially assumed they were dealing with an outbreak of Japanese encephalitis (JE), which causes similar symptoms, and some critics have accused the Malaysian government of being slow to consider alternative causes. Indeed, even now authorities insist that the country is battling a “dual epidemic” of JE and the new disease. But last week the U.S. Centers for Disease Control and Prevention (CDC) in Atlanta said that the Nipah virus is the main culprit, and that the JE virus has played at best a marginal role in the continuing tragedy. The epidemic peaked around the middle of March and seems to be waning now. Early this week, the Malaysian Ministry of Health put the total number of cases at 251. Since early March, Malaysian health authorities have killed over 800,000 pigs to halt the spread of the virus.

    The first cases of the disease occurred in late September near the city of Ipoh, in the northern state of Perak. The victims, all of whom worked in the pig industry, came down with high fever and encephalitis (an inflammation of the brain), and some died. Officials concluded that they had succumbed to JE, which is transmitted by Culex mosquitoes and known to replicate in pigs.

    Circumstantial evidence supported that theory. A few dozen people contract JE in Malaysia yearly, and the numbers usually rise at the end of the year. In addition, tests at the University of Malaya in Kuala Lumpur and the Institute of Tropical Medicine in Nagasaki, Japan, confirmed that blood and cerebrospinal fluid of some patients contained antibodies against JE. To contain the outbreak, Malaysian authorities fogged thousands of pig farms and nearby houses with insecticide and inoculated tens of thousands of people at risk with a vaccine for JE.

    But the disease kept spreading. By late December, when several dozen cases had been reported, it reached the southern state of Negeri Sembilan. Scientists also began to notice that the outbreak wasn't behaving like JE. For one, it was killing pigs, which are carriers of JE but rarely its victims. For another, it was felling adults, whereas JE mostly kills children. Third, it seemed to affect only those who had been in close contact with pigs while their family members stayed healthy, which didn't fit the pattern of a mosquito-borne disease. Furthermore, some people contracted the disease after being vaccinated for JE. And finally, scientists were unable to isolate live virus from any of the patients whose blood contained JE antibodies.

    At the time, the investigations were still handled by the country's Institute for Medical Research, a part of the Ministry of Health, which stood by its initial diagnosis. But when all prevention measures failed and the epidemic spread, the government sought help from Lam Kai Sit, head of the University of Malaya's department of medical microbiology. Five days after they obtained the first patient blood and cerebrospinal fluid samples on 1 March, Lam and his colleague Chua Kaw Bing had isolated a virus that, judging by its appearance, belonged to the Paramyxoviridae, a family that doesn't include the JE virus. They noticed that the virus caused cells to clump together in giant multinucleate cells, or “syncytia.” Mackenzie then suggested that the Malaysian samples be tested for Hendra, which also produces syncytia and causes encephalitis in humans, and for Menangle virus, a paramyxovirus that was recently isolated from Australian pigs.

    Chua took the samples to the CDC in Atlanta, and tests showed that Mackenzie's hunch was correct. The new virus reacted with antibodies to the Hendra virus, indicating a similarity between the two. The CDC then sequenced the viral genome and showed the new virus to be about 20% different from the Hendra virus, says Brian Mahy, director of CDC's Division of Viral and Rickettsial Diseases. The Nipah virus was found not only in the tissues of patients, but also in sick pigs and in 11 abattoir workers from Singapore who had fallen sick in March after contact with Malaysian pigs.

    Although there have been no cases of human-to-human transmission, the CDC classified the new virus as a P4 pathogen. That means samples can be collected and handled only by researchers clad in space suits and examined only in high-level safety labs. As for the virus's mode of transmission, one theory is that it is present in pig lungs and urine and that humans can get infected by inhaling aerosols. What is particularly worrying, says Mahy, is that one of the Australian victims of the Hendra virus died 14 months after he was infected. If the new virus has a similar lag time, he says, the current fatalities may only be the beginning.

    Another riddle is how the virus entered the Malaysian pig population. Scientists have shown that four species of Australian fruit bats normally harbor the Hendra virus, and they suspect horses could become infected if they ingest bat urine or part of a bat placenta, both of which contain the virus.

    As for any link to JE, most researchers now think that a few cases of JE may have occurred simultaneously when the outbreak began but that JE didn't cause the widespread epidemic. The presence of antibodies in some patients, they say, is not surprising given JE's prevalence in Malaysia, and because many people—especially pig industry workers—may have been exposed to JE without getting sick. “I don't think the JE virus has been involved in any significant way in this current epidemic,” says Mahy.

    But Malaysian health authorities remain convinced that JE is involved. Lam says he alerted the ministry immediately after the CDC informed him on 18 March about the new virus. But 5 days later, a press release by the ministry's director-general summed up the arguments behind the initial diagnosis and repeated that “the present outbreak is confirmed as JE.” The release briefly mentioned the discovery of the Hendra-like virus but said “we are not sure if the virus is a pathogen.”

    This week, Mohamad Taha Arif, director of the Disease Control Division of the Malaysian Ministry of Health, said that “currently the [Nipah] outbreak is more prominent,” but insists there is a dual epidemic and that measures to prevent the spread of JE need to remain in place. He says there wasn't enough proof on 18 March to say that the new virus had caused the epidemic.

    Some Malaysian scientists say they are not surprised at the government's rigidity. Jane Cardosa, a virologist at the Institute for Health and Community Medicine at the University of Malaysia in Sarawak, says she called the health ministry in November and again in January, urging officials to look for alternative infectious agents. She also expressed her doubts in a January message to ProMED, an electronic forum for emerging-disease researchers. The government's response, she says, was an e-mail reprimanding her for questioning the official theory. “The ministry made an early presumptive diagnosis, and they have difficulty admitting it was a mistake,” she says. When costly fogging and vaccination campaigns failed to halt the disease, she adds, “it became even more difficult to admit there was an error.” Lam, too, says “it was quite obvious to us right from the beginning that not all the cases were due to Japanese encephalitis.” But not being involved in the official investigation, he didn't look for other possible culprits.

    David Quek, editor of the journal of the Malaysian Medical Association, says the episode reminds him of a heart infection outbreak in Sarawak in 1997, in which more than 20 children died. Health authorities blamed that epidemic on the Coxsackie virus—and kept doing so long after scientists had ruled it out as the culprit. This time, says Quek, “we hope that the authorities can be a bit more enlightened. Sometimes it's all right to admit an error.”


    NIH Scientist to Head Vaccine Institute in Korea

    1. Michael Baker*
    1. Michael Baker writes from Seoul, Korea.

    SEOUL—An epidemiologist at the U.S. National Institutes of Health (NIH) has been named the first director of the International Vaccine Institute (IVI) in Seoul, Korea. The appointment of John Clemens to a 5-year term is a major step forward for the independent institute, founded in 1997 by the United Nations Development Program to research and promote vaccines in Asia.

    Clemens is chief of the epidemiology branch in the intramural program at the National Institute of Child Health and Human Development, which he joined in 1990. He has spent 15 years in Latin America, Egypt, India, Vietnam, and Bangladesh and has broad experience with pediatric infectious diseases and vaccine development. “He's at home in Asia … and has real clinical trial experience,” says immunologist Barry Bloom, chair of IVI's board of trustees and dean of Harvard School of Public Health in Boston.

    Clemens's first challenge after moving to Korea this summer will be to draw up a scientific program for IVI, which has begun to build a $50 million laboratory on the campus of Seoul National University that will be completed in late 2001. He plans to expand studies already under way on the prevalence of disease in the region to include Japanese encephalitis, rotavirus (a cause of diarrhea), and pneumococcal infections. IVI recently launched a study in Korea, China, and Vietnam of Haemophilus influenzae type b, which was dethroned as the leading cause of pediatric meningitis in the United States after a successful vaccination campaign. Five major pharmaceutical companies are supporting IVI's effort to study its prevalence in Asia.

    IVI is also working with the World Health Organization (WHO) to enroll 600,000 Vietnamese in a test of a promising oral cholera vaccine that costs only 20 cents a dose. “If this vaccine proves to be protective … it could make a major impact on the global control of cholera,” says Clemens.

    IVI's long-range goals include helping developing countries raise their rates of vaccination and working jointly with teams of researchers and international health organizations. Its 15,000-square-meter lab will provide space for a staff of 200 recruited internationally and for limited production of vaccines used in clinical trials. Although Clemens has never run an independent research institute, his colleagues are confident that he will learn fast. “He knows how to do what needs to be done,” says Bloom.

    IVI has already overcome a rocky start. Some saw it as a competitor to private industry and to existing organizations such as WHO (Science, 6 December 1996, p. 1607). But 7 years after the institute was first proposed, Clemens asserts that those conflicts have eased. “IVI will serve as a collaborator with WHO wherever and whenever it is appropriate,” says Clemens, who has spent 8 years on various WHO vaccine-related steering committees. “But we are not a coordinating agency for other organizations nor a policy-making body [for the community].”

    The institute has also survived Asia's economic crisis. The South Korean government, which is paying for the lab, has kept all its financial commitments to date, says Bloom, and the Program for Appropriate Technology in Health, a Seattle-based group that works with developing countries on reproductive technologies, is setting up its Asian office at IVI as part of a $100 million grant last year from Bill Gates (Science, 11 December 1998, p. 1971). The grant is expected to be especially helpful in boosting IVI's roster of non-Asian contributors.


    Activists Ransack Minnesota Labs

    1. Jocelyn Kaiser

    The University of Minnesota is reeling from one of the most damaging attacks on a U.S. research facility by animal rights activists in recent memory. The Animal Liberation Front (ALF) claimed responsibility for an incident last week in which vandals stole over 100 research animals and ransacked labs at the university's Twin Cities campus, causing at least $2 million in direct damage and the disruption of dozens of research projects. Some of the sabotaged projects, in research areas such as Parkinson's disease and cancer therapy, involved human cell cultures but no animals.

    In a press release, the ALF said it had “liberated” the animals and called other damage “economic sabotage” to “decrease profits to the animal abusers.” The attack surprised some observers, however, as ALF, whose North American press office is in Minneapolis, had lately turned its attention to fur and farm operations rather than labs. “This is really the first time in at least half a dozen years where there's been major damage to a biomedical research facility,” says Frankie Trull, head of the National Association for Biomedical Research in Washington, D.C., which monitors animal rights groups.

    In the Minnesota attack, vandals broke into the basement of a psychology building early on Monday, 5 April, and took 116 rats, mice, pigeons, and salamanders. Among the stolen animals were several transgenic mice for studying Alzheimer's disease that Karen Hsiao's group has described in Science (4 October 1996, p. 99). A video released by ALF shows several people in black clothes and masks dropping pigeons into white containers and spray-painting the walls with slogans like “No More Torture” and “Animal Liberation Now.”

    What happened to most of the animals is unclear. A university animal care official found 14 of the 27 stolen pigeons, along with five dead and three live rats, in a field east of Minneapolis. But ALF spokesperson Kevin Kjonaas questions “the validity of anything coming out of the university right now,” noting that ALF usually puts animals into homes.

    The vandals also broke into a building housing otolaryngology, ophthalmology, and neurosciences programs, where they ransacked 12 labs, destroying microscopes, computers, and other equipment. In neuroscientist Walter Low's lab, for example, they damaged incubators, resulting in the loss of several cell lines used to test compounds that might block neuron death in diseases such as Alzheimer's and Parkinson's. Low's group also may have lost a hard drive full of preclinical data on a vaccine therapy for brain cancer being tested on human tumor cells. “We were just completely devastated,” says Low, whose grad student discovered the damage around 6 a.m.

    Although some of the animals, such as Hsiao's Alzheimer's mice, are irreplaceable, insurance will cover much of the damage. In addition, the Minnesota Medical Foundation has set up a $25,000 fund to help researchers rebuild their labs. And a local cancer survivor has offered a $10,000 reward for tips on the perpetrators.


    Fewer Minorities Under New NSF Rules

    1. Jeffrey Mervis

    Last month the National Science Foundation (NSF) selected 900 aspiring young scientists to receive its prestigious graduate research fellowships. But the news was tempered by the fact that the number of minorities chosen had dropped by more than half from last year's total, from 175 to 76. The decline, following the cancellation of a separate competition for underrepresented minorities begun 20 years ago, is the latest fallout from legislative and judicial rulings prohibiting the use of race as a selection criterion in education.

    “I'm not surprised,” says biologist Joel Oppenheim, head of the Sackler Institute of Graduate Biomedical Sciences at New York University, which aggressively recruits minority students. He notes that the elimination of affirmative action programs has also had a chilling impact on minority enrollment in college and graduate schools.

    The drop comes in the midst of declining interest in the fellowship program, which received 13% fewer applications this year (from 5548 to 4796). For minorities, however, the decline was an even steeper 20%—from 697 to 559—despite an increase in NSF's outreach efforts to schools with sizable minority populations. “There is a feeling among minorities that they didn't stand as good a chance once NSF dropped its sheltered fellowship program,” says Rice University mathematician Richard Tapia, a member of the National Science Board, which oversees NSF.

    NSF has traditionally used targeted programs to accomplish its congressional mandate to increase participation in science by members of all segments of society. But officials are reviewing some two dozen programs to see if they still satisfy both the law and the current political climate. They revamped the 47-year-old graduate fellowship program last year after being sued for discrimination by a white student who was denied the chance to apply to the minority component of the program (Science, 2 January 1998, p. 22). The agency paid $95,400 in a pretrial settlement and soon after announced that it would no longer set aside 15% of the total number of slots for a competition reserved for African-American, Hispanic, and Native American students. Under the new rules, all applicants for the 3-year, $15,000-a-year awards were funneled into one competition.

    Hoping to minimize any negative impact of the new rules, NSF officials dispensed with an initial numerical rating of each applicant—based on such quantitative factors as Graduate Record Exam scores, undergraduate grade point average, and a ranking of the baccalaureate institution—that was thought to put some minority candidates at a disadvantage. The change was designed to give more weight to less tangible factors such as persistence and commitment. Officials also ended the practice of assigning only one reviewer to applications that had received a low rating. “This year we heavily emphasized that reviewers needed to look at all the material in the application,” says Susan Duby, head of NSF's division of graduate education. Every application was read by at least two reviewers, she says. But these measures apparently weren't enough to avert the sharp drop in awards to minority students.

    Duby says NSF plans to be even more aggressive next year in spreading the word about the fellowship program and counseling potential applicants on how to improve their odds. But Tapia, who has successfully boosted minority participation in graduate programs at Rice, cautions that NSF should not expect to see the number of minority awardees return to previous levels anytime soon. “It's a very complicated problem, and it takes time to learn how to do it right,” he says. “I don't do anything right the first time, but I keep learning.”


    Earliest Animals Growing Younger?

    1. Richard A. Kerr*
    1. With reporting from Pallava Bagla in India.

    For paleontologists, finding the most ancient example of an animal in the fossil record is usually a triumph. But sorting out a recent claim about the earliest traces of multicellular animals is turning out to be an ordeal instead. Citing ancient fossil worm tracks from central India, researchers last fall pushed the age of the first animals back from 600 million years old to a startling 1.1 billion years. But claims and counterclaims later tugged the apparent age of animals back and forth between truly ancient and more conventionally old. In the latest set of twists, reported last month at a workshop in Lucknow, India, new radiometric dates nudged the pendulum back toward a relatively young age—about 620 million years—for the fossil tracks. At the same time, workshop participants firmly rejected the fossil evidence originally used to suggest a younger age.

    The traces in question are squiggly furrows from the Vindhyan basin, which paleontologist Adolph Seilacher of Yale University and his colleagues attributed to half-centimeter-thick worms (Science, 2 October 1998, p. 19). Seilacher's group came up with the stunning 1.1-billion-year age from published radiometric dates on mineral grains from sedimentary rocks containing the burrows. But geochronologists quickly pointed out that the mineral grains could have been eroded from much older rock before being deposited as sediment.

    So sedimentologist Dhiraj Mohan Banerjee of the University of Delhi and geochronologist Wolfgang Frank of the University of Vienna have used a different dating technique, based on the decay of potassium to argon, on volcanic ash that fell from the sky shortly before the putative worm-track sediments formed. “All these samples gave consistent ages close to 620 million years,” says Frank. Although there are complications in dating these rocks, “I am absolutely confident we can reject the very old age of 1.1 billion years.”

    Even so, the new dates are not the final word. Frank and Banerjee analyzed chunks of rock rather than single mineral grains, a procedure that geochronologist Paul Renne of the Berkeley Geochronology Center in California calls “a little bit scary.” Renne explains that whole rock may contain older or younger mineral grains, which could skew the result, and weathering may have allowed some of the rock's argon to escape, making it seem younger than it is. Seilacher also sounds a note of caution. “All of us have to think about the validity of our data,” he says, “whether they be radiometric dates or fossils.”

    Although geochronologists may be moving toward a younger age, paleontologists at the workshop rejected the original challenge to the tracks' antiquity, published last fall by paleontologist Rafat Jamal Azmi of the Wadia Institute of Himalayan Geology in Dehra Dun, India. Azmi claimed to have used weak acid to extract “small shelly fossils” characteristic of the early Cambrian period—about 545 million years ago—from limestone laid down after the worm burrows. However, after firsthand inspection, three British paleontologists rejected the fossils as artifacts created by chemical alteration of the rock (Science, 6 November 1998, p. 1020).

    At the workshop, none of the specialists on hand could be convinced that Azmi's fossils were actually formed by living creatures. “Azmi has lost the battle,” says paleobiologist Vibhuti Rai of the University of Lucknow, one of the organizers of the workshop. What's more, says Banerjee, 15 workshop participants who subsequently accompanied Azmi to his collection sites were shocked to find that the “limestone” that was the purported source of his fossils is actually a porcellanite, a siliceous volcanic rock that would not dissolve in even strong acid. That raised the question of where the “fossils” came from.

    Azmi concedes he erred in identifying the rock, but says he now thinks that his maceration and acid extraction methods somehow extracted fossils from small layers of shale within the porcellanite. Indeed, Rai says that this week he was able to extract some fossil-like structures from the rock, although he says they are artifacts, not true fossils.

    Such news has made some Indian paleontologists uneasy, as they remember the professional embarrassment suffered in the late 1980s when Vishwa Jit Gupta, then at the Panjab University in India, was accused of passing off fossils from around the world as being from the Himalayas (Science, 21 April 1989, p. 277). Rai and other Indian paleontologists are standing by Azmi, saying that the problem may be only contamination of samples or a misinterpretation of data on Azmi's part.


    Security Fears Prompt Computer Shutdown

    1. David Malakoff

    Thousands of researchers at three Department of Energy (DOE) laboratories got an unexpected break from their computers last week thanks to the continuing controversy over the alleged Chinese theft of U.S. nuclear secrets. DOE officials abruptly suspended classified computing operations at the Los Alamos, Sandia, and Livermore national laboratories in New Mexico and California on 2 April and herded more than 20,000 employees—including many not involved in secret projects—to briefings on improving safeguards. Although some researchers say the time out was a necessary distraction, others worry that it could lead to new rules that will make the labs' computers harder to use but not necessarily more secure.

    The unprecedented “stand-down” cut off access to all computers containing classified information and idled two of the world's fastest supercomputers while lab officials prepared new security plans. The action marked DOE's most dramatic response so far to critics in Congress, who say that lax practices have led to the theft of classified information (Science, 26 March, p. 1986).

    The surprise training came a few days after DOE delivered a report to Congress outlining cybersecurity lapses at several labs, including the transmission of classified files over unsecured e-mail networks. In releasing that report, done annually, Energy Secretary Bill Richardson said DOE would be working to close gaps in its computer defenses, although the agency says its classified databases have not been breached.

    Still, lab employees were surprised by the far-reaching shutdown, which the three lab directors reportedly proposed to Richardson in late March. Indeed, when one Los Alamos researcher heard rumors of the plan, he “thought it was an April Fool's joke,” he said. Like others interviewed by Science, he requested anonymity because of the tense political atmosphere.

    The extent of the shutdown varied by laboratory. Los Alamos and Livermore idled their Blue Mountain and Blue Pacific supercomputers, which run simulations of nuclear weapons' explosions. At Sandia, however, researchers were able to keep running some nonclassified programs, such as weapons safety models, on Sandia's Red supercomputer and allied machines. “The nonclassified work goes on,” says lab spokesperson Rod Geer.

    During the pause, staff members at Los Alamos and Livermore, which do the bulk of the nation's secret weapons science and also have classified contracts with law enforcement and intelligence agencies, were required to attend a security briefing or view a broadcast or videotape of it. Lab director John Browne led the 90-minute Los Alamos briefing, which featured descriptions of potential threats and prevention measures. Employees with security clearances also attended additional sessions that took up to a day to complete, according to lab sources. Although Sandia managers required attendance only of staff with some connection to secret material, that group included artists who create images for classified projects.

    In some briefings, lab officials asked employees for ideas on how best to accomplish nine security goals set by DOE, including making it impossible to transfer classified information from secured to unsecured computer networks. The agency also wants to reduce the number of people with access to highly classified information, institute more rigorous scanning of e-mail, and require two or more people to approve file transfers.

    Lab scientists had mixed reactions to the stand-down. One Livermore researcher called it “distracting” but said security “is an issue that can't be ignored.” However, others fear that DOE may go too far in erecting barriers to electronic data transfer. “They may overreach if they think they can make it physically impossible to transfer classified information … without impairing everyday activity,” says one Los Alamos scientist. Another computer researcher wondered if the proposed measures “will make life more difficult for a spy—or for us.”

    The classified computers were expected to be back in service this week once Richardson signs off on the three labs' new security protocols. But Browne reassured his staff that the new plan won't crimp science. “We can't raise the bar so high we can't get any work done,” he said in a prepared statement. “That affects national security, too.”


    NIH Plans Ethics Review of Proposals

    1. Eliot Marshall

    The National Institutes of Health (NIH) last week inched forward on its commitment to fund research on human embryonic stem cells despite a barrage of criticism from the antiabortion movement. Some researchers believe very early stem cells will be valuable for research and as a source of human transplant tissue; others say it's inappropriate to use any material taken from aborted fetuses or unwanted embryos. In addition, more than 70 members of Congress have told Secretary of Health and Human Services Donna Shalala that language in an appropriations bill forbids support for studies of human embryonic stem cells.

    Mindful of the controversy, NIH director Harold Varmus has offered a technical solution. Appearing on 8 April before a special panel of advisers in Bethesda, Maryland, Varmus proposed that an outside committee review grant proposals to square them with criteria set by Congress. Essentially, the NIH would block funding of research that involves direct use of embryos or aborted fetuses but permit some carefully vetted research on stem cells derived from these sources. An outside body would examine highly rated grant proposals and approve only those that comply with NIH's guidelines.

    The 13-member advisory panel assigned to help draft the rules—chaired by molecular biologist Shirley Tilghman of Princeton University and Ezra Davidson, associate dean of the Charles R. Drew University of Medicine and Science in Los Angeles—took no immediate action, but heard comments from critics and supporters of the NIH plan. One opponent, Richard Doerflinger, a staff representative of the U.S. National Conference of Catholic Bishops, argued that federal officials were wrong to make a distinction between embryos and stem cells derived from embryos, and that doing research on either destroys human life. Representatives of the Society for Developmental Biology, the American Society for Cell Biology, and the National Alliance for Aging Research were among those who spoke up for the NIH plan.

    The Davidson-Tilghman panel considered adding some terms to the NIH guidelines that might make the process of screening grants more intricate. They proposed, for example, that legal restrictions already in force on the use of fetal tissue also be adapted to stem cells. And the panel seemed ready to require any researcher receiving federal funds for embryonic stem cell research to certify that donors had given proper consent for the use of their embryos.

    Meeting this standard could be difficult, one observer says, because scientists are not likely to know where the embryos came from or how consent was obtained. Furthermore, donated embryos typically come from clients of private fertility clinics, which are not covered by federal rules on informed consent. Although documenting ethically correct sources of stem cells is the “hardest issue to deal with,” says Wendy Baldwin, assistant NIH director for extramural research, she predicts that people who are intent on doing this research will find suppliers who can provide all the documentation NIH requires.

    At least one observer grew impatient with the discussion because it implied a more prolonged review than he'd anticipated. Stem cell researcher John Gearhart of The Johns Hopkins University in Baltimore said he's thinking of dumping plans to file for an NIH grant and going in search of private money.

    The public will have 60 days to comment on draft guidelines that NIH plans to issue by summer before the topic is taken up by another high-level NIH advisory panel. That suggests the first grants, if Varmus decides to proceed, could be at least a year away.


    Giant Sulfur-Eating Microbe Found

    1. Bernice Wuethrich*
    1. Bernice Wuethrich is an exhibit writer at the Smithsonian's National Museum of Natural History in Washington, D.C.

    In the sediment below the waters of Namibia's Skeleton Coast—named for the storm-tossed ships that litter the sea floor there—scientists have made a dazzling find: a giant new species of bacterium, the world's largest, that grows as a string of pearly white globules. As reported on page 493, cells of Thiomargarita namibiensis, the “sulfur pearl of Namibia,” reach three-quarters of a millimeter in diameter—100 times the diameter of the average bacterium. “They were so large, at first we could not believe they were bacteria,” says discoverer Heide Schulz, a microbiologist at the Max Planck Institute for Marine Microbiology in Bremen, Germany.

    This oddball microbe consumes both sulfide and nitrate, linking the ecological cycles of these two key coastal compounds. Although many bacteria utilize one or the other, few have been identified that rely on both. But it now seems that “this kind of metabolism is much more widespread than previously thought,” says co-author Bo Barker Jørgensen of the Max Planck Institute. Other researchers note that such bacteria might one day help clean up coastal waters that have been polluted by excess nitrates from agricultural runoff.

    Schulz found Thiomargarita while trying to determine whether an unusual species of sulfide-eating microbe common off the coast of Chile could be found elsewhere. She chose the Skeleton Coast for her research cruise because, like Chile, it is fed by a strong upwelling current that brings nitrate-rich water to the surface, nourishing an abundant food chain. The teeming life of the surface waters produces a rain of organic matter, which bacteria on the sea floor decompose, producing hydrogen sulfide, a compound that is toxic to most organisms.

    While examining sediment cores from below 100 meters of water, Schulz was struck by the presence of many pearly spheres. They were far too big to be any known bacteria, but under the microscope she recognized the familiar glow of a sulfur microbe, caused by light refracting off tiny globules of elemental sulfur just below the cell's surface. Later Schulz and her team realized that the bacterium doesn't live by sulfur alone, as they measured large concentrations of nitrate within the cells as well.

    After measuring both sulfide and nitrate in the cell, Schulz and her colleagues reasoned that Thiomargarita gets its energy by stripping electrons from sulfide. To do so it needs an electron acceptor, a role that falls to oxygen in most sulfur microbes. But in the oxygen-free world of the sea floor, the only potential electron acceptor is nitrate dissolved in the seawater. Because Thiomargarita is stuck in the sediment, it relies on occasional storms to stir nitrate-rich water into the loose sediment. And it needs a way to last out the intervals between storms.

    That's where the microbe's bulk comes in: About 98% of its volume is storage space, allowing it to hoard large reserves of nitrate under a thin layer of cytoplasm. Like a big bacterial balloon, “Thiomargarita can hold its breath for months at a time between storms,” Jørgensen says. Most bacteria have a size limit, because they rely on diffusion to exchange chemical compounds with their environment, and a small size ensures a high surface area compared to volume. Thiomargarita skirts this problem by being hollow inside—the living cytoplasm is confined to a thin layer surrounding the nitrate.
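A rough calculation shows why the hollow design works. The numbers below are my own back-of-envelope arithmetic from the figures in the text (a 0.75-millimeter cell that is about 98% storage vacuole), not measurements from the paper:

```python
# Back-of-envelope check of Thiomargarita geometry (illustrative
# arithmetic only): if ~98% of a sphere 750 micrometers across is a
# storage vacuole, how thin is the surrounding layer of cytoplasm?
cell_radius_um = 750 / 2          # 0.75 mm diameter -> 375 um radius
vacuole_fraction = 0.98           # article: ~98% of volume is storage

# The vacuole's radius scales as the cube root of its volume fraction.
vacuole_radius_um = cell_radius_um * vacuole_fraction ** (1 / 3)
cytoplasm_thickness_um = cell_radius_um - vacuole_radius_um

print(f"cytoplasm layer is roughly {cytoplasm_thickness_um:.1f} micrometers thick")
```

The living layer comes out only a couple of micrometers thick, so even in a giant cell the diffusion distances within the cytoplasm stay comparable to the dimensions of an ordinary bacterium.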

    Researchers don't yet know how common or widespread Thiomargarita is, but it thrives in high densities off the Namibian coast. “It's exciting,” says Jay Grimes, a marine microbiologist at the University of Southern Mississippi in Ocean Springs. “Sitting off the coast in nutrient-rich anoxic upwellings, it plays a very important ecological role,” removing hydrogen sulfide and so detoxifying the environment for other forms of life.

    Indeed, biologists are realizing that bacteria with this kind of metabolism play a critical role in keeping some coastal bottom waters habitable for higher organisms. Two species of bacteria, Thioploca and Beggiatoa, are known to oxidize sulfide with nitrate, although they have adopted other solutions to the problem of finding these compounds, and both are much smaller than Thiomargarita. They, too, are found in areas fed by upwelling, nutrient-rich currents, off the Pacific coast of South America and in the Arabian Sea near Oman.

    Grimes suggests that such sulfide-oxidizing, nitrate-reducing organisms could be introduced to other coastal waters to clean up pollution caused by agricultural nitrate, which nourishes algal blooms that deplete the waters of oxygen and lead to massive fish kills. Indeed, some species of Beggiatoa are spreading on their own to the sea floor along the European and Baltic Coasts, which are polluted by agricultural nitrate, Jørgensen says. If Thiomargarita or its kin can clean up polluted coasts, one extreme of evolution may someday help balance perturbations caused by excesses of another kind.


    Missile Defense Rides Again

    1. James Glanz

    A new push to erect a ballistic missile shield is technologically more plausible than the 1980s “Star Wars” program. To skeptics, however, the effort remains futile and dangerous

    Sometime this summer, two projectiles will collide over the central Pacific Ocean with such fury that they will annihilate each other. Or they might miss. Either way, this encounter in the silence of space, more than 100 kilometers above the top of the atmosphere, will be freighted with meaning for the citizens of the globe sparkling below. Some of them will see in this event a clash between good and evil, peace and war, or simply technical know-how and raw destructive terror. Others will view the entire display as a step toward a dangerous and desperate folly.


    A vision of a national missile defense system ramming an incoming warhead.


    In literal terms, a mock enemy warhead launched westward from Vandenberg Air Force Base in California will meet a 55-kilogram, thruster-controlled “kill vehicle” outfitted with infrared seekers and fired from the Kwajalein Missile Range in the Marshall Islands. Hit or miss, the encounter will mark the first intercept test for components of a proposed system that some believe could shield the United States from a small number of intercontinental ballistic missiles (ICBMs) launched in our direction either accidentally or by “rogue nations.”

    Major Nickolas Demidovich of the Air Force says the collision will produce “a very bright flash,” easily visible to the naked eye from Kwajalein if the intercept is successful and clouds don't block the view that afternoon. That flash would be generated solely by the energy of the collision at more than 10 kilometers per second, not by explosives, says Demidovich, chief of the National Missile Defense (NMD) flight test for the Pentagon's Ballistic Missile Defense Organization (BMDO). “This is pure, kinetic, body-to-body kill,” he says.
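Some rough arithmetic (my own, not Pentagon figures) suggests why no explosives are needed. Taking the 55-kilogram kill vehicle and a closing speed of 10 kilometers per second from the figures above:

```python
# Illustrative arithmetic behind "pure kinetic kill": the collision
# energy of a 55 kg kill vehicle closing at 10 km/s, expressed in
# kilograms of TNT equivalent (1 kg TNT ~ 4.184 MJ).
mass_kg = 55                 # kill vehicle mass cited in the article
closing_speed_m_s = 10_000   # "more than 10 kilometers per second"

kinetic_energy_j = 0.5 * mass_kg * closing_speed_m_s ** 2
tnt_equivalent_kg = kinetic_energy_j / 4.184e6

print(f"{kinetic_energy_j:.2e} J, roughly {tnt_equivalent_kg:.0f} kg of TNT")
```

On these assumed numbers the impact releases on the order of a few billion joules, comparable to detonating more than half a tonne of TNT, which is consistent with the "very bright flash" Demidovich describes.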

    The flash would fade almost instantly, but its consequences could change the face of global strategy, warfare, and what might be called macropsychology in the 21st century—for better or worse. For if the July test succeeds, it will add technical credibility to a fast-expanding arsenal of missile defense systems, some parts of which have suffered high-profile test failures. Officially, many of these systems have nothing to do with the NMD, which would meet an incoming warhead early enough in its deadly arc to shield the entire country from a limited attack. They are meant to erect missile shields over limited swaths of territory, such as a “theater” of war—not an entire nation. But the slower, shorter-range theater interceptors largely rely on the same hit-to-kill strategy as NMD and might even play a role in its shield. NMD “is essentially a theater defense writ large,” says John M. Cornwall, a physicist at the University of California, Los Angeles, who teaches and consults on missile defense.

    Even before the summer test, missile defenses are getting an enormous political boost from a confluence of domestic and global events. China's alleged theft of nuclear warhead designs from U.S. national laboratories and North Korea's test of a three-stage medium-range missile last August have alarmed politicians and the public. And a report by a commission led by Donald Rumsfeld, a former secretary of defense, has warned that Iran, Iraq, or North Korea could secretly develop ICBMs in the next 5 or 10 years. Reacting to the concerns, last month the Senate passed—by a vote of 97–3—a bill sponsored by Thad Cochran (R-MS) and others that calls for the United States to deploy an NMD “as soon as technologically possible.” The Clinton Administration, originally skeptical about national missile defense, has now pledged $6.6 billion in new money for NMD through 2005, adding to the BMDO's current budget of $3 billion to $4 billion a year. It has also set a target date of 2005 for deploying an initial, small NMD—provided tests, beginning with the one this summer, yield promising results.

    Supporters of the program believe the technology will live up to its billing. The current missile defense effort has a far more limited goal than the Strategic Defense Initiative (SDI) of the 1980s, which aimed to defend the entire nation against a barrage of thousands of ICBMs launched by the Soviet Union. And it shuns the phantasmagoria of ambitious technologies, from pop-up x-ray lasers to space-based particle accelerators, that gave the 1980s effort the pejorative moniker “Star Wars.” The hit-to-kill strategy builds on existing know-how: fast, miniaturized, solid-state hardware like ring lasers for inertial guidance, infrared sensor arrays for seeing targets, and tiny processors for computing trajectories. “This is not Star Wars redux,” says a high-ranking scientist at an Energy Department laboratory.

    High hopes.

    A Minuteman II missile lofts a National Missile Defense sensor in a test flight from Kwajalein in the Marshall Islands.


    Critics concede part of the point. In spite of the recent test failures, including six straight misses by an Army theater defense system called the Theater High Altitude Area Defense, or Thaad, the hit-to-kill technologies are evolving rapidly. Many analysts agree that the strategy has a better chance of meeting its goals—picking off a few enemy missiles above the atmosphere—than the grandiose Star Wars program did. “At the physical principles level, the device fabrication level, this is obviously easier than some of the stuff they were talking about a decade ago,” says John Pike of the Federation of American Scientists in Washington, D.C. But he and other skeptics say the change in technology and scope has not eliminated many of the larger concerns about missile defense.

    Any system deployed nationally, they say, will make the world a more dangerous place by alarming adversaries who fear that such a system would allow the United States to launch a first strike, then parry a counterattack. It might also violate the 1972 Anti-Ballistic Missile (ABM) treaty, which now limits Russia and the United States to one fixed land site with no more than 100 interceptors. Conservatives have long viewed the treaty as a needless constraint on the United States' ability to defend itself. But other analysts disagree. “Abrogating the ABM treaty would be politically a disaster,” says Dean Wilkening, director of the science program at Stanford University's Center for International Security and Cooperation, as doing so could prompt adversaries to build more weapons in hopes of overwhelming any missile defenses.

    At the same time, many believe that any system, no matter how good, will be porous—vulnerable to crude countermeasures like throwing out dozens of decoys or inflating a huge balloon around a nuclear warhead in space in order to hide its exact position. “The old saying, ‘One nuclear weapon will ruin your whole day’—it's really true,” says Stephen Schwartz, publisher of the Chicago-based Bulletin of the Atomic Scientists. “Nothing works 100%, all of the time,” he says—a point that some believe is underscored by Thaad's repeated failures to hit a target uncluttered by decoys. The only proven response to the threat of nuclear or biological weapons is deterrence, says Richard Garwin, a senior fellow at the Council on Foreign Relations and an IBM fellow emeritus. “You can't counter it,” says Garwin of the ICBM threat. “You can only lie to the American people. Once you start spending so much money, that's bound to happen.”

    NMD supporters, however, have a rejoinder that is almost visceral: “It has to work,” says Senator Bob Smith, the conservative New Hampshire Republican who has already declared his presidential candidacy for the 2000 election. “We're in a very dangerous and vulnerable situation; we do have nations now that have the capability to reach us,” says Smith, who also chairs the Strategic Forces subcommittee of the Senate Committee on Armed Services. The doubters, says Air Force Lieutenant Colonel Rick Lehner, a BMDO spokesperson, are arguing, “if you can't do everything, don't do anything. Obviously we don't agree with that.”

    A limited shield

    Around the same time that political support for deploying an SDI system evaporated in the early 1990s, public interest in more limited defenses, designed to protect troops or cities against short-range missiles, gained a big boost. The catalyst was the Patriot, a system that received glowing praise during the Gulf War for apparently shooting down Iraqi Scud missiles. Some defense analysts have since cast serious doubt on the Pentagon's claims of a high success rate for the Patriot (see sidebar), but support for theater defenses has gained strength, and many of the technologies developed for these systems lie at the heart of the planned NMD.

    The Patriot itself has undergone a radical technological overhaul. Originally designed to punch holes in relatively slow-moving airplanes by exploding near them, the missile has since been redesigned into the lighter, sleeker, and more maneuverable Patriot-3, which rams its quarry. A prototype of the new interceptor nailed three out of four intercept attempts in 1993 and 1994, and the Patriot-3 had a hit just last month.

    The Patriot-3 trades the original aerodynamic control system of tilting fins for 180 small thrusters, which enable it to make the hairpin turns needed to intercept a missile that has begun to tumble. An onboard, millimeter-wave radar feeds data to processors that not only calculate trajectories and control the thrusters, but also scroll through databases of known missiles to determine how best to ram the target in order to demolish the warhead. If it works as planned, the Patriot-3 (which goes by the acronym Pac-3) would meet and destroy a target at an altitude of 20 kilometers or below as the warhead bore down on a troop emplacement or a city within the protected “footprint.”

    Unlike the original Patriot, the Pac-3 “was designed from the beginning as an antimissile missile,” says George Lewis, associate director of the Massachusetts Institute of Technology's (MIT's) Security Studies Program and a critic of the earlier system's performance in the Gulf War. “I can't tell you if this is going to be good enough—but it's better,” says Lewis.

    Pac-3 would not have to do its job alone. Theater defense systems like Thaad would take an earlier shot at incoming missiles, climbing faster and higher to defend a footprint of a few hundred kilometers. Thaad might get its first warning of an enemy launch from satellite-borne infrared sensors. Eventually the incoming missile would be picked up by Thaad's ground-based “X-band” radar, which operates at frequencies above 10 gigahertz and is being touted as the most powerful radar in the world. High frequencies correspond to short wavelengths, permitting the radar to pick out fine structural details that could enable it to distinguish a warhead from decoys.

    The fall and rise of missile defense.

    Graph tracks the recent growth of theater missile defense funding.


    The Thaad interceptors, fired from truck-based launchers, would initially rely on the satellite and radar data to close in on their target. As the air thins out with increasing altitude, Thaad's booster rocket falls away from its kill vehicle, whose course is then controlled by thrusters emerging from its center of mass. Thaad has a “sweet spot” for intercept between altitudes of 40 and 100 kilometers, where the air is dense enough to foil light decoys such as balloons but cool and tenuous enough to contrast with the infrared glow of a ballistic missile's warm reentry vehicle. As it enters a “basket” of proximity to the target (the actual distance is classified), Thaad “opens its eyes”—a 256-by-256-element, gimbal-mounted matrix of infrared sensors.

    “This matrix obviously becomes a picture,” says the Army's Colonel Bill Hastie, director of system acquisition in BMDO. “You have the cold background of space, and you have the hot missile coming at you.” Just as with the Patriot, says Hastie, onboard processors use this information to home in for the kill.

    This multiple-shot strategy relying on both Thaad and Pac-3 would create a leakproof shield in a theater of war, planners hope. “When a missile's coming in, hopefully the upper tier gets a shot at it first. Maybe one or two shots,” says Hastie. “And if it hasn't made the intercept, then the lower tier can. To get a high probability of kill … you need more than one shot to do it.”

    The high-altitude shield, however, faces both technical and political hurdles. Two weeks ago Thaad—after Pac-3, the best developed of a panoply of new theater defense programs—once again failed to hit its target in a test, its sixth straight miss. The planned Space-Based Infrared Satellite System, an upgraded version of an existing satellite system for detecting the hot plumes of enemy missiles, has also been plagued by technical glitches and delays. A BMDO-sponsored panel, led by retired Air Force General Larry Welch, blamed the Thaad failures on intense political pressure, which is leading to a “rush to failure.” Thaad engineers have not had time to understand and correct the failures, which mostly resulted from shorts, contamination, and software problems, the panel concluded. “We are still on ‘step one’ in demonstrating and validating hit-to-kill systems,” according to the report.

    Hastie adds that most subsystems—such as launchers and radar—performed well in the tests. But some analysts argue that the series of failures shows just how hard it will be to build a reliable system. Hit-to-kill missile defense, says Pike of the Federation of American Scientists, “is apparently a problem that is extraordinarily unforgiving of error.”

    And theater defenses have begun setting off geopolitical alarm bells as well, because it has become apparent that they might be used to protect entire countries the size of Israel, Kuwait, or Taiwan. Indeed, Israel and the United States are collaborating on an upper-tier defense called Arrow, which Israel may deploy, perhaps in tandem with the Patriot-3, to defend its entire territory. And Japan has agreed to work with the United States in the development of Navy Theater Wide (NTW), a high-altitude interceptor that would travel substantially faster than Thaad for improved range. These developments are causing jitters in East Asia. On 8 March, for example, The New York Times reported that Tang Jiaxuan, China's foreign minister, said that including Taiwan in an American missile defense system “would amount to an encroachment on China's sovereignty” and would destabilize the region.

    What's more, many of the theater defenses could, at least in theory, contribute to missile defense for the home territory of the United States. Calculations by MIT's Lewis and others suggest that one Thaad system in its basic form could protect a footprint the size of the Baltimore-Washington metropolitan area from an ICBM. But using the full panoply of upgraded early-warning radars and other sensors envisioned for NMD could expand the Thaad footprint enormously, says Lewis. Conservative groups have advocated using the NTW as a U.S. national defense, stationing the ship-based system either along U.S. seaboards or just off hostile coasts to hit missiles while their rockets are still firing. A study sponsored by the Heritage Foundation, for example, envisions 22 shiploads of NTW-style interceptors roaming the seas. And an analysis by the Union of Concerned Scientists (UCS) suggests that a more modest system could easily span the continent. “We've tried to make some estimates of how large the footprints would be,” says David Wright, a researcher at MIT and UCS. “Under the best estimates, two or three [ships] could cover the entire U.S.”

    BMDO downplays such possibilities. At a recent press conference, Lieutenant General Lester Lyles of the Air Force, BMDO's director, had a curt reply when asked about the relevance of the Thaad tests for national defense: “None whatsoever,” said Lyles. Adds Lehner, the BMDO spokesperson: “I've not seen anything even discussing” the use or relevance of Thaad for NMD.

    Yet the line between theater and national defenses is also blurring for futuristic Air Force theater defenses that would rely on powerful chemical lasers to destroy missiles when their boosters are still firing. One, the Airborne Laser, would blast missiles from perhaps 300 kilometers away, using an oxygen-iodine laser mounted in the nose of a Boeing 747. Although such systems were heavily criticized as impractical during the Star Wars days, the prototype laser has made dramatic advances in the amount of power it can train on a target, according to sources both inside and outside the military. Testifying before the Strategic Forces subcommittee of the Senate Committee on Armed Services in February, Lieutenant General Lyles said that a related program—the Space-Based Laser—could someday “‘thin out’ missile attacks” in their early stages as part of a multiple-shot NMD.

    Decisive encounter

    NMD's centerpiece is a separate system: a hit-to-kill interceptor that would function much like Thaad. It would get an early warning of a missile attack from existing or upgraded early-warning radars deployed on America's coasts, as well as from satellites and a new X-band radar tailored for national missile defense. The kill vehicle would then be launched on a three-stage rocket to make an intercept at altitudes of hundreds of kilometers, in hopes of protecting all 50 states.

    Those altitudes, high above the atmosphere, may make distinguishing targets from decoys much more difficult for an NMD than for Thaad. Radar-reflecting swarms of aluminum shreds, or “chaff,” would float along with the warhead in space, where there is no air resistance to strip them away. “Decoys are a major problem for the sensors,” says Gerold Yonas, a former SDI chief scientist who is vice president for systems science and technology at Sandia National Laboratory in New Mexico. In the nightmare scenario, the kill vehicle would confront a swarm of radar-reflective aluminized balloons looping through space, only one of which contains a nuclear warhead.

    BMDO officials say they are learning how to sort out decoys, noting that infrared sensors on the NMD kill vehicles, which are being built by Raytheon Systems Co. of Tucson, Arizona, successfully picked targets out of a set of decoys in flyby tests over the central Pacific in June 1997 and January 1998. (Details are classified.) This summer's interception test will also include decoys, although the kill vehicle will be told which mock warhead is the “real” one.

    Analysts worry, however, that no matter how sophisticated the sensors, an attacker could find a way to sneak a weapon of mass destruction past them. Early in its flight, an ICBM might release dozens of individual “bomblets,” each containing a fearsome biological or chemical warhead, making an effective intercept impossible. Or a disguised ship could simply steam into a U.S. harbor and fire a small nuclear warhead from there. “It has always been very implausible to me that a rogue state would send one or two missiles over here; it would be suicide,” says Kurt Gottfried, a Cornell University physicist and acting chair of the UCS. Ensuring that Russia's crumbling early-warning radar does not give false alarms, leading to an accidental missile launch, would be a better way to spend the money, he says, calling NMD “an ass-backward way of looking at our priorities.”

    But if ballistic missile defense engineers hit a bull's eye with the summer test, the push to develop and deploy a national defense may be hard to stop. “I have zero doubt that the system will work, ultimately,” says John Peller, vice president and program manager for the NMD team at Boeing Co., which last April won a $1.6 billion, 3-year contract to oversee NMD development.

    Major Demidovich of the Air Force, who will direct the NMD test, explains that the summer intercept attempt will actually be two tests in one: the actual intercept and a simultaneous “shadow” test on computers, in which the interceptor will get fewer hints about the identity of the target. The shadow test will begin with the launch from Vandenberg in California of a surplus American ICBM—a modified, three-stage Minuteman II missile with the mock warhead and decoys atop it. The stages will burn for about a minute each, and then the target and decoys will separate, eventually hurtling to an apogee of 1600 kilometers before falling back toward Kwajalein, itself about 7000 kilometers west and slightly south of Vandenberg. Satellites, early warning radars, and finally a prototype X-band radar at Kwajalein will track the objects and attempt to pick the target out from among the decoys. In the shadow test, these data will be used to launch and guide a computer-simulated interceptor to its basket in space.

    In the real test, which will unfold at the same time, another modified Minuteman II carrying the Raytheon kill vehicle will blast off from Kwajalein about 25 minutes after the launch of the “hostile” missile from Vandenberg. The vehicle will be dropped into its basket and use its infrared seekers to lock onto the mock warhead, firing thrusters for course corrections until, 230 kilometers above the ocean, the two objects violently collide and pulverize each other. Or they will sail silently past each other in space, leaving only questions behind.


    Patriots Missed, But Criticisms Hit Home

    1. James Glanz

    In the debate about ballistic missile defenses, the Patriot is Exhibit A—for both sides. “Patriot is proof positive that missile defense works,” said President George Bush during the 1991 Gulf War. At the time, Army assessments painted the antimissile system as all but perfect at intercepting Iraqi Scud missiles. But the Patriot received quite different reviews from two Massachusetts Institute of Technology (MIT) researchers who analyzed commercial video footage of intercept attempts. They said there was no evidence of a single successful intercept during the Gulf War (Science, 8 November 1991, p. 791).

    Now a team of physicists and engineers has concluded that the video analysis was probably correct. The team was assembled by the American Physical Society's Panel on Public Affairs (POPA) and was led by Jeremiah Sullivan, a physicist and former director of the Program in Arms Control, Disarmament, and International Security at the University of Illinois, Urbana-Champaign. Accepted for publication in the journal Science & Global Security, the team's report analyzes all of the technical criticisms raised against the video evidence. It concludes that those criticisms are “without merit” and goes on to identify “an absolute contradiction” between the Army's scoring of Patriot performance and that video record.

    Raytheon Co., the prime contractor for the Patriots used in the Gulf War, has already prepared a rebuttal, which is tentatively scheduled for a subsequent issue of the Princeton-based journal, according to its editor, Hal Feiveson. And the POPA study says nothing directly about the future prospects of the Patriot system, which has been redesigned entirely. But Theodore Postol, one of the MIT researchers, says that the Patriot affair may bode ill for plans to develop more expansive missile defenses to protect soldiers and the country as a whole (see main text). It reflects what he sees as a culture of exaggeration and cover-up that “has a corrupting effect on every aspect of weapons development.”

    To one degree or another, everyone describes the Patriot as overmatched in its bid to destroy the Scuds. The Gulf War Patriot “was built to intercept airplanes, not missiles,” says Brigadier General Daniel L. Montgomery, the U.S. Army's Program Executive Officer, Air and Missile Defense. Traveling at speeds of up to 1.5 kilometers per second, the single-stage Patriot missile homed in on enemy aircraft using ground-based radar, then exploded near the aircraft. By the late 1980s, the system had been adapted to missile defense largely through software changes, and such refinements continued during the Gulf War. But the souped-up Scuds, called Al-Husseins, reentered the atmosphere at about 2.3 kilometers per second, and they often broke up, creating showers of confusing debris from which the warhead would emerge, corkscrewing to the ground.

    “The Patriot had no chance, no chance against such a target,” says George Lewis, associate director of MIT's Security Studies Program. But U.S. officials initially claimed astonishing results. “The Patriot's success, of course, is known to everyone,” said General H. Norman Schwarzkopf on 30 January 1991. “It's 100%—so far, of 33 engaged, there have been 33 destroyed.”

    That certainty soon began to crumble, even by official accounts. Under criticism by the U.S. General Accounting Office and other agencies that examined the Army data, the Army revised its success estimates downward: from 96% in March 1991, to 69% in May 1991, to 59% in April 1992, when Representative John Conyers (D-MI) led a congressional inquiry into the Patriot's performance. Those final numbers, which include estimates of better than 70% success in Saudi Arabia and 40% in Israel, have not budged officially.

    According to an analysis published in 1993 by Postol and Lewis and discussed at the Conyers inquiry, however, those numbers were not even close to reality. While the Army based its assessment mostly on inspecting ground damage after the war, Postol and Lewis found commercial videos (often from news organizations) of more than half of the approximately 44 Scuds engaged by Patriots. After taking into account unknowns such as viewing angle and distance and using fixed reference points such as the bright Patriot fireball to compensate for camera movement, the team found no evidence of even one successful intercept.


    The redesigned Patriot-3 and the fireball from a successful intercept last month.


    The video analysis, in turn, was repeatedly attacked as flawed by Robert Stein, now a Raytheon vice president; Peter Zimmerman, a physicist who was recently named science adviser to the U.S. Arms Control and Disarmament Agency; and others. Criticisms centered on the slow video-framing rates—which left 0.033-second gaps in the data—the difficulties of reconstructing three-dimensional events from the videos, and the possibility that Postol and Lewis had consistently misidentified the Scud warheads.
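    To get a rough sense of what those 0.033-second frame gaps mean, here is a back-of-the-envelope sketch using the speeds quoted earlier in this story (2.3 kilometers per second for the Al-Hussein, up to 1.5 for the Patriot). The head-on closing speed is an assumed worst case for illustration, not a figure from the article:

    ```python
    # Back-of-the-envelope check: how far do the missiles travel between
    # successive video frames? Speeds are those quoted in the article;
    # the head-on closing speed is an illustrative upper bound.
    SCUD_SPEED_KM_S = 2.3      # Al-Hussein reentry speed
    PATRIOT_SPEED_KM_S = 1.5   # Patriot top speed
    FRAME_GAP_S = 0.033        # gap between commercial video frames

    def distance_per_frame(speed_km_s: float, gap_s: float = FRAME_GAP_S) -> float:
        """Meters traveled during one inter-frame gap."""
        return speed_km_s * 1000 * gap_s

    scud_m = distance_per_frame(SCUD_SPEED_KM_S)                          # ~76 m
    closing_m = distance_per_frame(SCUD_SPEED_KM_S + PATRIOT_SPEED_KM_S)  # ~125 m
    print(f"Scud travel per frame: {scud_m:.0f} m; head-on closing: {closing_m:.0f} m")
    ```

    With objects moving tens of meters between frames, the moment of closest approach can easily fall between frames, which is why both sides in the debate leaned so heavily on assumptions about what happens in the gaps.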

    Now the six-member POPA panel has determined that Postol and Lewis correctly accounted for the limitations of the videos. Addressing a long list of criticisms, the panel found that Postol and Lewis had made proper assumptions about the physics and were not likely to have made major blunders like misidentifying warheads. “We don't claim that in every single case they have to be right,” says Sullivan. “But being wrong here or there doesn't change the overall physical consistency. It's not a matter of onesies and twosies.”

    Stein and Zimmerman, who wrote the forthcoming “comment” on the POPA study, both declined to respond for this article. But Brigadier General Montgomery says, “Video footage showing less than full destruction of the Scud does not mean [it] was not deflected off its intended target.” Postol responds that there is no way to know just where the highly inaccurate Scuds were going in the first place, let alone whether anything deflected them.

    The POPA panel has recommended that a third party undertake a joint study using the still-classified Army data and the videos. But with Raytheon turning up as the prime contractor for the “kill vehicle” of the proposed national missile defense, Postol warns that the Patriot episode raises more than technical questions. “Denial of failure leads to institutionalized failure,” he says. “And the message, loud and clear, was ‘We don't care about the truth.’”


    Probing the Shaking Microworld

    1. Alexander Hellemans*
    1. * Alexander Hellemans is a writer in Naples, Italy.

    With the help of atomic force microscopes, acoustics researchers are using vibration as a tool to study materials' elastic properties on a microscopic scale

    Vibration is the bane of microscopy. When you are trying to image atomic-scale features using instruments such as the scanning tunneling microscope (STM) or the atomic force microscope (AFM)—which scan needle-fine tips across a sample—even the slightest vibration will smear the picture. For one group of researchers, however, vibrations are not a problem: They're the object of the exercise. At a recent acoustics meeting+ in Berlin, several European research groups reported techniques that set a sample vibrating with sound waves and then use STMs or AFMs to sense how its atoms are jiggling about, revealing details of the material's local physical properties, such as elasticity.

    “We have demonstrated that it is possible to image oscillations on an atomic scale,” says Eduard Chilla of the Paul Drude Institute for Solid-State Electronics in Berlin. Such studies are of more than academic interest. Communications equipment, televisions, and cellular phones regularly rely on acoustic transducers as filters to exclude unwanted signals around their desired frequencies. Understanding how these devices vibrate can help improve their performance.

    Researchers wanting to trace sound waves in a material have generally been limited to coarse images of low-frequency waves. One strategy is to bounce pairs of laser beams off the material's surface and then combine the beams to produce an interference pattern, which indicates how the sound waves are displacing the surface, but the lasers' spot size limits the resolution of the technique. “It is almost impossible to image [high-frequency] waves” that are key to devices such as cellular phones, says Chilla.

    But a collection of six European teams known as the Atomic Force Microscopy and Microacoustics consortium, which has been supported by European Union funding, is probing vibrations on a much smaller scale. Under the right conditions, an AFM tip placed near a vibrating surface will stick to it, because of van der Waals and other electrical and viscous forces, causing the tip to follow the surface's oscillation. The tip's motion is read out by bouncing a laser beam off a tiny mirror attached to the cantilever. By pumping ultrasound into a material at different frequencies, then imaging the passing waves, researchers can determine the material's velocity dispersion—the relationship between the waves' velocity and their frequency—which is a clue to its elastic properties on a nanometer scale. This allows the detailed study of how the very small structures found in acoustic filters behave when vibrating and how the different parts influence each other.

    Chilla's team is now trying to image the motion of an individual atom as the sound oscillation moves it through an elliptical path. Above frequencies of several hundred kilohertz, however, the probe tips just can't keep up. Chilla reported at the Berlin meeting that he and his team can still glean information on high-frequency waves by allowing the tip to skim the surface of the waves, registering their amplitude without following their every up and down. From this information the researchers can derive the local elastic properties of the material at high frequency.

    In a variation on this technique, Andrew Kulik and his group at the Swiss Federal Polytechnic Institute in Lausanne eliminated adhesion from the equation. Relying on adhesion to couple the tip and the surface can skew measurements of elasticity, because the adhesive bond between the tip and the material can also stretch and compress. So Kulik's team holds the tip absolutely steady and lets the oscillating surface bump into it. Analyzing how the tip vibrates when it touches the vibrating surface reveals the local elasticity. “We have a depth resolution of about 100 nanometers, and we know that we are imaging elastic properties,” says Kulik.

    Similarly, a team led by Walter Arnold of the Fraunhofer Institute for Nondestructive Testing in Saarbrücken, Germany, actually pokes the tip into the surface so that it moves with it. “You deform the surface with the tip, and the deformation field contains the stiffness of the tip and the sample,” says Arnold. If you know the stiffness of the tip, you can deduce the elasticity of the sample, he explains. The system is so sensitive that it can detect differences in elasticity among magnetic domains—the small regions of a magnetic material in which the magnetic field is oriented in a specific direction. “The various domain orientations have a different contact stiffness,” he says.

    • + The Joint 137th Meeting of the Acoustical Society of America and the 2nd Convention of the European Acoustics Association Integrating the 25th German Acoustics DAGA Conference, Berlin, 14 to 19 March.


    The Clock Plot Thickens

    1. Marcia Barinaga

    Researchers prove that a nonvisual light sensor sets our daily clock; a likely candidate for that role appears to fulfill other clock functions, too

    One of our most indispensable biological machines is our circadian clock, which acts like a multifunction timer to regulate sleep and activity, hormone levels, appetite, and other bodily functions with 24-hour cycles. The clock generally runs a bit fast or slow and must be reset daily by sunlight. Although many components of the clockwork are known, the crucial photoreceptor that passes light's signal to the clock is still at large.

    Two suspects, the light-sensitive pigments in the rod and cone cells of the mammalian eye, are eliminated by two papers in this issue. “The really important conclusion from these experiments is that there is another photoreceptor” affecting the clock, says circadian biologist Michael Menaker of the University of Virginia, Charlottesville.

    One candidate for that photoreceptor is a protein called cryptochrome. But a report in yesterday's issue of Nature puts an intriguing wrinkle in that story, fingering cryptochrome as a likely part of the clock itself. In mice that lack cryptochrome, the group found, the clock doesn't run at all. “We have never seen [in mice] a mutant like this, where there is instant arrhythmicity,” says clock researcher Steve Kay of The Scripps Research Institute in La Jolla, California. That means cryptochrome is essential for clock function, but leaves open the question of whether it is the long-sought circadian photoreceptor in mammals.

    Biologists have known since the 1960s that the clock-setting light signal in mammals normally comes via the eyes, because eyeless rodents and humans are, with few exceptions, unable to reset their clocks to light. One obvious possibility is that the molecules that capture light for vision—the opsins in the rod and cone cells of the retina—also send light signals to the clock.

    Evidence against that has mounted as researchers have found that mice lacking either rods or cones have clocks that respond to light. But the chance remained that rods and cones both can do the job, and either can do it alone. The reports on pages 502 and 505 by Russell Foster of the Imperial College of Science, Technology and Medicine in London and his colleagues rule that out.

    The researchers introduced genes that destroy retinal rod and cone cells into mice. They found that in those mice, just as in normal mice, light resets the clock and suppresses production of the clock-controlled nocturnal hormone, melatonin. “That says that you don't need rods and cones” for the light response, says Menaker, and means another photoreceptor must do the job.

    Cryptochrome, which is found in the eye, became a hot candidate for the photoreceptor last fall, when three teams reported that it seems to help light reset the clock in plants, fruit flies, and mice (Science, 27 November 1998, p. 1628). A group led by Aziz Sancar at the University of North Carolina, Chapel Hill, and Joseph Takahashi at Northwestern University in Chicago mutated cry2, one of two mammalian cryptochrome genes, in mice. The animals' clocks lost some light responsiveness, suggesting that Cry2 is a light sensor, but not the only one. Researchers wondered if Cry1 might be the other, and waited to see if mice missing both cryptochromes could adapt to light.

    In Nature this week, Jan Hoeijmakers at Erasmus University in Rotterdam, the Netherlands, Akira Yasui of Tohoku University in Sendai, Japan, and their co-workers report the first tests on such mice. But instead of providing an answer about light response, the results delivered a surprise: The mice have no clock. Under conditions of 12 hours of light followed by 12 hours of dark, they act like normal mice, running in their exercise wheels in the dark and sleeping when it is light. But in constant darkness, when the clock would normally maintain the alternating cycles, their behavior loses that pattern; they run on and off around the clock.

    Those results suggest the animals' clocks fail in constant darkness. But further tests show they actually have no clock at all. When normal animals are subjected to a new light-dark pattern, they begin to adapt their clocks, a slow process as any jet-lagged traveler knows. But the mutant mice instantly adjust to any light pattern; they run when the lights go out and stop when they come on. That, says clock researcher Jeff Hall of Brandeis University in Waltham, Massachusetts, is the kind of behavior observed in clockless animals. Without a clock to control their behavior, it is driven directly by the light. Sancar and Takahashi, working with Takeshi Todo of Kyoto University in Japan, have also made double cryptochrome knockout mice and have preliminary results similar to those of the Dutch-Japanese team.

    Slight abnormalities in cry2 mutant mice that Sancar and Takahashi reported last fall suggested that cry2 might play a central role in the clock, but most researchers were surprised to learn it is essential for the clock to work. That creates a new mystery: What is the cryptochrome doing in the clock? But it does little to solve the old puzzle of whether cryptochrome transmits light signals.

    “Perhaps both functions, the clock and the light input, are being taken out” in the double mutants, says Takahashi. Ironically, the lack of a clock in the mutant mice makes it hard to test that hypothesis; one can't measure the effect of light on a nonexistent clock. But Kay notes that even if the clock is disabled, some of its molecular parts remain and should be able to respond to light. He suggests the authors check the behavior of those proteins to see if a light signal is getting through, something both groups plan to do. Wherever the search for the circadian photopigment leads as it moves beyond the rods and cones, one thing is for sure: Cryptochrome has guaranteed itself a place in the story of the circadian clock.


    Lab-Grown Organs Begin to Take Shape

    1. Dan Ferber*
    1. Dan Ferber is a writer in Urbana, Illinois.

    With the need for transplant organs growing, researchers are making progress toward developing them, using cultured cells and special polymers

    Call it the seaweed that's changing medicine. On a balmy summer afternoon in 1986, surgeon Joseph Vacanti of Harvard Medical School in Boston was sitting on a stone breakwater near his Cape Cod vacation house watching his four children play on the beach. He and biomedical engineer Robert Langer of the Massachusetts Institute of Technology (MIT) had been trying for more than a year to devise new ways to grow thick layers of tissues in the laboratory—a first step toward their long-term goal of growing replacements for damaged tissues and organs.

    Web site for cells. The micrograph shows smooth muscle cells growing in a porous polymer used for tissue engineering.


    But even though they were using the latest in cell-friendly, biodegradable polymers as scaffolds to support the growing tissue, the thickest slices they could grow were thinner than a dime—not much use for building complex three-dimensional organs like livers, kidneys, or hearts. The problem, Vacanti realized, was that as the tissues thickened, the interior cells couldn't take in enough nutrients and oxygen or get rid of sufficient carbon dioxide to continue growing.

    Then, as Vacanti gazed into the water, inspiration struck. He spotted a seaweed waving its branches, silently soaking up nutrients from the water around it. He immediately made the connection: Branching is nature's way of maximizing surface area to supply thick tissues with nutrients, and polymer materials that branch, rather than being completely solid, would be porous enough to support growing tissue in the lab. Vacanti raced up the road to a pay phone to call Langer. “He asked if we could design [biodegradable] polymers that had a branching structure,” Langer recalls. “I said, ‘Well, we could probably do that,’ and we tried and we did.”

    Thirteen years later, branched biodegradable plastics and related sponge-shaped plastics undergird tissues growing in dozens of laboratories around the world. Some of the simpler of these tissues, including skin and cartilage, have already made it to the clinic or are on their way (see sidebar). But, fueled by recent advances in polymer chemistry, in the design of the bioreactors that incubate the tissues, and in the understanding of basic cell and tissue biology, researchers are also beginning to grow organs with more complex architectures.

    Two months ago, for example, a team at Harvard Medical School reported in Nature Biotechnology that it had used tissue engineering to produce new urinary bladders that appeared to work normally in dogs. And on page 489, Langer, anesthesiologist and biomedical engineer Laura Niklason of Duke University in Durham, North Carolina, and their colleagues report growing functioning pig arteries. Less advanced, but showing progress, are efforts to engineer tissues to fill in for failing hearts, livers, and kidneys—organs for which the demand for transplants far outstrips supply.

    “I'm very excited about it all,” says biomedical engineer Michael Sefton of the University of Toronto. “Things are going much better than I expected.” Indeed, he has organized more than 25 leading tissue engineers into an informal international network called the Life Initiative. Their goal? To raise more than $1 billion from industry, government, and private foundations for a decade-long effort to grow livers, kidneys, and hearts (Science, 12 June 1998, p. 1681).

    Tissue in three dimensions

    The first bioengineered internal organ to reach the clinic may be the bladder, produced by surgeon Anthony Atala of Harvard Medical School and his colleagues. The team faced several challenges during its 9-year quest to grow a working urinary bladder in the laboratory, Atala says. First, the researchers had to isolate the necessary cells and coax them to grow in culture dishes. They had little trouble in harvesting from dog bladders the smooth muscle cells that would form the bladder's outer surface and getting them to grow. But growing the specialized epithelial cells called urothelial cells that would line the inside surface of the organ was another matter.

    The main problem was that in culture, the urothelial cells, taken from the same bladder samples, tended to revert to less specialized, primitive forms that would function poorly, allowing urine to leak into bladder tissue, which could cause scarring and pain. But with the right culture conditions and combination of growth factors, the team was eventually able to steer urothelial cells toward their mature state. “It's important to get the cells to differentiate and to stop differentiating; to grow and to stop growing,” Atala says.

    The researchers also had to find the right polymer scaffolding on which to grow the cells. The polymers had to be elastic enough to give the cells a lifelike mechanical environment, sufficiently porous to allow the fluids bathing the growing tissues to deliver nutrients and flush away cellular wastes, and capable of degrading as the tissue developed—but not so fast that the scaffold dissolved before the cells had time to grow into it. Eventually, the Harvard team settled on a branched, porous version of a polymer called polyglycolic acid that Langer had produced. They set the polymer in a bladder-shaped mold and coated it with a second polymer called polylactide-coglycolide.

    In the next step, the researchers applied the urothelial cells to the inner surface of the polymer bladders and the smooth muscle cells to the outer surface, and then nurtured the synthetic organs for 7 days in a sterile nutrient broth. At that point, they surgically replaced the bladders of six beagles with the lab-grown bladders. Within 3 months, the polymers had disappeared from the tissues, the dogs' bodies had supplied the bladders with blood vessels, and the neobladders held just as much urine as normal bladders and maintained a normal bladder shape. In addition, the bladders worked and even developed innervation. “The dogs empty by themselves,” Atala says. Indeed, says bioengineer Antonios Mikos of Rice University, the Harvard group's success at engineering the bladder “proves the feasibility of engineering other organs using cocultures of cells and synthetic, biodegradable organ scaffolds.”

    The Harvard group has since gone on to grow human bladders in the laboratory and is now seeking regulatory approval to start clinical trials. Lab-grown bladders could serve as replacement organs in some of the hundreds of thousands of people whose bladders have been damaged by accidents, chronic infections, bladder cancer, or congenital birth defects, Atala says.

    Building blood vessels

    The work of the Niklason-Langer team in creating artificial arteries looks equally promising. Blood vessels have been engineered before using other techniques but with only mixed success. For example, cell biologist François Auger and colleagues at Laval University in Québec City, Québec, reported in the January 1998 issue of The FASEB Journal that they had assembled blood vessels from smooth muscle cells and endothelial cells from umbilical cords, and fibroblasts from adult human skin.

    They did this by growing the smooth muscle and fibroblast cells as sheets, without a polymer support, in separate culture flasks, then wrapping them, in successive layers, around a pipette, “like a cinnamon bun,” as Auger puts it. After lining these artificial blood vessels with endothelial cells, which form the internal surface of natural blood vessels, the Laval team showed that they held up under arterial pressures and could be stitched into dogs. But blood leaked between the sheets of cells, and many of the vessels plugged up with blood clots within a week. Auger is pleased, though, with these still-early results. “We wanted to see if the plumbing held, and it did,” he says.

    Rather than stacking sheets of tissue, the MIT team grew its blood vessels on tubes of polyglycolic acid, under conditions Niklason designed to mimic those encountered by a newly formed artery in the body. First, co-author Jinming Gao, now of Case Western Reserve University in Cleveland, partially hydrolyzed the polymer with sodium hydroxide. This creates water-loving hydroxyl groups that enable more cells to attach. Then, Niklason drizzled smooth muscle cells from cows onto the tube-shaped scaffold and inserted a pliable piece of silicone into the interior. The tubing was hooked up to a pump, which pulsed 165 times a minute to mimic the pulsing pressure on a developing embryonic artery. After 8 weeks, Niklason coated the inside of the vessels with endothelial cells. “To me the most novel thing [about this study] is the idea of using a bioreactor that beats like a heart,” Langer says.

    The pulsing made the tissue stronger because, Niklason says, it increased the cells' production of collagen, a tough protein found in connective tissue. Whatever the mechanism, the pulsed blood vessels had walls that were twice as thick as those of nonpulsed vessels. They could also withstand sutures without tearing, although not as well as natural arteries, and they contracted in response to the same chemical signals that normally spur arterial contractions. The pulsed vessels also worked in animals. When the researchers took cells from tiny arterial biopsies of miniature pigs, grew vessels, and used them to replace an artery in the same animals' legs, the engineered arteries lasted more than 3 weeks without clogging, whereas arteries grown without pulsing clogged.

    The results are “really astounding,” says cardiovascular surgeon Timothy Gardner of the University of Pennsylvania School of Medicine, who is chair of the surgery council of the American Heart Association. “If this works out,” he adds, “it will be a major development in cardiovascular surgery.” In many of the nearly 600,000 coronary bypass operations performed each year in the United States, surgeons can use blood vessels from the patient's own leg or chest to replace clogged coronary arteries. But in some cases, as when the individual has already undergone one such operation and doesn't have any suitable blood vessels left, this may not be possible. And small-diameter synthetic arteries haven't worked because they tend to clog up.

    Organizing organs

    Although engineered bladders and arteries are a significant accomplishment, making these hollow structures, formed by relatively thin layers of cells, is easier than achieving another goal the tissue engineers have set for themselves: producing large internal organs like the liver and kidney that need complex networks of arteries, veins, and capillaries to carry blood to and from every cell. “Large, vascularized tissue is the Holy Grail of tissue engineering,” Vacanti says.

    There has been some progress, however. In two recent papers, one published in the Annals of the New York Academy of Sciences in December 1997 and the other in the July 1998 Annals of Surgery, Vacanti, bioengineer Linda Griffith of MIT, and their colleagues reported taking a step toward producing a synthetic liver. “We are fabricating tissue-engineered structures that will have their own blood supply,” Vacanti says.

    To do this, Vacanti, Griffith, and colleagues first created a plastic scaffold, made from polylactide and polyglycolide polymers, that contains channels that allow fluid to flow and capillaries to develop inside the tissue. The trick to making the scaffold was a method called three-dimensional printing (3DP), which was originally designed to make intricate molds for metal engine parts. By depositing successive thin layers of polymer, the technique creates a complex internal structure, including channels as narrow as 300 micrometers. When the researchers seeded the matrix with rat liver and endothelial cells and gently pumped growth medium through it for 5 weeks, the cells not only survived, but they rearranged themselves into microscopic structures resembling those in liver tissue. What's more, the liver cells grown inside the matrix produced the protein albumin, just as they do in a real liver.

    But despite the successes so far, researchers will need to surmount some major challenges before tissue-engineered organs are routinely available. Besides a network of capillaries, arteries, and veins, most engineered organs will also need to be wired with nerves that regulate their function. Tissue-engineered organs “will do an OK job, but ultimately you'll need fine-tuning,” says biomedical engineer Christine Schmidt of the University of Texas, Austin, who is designing new ways to regrow severed nerves.

    Tissue engineers also need a reliable source of cells. Today, many of them are growing tissues using cells from an animal's or a patient's own body, but that can take weeks—time that some patients don't have. And engineered tissue from outside sources, although easier to come by, will run into the same problems that plague organ transplants. Unless scientists can prevent it, the immune system will reject the tissue as foreign.

    Then comes the challenge of reliably manufacturing the tissue-engineered products, says Gail Naughton, president and chief operating officer of Advanced Tissue Sciences, a biotech firm in La Jolla, California, that is doing tissue engineering. That means that researchers will need to develop ways to test the products for sterility, mechanical strength, and function; ways to freeze the tissue and store it; and ways to scale up manufacturing to meet the tremendous demand. That goal has already been met for some skin and cartilage products. And researchers are optimistic that it can be achieved for internal organs as well. “I think that anyone in tissue engineering, myself included, believes that any tissue or organ can be grown outside the body,” Naughton says.


    From the Lab to the Clinic

    1. Dan Ferber

    Even as tissue engineers work to produce whole organs such as bladders and livers (see main text), lab-grown versions of more than a dozen different tissues, ranging from skin and cartilage to heart valves and corneas, are either in the clinic or under development. Here's a sample:

    • Approved for clinical use. The first engineered tissues to hit the market have been skin and cartilage products. In 1997, the U.S. Food and Drug Administration approved TransCyte, a skin replacement made by Advanced Tissue Sciences Inc. of La Jolla, California. Consisting of cells from the inner, or dermal, skin layer grown on a biodegradable polymer, TransCyte can serve as a temporary wound cover for some of the more than 30,000 patients hospitalized each year in the United States with second- and third-degree burns. Another skin product, Apligraf, which is made by Organogenesis Inc. of Canton, Massachusetts, and consists of both the dermal and epidermal skin layers, was approved last year in the United States and Canada to treat leg ulcers that don't spontaneously heal.

      One cartilage product has also won regulatory approval. This is Carticel, made by Genzyme Corp. of Cambridge, Massachusetts, to replace damaged knee cartilage. Genzyme takes cartilage-forming cells, called chondrocytes, from cartilage snipped from the patient and grows them in a degradable matrix. The surgeon can then cut out the damaged cartilage and replace it with this new tissue.

    • In clinical trials. Reprogenesis Inc. in Cambridge, Massachusetts, has a different sort of cartilage product in advanced clinical trials. Consisting of chondrocytes growing in a polymer called a hydrogel, which hardens when injected into the body, it's intended for replacing defective bladder valves in children with vesicoureteral reflux, which causes urine to flow from the bladder back toward the kidney, and in women with urinary stress incontinence, in which patients leak urine when they cough or sneeze. Other products in or nearing clinical trials include Dermagraft, from Advanced Tissue Sciences, a variation on TransCyte designed to treat difficult-to-heal diabetic foot ulcers, and Vitrix, a connective tissue product from Organogenesis consisting of fibroblasts and collagen, which helps deep wounds heal without scarring.

    • In the pipeline. Still in lab studies are a host of additional products. For example, bioengineer Antonios Mikos of Rice University and his colleagues have recently developed an injectable polypropylene-fumarate copolymer that hardens quickly in the body and provides a surface that guides severed long bones to regenerate in rats and goats. Joseph Vacanti of Harvard Medical School in Boston and his colleagues have recently used a polymer matrix to grow lengths of replacement intestine in rats, which they then attached successfully to the animals' guts. Also in the works are the first lab-grown human cornea, from François Auger's team at Laval University in Québec City; a portion of the heart's pulmonary valve, grown by pediatric cardiovascular surgeon John Mayer of Harvard Medical School and his colleagues; and soft tissue engineered by David Mooney of the University of Michigan, Ann Arbor, together with colleagues at the Carolinas Medical Center in Charlotte, North Carolina, as a potential replacement for breast tissue removed during mastectomies.
