News this Week

Science  01 Mar 2002:
Vol. 295, Issue 5560, pp. 1616

    Disappointing Data Scuttle Plans for Large-Scale AIDS Vaccine Trial

    1. Jon Cohen

    SEATTLE, WASHINGTON—The National Institutes of Health (NIH) has decided not to fund full-scale clinical trials of the leading AIDS vaccine in its pipeline. At a closed session held here at a 5-day AIDS conference,* 60 investigators who have been evaluating the vaccine in a midsize human study learned that an interim analysis showed weaker-than-hoped-for immune responses against HIV. Even so, the U.S. military and the makers of the vaccine, in collaboration with researchers and officials in Thailand, are still planning to go ahead with a large-scale trial of a similar preparation, and Merck is working with NIH to move another vaccine into midsize trials.

    Researchers had pinned high hopes on the first vaccine, a concoction made by the Franco-German pharmaceutical company Aventis Pasteur that has HIV genes stitched into a harmless bird virus, canarypox. If all went well with the midsize study, they planned to launch the largest HIV vaccine trial to date, involving 11,000 people. The $60 million to $80 million trial was expected to begin in the United States, South America, and the Caribbean by the end of this year. It would have been conducted through the HIV Vaccine Trials Network (HVTN), a collaboration funded by NIH's National Institute of Allergy and Infectious Diseases that has 25 sites around the world.


    Lawrence Corey heads the clinical trials group that would have tested the vaccine.


    But a preliminary analysis of immune responses in the midsize trial did not meet the targets HVTN researchers had set for staging the larger one. Researchers have long known that the vaccine does little to stimulate production of antibodies, which prevent viruses from infecting cells. But in some people it stimulates production of killer cells, immunologic warriors that target and destroy cells that the virus infects. The planned trial was designed to test whether these killer cells can thwart HIV. That, in turn, could help resolve a huge mystery confronting the field: No one knows which immune responses correlate with protection. “If we could find any kind of correlation of protection, that would spur the field forward more than anything else,” says HVTN's principal investigator, Lawrence Corey, a leading clinician in sexually transmitted diseases at the University of Washington, Seattle.

    To arrive at a statistically significant result in the large-scale study, HVTN statisticians had calculated that at least 30% of the volunteers would have to develop killer-cell responses. The preliminary analysis of the midsize trial, which involves 330 volunteers, found that the killer-cell response was about “one-third lower” than needed, says Corey: “The vaccine stimulates an immune response, but not at the level that we can analyze the correlates.”
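    The go/no-go arithmetic Corey describes can be sketched in a few lines. This is illustrative only: the article gives the 30% threshold and the "one-third lower" shortfall, but the simple proportional calculation below is an assumption, not the HVTN statisticians' actual analysis.

```python
# Illustrative sketch of the trial's go/no-go arithmetic as reported in the
# article; the exact HVTN statistical criteria are not given there, so the
# proportional calculation below is an assumption.
required_rate = 0.30   # fraction of volunteers who must develop killer-cell responses
shortfall = 1 / 3      # Corey: the response was about "one-third lower" than needed
observed_rate = required_rate * (1 - shortfall)

print(f"observed responder rate ~ {observed_rate:.0%}")        # about 20%
print("proceed to large-scale trial:", observed_rate >= required_rate)
```

On these numbers the observed rate lands near 20%, well under the 30% bar the statisticians had set for staging the larger trial.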

    To Corey and others close to the study, the results are disheartening: “With 15,000 new HIV infections a day, anything that delays anything disappoints me.” Virologist James Tartaglia, head of the global HIV program at Aventis Pasteur and a key architect of the vaccine, points out that many vaccines have come to market with scant understanding of how they work. Tartaglia says he has no quarrel with analyzing killer-cell levels, but he says the most important question is whether the vaccine provides any overall protection.

    This is precisely what led the U.S. military's AIDS research program and collaborators in Thailand to design an “empirical” efficacy trial of a slightly different version of the Aventis Pasteur canarypox vaccine. The Thai trial, slated to begin in September, will combine the Aventis Pasteur vaccine with one that contains a genetically engineered version of HIV's surface protein. The second vaccine, which is made by VaxGen of Brisbane, California—and which is now in full-scale efficacy trials by itself—aims to stimulate antibody production. The Thai trial should determine whether the vaccine provides any overall protection, but it will not have the statistical power to tell which immune responses are most significant. The trial, which will cost $35 million to $45 million, will involve nearly 16,000 people.

    In a twist, much of the funding for that trial may end up coming from NIH. The U.S. Office of Management and Budget in January directed the Department of Defense to transfer its AIDS research program to NIH; the agency has agreed to provide the $24 million annual budget (Science, 1 February, p. 781). And Corey says the HVTN may become involved with the Thai study, noting, “We're very supportive of that trial.”

    Forging on.

    An AIDS patient in Thailand, where the U.S. military is still planning a large-scale trial.


    HVTN is also collaborating with Merck on the testing of the company's AIDS vaccines. This fall, HVTN will stage midsize studies of Merck's approach, which uses a one-two punch of a so-called “naked DNA” vaccine that carries HIV genes followed by one that uses adenovirus as the vector. This strategy also relies on stimulating production of killer cells. In preliminary data presented here by Merck's Emilio Emini, each vaccine, when used alone, appears to be at least twice as good as canarypox at stimulating this arm of the immune system. The ultimate aim of the study is to analyze the impact on killer cells when the two vaccines are combined.

    Several researchers applaud NIH for pulling the plug on its lead vaccine. Immunologist John Moore of Cornell University, who wrote a commentary in the 24 January issue of Nature that criticized NIH and the Department of Defense for planning “duplicative” trials of the vaccine, says NIH has “shown excellent judgment after reviewing the scientific data.” Douglas Richman, a virologist at the University of California, San Diego, who sits on NIH's AIDS Vaccine Advisory Committee, says many of his colleagues on that panel had similar qualms. “I was very uncomfortable with the two trials,” says Richman. “I can live with the one.” He says he “remains skeptical” that the vaccine will work, but adds, “I'd be delighted if I were wrong.”

    *Ninth Conference on Retroviruses and Opportunistic Infections, 24-28 February in Seattle, Washington.


    Has GM Corn 'Invaded' Mexico?

    1. Charles C. Mann

    On Thursday, 21 February, the gene wars took a stunning new twist, or so it seemed. Mexican newspapers reported that two teams of government researchers had confirmed University of California (UC), Berkeley, biologist Ignacio Chapela's explosive findings: that transgenic corn was growing in Mexico, the heartland of maize diversity.

    Yet even as Chapela was proclaiming this news at a Mexico City press conference, a scathing editorial in the February issue of Transgenic Research was crisscrossing the globe by e-mail. In it, editor Paul Christou charged that Chapela and his co-author, UC Berkeley graduate student David Quist, had presented “no credible evidence … to justify any of [their] conclusions.” Meanwhile, Nature, which published the Quist-Chapela paper last November, was weighing the publication of no fewer than four biting critiques of the article. Adding to the muddle, Elena Alvarez-Buylla Roces, a biologist at the National Autonomous University of Mexico who appeared with Chapela at the press conference, insisted in a later e-mail to Science that Mexican investigators “still do not have definite answers towards corroborating or not [corroborating] Chapela's results.”

    Welcome to the “maize scandal,” which is driving the battle over genetically modified (GM) crops to new heights of acrimony and confusion. Widely circulating anonymous e-mails accuse Chapela and Quist of conflicts of interest and other misdeeds. Meanwhile, 144 civil-society groups have leapt to the authors' defense, asserting in a joint statement on 19 February that the biotech industry is using “intimidatory” techniques to “silence” dissident scientists. “I've never seen anything like it,” says Peggy Lemaux, a UC Berkeley molecular biologist who is one of the most public critics of the Quist-Chapela paper. “There's been a lot of fighting about transgenics, but this is something else.”

    Still unclear, say many scientists, is whether transgenic corn has indeed invaded Mexico—and if so, whether it poses a threat to one of the world's most important foodstuffs.

    The furor began on 29 November, when Quist and Chapela reported that transgenic maize genes had introgressed—moved from one gene pool to another—into traditional strains (landraces) of maize in remote areas of Oaxaca. The highlands of Oaxaca, Chiapas, and adjacent Guatemala are one of seven “centers of genetic diversity” that spawned most of today's crops. To protect this diversity, an invaluable resource for crop breeders, the Mexican government declared a moratorium in 1998 on planting transgenic maize anywhere in the nation. Now the Nature paper was claiming “a high level of gene flow” from illegally planted transgenic maize to local landraces—a process that Quist and Chapela argued could exert “a major influence on the future genetics of the global food system.”

    At risk?

    Traditional strains of maize could be threatened by GM corn.


    Greenpeace and others opposed to biotechnology immediately called on the Mexican government to ban transgenic U.S. maize, the presumed source of the foreign genes. (Free-trade rules let transgenic maize be shipped into Mexico but not grown there.) “World food security depends on the availability of this diversity,” Chapela told Newsweek in January. “Having it contaminated is something humanity should worry about.”

    Adding to the alarm, Quist and Chapela suggested that the transgenes were unstable. The foreign genes, they wrote, often “seemed to have become re-assorted and introduced into different genomic backgrounds.” In other words, when transgenic maize hybridized with landrace maize, the novel genetic material broke up into chunks that jumped around the genome. The implications were profound: Because a gene's behavior depends on its place in the genome, the displaced DNA could be creating utterly unpredictable effects.

    Activists' fears centered on the promoter sequence–usually CaMV 35S, which originates in the cauliflower mosaic virus–used to drive the activity of newly inserted genes for, say, herbicide resistance. If the promoter broke off during hybridization, it could conceivably take over other genes, with unknown consequences. “The spread of the promoter could prove to be worse than the spread of the genes for herbicide and insect resistance,” says Peter Rosset, co-director of the Institute for Food and Development Policy (Food First), a research group that advocates on behalf of small farmers. “If true, this would be a red flag that would call into question every other GM crop on the market.”

    But Lemaux and other critics aren't buying it. “They're saying that the [hybrid and introgressed] genomes were completely unstable all the time,” she says. “I've worked with transgenic corn for 10 years, and I've never seen anything like that.”

    To search for transgenic DNA, Quist and Chapela took sample ears of maize from two locations in Oaxaca in October and November 2000 and tested them using the polymerase chain reaction. PCR amplification detects particular snippets of DNA by multiplying them to observable levels. Unfortunately, notes molecular biologist Marilyn Warburton of the Mexico-based International Maize and Wheat Improvement Center (CIMMYT), PCR is so sensitive that minute traces of laboratory contaminants can create false-positive results. “If you get a positive result, you have to check it repeatedly,” Warburton says. “And even then you need to confirm it by another method to be completely sure you're not fooling yourself.” Chapela and Quist did not report performing such additional tests.

    Motivated by these sorts of concerns, at least four groups of researchers—one each from the University of Washington and the University of Georgia, and two from Quist and Chapela's home base of UC Berkeley—sent sharply critical letters to Nature in December. Three referees reviewed the letters and recommended publication of one or more, accompanied by a rebuttal from Quist and Chapela. “The PCR and iPCR [inverse PCR, a variant] data presented is simply not sufficient data to warrant ANY of the conclusions of the authors,” including both the presence of transgenic DNA in Mexican maize and its instability, declared the first reviewer. “Nature should demand that the authors retract their manuscript if they cannot demonstrate well-controlled DNA blot analyses [a common confirmatory test] documenting transgene integration events.”

    “Nature is coming under pressure to use secondary technical criticisms to discredit our main findings,” responds Quist. Regarding doubts about the instability he reported, he believes that “the critique is coming from expectations” created by lab experiments “that aren't necessarily reflected in what you see when you go out in nature.” To respond to criticisms, “we're discussing with Nature the possibility of publishing [in a reply] some new information that substantiates our findings.”

    (Science obtained three of the letters, the initial Quist-Chapela response, and some of the anonymous referee reports from sources other than their authors, who are blocked by Nature from discussing their critiques before publication. Nature editor Philip Campbell says the journal acts “as promptly as possible” on criticisms, publishing them when “appropriate.”)

    Surprisingly, even Quist and Chapela's most strident critics agree with one of their central points: Illicit transgenic maize may well be growing in Mexico. In May 2001 Chapela shared his initial results with the National Institute of Ecology (INE, the research arm of the Mexican Ministry of the Environment and Natural Resources) and the interagency National Biodiversity Council (CONABIO). Concerned, INE and CONABIO took maize samples from 20 random locations in Oaxaca and two in the adjacent state of Puebla. The samples were divided into two groups and independently analyzed by researchers at the National Autonomous University of Mexico and the Center for Investigation and Advanced Studies (CINVESTAV) at the National Polytechnic Institute. At a 23 January meeting in Mexico City, CINVESTAV official Elleli Huerta presented preliminary PCR findings indicating that transgenic promoters, mostly CaMV 35S, were present in about 12% of the plants. In some areas, up to 35.8% of the grain contained foreign sequences, INE scientific adviser Sol Ortiz Garcia told Science last week.

    According to Ortiz, both the INE lab and the National Autonomous University of Mexico labs are still “double-checking” the findings. The possible corroboration, Alvarez-Buylla Roces says, is “only based on PCR tests and [is] preliminary.” Indeed, says Timothy Reeves, director-general of CIMMYT, which is working with the Mexican government, the two Mexican teams are now responding to the criticism of PCR methodology by revamping their analyses to include bigger samples and more reliable tests.

    Meanwhile, CIMMYT, which develops improved crops for Third World farmers, has been searching its vast storehouse of maize varieties for transgenic “contamination.” By 22 February, the lab had found none, and the organization has adopted measures that it believes will prevent GM maize from entering its gene bank, preserving at least some of Mexico's maize diversity. But given the amount of transgenic maize in the United States, Reeves believes it is “very likely” that some will eventually end up growing in Mexico. For now, however, “transgenic maize in Mexico is still hypothetical.”


    NAS Asks for More Scrutiny of GM Crops

    1. Erik Stokstad

    The U.S. Department of Agriculture (USDA) needs to strengthen its procedures for approving field tests and commercialization of transgenic plants, a National Research Council committee concluded in a report released last week. Although transgenic crops pose no greater risk than products of conventional breeding, the committee said, traits introduced by either technique can pose risks to the environment. Ultimately, it added, the potential environmental impact of conventionally bred crops should also be assessed. But for now, to bolster its regulation of transgenics, the committee urged the agency to consult more with outside scientists and strengthen its expertise in ecology, and it also suggested that an independent organization set up a program for long-term monitoring of transgenic plants.

    “The take-home message is that we haven't had a significant environmental problem yet, but that the review process is inadequate,” says Daniel Simberloff, an ecologist at the University of Tennessee, Knoxville. “If the major recommendations of this report are adopted, it would greatly lessen the probability of an accident.”

    Regulation of some transgenic plants falls to the Animal and Plant Health Inspection Service (APHIS), a branch of USDA. A biotech company has two choices when it wants to field-test a transgenic plant: It can apply for a permit, or it can simply notify APHIS that the plant meets general safety guidelines. APHIS must reply in 30 days if it has objections. The vast majority of applications—about 1600 a year—take the notification route.

    Look closely.

    An NRC panel says USDA should regulate biotech crops more rigorously.


    In some cases, say, those that involve very minor changes to an already approved transgenic plant, APHIS's streamlined notification process is appropriate, the committee said. But speedy review can result in slip-ups. For instance, in 1997 APHIS used the notification process to approve field-testing of a corn variety engineered to contain a glycoprotein called avidin that is toxic to at least 26 insect species—in violation of its own guidelines.

    Calling APHIS's handling of ecological issues “superficial,” the committee said that if APHIS can't strengthen its reviews, it should leave them to the Environmental Protection Agency. The committee also recommended that APHIS convene a scientific advisory board and consult it before changing its policies on how it regulates new types of transgenic plants. To check for unanticipated impacts, the committee called for long-term monitoring of transgenic crops—something not done now in the United States.

    Spokesperson Val Giddings of the Biotechnology Industry Organization says the call for more scientific input is “logical,” but he doesn't think there's a need for more extensive monitoring of environmental effects. In a statement, APHIS director Bobby Acord noted that “USDA has already addressed some specific issues raised in the report.” The agency, which asked for the review, declined to provide details.

    “I hope this report will stimulate improvements in the staffing levels and general procedures at APHIS,” says Allison Snow, an ecologist at Ohio State University, Columbus. “A stronger, more rigorous regulatory process is essential if the world is going to accept GM [genetically modified] products.”


    T. rex Was No Runner, Muscle Study Shows

    1. Erik Stokstad

    When the “dinosaur renaissance” blossomed in the 1970s, the sluggish, lizardlike denizens of natural history museums got a kick in the scaly pants. Paleontologists found evidence for higher metabolisms and more erect postures, the giant sauropods emerged from the swamps, and Tyrannosaurus raised its tail and lowered its head into an aggressive crouch. A few paleontologists argued, based on limb proportions, that the fearsome beast could even have run as fast as 72 kilometers per hour—a possibility that Jurassic Park's nip-and-tuck jeep race exploited for maximum terror.

    Now a new biomechanical model suggests that the movie characters wouldn't have had much to worry about. In the 28 February issue of Nature, John Hutchinson, a postdoc at Stanford University, and Mariano Garcia, now at BorgWarner Automotive in Ithaca, New York, argue that a 6000-kilogram Tyrannosaurus could not have packed enough muscle into its legs to hustle faster than about 40 km/h. Although the finding doesn't change ideas about Tyrannosaurus's hunting ability, paleontologists say the study sets a new standard for biomechanical analysis of an extinct organism. “This is one of the most sophisticated studies on dinosaur locomotion ever,” says Greg Erickson of Florida State University, Tallahassee.

    Primed by seeing Jurassic Park and gorging himself on dinosaur books, Hutchinson entered graduate school in paleontology with the idea of studying the biomechanics of Tyrannosaurus. He and Garcia, then a postdoc at the University of California, Berkeley, designed a simple model of the forces on tyrannosaur leg bones. They modeled the rotational forces exerted when a limb touched the ground while running. The equation revealed how much muscle would have been required to balance forces and keep the dinosaur on its feet.

    A stretch.

    Large animals such as Tyrannosaurus or a 6000-kg chicken couldn't carry enough leg muscle to run.


    To test the model, the researchers studied the closest living relatives of dinosaurs: reptiles and birds. Hutchinson dissected a chicken and an alligator and weighed their muscles. The model suggested that a chicken would need to invest at least 4.7% of its body mass in its leg muscles in order to run fast. The chicken turned out to have 8.8%, showing that it had a large margin of safety to deal with the forces generated during a run. In contrast, alligators, which do not run, had only 3.6% of their body mass in each hindlimb—nowhere near the 7.7% minimum the model predicted.

    Hutchinson then studied Tyrannosaurus bones, picked a posture that most postrenaissance paleontologists would consider reasonable, and ran the model. It suggested that in order to run, a tyrannosaur would have needed to carry 86% of its body mass as extensor muscles in its legs. To double-check, they analyzed how different parts of the animal's physique affected the results. The most important factors, such as orientation of the limbs and the length of the muscle fibers, could have led to a threefold variation in minimum muscle mass. But even with the most liberal assumptions, a dashing tyrannosaur would have needed 26% of its body mass as leg muscle—far more than living animals have. Hutchinson and Garcia estimate that the fastest a tyrannosaur could have traveled was 40 km/h.
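    The required-versus-available comparison at the heart of the study can be caricatured in a few lines. The percentages are the article's figures; the pass/fail test below is a simplification of, not a stand-in for, Hutchinson and Garcia's published biomechanical model, and the figure used for Tyrannosaurus's available muscle is an assumption borrowed from the chicken.

```python
# Toy restatement of the comparison the article reports. Percentages come from
# the article; the threshold test is a simplification of the actual
# Hutchinson-Garcia model, not the model itself.
def could_run(required_pct, available_pct):
    """True if the animal carries at least the minimum extensor muscle mass
    (as a percentage of body mass) the model demands for fast running."""
    return available_pct >= required_pct

print(could_run(4.7, 8.8))    # chicken: needs 4.7% of body mass, has 8.8%
print(could_run(7.7, 3.6))    # alligator: needs 7.7%, has only 3.6%
# Tyrannosaurus: even the most liberal estimate demands 26% of body mass as
# leg extensor muscle; 8.8% (the chicken's figure) serves here as a generous,
# assumed stand-in for what a large runner could plausibly carry.
print(could_run(26.0, 8.8))
```

The chicken clears its threshold with room to spare, while the alligator and the tyrannosaur fall well short—matching the pattern the model reported.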

    Most paleontologists agree that Tyrannosaurus was no Carl Lewis. In 1989, R. McNeill Alexander of Leeds University, United Kingdom, showed that the tyrannosaur leg bones would have cracked under the stress of a wind sprint. And Jim Farlow of Indiana University-Purdue University Fort Wayne calculated that a Tyrannosaurus would have seriously hurt itself if it tripped at high speed. But even without sprinting, a tyrannosaur would still have been able to hunt, Hutchinson and other paleontologists say. Large prey such as duckbilled dinosaurs and Triceratops would have been limited by the same factors and probably couldn't have run fast either.

    Why does speed matter? Once an upper limit is established, Don Henderson of the University of Calgary in Alberta notes, paleontologists can put a cap on ecological questions such as how much territory a tyrannosaur could patrol in a day and how many top carnivores an area could support. Hutchinson says the technique of calculating minimum muscle mass could be used to answer other questions, such as whether sauropods or pterosaurs could walk bipedally and which early tetrapods had the strength to walk on land.


    Solar System Kicks Up Its Own Dust

    1. Govert Schilling*
    *Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    An alien civilization might be able to deduce the existence of planets in our solar system by examining the infrared light emitted by a ring of dust around our sun. A team of astronomers argues that the telltale dust could not have formed without planets, and they propose that stars surrounded by similar rings may be a good place to search for extrasolar planets.

    Dust beyond the orbit of Saturn was first detected in the 1970s by NASA's Pioneer 10 and Pioneer 11 spacecraft. But no one knew whether it came from inside or outside the solar system. One clue came from the realization that the dust must get replenished—otherwise, it would get sucked up by the sun or ejected from the solar system.

    Dusty disk.

    Seen from afar at infrared wavelengths, our solar system might resemble star HR4796A, which also sports a bright dust ring.


    Markus Landgraf of the European Space Agency (ESA) and colleagues suspected that colliding objects in the Kuiper Belt—a flat cloud of debris in the outer solar system probably left over from planet formation—might be kicking up the dust. Using measurements of interstellar dust recorded by detectors aboard ESA's Ulysses spacecraft, the team determined that the grains the Pioneers had observed were too coarse to have come from outside the solar system. The only possible source is the Kuiper Belt, according to computer simulations to be published in The Astrophysical Journal. Landgraf's team calculates that about 50 tons of dust are created each second inside the belt—enough to maintain a dust ring that should be bright at infrared wavelengths when seen from afar. Another key signal of planets should be a distinctive pattern of gaps and edges in the dust cloud, carved out by gravitational resonances with Jupiter and other giant planets.

    “It's a very interesting report,” says David Trilling of the University of Pennsylvania in Philadelphia. “Looking for gaps or structures in dust disks [around other stars] is a very compelling way to look for planets.” Rings of dust that emit infrared light have been discovered around a number of nearby stars, and Trilling's team has been searching for dust around more than 40 others. So far, though, no one has found stars that have both planets and a dust ring.


    Reforms Would Loosen Bonds, Cut Safety Net

    1. Dennis Normile

    TOKYO—Japanese academics appear set to win new freedoms that would allow closer collaborations with private companies and greater autonomy in spending research grants. But they may have to pay a steep price: an end to the security of jobs for life and, perhaps, stricter evaluations of the quality of their work.

    Last week, an advisory panel to the Ministry of Education, Culture, Sports, Science, and Technology recommended abolishing civil servant status for academics. The recommendation, expected to appear in a final report later this month, would grant administrators flexibility in hiring, including the option of putting staff on fixed-term contracts. If the change applies to current employees—now subject to legal debate—it could affect 60,000 faculty members and 58,000 staff at 98 national universities and 15 institutes.

    As civil servants, academics enjoy lifetime employment, and the vast majority of researchers remain at one institution for their entire careers. Reformers have argued that this leads to a stagnant scientific environment. “The biggest problem of the university system is the lack of mobility [among academics],” says Shinichi Nishikawa, a molecular geneticist at Kyoto University's Graduate School of Medicine, who serves on the advisory panel.

    The employment issue is the last major unresolved question in an ambitious effort to denationalize Japan's academic institutions. Moves along these lines grew out of efforts launched in the mid-1990s to slim the country's bureaucracy. The academic community at first resisted the reform push, seeing it primarily as an attempt to cut spending. But it has since warmed to the idea. Independence would in principle give universities a freer hand in structuring programs, setting research priorities, and handling budgets—functions now scrutinized by the education ministry (Science, 13 August 1999, p. 997).

    Although the die is cast for the overall direction of the reforms, the ministry and its advisory councils will begin hammering out the details of implementation only after the recommendations are finalized. One thorny question is whether the new rules will apply to current employees or only to those hired after the institutions become independent. The consensus of the ministry's labor subcommittee is that it would be best to put existing faculty members on fixed-term contracts and make them earn reappointment.

    It is unclear, however, whether the government has the legal right to alter the status of current employees. There are also practical matters to settle, including how to fund pensions for employees put on contract. “There could be a lot of problems during the transition phase,” says Nishikawa. Resolving such issues could take “2 years or 3 years or longer,” says Naokazu Odani, head of the ministry's university reform office. For many academics looking to break free of Japan's rigid bureaucracy, it may well be worth the wait.


    NASA Decision Not Suited for Women

    1. Andrew Lawler

    NASA has halted work on a $16 million program to develop a space suit designed for smaller women. The decision, which could make it harder for women to reach space, comes at a time when only one female astronaut is slated to fly aboard the international space station in the next 3 years. Together, these developments raise concerns among some engineers and researchers that biomedical data gathered aboard the orbiting lab will be skewed toward men.

    NASA officials blame budget pressures for the recent decision, saying they can't afford the $9 million needed to complete work on the new, smaller suit and that only a small percentage of women astronauts would be affected. But a recent internal report obtained by Science urged NASA to continue the program, arguing that a smaller suit would benefit some 20% of the astronaut corps—including smaller men. Critics of the decision, many of whom requested anonymity, also note that NASA in recent years has spent millions on an extra-large space suit and a redesigned Russian Soyuz capsule to accommodate bigger men. “The lack of a small suit precludes lots of women from flying,” says Yvonne Brill, a retired engineer and former member of NASA's Aerospace Safety Advisory Panel.

    Space suits are complex assemblages of arms, gloves, legs, and boots of varying length held together by a hard upper torso. And they are expensive, thanks to the custom design and batteries of tests required on the ground before flight. That upper torso now comes in three sizes: medium, large, and extra-large. They fit some 90% of all men but only about 60% of all women, who tend to have narrower chests and shorter arm spans, according to Paula Hay, deputy program manager at Hamilton Sundstrand, which builds the suits for NASA. The smaller size would have accommodated up to 95% of women, she adds. Almost one-third of the current corps of nearly 100 mission specialists—astronauts who are not pilots—are women.

    Until the late 1990s, NASA offered a different kind of space suit that came in five sizes, including small and extra-small. That model was gradually abandoned for one that allowed astronauts to perform more complex maneuvers, such as building the space station. The large and medium sizes first flew in 1998. The extra-large version, which fits about 20% of all astronauts, was added in 2001, and the smaller version was slated to debut in 2003.

    Well suited?

    Anchored to a robotic arm, astronaut Susan Helms works in orbit during a March 2001 shuttle mission. Only one other woman is scheduled for a space station flight through 2004.


    The current suit puts some women at a disadvantage in qualifying tests, because it often is not a good fit. To fly, astronauts must demonstrate their maneuverability in the Neutral Buoyancy Lab (NBL) at Houston's Johnson Space Center, a large swimming pool that mimics zero gravity. Women who performed well during tests using the old suit have had trouble managing the hard upper torso version, according to the internal NASA study, “and therefore cannot perform well in the NBL.” Low scores make it hard for astronauts to win coveted slots aboard space flights, says Brill. The small hard upper torso suit would have given smaller women a better fit.

    The issue goes beyond gender balance, says Judith Swain, chair of the medicine department at Stanford University in Palo Alto, California, and a member of the National Research Council's space biology panel. “There's a big advantage in flying smaller people,” she notes, because they consume fewer resources and take up less room on the cramped space station. “This decision seems a little bit shortsighted.” And a 1999 workshop report by the National Space Biomedical Research Institute in Houston strongly urged NASA to increase data-gathering on women, given the differing effects that microgravity likely has on the sexes. The report added that “a space 'glass ceiling' should not exist based on size or gender.”

    Even with a new suit, however, getting more women into space may be difficult. NASA managers are considering significant cuts to the astronaut corps as another budget-saving measure, a move that some agency officials say could further reduce the number of women astronauts who make it into orbit.

    Shannon Lucid, NASA's new chief scientist and a veteran astronaut, declined comment on both matters. Critics of the decision say they may seek out receptive members in Congress, who have repeatedly heard NASA officials describe how more data about women's health in space might lead to advances in solving Earth-bound problems such as osteoporosis. That audience, they note, includes Senator Barbara Mikulski (D-MD), chair of the panel that oversees NASA's budget. NASA officials have left themselves some room to maneuver, however, should the politics get hot. Agency spokesperson James Hartsfield called the decision a “deferral” rather than a cancellation. But with 2.5 more years of work needed to get the small suit ready, any delay decreases opportunities for women.


    CDC Head's Resignation Expands Leadership Void

    1. Martin Enserink

    Jeffrey Koplan, who guided the Centers for Disease Control and Prevention (CDC) in Atlanta through the country's first fatal bioterrorism attacks last fall, is stepping down on 31 March. Koplan unexpectedly announced his resignation last week, exacerbating the leadership vacuum at U.S. public health agencies. Three other top jobs are currently vacant, even as the nation struggles to face the continuing threat of bioterrorism.

    Koplan, who declined an interview request, did not give a reason for quitting, and in newspaper reports he denied that he was pressured to leave. Health and Human Services (HHS) Secretary Tommy Thompson called Koplan's departure a “loss,” adding that “I am going to miss [Koplan's] counsel, leadership, and dedication to public service.” But public health experts say there had been friction between Koplan and top HHS officials, including Thompson, in part over CDC's handling of the anthrax crisis.

    Help wanted.

    Koplan's departure leaves another top health job vacant.


    Some members of Congress and media outlets criticized Koplan last year for an apparent lack of control during the bioterrorism episode and for failing to communicate effectively with local public health experts and the public. “Koplan is a very knowledgeable and credible doc,” says Tara O'Toole, who heads the Center for Civilian Biodefense Strategies at Johns Hopkins University in Baltimore, Maryland; “the country would have been better off if it had seen more of him.” But O'Toole adds that it's unclear whether Koplan ducked the limelight on his own initiative or at the request of others in the Bush Administration. Eventually, National Institute of Allergy and Infectious Diseases (NIAID) director Anthony Fauci became the government's prime anthrax spokesperson.

    Koplan served CDC from 1972 to 1994 and took the centers' top job in October 1998 after a 4-year stint in the private sector. As a member of the agency's Epidemic Intelligence Service in the 1970s, he helped eradicate smallpox in Bangladesh, one of the scourge's last hideouts. In the early 1980s, he chaired the Public Health Service Executive Committee on AIDS. O'Toole and others credit Koplan for his steadfast push to improve public health infrastructure nationwide and his efforts to replace the agency's dilapidated facilities. “CDC has crummy old labs, and he did a superb job of getting a new building plan under way,” says C. J. Peters, a former head of CDC's special pathogens branch, who is now at the University of Texas Medical Branch in Galveston.

    Koplan's departure comes at a time when the Bush Administration is proposing to spend $5.9 billion next year to prepare for bioterrorism, some $1.6 billion of which would go to CDC. The National Institutes of Health, slated to receive $1.5 billion in bioterrorism funds, has lacked a director for 2 years. Fauci was long rumored to be the front-runner but is now out, according to media reports. Why the deal crumbled is unclear: Some attribute it to Fauci's wish to stay involved in NIAID; others say his candidacy was unpalatable to conservatives, who prefer an outspoken opponent of abortion and embryonic stem cell research.

    The Administration is also trying to fill the top slot at the Food and Drug Administration, as well as find a successor for Surgeon General David Satcher, whose term expired this month. Now that Koplan is leaving too, says O'Toole, “Tommy Thompson is truly home alone.”


    Battle Heats Up Over Mammography Benefits

    1. Eliot Marshall

    The top U.S. health official last week fired the most dramatic salvo to date in a long, drawn-out war over the benefits of mammography. But it is unlikely to be the last shot on the subject.

    On 21 February Tommy Thompson, secretary of the Department of Health and Human Services (HHS), released a report from an outside group saying that all women over 40 should get breast x-rays at least once every 2 years. This conclusion, published on the HHS Web site last week,* is at odds with some other biostatistical studies that have found little support for screening women in their 40s. Thompson buttressed the report with a personal view: Mammography saved his wife from cancer, he said, adding that “all of you in this audience [should] take these recommendations to heart.”

    The recommendation that mammography should begin at age 40 comes from the U.S. Preventive Services Task Force, an independent panel of health care experts that advises HHS. After examining published reports over a 2-year period, the task force concluded in January that there is “fair” evidence that mammography for women in their 40s “significantly reduces mortality from breast cancer.” Janet Allan, dean of the school of nursing at the University of Texas Health Science Center in San Antonio and a co-chair of the panel, appeared at the HHS press conference with Thompson to defend this finding. The risks and benefits of mammography have become clearer since the panel examined this issue in 1996, she said. Back then, the task force had found “insufficient evidence” to support routine mammography under age 50.

    Peter Greenwald, a National Cancer Institute (NCI) official in charge of cancer prevention, used the press conference to criticize a widely cited analysis questioning the value of mammography. The paper, which appeared last October in The Lancet, rejected the methodology in five of seven large studies that have been cited as proving the value of mammography. The authors, Peter Gøtzsche and Ole Olsen of the Nordic Cochrane Center in Copenhagen, Denmark, a biostatistics group, said even the two studies that are reliable fail to show that the benefits outweigh the risks. The false positives that turn up in x-ray testing lead to anxiety and unnecessary surgery, according to the Lancet paper, which argued against the routine use of mammography in cancer screening.

    Screening supporters.

    HHS Secretary Tommy Thompson and advisory panel co-chair Janet Allan.


    The skeptics got another boost in January, according to biostatistician Donald Berry of Houston's M. D. Anderson Cancer Center, when another advisory group began taking a serious look at the Gøtzsche-Olsen analysis. The panel, which reviews medical literature for NCI's online information service known as the Physician Data Query, noted that the benefits claimed for routine screening with breast x-rays are small in public health terms, about 4 days of added survival per woman, says Berry, a longtime skeptic. “We found a lack of credibility” in many of the studies that claimed to find such benefits for women under age 50, Berry added.

    The panel's concerns were written up in The New York Times, raising the volume on a debate that has raged for at least 5 years, ever since a “consensus conference” in 1997 sponsored by the National Institutes of Health ruled that the evidence did not support routine mammography for younger women. That ruling brought down the wrath of the U.S. Senate, which issued a resolution favoring mammography by a vote of 98 to 0. Observers say that Thompson's very public endorsement of mammography, including the release of the task force's report on an accelerated schedule, was intended to blunt this latest attack.

    Larry Norton, current president of the American Society of Clinical Oncology and a researcher at Memorial Sloan-Kettering Cancer Center in New York City, rejects the Gøtzsche-Olsen analysis, dismissing it as a scholarly debate about “30-year-old studies and 30-year-old therapies.” But he agrees that the controversy is far from over. Norton says that patients are getting far better diagnosis and treatment now and that mammography can produce a 25% to 30% reduction in mortality. The whole topic, he says, deserves yet another, more impartial, review.


    Clear-Cut Publication Rules Prove Elusive

    1. Eliot Marshall

    A select group of scientists and journal editors met last week at the National Academy of Sciences in Washington, D.C., to chisel out some commandments for their peers on the ethics of publishing. Organizers hoped that the 25 February session would produce clear and simple rules compelling scientists to share data. But the participants clashed on what it means to insist that an author make “freely available” the data backing a published claim—reviving an argument that wracked the human genome community a year ago. After drafting a few broad “thou shalt” phrases, participants failed to agree on how these rules should be enforced. The leader of the session—Thomas Cech, president of the Howard Hughes Medical Institute in Chevy Chase, Maryland—promised that an academy panel will fill in the details later.

    Prepping the audience, Eric Lander of the Whitehead Genome Center at the Massachusetts Institute of Technology began the day with a talk on historical context. The rules being considered by this meeting, he said, were established by the Royal Society in London in 1665 when it began publishing its scientific proceedings. The society offered a simple bargain, according to Lander: Anyone claiming to be an inventor could get the society's imprimatur—as long as the claimant published a detailed description of the discovery. Before this, scientists had often protected their work through concealment, Lander said; but, thanks to the society's bargain, they could achieve honor through disclosure. Lander proposed an updated set of rules, a “uniform policy on access to data and materials” (UPADAM), which he pronounced “up 'n' at 'em.” The basic idea is that if you choose to publish a claim, you must release all the “integral data” supporting it, as determined by editors and peer reviewers.

    One code?

    Lander (bottom) proposed a uniform policy; Patrinos (top) argued for flexibility.


    Lander acknowledged a personal stake in this cause. As the principal author of the draft version of the human genome sequence published in Nature last year, he strongly disapproves of the way a commercial group—Celera Genomics Inc. in Rockville, Maryland—was allowed to publish a rival paper at the same time in Science (16 February 2001, p. 1304). Unlike Lander's group, Celera did not release supporting data through a government-funded repository, GenBank. Instead, Celera allowed readers to view data at a Web site the company controls. Lander said Science made “a mistake” and did “a disservice” in agreeing to this form of data release. He asked the academy group to reject what he called “partial data release.” Some academic researchers, including Marc Kirschner, cell biology chair at Harvard Medical School in Boston, endorsed this view.

    But several others disagreed. The most outspoken dissenter was Ari Patrinos, director of biological and environmental research at the Department of Energy (DOE). DOE pioneered the Human Genome Project, although the bulk of support has come from the U.S. National Human Genome Research Institute (NHGRI) and the Wellcome Trust, a British charity. Patrinos, describing himself as “normally an optimist,” said, “I am extremely pessimistic about the outcome of this discussion.” It would be “a mistake,” he argued, to adopt a simple rule forcing authors to choose between releasing control of all their data at publication or not publishing. He thinks that enforcing such a rule would silence some would-be authors in the private sector.

    Patrinos urged people to “recognize the importance of the emerging biotechnology industry” and avoid adopting a set of “feel-good” data-release policies that suit mainly academics. This could cut the academic world off from some of the most exciting research being done now, he said. Patrinos argued instead for a “trench-by-trench” campaign, accommodating the rules of publishing to the circumstances of the author. Noting that private investment in research is increasing, Patrinos also warned that agencies such as DOE and NHGRI may have less clout than before: “Our hands may be more tied than in the past,” making it difficult “to enforce the rules you would like us to enforce.”

    Francis Collins, director of NHGRI, found these comments “puzzling.” He said that recently there has been “a blurring” of the rules on data release. “It is hard for me to see how we can step away from” an effort to “nail down” the basic principles and decide how they should be enforced, Collins said. And he argued that Patrinos's trench-by-trench approach would lead to a series of exceptions.

    Although the working session did not reach a consensus on who should be the primary enforcer of standards, Cech summed up a few principles he hoped all could agree on. The draft summary states that authors have a responsibility to “undertake reasonable efforts to make data and materials integral to a publication available in a manner that enables replication and further science.” Specifically, if authors claim to have created a large database, “the entire database must be available,” and in every case, they must make available “enough [data] to support the paper's conclusion.”

    Cech said he and his panel aim to wrap up a report on this project within “a few months.” Meanwhile, he said, the National Institutes of Health is planning to release its own updated set of data release guidelines—along with new grant support to help defray the cost of sharing materials—possibly as soon as next week.


    Forest Biotech Edges Out of the Lab

    1. Charles C. Mann,
    2. Mark L. Plummer*
    1. Contributing correspondent Charles C. Mann and Mark L. Plummer of Washington state write regularly for Science.

    New, high-intensity tree plantations are setting the stage for rapid biotechnological change in forestry. But the novel methods may never be used if the ecological risks and economic obstacles cannot be overcome

    BOARDMAN, OREGON—“Warning,” says the sign on the interstate. “Blowing Dust Area Next 45 Miles.”

    The drylands of northeast Oregon, an almost treeless region with an annual rainfall of just 20 cm, are among the last places one would expect to see the future of forestry. But just outside Boardman, next to a Navy bombing range, sits a harbinger of things to come: 7200 hectares of cloned hybrid poplars, planted in square blocks 400 meters to a side. Grown by Potlatch, a Spokane, Washington-based forest-products company, the trees receive fertilizer, pest treatments, and water from a computer-controlled “fertigation” system that pumps water from the Columbia River, 8 km away, through 24,000 km of plastic pipes that crisscross the plantation. “We control what the trees get almost as precisely as if they were on a petri dish in a lab,” says research manager Jake Eaton. In this way, he says, Potlatch can grow 20-meter trees in just 6 years, achieving wood production rates 10 times the global average.

    Yet these huge, mechanized plantations, which are now sprouting in countries from New Zealand to Brazil, are just the beginning, say many forestry researchers. The next, and far more controversial, step in forest biotech will be to stock these high-intensity plantations with genetically altered trees that scientists say will grow faster, require fewer chemicals to pulp, or have wood with special properties. Already, researchers have inserted genes for traits such as pesticide resistance, herbicide tolerance, and delayed flowering into several types of trees, and the U.S. Department of Agriculture has received applications to field-test 138 types of transformed trees, 52 of them in the last 2 years.

    The future?

    At its high-tech plantation in Boardman, Oregon, Potlatch carefully controls agricultural inputs to create poplars that grow at 10 times the global average rate.

    Farther down the road, biotech supporters imagine extraordinarily fast-growing trees that can not only reduce the pressure on natural forests but help combat climate change as well. The ultimate goal, says botanist Toby Bradshaw of the University of Washington, Seattle, is to redesign trees altogether, creating superproductive organisms that in many ways will not resemble today's trees at all. Not only will the forest-products industry gain but so will the environment, says Eaton, who calls forest biotech “win-win.”

    Not everyone embraces this high-tech, bioengineered vision, however. Some forestry research leaders—notably Weyerhaeuser in Federal Way, Washington—have decided not to pursue some of the most advanced techniques, especially genetic engineering. Forest-biotech research may well pay off in the long term, these companies believe, but the short-term scientific, economic, and political hurdles are so high that they cannot justify embracing it all.

    And even some scientists who endorse superintense tree plantations worry about ecological risks of genetically engineering the trees in them. Introduced traits, they argue, could have unintended consequences if transferred to natural trees. And outside the research community, activists have already vandalized research plots and burned down a laboratory in an effort to rid the world of “Frankentrees” (Science, 6 April 2001, p. 34). In November, police found two bombs outside a forestry lab in Michigan. So intense is the opposition, in fact, that even some of forest biotech's strongest scientific supporters acknowledge that their research may not make it out of the lab for years, or even decades.

    “Rearchitecting” trees

    On a table in Steve Strauss's laboratory at Oregon State University in Corvallis, a leading center of arboreal genetic engineering, sits a key piece of high-tech equipment: an office hole punch. Strauss's co-workers use it to clip round, pencil-eraser-sized pieces from the leaves of a quaking aspen. They then drop the green circles into a broth thick with Agrobacterium tumefaciens, a common garden microorganism that inserts part of its DNA into host plants, causing tumorlike galls. Strauss's team has endowed the bacterium with genes for antibiotic resistance and delayed flowering, in the hope that it will insert those genes into the aspen DNA contained in those bits of leaf.

    After exposing the leaf circles to the bacteria for 48 hours, Strauss's team dips them into an antibiotic solution that kills all the leaf cells except those that took up the antibiotic-resistance gene. In petri dishes, the transformed leaf cells grow into tiny sprouts that eventually become large enough to pot. Researchers then evaluate the emerging trees to see whether the other introduced gene, for delayed flowering, is also being expressed. (Agrobacteria insert their genes randomly into the leaf-cell DNA, and the location of the foreign genes in the genome affects their function.)

    By delaying flowering past the time of harvest, Strauss hopes to reduce the likelihood that genetically modified trees will pollinate their wild relatives, an ecological safeguard he believes is essential. Not until researchers can limit the likelihood that novel genes with new properties will spread into natural forests, he argues, will industry be able to introduce transgenic trees into plantations safely.

    Another major research target for forest biotech is lignin. The compound that makes tree cells stiff, lignin is desirable for sawtimber but not paper. Removing it costs the pulp and paper industry $20 billion a year, according to Jonathan Malkin of ATP Capital, a biotech investment firm that backs high-tech forestry start-ups. In July, researchers at Michigan Technological University in Houghton announced the discovery of the gene responsible for producing syringyl lignin, the type of lignin in hardwood trees; the next step, they say, is to turn down the gene's expression and, they hope, create low-lignin trees. At North Carolina State University in Raleigh, researchers have discovered a natural mutation that lowers the amount of lignin in loblolly pine. Because the mutant gene eventually harms the tree, North Carolina State botanist Ronald Sederoff and his team are trying to create heterozygous loblolly pines that grow normally but are more easily pulped.

    The University of Washington's Bradshaw has a far grander goal: what he calls the “rearchitecting” of trees. “What a tree wants to do is grow its trunk as thin as possible and devote as many resources as possible to leaves and seed,” he says. “What [foresters] want are as much wood as possible and as little leaves and flowering as possible.” Most trees have about one-third of their biomass tied up in their root systems, a percentage foresters would like to lower. Trees today can't be packed too closely in farms because they respond to crowding by reaching for light, resulting in taller, thinner, and therefore less desirable trunks. Biotechnology, Bradshaw suggests, offers the possibility to “create the tree we want.”

    Bit by bit.

    Using an office hole punch, researchers clip pieces of aspen leaves, insert foreign genes in culture, and plate the discs to produce transgenic seedlings.


    In a scenario that is widely believed to be distant but feasible, scientists would create genetically modified trees for tomorrow's intense plantations: short, wide, almost branchless organisms without extensive root systems that could withstand crowding. These supertrees wouldn't “look anything like trees today,” Bradshaw says, “any more than today's corn looks like its ancestor.” But, as he acknowledges, turning this dream into reality will require leaping over high scientific, economic, ecological, and political hurdles.

    Scientific challenges

    There are three overarching scientific barriers to bioengineering trees. First, trees are so different from the annual plants used in most biotech research that scientists may have little ability to use better known genomes as guides. Second, researchers have been unable to propagate most trees clonally, an essential step in reliably disseminating new strains. (An exception is the genus Populus, whose members—aspens, poplars, and cottonwoods—can be easily cloned and have long been favored by researchers for that reason.) Finally, even if breeders produce trees with the desired genetic makeup, the effects would take years to evaluate, unlike wheat or maize, which can be tested in a few months. Recent work suggests that researchers are close to solving the first two difficulties; the third may be overcome with more funding and experience.

    In conventional biotech, researchers studying little-known species can work with genes from better understood species, either by transplanting them directly or using them as guides to search for equivalent genes, or homologs, in the new species. The delayed-flowering gene in Strauss's lab, for example, comes from Arabidopsis thaliana, the model plant for molecular biology. But such techniques could be limited in silviculture. For one, long-lived trees are so unlike annuals such as Arabidopsis that scientists don't know whether their most important genes will be readily identifiable homologs. “A tree is essentially a mountain of poisons,” Strauss notes. “Trees have to sit out there for a couple of decades and not get eaten. Arabidopsis doesn't, so there's no obvious reason why its genetic makeup should be comparable.”

    If researchers cannot rely on homologs, they will have to sequence and evaluate tree genomes, not an easy task. Many commercially important trees have unusually big genomes: Pines, with 20 billion base pairs of DNA, have a genome seven times as large as that of humans. Yet pines “are not expected to have any more genes than Arabidopsis,” says Bradshaw. “Their genome is probably full of junk. But that doesn't make sequencing it less of a chore.”

    Nonetheless, genome projects are under way, focusing on spruce (a project based in Canada) and radiata pine (a commercial effort run by Genesis, a New Zealand biotech firm). In 1999, Sederoff received almost $4.5 million from the U.S. National Science Foundation to begin sequencing loblolly pine, the most important plantation tree in the southern United States. And last month the U.S. Department of Energy (DOE) launched a fast-track program to sequence a member of the Populus genus. Bradshaw is already growing cuttings from a black cottonwood from southern Puget Sound to send to sequencing laboratories. (At 550 million base pairs, the cottonwood's genome is of a manageable size.) The project, based at DOE's Joint Genome Institute in Walnut Creek, California, but run by an ad hoc group led by Bradshaw, should complete its first pass—sequencing each gene an average of three times to reduce the chance of error—by fall 2003.

    If researchers do succeed in introducing a new trait into a tree, propagating it poses the next challenge. The ideal strategy, forest researchers agree, would be clonal propagation: the process used by breeders of annual plants when, for example, they grow violets from cuttings. But because almost all conifers and many hardwoods cannot readily reproduce in this way, scientists are investigating a process known as “somatic embryogenesis”: in essence, inducing cells in nonreproductive tissues such as leaves or roots to grow embryos. Like clonal propagation, the process does not involve fertilization, so there is no risk of pollination by wild trees—which occurs frequently in conventional plantations—and the resulting embryos will be clones of the tree that produced them.

    Typically, somatic embryogenesis involves knocking adult cells back to a juvenile state in which they are less firmly set on their course—often by exposing adult tree cells to dilute solutions of herbicides, especially 2,4-dichlorophenoxyacetic acid, says Scott Merkle, a tree geneticist at the University of Georgia in Athens. In the usual dosage, such herbicides “kind of stimulate plants to death,” Merkle says. “At 2 parts per million, which is what we use, it simply stimulates them”—unlocking a previously hidden potential to create new clones. Creating the clone embryos is often relatively straightforward, Merkle says, “but getting them to germinate properly and make a somatic seedling is a problem, because it's difficult to get an embryo in culture to grow anywhere near in size to an actual seed embryo.”

    Nevertheless, using techniques developed by biologist Stephen Attree, CellFor, a forestry start-up in Vancouver, British Columbia, says it has mastered somatic embryogenesis for some of the most commercially important softwoods. (Attree did the work at the University of Saskatchewan and is now CellFor's chief of research.) According to CellFor president Christopher Worthy, next year the company will produce 8 million to 10 million embryos and deliver to forest-products companies 3 million seedlings of loblolly pine, Douglas fir, radiata pine, and spruce; ultimately it intends to scale up to more than 250 million seedlings a year.

    Even if propagation can be achieved, researchers will still face the inherent difficulty of evaluating the results of forest-biotech experiments. “If you insert a new gene into a tree,” explains William Baughman, forest-research manager of the MeadWestvaco timber company in Stamford, Connecticut, “you have to grow that tree long enough to show that after a generation or so the only change that occurred is what you expected and that it has not mutated into something strange.” Because trees may not mature for years or even decades, testing is costly and slow.

    Examining faster growing species may at least help speed early research. Sederoff notes, for example, that Simcha Lev-Yadun, a plant geneticist at the University of Haifa, Israel, has discovered that “if you prune Arabidopsis in the right way and raise it in the right conditions, it grows to 10 times its normal size and makes woody stems.” Arabidopsis may therefore provide some clues to the genetics of wood formation—and even, perhaps, the role of lignin.

    Ecological and economic questions

    If the technical hurdles for bioengineered trees can be overcome, the potential ecological payoffs could be enormous. So could the risks. According to the U.N. Food and Agriculture Organization, world demand for wood products in 2010 will be about 1.9 billion cubic meters, almost 20% higher than it is now. To meet that demand without laying waste to the world's remaining forests, economist Roger Sedjo of Resources for the Future in Washington, D.C., and ecologist Daniel Botkin of George Mason University in Fairfax, Virginia, suggested in a widely read 1997 paper that forest-products companies devote small areas “to intensive timber production and large areas to other uses, including biological conservation.” This, they said, could drastically reduce the pressure on natural forests.

    Tree doctor.

    If transgenic trees can be designed to be sterile, says Steven Strauss, they will pose fewer ecological risks.


    And if logging were almost entirely confined to high-intensity plantations, speculates economist David G. Victor, director of Stanford University's Program on Energy and Sustainable Development, the tropical forests that now release carbon as they are cleared might instead become a carbon sink. Simply using techniques such as somatic embryogenesis to put the best, fastest growing lines of conifers in the field, says Baughman of MeadWestvaco, “would let companies use a third less land to grow the same amount of wood. For a company like International Paper [the largest private forestland owner in the United States], that's 3.5 million acres [1.4 million hectares] you don't have to cut. And that's without transgenics. Add in transgenics, and you're talking about completely transforming the industry.”

    Don Doering, a senior associate at the World Resources Institute (WRI), a think tank in Washington, D.C., is not convinced. “A transgenic pine in Georgia will no more save the forests of Indonesia than an improved soybean grown in Iowa benefits the food-insecure peoples of Africa and Asia,” he said at a forest-biotechnology conference last summer. Even researchers such as Botkin who favor intensive plantations have strong reservations about transgenic trees. He likens the environmental dangers to introducing exotic species into an ecosystem—a practice that has produced “good-willed disasters.” Plenty of benefits can be achieved without genetic modification, he insists. Potlatch-style plantations, he says, “have side effects that are better understood and less of a risk. … Why not do the simple thing first?”

    Strauss thinks that his and others' work on producing sterile trees can reduce the likelihood of gene flow from genetically altered trees to their wild relatives. He also notes that rearchitected supertrees will have traits—short stature, small branches—that make them unlikely to survive outside carefully controlled tree farms.

    Aside from safety concerns, the basic economics of forestry will make costly research programs such as tree genetic engineering a tough call. “When you have to wait 20 to 30 years to get payback,” says Todd Jones, director of forest biotechnology at Weyerhaeuser, “you have to have something that looks like it's going to have some real economic potential. If we look at economic models for some of the genes that do appear to be out there, there aren't that many that make that hurdle.” Take herbicide resistance. Applying herbicides “is not that large of an expense” in the forest industry, Jones says.

    Competition from conventional tree breeding poses another economic barrier. Because most breeding programs are now in only their second or third generations, traditional methods can still yield sizable gains. The approach may not be cutting-edge, but its more predictable returns make it attractive to a fiscally conservative industry. Finally, uncertainty over how bioengineered trees will be regulated adds to their economic risk. For ordinary crops that have been genetically engineered, running this regulatory gauntlet can cost years and “millions of dollars,” says Nancy Bryson, a Washington, D.C., attorney who works on biotech regulation issues. The rules for trees are just beginning to evolve, she points out, and companies can't predict how burdensome they are likely to be.

    All dressed up with no place to go

    To WRI's Doering, the slow emergence of forest biotech has a positive side. Unlike transgenic crops, which were deployed in a frenzy, “there's a real chance of getting [tree engineering] right,” he says. “There isn't overwhelming pressure, everyone can be cautious, and no one's going to make a fast buck on this. Society has the chance to make some good choices.” He suggests that the forest-products industry demonstrate biotech's societal benefits rather than concentrating on economic gain. Genetically transforming the American chestnut to confer resistance to the blight that has ravaged this beloved tree in the eastern United States, he says, would be something that “speaks directly” to the public (see sidebar).

    Potlatch, though, is moving away from genetic engineering, a decision that highlights forest biotech's uncertain future. In 2000, the company decided to seek certification of its environmental practices from the Forest Stewardship Council, a nonprofit organization that issues a kind of ecological Good Housekeeping seal to qualified timber companies. Potlatch's intensive, high-technology tree farm passed muster with the council last summer, but with an important condition: It had to remove any genetically modified organisms from its Boardman plantation—a decision that permanently shut down a 1.2-hectare plot that the company was hosting as part of Strauss's research.

    Potlatch still supports Strauss's work at Oregon State University, says Eaton: “We just can't do it on our farm.”


    Can Genetic Engineering Help Restore 'Heritage' Trees?

    1. Charles C. Mann,
    2. Mark L. Plummer*
    1. Contributing correspondent Charles C. Mann and Mark L. Plummer of Washington state write regularly for Science.

    In the summer of 1904 Hermann W. Merkel, a forester at the New York Zoological Park, noticed peculiar cankers on the stately chestnut trees that lined the zoo's pathways. The cankers—caused by the Asian fungus Cryphonectria parasitica—soon circled the trunks completely, killing the trees. Initially, Merkel's report was treated as a curiosity. But the fungus spread with astonishing speed. By the end of World War I, the American chestnut, which once dominated many eastern forests, was fast approaching oblivion.

    Now, forest-biotech researchers believe genetic engineering might help restore this majestic species—and possibly other “heritage trees” menaced by disease, including elms, white pine, butternut, and several species of California oak. So promising are the new techniques that researchers from academia, industry, government, and private foundations are forming a coalition to bring back these species, starting with the American chestnut. If the effort pays off, it would put an end to decades of scientific frustration and, its backers hope, some of the negative aura of genetic engineering (see main text).

    Since 1983 the American Chestnut Foundation has been trying to restore the species using conventional breeding. It has been crossing American chestnuts (Castanea dentata) with blight-resistant Chinese chestnuts (Castanea mollissima), then repeatedly “back-crossing” hybrids that showed resistance, to obtain resistant trees that look like pure American chestnuts. Under the best of circumstances, back-crossing takes decades, and the end product would still have many unwanted Asian genes. But the problem has proven even harder to solve than the foundation initially anticipated.

    Blight resistance in the Chinese chestnut is largely due to three genes located on widely separated portions of the plant's genome. Because the genes are inherited independently, the only way to pass on the trait is to mate resistant hybrids with other resistant hybrids, and that entails creating many resistant hybrid lines—“really a difficult proposition,” says William Powell of the State University of New York (SUNY) College of Environmental Science and Forestry in Syracuse.
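    Rough numbers show why stacking three unlinked genes by breeding is so hard. A toy Mendelian sketch (my own illustration, with assumptions the article doesn't spell out: resistance requires at least one resistance allele at each of three independently inherited loci, and both parents are heterozygous at all three):

    ```python
    # Toy Mendelian model (illustrative assumptions, not from the article):
    # resistance needs at least one 'R' allele at each of three
    # independently inherited loci; both parents are Rr at every locus.

    def offspring_fraction_resistant(n_loci=3):
        # An Rr x Rr cross yields RR, Rr, rR, rr with equal probability,
        # so 3 of 4 offspring genotypes carry at least one R allele.
        per_locus = 3 / 4
        return per_locus ** n_loci

    print(offspring_fraction_resistant())  # 27/64, about 0.42
    ```

    Under these assumptions, fewer than half the offspring of even two fully resistant hybrids keep resistance at all three loci, which is why many separate resistant lines must be maintained.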

    Raised from the dead?

    Using genetic manipulation, scientists hope to restore the American chestnut, which once dominated eastern forests.


    To several researchers, including Powell and Charles Maynard of SUNY and Scott Merkle of the University of Georgia in Athens, genetic engineering offers a clear shortcut. But it, too, has proven tough. “The chestnut hates genetic manipulation,” says Maynard. The tree is so difficult to propagate in culture, he jokes, that “it's as if it wants to go extinct.” Indeed, scientists spent a decade devising a reliable method for propagating them in the field, a crucial first step.

    The researchers are now looking for genes with antifungal properties. A leading candidate, say Powell and Maynard, is OXY, a wheat gene that encodes oxalate oxidase. Oxalate oxidase breaks down oxalic acid, the compound exuded by Cryphonectria parasitica to kill cells. By splicing in OXY, Powell and Maynard hope to endow chestnut cells with a weapon to fight back.

    Powell, Maynard, and Merkle may soon get some much-needed help. Last November, a diverse group of academic, government, and private chestnut researchers* met at the North Carolina Biotechnology Center's Institute of Forest Biotechnology in Research Triangle Park to form a coalition to bring back the American chestnut and other heritage trees. According to institute head Edward Makowski, the parties are still working out the best legal structure for the group, which could license some patented genes from its corporate members. He hopes to resolve these issues “within the next 30 to 90 days.”

    But even if the coalition can design a resistant chestnut, the problem will not necessarily be solved, according to Roger Sedjo, an economist at Resources for the Future in Washington, D.C. The ecological niche formerly occupied by American chestnuts “was filled largely by oak trees,” Sedjo notes. “Part of the question is, 'Could the American chestnut reestablish itself on a wide-scale basis?' Once it's been displaced, it might not get back in there” without major effort. Although he acknowledges these obstacles, Makowski notes that “the loss of the chestnut was an enormous ecological disaster. I can't imagine anything more exciting than the chance to reverse it.”

    • *Participants included the American Chestnut Foundation, the U.S. Forest Service, the American Lands Alliance, the forest-biotech firms Arborgen and Mendel Biotechnology, and academic researchers such as Maynard and Ronald Sederoff of North Carolina State University.


    New State of Matter Not So New?

    1. Adrian Cho*
    1. Adrian Cho is a freelance writer in Boone, North Carolina.

    BOSTON—Featuring its usual breadth, the annual meeting of the American Association for the Advancement of Science (publisher of Science), held 14 to 19 February, included reviews of recent physics breakthroughs and new findings in anthropology. Other stories can be found in last week's issue (see “Human Gene Count on the Rise” and “A Whale of a Chain Reaction”).

    The 2001 Nobel Prize in physics honored three researchers for coaxing hordes of atoms into a single quantum state. But the bizarre phenomenon known as Bose-Einstein condensation was actually spotted many years before the prize-winning work, says one of the new laureates. Other scientists disagree.

    Bose-Einstein condensation occurs when certain types of particles are cooled to near absolute zero and suddenly collapse en masse into the single quantum state with the least energy and no momentum. Predicted in 1924, such condensation was achieved in 1995 by Eric Cornell and Carl Wieman of JILA, a laboratory run by the National Institute of Standards and Technology and the University of Colorado, Boulder, and independently by Wolfgang Ketterle of the Massachusetts Institute of Technology. The researchers used lasers and magnetic fields to cool gases of atoms such as rubidium and sodium, and last year, the three received the Nobel Prize for their efforts (Science, 19 October 2001, p. 503).
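    How close to absolute zero is "near"? The article doesn't say, but the textbook formula for the condensation temperature of an ideal Bose gas gives a feel for the scale (the formula and the density below are standard illustrative values, not figures from the article):

    ```python
    import math

    # Textbook ideal-Bose-gas critical temperature (standard result,
    # not taken from the article); the density is an illustrative
    # value typical of trapped atomic gases.
    HBAR = 1.054571817e-34   # reduced Planck constant, J*s
    KB = 1.380649e-23        # Boltzmann constant, J/K
    ZETA_3_2 = 2.6124        # Riemann zeta(3/2)

    def bec_critical_temperature(mass_kg, density_per_m3):
        """T_c = (2*pi*hbar^2 / (m*k_B)) * (n / zeta(3/2))^(2/3)."""
        return (2 * math.pi * HBAR**2 / (mass_kg * KB)) * \
               (density_per_m3 / ZETA_3_2) ** (2 / 3)

    # Rubidium-87 at a density of ~1e20 atoms per cubic meter
    m_rb87 = 87 * 1.66053907e-27  # kg
    tc = bec_critical_temperature(m_rb87, 1e20)
    print(f"T_c is roughly {tc * 1e9:.0f} nanokelvin")
    ```

    At these densities the transition sits at a few hundred nanokelvin, which is why the 1995 experiments needed both laser and evaporative cooling.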

    But Bose-Einstein condensation was seen decades ago in liquid helium, Ketterle mentioned during a presentation at the meeting, acknowledging a controversial claim by John Reppy of Cornell University in Ithaca, New York. Although helium-4 becomes a superfluid and flows without resistance at low temperatures, physicists generally agree that superfluid helium-4 is not a Bose-Einstein condensate (BEC) in the original sense of the term. That's because its atoms interact much more strongly than theory assumes BEC particles do.

    Reppy studied an exception: tiny amounts of helium trapped in nanometer-sized pores of a spongelike glass called Vycor. Even though the pores keep its atoms too far apart to jostle one another much, the helium still behaves like a three-dimensional fluid. In 1983 Reppy and colleagues reported results that suggested that the helium was sloshing through the glass as a true BEC.


    Some claim that the newest state of matter was seen long ago in superfluid liquid helium.


    Ketterle says that Reppy brought this finding to his attention a couple of years ago and that the priority claim is fair. “I think the results appeared conclusive,” Ketterle says. But co-laureate Wieman says that Reppy's claim is “really a stretch” and that Ketterle may have acknowledged the helium experiments to appease Reppy. “Ketterle is being gracious,” Wieman says, “and Reppy makes a lot of noise.”

    Other experts are also divided. Jason Ho, a theorist at Ohio State University, Columbus, who has studied both liquid helium and atomic BECs, says Reppy certainly saw signals consistent with Bose-Einstein condensation, although he hesitates to call the observation conclusive. Guenter Ahlers, a helium physicist at the University of California, Santa Barbara, has no doubts. “Reppy had it before the atomic physicists,” Ahlers says, “and he never got the credit he deserves.”

    On this much all agree: Ketterle, Cornell, and Wieman have opened a spectacular new field of physics. BECs in atomic gases have already been used to fashion rudimentary “atom lasers” and to stop light in its tracks. The BEC in helium in Vycor—if it's there—remains trapped within the glass.


    Elbow Room and New Jewelry

    1. Ben Shouse*
    1. Ben Shouse, a former Science intern, is an intern at The Nation.

    Humans living in the eastern Mediterranean first began wearing beads just when their populations apparently began to expand. The finding, reported here 15 February, suggests that modern behaviors such as dolling up with jewelry may have originated from a need to communicate rather than a fundamental change in the human brain.

    Humans started wearing beads about 42,000 years ago, according to work on archaeological sites in Bulgaria and Kenya. New excavations have now uncovered evidence of a third independent invention of ornamental beads. Pierced seashells started to appear in Turkey and Lebanon between 41,000 and 43,000 years ago, reported Mary Stiner of the University of Arizona in Tucson. At the same time, Stiner and her colleagues found, the locals started eating fewer tortoises and mollusks and more hares, as evidenced by the bones found at human settlements. To Stiner, the fact that hunters were forced to seek swifter quarry implies that the human population was growing. (The tortoise may have won one highly publicized contest, but the hare won the race to escape the hungry caveman.)

    Conversation piece.

    A need to communicate may have led to more seashell ornaments.


    This population growth may have prodded an explosion of bead use as a means of more efficient communication, Stiner says. Although other scholars suggest that ornamentation and other “modern” use of symbols was prompted by a so-called cognitive revolution in brain wiring, decorations may be a response to more frequent social encounters and the need to convey more information about oneself to strangers. “Rather than saying it's a new brand of human being, we're saying it's a new rate of social contact,” she says. Furthermore, a sudden biological change most likely would have occurred in only one place, Stiner says, whereas ornamentation arose independently in at least three.

    The theory is plausible and intriguing, says Frank Hole, an archaeologist at Yale University, but he notes that the finding is only a correlation. “There are other cases where people are living close together and not doing anything like this,” he says.


    Viral Threat to Newborns Under Radar

    1. Eliot Marshall

    A common herpesvirus that infects healthy adults but rarely harms them—cytomegalovirus (CMV)—is now being recognized as a major threat to newborns. More than 40,000 children in the United States are infected with CMV at birth each year, and as many as 8000 of them suffer major consequences, including hearing loss and reduced IQ, says clinical researcher Richard Whitley of the University of Alabama, Birmingham.

    Speaking here at the meeting, Whitley estimated the national economic loss due to CMV at more than $1 billion a year. Whitley says this makes it one of the greatest health risks for U.S. children—comparable to infectious agents such as Haemophilus influenzae that have received far more attention in the past.


    Although cytomegalovirus can harm children, industry research on it is declining.


    CMV infection, which often coexists with HIV, attracted the attention of drug companies in the 1990s, when HIV infection rates were climbing and pharmaceutical R&D investment in the area rose accordingly. But as more effective combination drug therapies began to reduce HIV disease, opportunistic CMV infections fell with it; records show a sharp drop-off in CMV infections of the eye (CMV retinitis) in adults starting in 1997. Since then, Whitley and other speakers said, drug companies have been cutting back on support for anti-CMV drugs because the potential market has declined.

    The drugs now available for treating CMV, researchers noted, are far from ideal. They cause life-threatening side effects and must be injected intravenously (or in the case of CMV retinitis, directly into the eye). Whitley and several other speakers—including Leroy Townsend of the University of Michigan, Ann Arbor, and Karen Biron of GlaxoSmithKline—described new compounds they are helping develop that might eventually be used as oral medicines to combat CMV. None has yet been approved by the Food and Drug Administration.


    'Earth Simulator' Puts Japan on the Cutting Edge

    1. Dennis Normile

    Researchers are eagerly awaiting the debut next month of the world's fastest computer for modeling Earth's climate and interior

    YOKOHAMA, JAPAN—Will rising levels of greenhouse gases make northern Europe as cold as Alaska? Current climate models disagree sharply about how a slight rise in atmospheric temperatures could affect the North Atlantic currents that carry warm water from the Caribbean and keep Europe's winters relatively mild. That's because the models' picture of climate is too crude to make accurate predictions based on such fluctuations. But next month, Japanese scientists will start to operate a $310 million supercomputer that can digest a much more detailed model and, researchers hope, spit out a better answer.

    Conceived around the time Japan was hosting the Earth Summit that produced the 1997 Kyoto Protocol on climate change, the Earth Simulator—billed as the world's most powerful computer—is designed to do more than help resolve such key questions about global warming. The supercomputer, housed at the Earth Simulator center here, will also add a new dimension to studies of Earth's interior by allowing the first global-scale simulations of the interaction between core and mantle, and between mantle and crust. And, although climate change and Earth's interior get priority, incoming director-general Tetsuya Sato, a computer simulation specialist, says he hopes to make the supercomputer available “for anything that promises to produce epoch-making results.”

    Scientists are already excited about its potential. “It's really a marvelous present the government has given to Japan's scientists,” says Syukuro Manabe, a prominent climate modeler at Princeton University who is an adviser to the project. Geoff Jenkins, head of the climate prediction program at the U.K.'s Hadley Centre in Bracknell, says the computer's “more comprehensive climate models should contribute substantially” to the debate over the effects of climate change.

    Groundbreaking work.

    Director Tetsuya Sato says the computer is available for “anything that promises to produce epoch-making results.”


    However, some researchers fear that bureaucratic and budgetary squabbles could handicap their work. The center's budget is split between several agencies that have different priorities. The country also has a dearth of senior-level environmental scientists capable of using the facility, and officials have not yet worked out a mechanism for sharing their bounty with foreign researchers. “The challenge is creating a system [of usage] to make this project worthwhile,” says Manabe.

    Details, details

    There is little doubt about the Earth Simulator's capabilities. At its theoretical peak, the machine's 5120 processors perform 40 trillion floating-point operations per second (40 teraflops). That is 20,000 times what a typical desktop computer can deliver and 50 times the speed of supercomputers now available to climate modelers in the United States and Europe.

    This computing power will put the Earth Simulator at the center of efforts to predict the consequences of global warming. Climate models incorporate data on air temperatures, wind speeds, and precipitation from points at the vertices of rectangular grids placed around the globe. Current models use grids 200 kilometers to 300 kilometers per side. In contrast, the Earth Simulator will use a grid 10 kilometers per side for its ocean model and one 60 kilometers per side for its atmospheric model.

    The finer scale, for example, will allow the simulation of mesoscale eddies, currents operating over distances of less than 100 kilometers. Recent research suggests that these local swirls, which are overlooked in current models, play an important role in ocean thermal energy movements, salinity changes, and other factors that could have dramatic effects on the behavior of larger, oceanwide currents. “There are a number of questions beyond the power of currently available computers that we should be able to fundamentally resolve on the Earth Simulator,” says Taroh Matsuno, director-general of the 4-year-old Frontier Research System for Global Change, one of several institutes funded by the Japan Marine Science and Technology Center (JAMSTEC) and a primary user of the Earth Simulator.

    On top of the world.

    The Earth Simulator will give modelers unprecedented computing power to run complex atmospheric and geophysical simulations.


    The new computer could also help climate modelers resolve the embarrassing conflict between models that predict far different outcomes on topics as sensitive as the climate of northern Europe. “If you don't understand why different models give different answers, then predicting climate change is no different from fortune-telling,” Manabe says. Resolving conflicts among the different models, he adds, will also allow scientists to send a more consistent message about climate change to policy-makers.

    The Earth Simulator is also designed to model movement of Earth's interior more realistically. Until now, says Mitsuhiro Matsu'ura, a geophysicist at the University of Tokyo, researchers have worked with models of either the core or the mantle. The result has been an incomplete understanding of how the core's churning produces Earth's magnetic fields, as well as gaps in understanding interactions between the mantle and the crust, which help drive the globe's tectonic plates, and in modeling earthquake propagation and seismic radiation.

    “There were no models which systematically covered processes on a global scale,” says Matsu'ura, who since 1998 has worked with a team of earth scientists and computer specialists to develop the software needed to tackle these challenges. “Scientifically, these results are going to be very interesting.” The computer will be a shot in the arm for the field, agrees John Rundle, a geophysicist at the University of Colorado, Boulder. “The Earth Simulator will be an extremely valuable facility” in helping geoscientists catch up to their colleagues in condensed matter physics and astrophysics, he says.

    Squeeze play

    Unfortunately, even a wondrous tool for modeling the complexities of Earth's physical processes can't escape the complexities of political processes within Japan. All three of the Earth Simulator's sponsoring agencies—JAMSTEC, the National Space Development Agency, and the Japan Atomic Energy Research Institute—belong to a class of public corporations that Prime Minister Junichiro Koizumi has vowed to slim down or abolish. The reform is aimed at entities that build and operate toll roads and airports, but all public corporations have been put on a budgetary diet. As a result, JAMSTEC and its partners can provide the center with only half of the $31 million it needs to operate full-time in the fiscal year that begins on 1 April.

    To make up the shortfall, the Ministry of Education, Culture, Sports, Science, and Technology hurriedly arranged for a new grants program to fund scientists who develop and use computer models to study climate change and the hydrological cycle. But the rescue comes at a price: Although the program will pay for the rest of the simulator's running costs, the research proposals will be reviewed by a Ministry of Education advisory committee rather than the center's usual reviewers. Sato worries that this bifurcation could lead to redundancies and a failure to tackle the big questions in climate change research. “We're in danger of becoming more like a big time-sharing computer than a mission-oriented facility,” he says.

    This administrative uncertainty was a turnoff for Manabe, who until last fall ran the global warming modeling unit in the Frontier global change research program. Manabe spent most of his career as a climate modeler at the U.S. National Oceanic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey, before returning to Japan in 1997. But he stepped down after deciding that he lacked the contacts and inside knowledge to assemble a good research team or work effectively on a large-scale project. “I think this group should be headed by somebody who has been [in Japan] for a long time,” he says.

    A shortage of senior scientists is “our biggest problem,” admits Matsuno. Four of the center's six program directors also hold half-time university appointments. The lack of depth within the senior ranks makes collaborations with foreign teams even more important, but Sato says it will be a year or more before the center has a policy in place to govern such arrangements.

    However, time is the enemy of any cutting-edge research facility. U.S. climate modelers are hoping to gain at least occasional access to a 100-teraflops computer to be completed by 2005 under the Department of Energy's Accelerated Strategic Computing Initiative, and their European colleagues are lobbying for a new E.U. supercomputer that could be as powerful as the Earth Simulator. Matsu'ura says that “producing some accomplishments is a must” before these other facilities come online.


    Tangled Roots? Genetics Meets Genealogy

    1. Kathryn Brown

    Genealogists are discovering the new high-tech tools of genetic analysis, but they may hope for more history than current techniques can deliver

    Genealogy meetings are usually familiar affairs. Family historians swap stories and standard tools of the trade—faded maps, old census data, hot Internet sites. But the 700 participants who showed up for the 10th annual GenTech conference in Boston last month got a glimpse of the future. In the exhibit hall, Oxford Ancestors, a U.K. company, and Relative Genetics, a Salt Lake City lab, advertised cheek-swab tests that, for about $220, deliver a “genetic identity” by mail. Nearby, employees of Brigham Young University (BYU) in Provo, Utah, drew the blood of some 200 volunteers donating DNA to a growing genealogical database. And inside darkened lecture rooms, scientists wowed crowds with colorful tales of ancient blood ties.

    Welcome to the latest commercial frontier in the genome revolution. Drug development, new diagnostics, and designer genes may be getting most of the attention, but “genetics is going to affect genealogy faster than any other field,” predicts Oxford Ancestors founder Bryan Sykes, also an Oxford University geneticist. Unlike the hazy genetics of, say, disease susceptibility, ancestry is strikingly simple, Sykes says: Fathers pass down Y chromosome DNA (Y DNA), and mothers mitochondrial DNA (mtDNA), generation after generation. “There's no proof of principle needed,” says Sykes. “We can glean masses of information for genealogists with simple genetic tools.”

    But the commercial marriage between genetics and genealogy is raising some concerns. The genetic tools, most agree, are still rudimentary, and interpreting genetic data can be vexing. “The statistics are not always easy,” comments geneticist Mark Jobling of the University of Leicester, U.K. “And that can be hard to put across fairly to a customer.” He and others worry that customers walk away with fuzzy statistics and fantasy. “I think these companies have a role to play, as long as the science is done well,” remarks Peter Underhill, a molecular anthropologist at Stanford University. “My concern is that people comprehend the relatively low level of resolution offered by these tests.” Because the tests analyze relatively few markers along Y DNA or mtDNA, Underhill says, millions of people may share a given molecular profile.

    The quest for roots

    Genetic genealogy does have a reputable pedigree. In tracing an individual's origins, these companies are adapting the same tools that population geneticists have used over the past 20 years to retrace ancient human migrations. In 1989, Sykes became the first to report recovering ancient DNA from archaeological bone. Since then, he has delved into the DNA of the so-called Ice Man, Cheddar Man, and other historical mysteries.

    Blood ties.

    Relatives look alike, and so do their DNA sequences. Genealogists hope genetic tests will help build family trees.


    Along the way, Sykes, among others, realized that there's another market for this DNA detective work: genealogists. Today, four fledgling companies (see table) specialize in genetic genealogy. If they succeed, others are sure to follow. After all, who doesn't want to know where he or she came from?

    Some clues are hidden in mtDNA, a tiny ring of genes that coils inside the mitochondria. Building a database of European mtDNA over the past decade, Sykes recently concluded that 95% of Europeans descend from just seven women, described in his provocative 2001 book The Seven Daughters of Eve.

    But Y DNA tends to be the genealogist's tool of choice because it's handed down from father to son, as are most surnames. The two make a powerful combination. Five years ago, geneticist Michael Hammer of the University of Arizona in Tucson and colleagues studied Y DNA from Jews to validate the legend of Cohanim families, a Jewish priesthood handed down from father to son for 3000 years.

    For both mtDNA and Y DNA, the historical clues are mutations, slight variations that distinguish the Smith family's genetic sequence from the Joneses'. Some of these mutations are simple DNA substitutions, say a G for a C. Others are microsatellites, or sequence repeats, in which short strings of DNA repeat over and over.

    The unique pattern of these mutations creates a person's genetic signature, or haplotype. By comparing a set of mtDNA or Y DNA markers from two people, researchers can determine whether they are likely to be distant relatives. They can even gauge roughly when their common ancestor lived: Was it a New York immigrant 200 years ago, or someone in Iberia centuries earlier?
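    The comparison itself is simple bookkeeping. A sketch of how a Y-DNA microsatellite test works (the marker names and repeat counts below are hypothetical, not real genealogy data):

    ```python
    # Illustrative haplotype comparison. Y-chromosome tests record the
    # repeat count at each short-tandem-repeat (microsatellite) marker;
    # fewer mismatches suggest a more recent common ancestor.
    # Marker names and values here are invented for illustration.

    smith = {"M1": 14, "M2": 12, "M3": 29, "M4": 10, "M5": 13}
    jones = {"M1": 14, "M2": 13, "M3": 29, "M4": 10, "M5": 13}

    def marker_mismatches(a, b):
        # Count markers where the two haplotypes differ.
        return sum(1 for m in a if a[m] != b[m])

    d = marker_mismatches(smith, jones)
    print(f"{d} mismatch(es) across {len(smith)} markers")
    ```

    With only a handful of markers, many unrelated men share the same profile, which is the low resolution Underhill warns customers about; commercial tests use more markers than this sketch, but the principle is the same.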

    Where do I come from?

    If you suspect that your roots are European, Oxford Ancestors offers to take you back thousands of years. Aside from writing the check, it's an easy test: Simply swab the inside of your cheek, a ready source of DNA, and mail the swab. Company scientists will compare your mtDNA to the seven ancestral mtDNA sequences isolated by Sykes: the so-called seven daughters of Eve, who lived 10,000 to 45,000 years ago. Some weeks later, an “authorized certificate, suitable for framing” will identify your primal mother. She could be Ursula, Xenia, Helena, Velda, Tara, Katrine, or Jasmine, according to the names Sykes gave to each sequence's founder.

    Although the mtDNA technique is reasonably accurate, some population geneticists sniff at the fanciful names and dramatizations. But that doesn't faze Sykes. “Academic snobbery,” he retorts.

    Similarly, Family Tree DNA in Houston, Texas, offers a Native American ancestry test. For $319, the company will scan a customer's Y DNA for a genetic marker carried by more than 70% of male Native Americans. Another test, for men or women, compares a customer's mtDNA to that of five known Native American groups that share the same haplotype.

    Although the tests point to probable Native American origins, they can't distinguish tribes or meet stringent court guidelines for definitive ancestry, cautions Family Tree DNA founder Bennett Greenspan, a Houston entrepreneur. “We can't tell whether someone's Choctaw or Cherokee,” says Greenspan, whose company contracts with Hammer's University of Arizona lab for all DNA analysis.

    Are the results accurate? That depends. Some populations, such as Native Americans and Jewish Cohanim families, do have strikingly distinctive genetic signatures: specific mtDNA or Y DNA mutations found almost exclusively in that population. In those cases, genetics and genealogy do combine to craft a more reliable family history than genealogy alone can.

    Yet at least one geneticist has been accused of prematurely seeking profit from genealogy. Several years ago, geneticist Rick Kittles of Howard University in Washington, D.C., contributed to a project on DNA from skeletal remains of African Americans, funded by the National Human Genome Research Institute (NHGRI). Shortly afterward, Kittles announced plans to go commercial, by offering African Americans a $300 blood test to compare their DNA with a database of African ancestral DNA—promising a glimpse of the geographic regions, or perhaps even tribes, of native ancestors. But when a colleague blasted the research as too preliminary, Kittles found himself back-stepping, while Howard assured NHGRI that he wasn't using grant money to build a business. Today, Kittles says he's looking for commercial backers for the African Ancestry Project.

    Where are my relatives?

    While drafting a family tree, many genealogists hope to fill in more recent branches by finding relatives who are alive today. In fact, Family Tree DNA's most popular service is a sort of “family reconstruction” project, comparing the genealogy and genetics of at least six people who share a surname.

    But confirming a distant cousin is just as tricky as finding a founding ancestor. Some Y chromosome haplotypes are more common than others. Researchers are still learning how frequently various haplotypes may occur in the general population. “If you have a kind of Y chromosome that's got a 10% to 20% frequency, then to find two males share this Y chromosome may not be very significant,” says Leicester's Jobling.

    The most basic Y chromosome test offered at Family Tree DNA compares a dozen microsatellite markers among men. At that resolution, the company predicts that even an identical Y chromosome match means two men have just a 50% chance of sharing a common ancestor within the past 14.5 generations, or about 363 years. At best, the company's most sophisticated test would compare 21 Y DNA markers between the men—with a perfect match suggesting that they have a 50-50 chance of sharing a common ancestor within the past 250 years. In other words, you won't find your great-grandfather this way.
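A figure like "a 50% chance within 14.5 generations" falls out of simple mutation arithmetic. The sketch below shows one way to reproduce it, assuming a per-marker mutation rate of 0.002 per generation and 25 years per generation; these numbers are illustrative assumptions, not Family Tree DNA's actual model, and real tests weight each marker's rate separately.

```python
import math

# Illustrative sketch: with an assumed mutation rate MU per marker per
# generation, two men separated by g generations from a common ancestor
# have 2*g transmission events on each of MARKERS markers.
MU = 0.002      # assumed average mutation rate (not a published FTDNA value)
MARKERS = 12    # markers in the basic test described in the article

def match_probability(generations, markers=MARKERS, mu=MU):
    """Chance that all markers still match after `generations` generations
    back to a shared ancestor (two lines of descent)."""
    return (1 - mu) ** (2 * generations * markers)

# Generations at which an exact 12-marker match becomes a coin flip:
g50 = math.log(0.5) / (2 * MARKERS * math.log(1 - MU))
print(round(g50, 1))    # ~14.4 generations
print(round(g50 * 25))  # ~361 years, at 25 years per generation
```

With these assumed inputs the break-even point lands near the article's 14.5 generations, and comparing 21 markers instead of 12 pulls it closer to the present, which is why the more sophisticated test narrows the window to roughly 250 years.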

    But Ron Lindsay, a retired IBM engineer in San Jose, California, isn't complaining. He has been documenting Lindsays worldwide for 40 years. And he hopes that Y chromosome tests of some Lindsays, planned at Family Tree DNA, will place him in an obvious clan: “If there are 500 Lindsay lines I could belong to, and our DNA can narrow that down to four or five, then that's a great tool.”

    History really begins to blur, however, when Y chromosomes are close—but not identical. If, for example, two men share a Y chromosome haplotype except for two mutations, Sykes says, chances are they shared a common ancestor 20 to 40 generations (500 to 1000 years) ago. “That's a long time ago,” Sykes concedes. Such tentative ties, so far in the past, may not satisfy customers searching for a sense of family.

    Another challenge is getting the molecular clockwork right. To trace back the time of a common ancestor between two men, researchers calculate roughly when any mutations in their DNA evolved. But that requires estimating how rapidly each Y DNA marker mutates—and those rates vary considerably from marker to marker. “There are layers of complexity,” Jobling says, “that could at some level be used to call into question any genealogical study.”

    That doesn't deter entrepreneurs. At the January GenTech conference, Relative Genetics announced a new partnership with Inc. Together, the partners now offer the “Ancestry GenetiKit” via, a popular Web site for genealogists. The Ancestry GenetiKit touts a 23-marker Y-chromosome test, an mtDNA Native American test, and mtDNA sequencing, among other services.

    The ambitious efforts of Relative Genetics reflect the science—and optimism—of its lab director, Scott Woodward, a geneticist at Brigham Young University (BYU). At BYU, Woodward is attempting to build “the world's most comprehensive genetic database.” His team aims to collect DNA and genealogical histories from at least 100,000 people and then correlate that information, creating a database that represents the gene pools of populations at specific places and times, like a historical atlas of genetics. It's a big job. After almost 2 years, the BYU project has drawn the blood and recorded the family histories of some 25,000 volunteers.

    But the pace may soon pick up considerably. When customers order tests from the Ancestry GenetiKit, they can choose to include their DNA and genealogical information in the BYU project. The line between the business and academic projects could grow very fine.

    In fact, as Sykes signed books and Woodward recruited DNA donations at GenTech, it was sometimes difficult to see where science left off and business began. In an interview before the meeting, Woodward mentioned the need to “keep people's feet on the ground,” to explain that science cannot reveal all family connections. “I don't want to mislead, but I do want to inspire,” Woodward said. At the podium, however, the allure of DNA is hard to deny. Describing tests that found matching Y chromosome haplotypes between two men, Woodward played to the GenTech crowd: “This is what we wanted to have happen. These individuals belong to each other, and the DNA confirmed that.”
