News this Week

Science  22 Oct 2004:
Vol. 306, Issue 5696, pp. 586

    Stem Cell Researchers Mull Ideas for Self-Regulation

    1. Constance Holden

    Scientists and ethicists gathered at a brainstorming session last week in Washington, D.C., to discuss voluntary limits on human cloning and embryonic stem (ES) cell research. The event echoed the 1975 meeting in Asilomar, California, where an earlier generation tried to establish guidelines for genetic engineering. As was the case 29 years ago, researchers are eager to move ahead: Even as the session at the National Academy of Sciences got under way, for example, Harvard University officials were announcing that diabetes researcher Douglas Melton is applying for permission to use nuclear transfer—otherwise known as research cloning—to create new human cell lines in a privately funded effort to model diseases.

    In the United States, federal funds may be spent only on government-approved human ES lines, yet private funding is flowing to the field, largely without regulation. This has increased pressure on scientists to develop their own rules. To get started, the academies created an 11-member Committee on Guidelines for Human Embryonic Stem Cell Research, co-chaired by cancer researcher Richard O. Hynes of the Massachusetts Institute of Technology and Jonathan Moreno, director of the University of Virginia Center for Biomedical Ethics. Last week experts made suggestions covering all aspects of work with human ES cells, from informed-consent procedures to distribution of cell lines.

    What scientists want.

    This colony of human ES cells was cultivated from a blastocyst that a South Korean group created using nuclear transfer.


“There's definitely a need to create standards in the field so it won't take 6 to 12 months to start work,” said blood researcher Leonard Zon of Children's Hospital in Boston. He ticked off a list of offices that must sign off on any project—including administration, research, finance, legal, ethics, intellectual property, and public affairs. Researchers also must contend with a patchwork of state regulations, noted Georgetown University bioethicist LeRoy Walters, who described rules ranging from California and New Jersey's aggressive pro-research policies to nine states' bans on human embryo research. On top of this tangle of standards are dizzying moral questions, such as: If an embryo is being created for research, is it better to do it through in vitro fertilization or nuclear transfer? And what does it mean to accord an early embryo “respect”?

    Much of the political debate over stem cell research in the United States has focused on the Bush Administration's prohibition on using federal funds to work with human ES cell lines other than a handful already in existence 3 years ago. All these lines were established from “spare” embryos created at fertility clinics.

    More ethically charged are efforts to create new stem cell lines by transferring DNA into an enucleated egg. Many researchers are eager to get on with such studies. But speakers warned of public resistance, complicated by the fact that stem cell research is conflated with cloning in many minds. “I hate to break the news, but there really isn't much support for nuclear transfer,” said Franco Furger, director of the Human Biotechnology Governance Forum at Johns Hopkins University, who cited a variety of polls to that effect. Michael Werner of the Biotech Industry Organization warned that even biotech investors “don't distinguish between stem cell research and cloning.”

    Leading the way.

    Harvard's Melton.


So far only one group—in South Korea—has successfully cloned a human embryo (Science, 12 March, p. 1669), but more are on the horizon. In the United Kingdom, the International Centre for Life in Newcastle last August got the first license to clone human embryonic cells for research (Science, 20 August, p. 1102). Another British researcher, Ian Wilmut, is applying for a license to use nuclear transfer to study amyotrophic lateral sclerosis. At Harvard, Melton's proposal to use these techniques to create embryonic stem cell lines carrying the genes for diabetes, Parkinson's disease, and Alzheimer's disease is under review, and Zon and George Daley hope to create lines expressing the genes for blood diseases. China, India, Japan, Singapore, Belgium, and Israel have sanctioned nuclear transfer; Sweden is expected to do the same.

    Given the qualms over stem cell research and cloning, some participants at last week's meeting thought voluntary guidelines would be insufficient to reassure the public. Alison Murdoch of the Newcastle center said she holds the “very strong view that really strong regulation, with every embryo accounted for,” is needed. Speakers also saw a need for oversight of the loosely regulated in vitro fertilization industry so that any cell line that might end up in a clinical application can be shown to have a squeaky-clean pedigree. As it stands now, said Michael Malinowski of Louisiana State University School of Law in Baton Rouge, “much assisted reproduction is human experimentation in the name of treatment.”

    Some called for a standing oversight body like the Recombinant DNA Advisory Committee set up after Asilomar. Leon Kass, chair of the President's Bioethics Council, noted that Asilomar led to a voluntary gene-splicing moratorium and called for a similar moratorium on nuclear transfer. “This is momentous enough that it should be decided on a national level,” he said. The committee is to come up with proposed guidelines in February.


    A Complex New Vaccine Shows Promise

    1. Gretchen Vogel

    After years of dashed hopes, a vaccine against malaria has shown tantalizing results in a clinical trial in Mozambique. In a study involving 2022 children between the ages of 1 and 4, the vaccine lowered a recipient's chance of developing malaria symptoms by 30%. The results are the most promising so far in the search for a vaccine against a disease that kills between 1 million and 3 million people per year. “Malaria has had a sense of hopelessness and intractability around it,” says Melinda Moree of the Malaria Vaccine Initiative, an independent nonprofit group that helped fund the trial. “These results bring hope to us all that a vaccine may be possible.” Even so, the approach faces several hurdles, including whether the complex vaccine would be affordable in the poor countries most affected by the disease.

    A consortium led by GlaxoSmithKline (GSK) Biologicals in Rixensart, Belgium, developed the vaccine, called RTS,S/AS02A. It uses several techniques to boost the immune system's fight against the malaria parasite. Its designers engineered a hybrid protein that combines a protein fragment from the parasite, Plasmodium falciparum, with a piece of a protein from the hepatitis B virus. The Plasmodium protein is a promising target because it is present on the parasite's surface when it is first injected into the bloodstream by the bite of an infected mosquito. The hepatitis B protein is included because it is particularly effective at prompting an immune response. The vaccine also contains a powerful new adjuvant, developed by GSK Biologicals, that increases the body's production of antibodies and T cells.

    The combination seemed to work, at least partially. Although it didn't prevent all children from being infected with the parasite, it did seem to keep them from becoming sick. Children who received the full three doses of the vaccine were 30% less likely to develop clinical malaria in the first 6 months following the injections, a team from GSK Biologicals and the University of Barcelona reported in the 16 October issue of The Lancet. Data suggest that the vaccine is considerably more effective at preventing the most dangerous form of the disease, lowering a recipient's risk of severe malaria by 58%. Among children between ages 1 and 2, the results looked even better: The vaccine seemed to reduce the chance of severe malaria by 77%, although the numbers were quite small. “These are the best results we've ever seen with a candidate malaria vaccine,” says Pedro Alonso of the University of Barcelona in Spain, who led the trial.
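The efficacy figures above follow the standard definition: one minus the ratio of attack rates in vaccinated versus unvaccinated children. A minimal sketch of that calculation, using illustrative counts rather than the trial's actual case numbers:

```python
def vaccine_efficacy(cases_vax, n_vax, cases_ctrl, n_ctrl):
    """Efficacy = 1 - relative risk (ratio of attack rates)."""
    attack_rate_vax = cases_vax / n_vax
    attack_rate_ctrl = cases_ctrl / n_ctrl
    return 1 - attack_rate_vax / attack_rate_ctrl

# Hypothetical counts chosen only to reproduce the reported percentages:
# 70 cases per 1000 vaccinated vs. 100 per 1000 controls -> 30% efficacy.
print(round(vaccine_efficacy(70, 1000, 100, 1000), 2))  # -> 0.3
```

With the same formula, 42 cases per 1000 vaccinated against 100 per 1000 controls would correspond to the 58% figure for severe malaria.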

    New hope.

    A team member treats a child with malaria at the Manhica Health Research Center in Mozambique.


    The sheer number of malaria parasites that people in endemic areas are exposed to makes it difficult to develop a vaccine that prevents all infection, notes GSK scientist Joe Cohen. In addition, Plasmodium has evolved multiple ways to elude the human immune system. Cohen says scientists aren't sure exactly how the vaccine works, but they suspect that the antibodies and T cells produced may both interrupt the parasite's ability to infect liver cells and help the immune system target infected cells for destruction.

    Alonso notes that even partial protection against malaria could save thousands of lives every year. Combined with techniques such as using bed nets and insecticides, “the vaccine could have a huge impact,” he says. But he and others caution that the vaccine still must be tested for efficacy and safety in younger children, as large-scale immunization efforts in Africa target children younger than 1 year.

    “It is a very exciting, encouraging result that establishes the feasibility of developing a malaria vaccine,” says Stephen Hoffman of Sanaria, a Rockville, Maryland-based biotech company working on a different type of malaria vaccine. But questions remain about the GSK vaccine. Candidate vaccines for other diseases have seemed to protect young children only to prove ineffective in infants, he notes. It is not yet clear how long the protection lasts. And the vaccine also has to show its mettle in areas with more intense malaria transmission than Mozambique.

    Another possible drawback is the vaccine's cost, which Jean Stephenne, president of GSK Biologicals, estimated at $10 to $20 per dose; multiple doses will likely be needed. Cohen acknowledges that it won't be cheap to produce. “If it gets on the market, it would be the most complex vaccine ever developed,” he says. Hoffman notes that the vaccine is about as effective as bed nets and other conventional malaria prevention methods, although it would be much more expensive. (A bed net typically costs about $5.) Moree agrees: “Any vaccine that goes forward will have to be cost-effective, or it will not be used.”


    Flipped Switch Sealed the Fate of Genesis Spacecraft

    1. Richard A. Kerr

    A design error by spacecraft contractor Lockheed Martin Astronautics Inc. caused engineers to install critical sensors upside down in the Genesis sample return capsule, dooming it to slam into the Utah desert floor last month at 360 kilometers per hour, according to the chair of the mishap investigation board. The accident, Lockheed Martin's third major incident of late, may be another reminder of an era when space missions were underfunded, too rushed, and undermanaged. Chances are good, however, that an identically equipped spacecraft, Stardust, will escape a similar fate.

According to board chair Michael Ryschkewitsch of NASA's Goddard Space Flight Center in Greenbelt, Maryland, if the two pairs of sensors had been installed right-side up, they would have triggered Genesis's parachutes. Flipped according to incorrect drawings that assemblers were following, the sensors' spring-loaded weights were already at the end of their range of possible motion as the capsule hit the upper atmosphere and began slowing, so deceleration could not drive them through the required trigger point.
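The failure mode described above can be illustrated with a toy model of a spring-loaded g-switch (the threshold and values below are illustrative, not Genesis's actual specifications):

```python
# Toy model of a spring-loaded g-switch. Under deceleration, the weight
# compresses its spring; past a threshold it closes a contact and fires
# the parachute sequence. Installed upside down, deceleration drives the
# weight toward the end stop it already rests against, so the contact
# can never close, no matter how hard the capsule decelerates.
def switch_fires(decel_g, mounted_correctly, trigger_g=3.0):
    effective_g = decel_g if mounted_correctly else -decel_g
    return effective_g >= trigger_g

print(switch_fires(20.0, True))   # correct install: chute triggers
print(switch_fires(20.0, False))  # flipped install: never triggers
```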

    The snafu recalls two earlier mishaps involving Denver-based Lockheed Martin as contractor and NASA's Jet Propulsion Laboratory (JPL) as spacecraft operator. In 1999, the Mars Climate Orbiter broke up as it skimmed too close to Mars. Engineers at the two organizations had misunderstood which units of thrust—English or metric—the other group was using. And in the same year, Mars Polar Lander crashed onto the surface after a software error caused its retrorockets to shut down too far above the surface.

    Fateful reversal.

    Incorrect drawings led assemblers to install critical sensors upside down.


    Before pointing fingers over Genesis, says space policy analyst John Logsdon of George Washington University in Washington, D.C., critics should consider its history. Although Genesis launched late enough to get additional reviews after the 1999 Mars losses, it was designed years earlier, at the height of the “faster, cheaper, better” era of NASA mission design. Spacecraft were being designed, built, and operated by fewer people in less time than ever before. Genesis was thus prone to the same sorts of problems as the Mars probes, although its particular problem “still should have been caught” by later reviews, says Logsdon. In its final report, due out by early December, Ryschkewitsch's board hopes to document why those reexaminations failed.

    More pressing, perhaps, is the state of the Stardust spacecraft's sensors. Also a Lockheed Martin/JPL mission, Stardust will be depending on identical sensors to trigger its landing sequence in January 2006 as it returns samples of comet dust. “Preliminary indications are that the design and installation of the switches on Stardust are correct,” says NASA deputy associate administrator Orlando Figueroa. Time will no doubt tell.


    Martin Backs Science Academy

    1. Wayne Kondro

    OTTAWA—Canadian Prime Minister Paul Martin has given the green light to what one prominent scientist calls “scientific advice on a shoestring budget.”

    On 5 October, Martin promised that his budget next spring would include $27.6 million over 10 years for a Canadian Academies of Science (CAS). The announcement culminates a decade-long campaign by leading scientists for a national organization that would deliver independent assessments of pressing scientific questions. But its status is dependent on the survival of Martin's minority government, which narrowly avoided being toppled in a procedural vote following his speech.

CAS would be run by the Royal Society of Canada, the Canadian Academy of Engineering (CAE), and the Canadian Institute of Academic Medicine. Officials from the three organizations have long touted the idea (Science, 27 October 2000, p. 685). “We're practically the only country in the world that doesn't have” such an organization, adds CAE executive director Philip Cockshutt.

    Royal Society past president William Leiss estimates that CAS will carry out no more than five studies a year—compared with the 200 or so churned out annually by the U.S. National Academies, on which CAS is modeled—with help from a small CAS secretariat. A board of governors, featuring two representatives apiece from the three founding organizations and six members of the public, will choose the panelists for each study. (The board could grow if other members join CAS.)

    CAS may also do a small number of self-initiated studies, Leiss said. But he expects the government to provide the bulk of the academies' support. “What they'll get is a kind of definitive resolution of some really thorny issues,” says Leiss, adding that all its reports would become public.

    Leiss credits Canada's new science adviser, Arthur Carty, with putting the campaign over the top. Carty will serve as the gatekeeper and conduit between the government and the new academy, submitting formal requests for the academy to undertake a study and receiving its final reports. Carty says the academy will give Canada a “voice” on the international science front and a point of entry for countries seeking its input on international projects. He also plans to consult with its members in preparing his recommendations to government.


    Butler Appeals Conviction, Risking Longer Sentence

    1. David Malakoff

    Taking a high-stakes legal gamble that could lengthen his 2-year prison term, former plague researcher Thomas Butler is appealing his conviction for mishandling bacteria samples and defrauding his university. Government prosecutors say they will respond with their own request to erase a judge's decision that cut 7 years off a possible 9-year prison term.

    “Butler is taking a huge, huge risk,” says former prosecutor Larry Cunningham, a law professor at Texas Tech University in Lubbock. “The judge gave him a sweet deal; this gives the government a shot at overturning it.”

    Butler “is willing to risk a longer sentence to fight for important principles,” says Jonathan Turley, one of Butler's attorneys and a law professor at George Washington University in Washington, D.C. “The trial was rife with irregularities; the government is pursuing a longer sentence because it is embarrassed about losing its core case.” Prosecutors declined comment.

    Butler, 63, captured national headlines last year after he reported 30 vials of plague bacteria missing from his Texas Tech laboratory, sparking a bioterror scare (Science, 19 December 2003, p. 2054). The government ultimately charged him with 69 counts of lying to investigators, moving plague bacteria without proper permits, tax fraud, and stealing from his university by diverting clinical trial payments to his own use. Last December, a Texas jury acquitted him of the central lying charge and most of the plague-related allegations but convicted him on 44 financial charges and three export violations involving a mismarked Federal Express package containing bacteria.

    Although government sentencing guidelines called for a 9-year sentence, federal judge Sam Cummings reduced it to 2 years, in part because Butler's research had “led to the salvage of millions of lives.” Butler is currently in a Texas prison.

    Prosecutors were unhappy with the sentence, say sources familiar with the case, but agreed not to challenge it unless Butler filed an appeal. He recently did just that, arguing in an 80-page brief that his trial was marred by the government's refusal to try him separately on the plague and financial charges, its use of vague university financial policies as the basis for criminal charges, and a judge's ruling that barred Butler from gaining access to university e-mails. He is asking the appeals court to strike down the convictions or at least order a new trial. Prosecutors are expected to file a response later this month, and a hearing in New Orleans, Louisiana, could come as early as January.

    Butler has rolled the legal dice before. He rejected a pretrial government plea bargain offer that included 6 months in jail. Turley expects the government to ask the appeals court to impose the full 10-year sentence allowed by the export violations but says that move would be a “vindictive, gross abuse of prosecutorial discretion.”

    If the government wins, Butler will lose more than his argument. Because the appeal is expected to take longer than his current sentence, he could find himself back in prison after spending time as a free man.


    Bird Flu Infected 1000, Dutch Researchers Say

    1. Martin Enserink

    AMSTERDAM—At least 1000 people—many more than assumed—contracted an avian influenza virus during a massive poultry outbreak in the Netherlands last year, according to a new study. In another unexpected finding, those who developed symptoms after being infected passed the virus on to a whopping 59% of their household contacts, say the researchers at the National Institute for Public Health and the Environment (RIVM), whose results were published in Dutch last week.

    Flu experts were cautious in discussing the findings, which they had not yet been able to read. But if correct, they are “another warning signal,” says Klaus Stöhr, head of the World Health Organization's global influenza program. Every time an avian virus infects a human being, Stöhr says, the risk that it will mutate into a pandemic strain grows.

    Almost 31 million poultry were culled in the Netherlands before the virus, a strain called H7N7, was contained. By the end of the outbreak, the virus had killed one veterinarian, and some 450 people had reported health complaints, mostly an eye infection called conjunctivitis. In a paper published in The Lancet in February, RIVM virologist Marion Koopmans and her colleagues reported that they detected the H7N7 virus—using the polymerase chain reaction or by culturing the virus—in eye swabs of 89 of them.

    Take your pills.

    Many of those exposed to infected chickens did not take antiviral drugs, the study found.


To gauge the true reach of H7N7, Koopmans and her colleagues also tested those at risk, such as poultry farmers and those hired to cull and remove poultry, for antibodies against the virus. This test provides more definitive and longer-lasting proof of infection. They used a new variation on the classic hemagglutination inhibition test, which the team says is better at picking up antibodies to avian flu in humans. (It uses red blood cells from horses, rather than turkeys or chickens, in a key step.)

    They found antibodies in about half of 500 people who had handled infected poultry; based on the total number of poultry workers at risk, the team concludes that at least 1000 people must have become infected, most of them without symptoms. Wearing a mask and goggles did not seem to prevent infection; taking an antiviral drug called oseltamivir (Tamiflu) did, but a quarter of the cullers and half of the farmers did not use the drugs.

    Among 62 household contacts of conjunctivitis patients, 33 became infected—another surprisingly high figure, Stöhr says. Having a pet bird at home increased household members' risk of becoming infected, perhaps because the birds replicated the virus too.

    Detecting antibodies to avian influenza is “tricky,” and the results need to be corroborated, cautions flu specialist Maria Zambon of the U.K. Health Protection Agency, whose lab may retest the Dutch samples.

    Human antibody tests for H5N1, the avian flu virus currently ravaging Asian poultry, are ongoing, Stöhr says. So far, the results show that, although far more lethal to humans, the virus has caused few, if any, infections beyond the known 43 patients.


    1918 Flu Experiments Spark Concerns About Biosafety

    1. Jocelyn Kaiser

    Just days after publishing a well-received study in which they engineered the 1918 pandemic influenza virus to find out why it was so deadly, researchers are catching flak from critics who say their safety precautions were inadequate. The lead investigator, Yoshihiro Kawaoka, contends his team followed federal guidelines. But critics say these rules are out of date.

    The brouhaha erupted after Kawaoka's team at the University of Wisconsin and the University of Tokyo reported in the 7 October issue of Nature that a normal human flu virus containing a gene for a coat protein from the 1918 flu strain is highly pathogenic to mice. An article in the New York Times noted that although the team began the studies in a stringent biosafety level 4 (BSL-4) lab in Canada, where workers wear “space suits,” the University of Wisconsin's safety board approved moving the work to its own BSL-3 lab.

That set off alarm bells with some biosafety experts, including Karl M. Johnson, a former Centers for Disease Control and Prevention (CDC) virologist now retired in Albuquerque, New Mexico. He and several others wrote to ProMED, an Internet e-mail forum widely read in the infectious-disease community, arguing that the move to a BSL-3 lab was dangerous.

    Kawaoka responds that the critics do not know the full extent of his team's precautions. Among other steps, his workers get the regular flu vaccine, which protects mice against 1918-like flu viruses. All workers also take the antiflu drug Tamiflu prophylactically. Work by Kawaoka's group (in the BSL-4 facility) and by a federal lab has shown that the antiviral “works extremely well” at protecting mice against 1918-like flu strains, Kawaoka says. According to National Institutes of Health (NIH) guidelines for research with recombinant organisms, this puts the work in the BSL-3 category, he notes.

    Safety risk?

    Some experts question whether BSL-3+ conditions, like the air purifier worn here, would prevent engineered 1918 pandemic flu from escaping.


    Kawaoka's group also beefed up its lab to what is informally known as an “enhanced” BSL-3, or BSL-3+. For example, workers wear battery-powered air purifiers with face shields and shower when they leave the lab. When Kawaoka presented the work in September to NIH's Recombinant DNA Advisory Committee (RAC), which was reviewing research with pathogenic viruses, “no member raised any concern,” he says.

    Johnson is partly assuaged. “I feel a bit better,” he says, adding that BSL-3+ may be adequate for some experiments with engineered 1918 flu. But he still has reservations about, for instance, whether the vaccine would fully protect some individuals.

Other critics on ProMED, however, such as Ronald Voorhees of the New Mexico Department of Health, argue that antiviral drugs may not eliminate the risk of a worker passing the virus to someone outside the lab, so a BSL-4 facility is needed. Biosafety expert Emmett Barkley of Howard Hughes Medical Institute suggests that if experts were polled, “half of them would call for [BSL-4].”

Part of the confusion stems from another set of federal guidelines, Biosafety in Microbiological and Biomedical Laboratories (BMBL). This manual says that flu viruses require only BSL-2 facilities, and there is no mention of 1918 flu or “enhanced” BSL-3, Johnson notes. The issue is important to resolve, as Kawaoka's is not the only group working on 1918-like flu viruses. A group led by Mount Sinai School of Medicine in New York City is doing so in a BSL-3+ facility at the CDC, and the University of Washington plans to study monkeys infected with modified 1918 flu strains in a BSL-3+ lab.

    Even more controversial are planned experiments that would mix pathogenic avian flu strains, such as the H5N1 strain now circulating in Asia, with human flu viruses (Science, 30 July, p. 594). CDC scientists have opted for BSL-3+, but flu expert Robert Webster of St. Jude Children's Research Hospital in Memphis, Tennessee, says he would do these studies in a BSL-4 facility.

    Clearer guidance may emerge in the next version of BMBL, due out in 2005. Meanwhile, RAC expects to release “points to consider” in December.


    Swiveling Satellites See Earth's Relativistic Wake

    1. Charles Seife

    The world's a drag—and scientists have proved it. By studying the dance of two Earth-orbiting satellites, Italian physicists have detected the subtle twisting of spacetime around a massive, spinning object.

    The measurement is the most convincing sighting yet of a hard-to-spot consequence of Albert Einstein's general theory of relativity, says Neil Ashby, a physicist at the University of Colorado, Boulder. “There was a lot of criticism of previous results, but this is the first reasonably accurate measurement,” Ashby says. Physicists Ignazio Ciufolini of the University of Lecce, Italy, and Erricos Pavlis of Goddard Space Flight Center in Greenbelt, Maryland, describe the result this week in Nature.

    General relativity predicts that a spinning mass drags the fabric of space and time around with it, much as a restless sleeper drags the sheets around while twisting and turning in bed. This effect, known as the Lense-Thirring or “frame dragging” effect, is difficult to detect. To spot it, one has to observe how a spinning body changes the orientations of nearby gyroscopes—much tougher than seeing, say, how a massive body bends light.

Ciufolini, Pavlis, and colleagues tried to measure the effect in 1998 by using two satellites, LAGEOS and LAGEOS II, as test gyroscopes. The satellites—half-meter-wide mirrored spheres—were launched in 1976 and 1992 as targets for laser range finders, which can track their position within a few centimeters. As the satellites orbit Earth, the Lense-Thirring effect twists the planes of their orbits slightly. The early measurements were “very rough,” Ciufolini says, because the uneven distribution of Earth's mass causes similar orbital distortions thousands of times greater than those due to the Lense-Thirring effect. “[A satellite's] precession is about 2 meters per year due to frame-dragging. The precession due to the oblateness of the Earth is many thousands of kilometers per year,” Ashby says.
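The 2-meter-per-year figure can be checked against the standard Lense-Thirring expression for the precession of a satellite's orbital node (the formula and the rounded LAGEOS values below are textbook numbers, not taken from the Nature paper):

```latex
\dot{\Omega}_{\mathrm{LT}}
  = \frac{2\,G J_\oplus}{c^{2}\,a^{3}\,(1-e^{2})^{3/2}}
  \approx \frac{2\,(6.67\times10^{-11})(5.9\times10^{33})}
               {(3.0\times10^{8})^{2}\,(1.23\times10^{7})^{3}}
  \approx 4.7\times10^{-15}\ \mathrm{rad/s}
  \approx 31\ \mathrm{mas/yr}
```

Here $J_\oplus$ is Earth's spin angular momentum and $a \approx 12{,}270$ km is the LAGEOS orbital radius (eccentricity $e$ is nearly zero). Multiplying that angular rate by $a$ gives a nodal drift of roughly 1.8 meters per year, consistent with the "about 2 meters per year" Ashby quotes.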

    Curve balls.

The laser-ranged LAGEOS satellites revealed a twist in spacetime.


    Because the mass distribution was poorly known, Ciufolini and colleagues had to make a few controversial estimates, including one about how the satellites' perigees precess. As a result, their published value of the Lense-Thirring effect had a large error—20%—and even that was greeted with some skepticism.

    Now, thanks to better gravitational maps produced by twin satellites known as GRACE, as well as improved gravitational models and other refinements, the perigee estimation is no longer needed. The result is a much firmer detection with an error of about 10%. “I believe that in a few years, more accurate gravitational field models and a longer period of observation will make it more and more accurate,” says Ciufolini. “It will be at the level of a few percent.” By then, physicists hope, the gyroscope-laden satellite Gravity Probe B, which was designed to detect the Lense-Thirring effect, will have produced results with an error of about 1%—far lower than the two LAGEOS satellites can achieve (Science, 16 April, p. 385). “I think that the biggest contribution is a validation of what, in a year or so, will be the results from Gravity Probe B,” says Richard Matzner, a physicist at the University of Texas, Austin.

    To keep Gravity Probe B from being the only game in town, Ciufolini and other researchers are pushing to loft another LAGEOS satellite into an orbit that would completely eliminate the effects of Earth's mass distribution. “If we had a third satellite, we could go even below the 1% limitation,” says Ciufolini. Of course, getting funding for a new satellite is quite a drag—one much easier to sense than the Lense-Thirring effect.


    Metabolic Defects Tied to Mitochondrial Gene

    1. Jean Marx

    Abnormally high blood pressure is bad enough by itself; it predisposes people to diseases such as kidney failure, heart attacks, and strokes. But for an estimated 47 million Americans with so-called metabolic syndrome, high blood pressure (hypertension) comes hand in hand with other cardiovascular risk factors such as diabetes and high blood concentrations of cholesterol and triglycerides. Obese people often have metabolic syndrome, but so do some nonobese people, so excess weight isn't the sole cause.

Findings published online this week by Science now point to an unexpected new culprit. The work, by a team led by Richard Lifton of Yale University School of Medicine, shows that a mutation in a mitochondrial gene causes people to develop a constellation of symptoms—hypertension, high concentrations of blood cholesterol, and lower-than-normal concentrations of magnesium—similar to those of metabolic syndrome.

    The mutation likely disrupts the function of mitochondria, subcellular structures that provide most of a cell's energy and have their own small genome. Despite uncertainty about how a mitochondrial DNA mutation could lead to such diverse symptoms, hypertension expert Theodore Kurtz of the University of California, San Francisco, says that the finding “could be of tremendous importance.” Previously, few cardiologists looked to the mitochondria for insight into hypertension and other cardiovascular risk factors, but this, he says, “could shift the interest dramatically.”

    The new discovery grew out of the examination of a female patient who was suffering from low blood magnesium. Lifton and his colleagues had previously discovered a handful of genes that, when mutated, cause this blood condition, which is characterized by general malaise and weakness. In the course of their conversations, the woman mentioned that several of her relatives also suffered from low blood magnesium. What's more, her family was a gene hunter's dream. It “was extremely large and all lived close to one another,” Lifton says.

    Seat of the problem?

    Malfunction of the mitochondria (blue) may underlie problems of aging such as hypertension and high blood cholesterol.


    Further investigation turned up 142 relatives, many of whom had low magnesium, hypertension, elevated blood cholesterol concentrations, or some combination of those problems. Even more intriguing, in all cases the traits had been inherited from the individuals' mothers—a clear indication that the gene at fault was located in the mitochondrial genome. The genes that Lifton had previously linked to low blood magnesium were all nuclear.

    The mitochondrial location of the new gene mutation was a big advantage because that genome consists of only about 16,000 base pairs, as opposed to the 3 billion in the nuclear genome. Analysis of the mitochondrial genomes of family members turned up one mutation found only in affected members and not detected in any of the thousands of mitochondrial genomes previously sequenced. This mutation altered one base—a thymidine was changed to a cytosine—in the gene for a mitochondrial transfer RNA (tRNA), which carries amino acids to the ribosome for protein synthesis.

    Virtually all tRNAs have a thymidine at that spot, implying that it's essential for the molecule's function, so the swap likely disrupts the tRNA's structure and interferes with protein synthesis in the mitochondria. “The [thymidine] is extremely conserved,” says Carlos Moraes, an expert on mitochondrial genetics at the University of Miami, Florida. “That does indicate that the mutation could cause some kind of problem.”

    Moraes adds that he's surprised that people with the mutation don't suffer even more serious problems. Previous mutations found in mitochondrial tRNA genes have caused, among other things, muscle and nerve degeneration, although the extent of the damage can vary.

    A key question now is how the mutation produces hypertension and the other symptoms, which seem to be independent of one another. The low blood magnesium levels, which appear even in children, might be due to failure of the kidney to remove the mineral from the urine before it's excreted—a process that requires a great deal of energy.

    In contrast, blood pressure and cholesterol concentrations were normal in young individuals but began increasing at about age 30. That suggests additional factors come into play with age. These might be environmental—say, a high-fat diet—or related to the declining mitochondrial function that some researchers think contributes to aging.

    Another crucial unknown is whether mitochondrial dysfunction contributes to metabolic syndrome in the general population. Kurtz thinks it might. “The implications are much greater than this finding in one family,” he predicts.


    Researchers Build Quantum Info Bank By Writing on the Clouds

    1. Charles Seife

    An information theorist doesn't care whether a quantum bit is stored on particles of matter or particles of light. But to an experimentalist, it's a very big deal where your qubit is stored because slow but long-lived chunks of matter have very different properties from those of quick but evanescent photons. Now, two physicists have shown how they can reliably transfer quantum information from matter to light. The procedure may soon enable scientists to exploit the advantages of both matter and light in building systems for quantum communications.

    “It's a breakthrough that is needed,” says Klaus Mølmer, a physicist at the University of Aarhus in Denmark. “It's a bridge between traveling qubits—light—and stationary ones.”

    For years, physicists have been excited about using the properties of quantum theory in computing and communications. In theory, a quantum computer could solve certain problems (such as cracking a code) much faster than a classical computer can; a communications system built upon quantum-mechanical particles would be functionally immune from eavesdropping.

    But quantum computers and quantum communications depend on having information stored on quantum objects like atoms and photons rather than classical ones like hunks of silicon—and quantum objects are hard to handle. Quantum bits stored on particles of light travel well—they can zip for kilometers down a fiber-optic cable—but they are tricky to store. Quantum bits stored on matter “keep” for milliseconds or longer, but they're usually confined to a trap and can't be transmitted from place to place.

    Bright bits.

    By routing laser beams through wisps of gas, physicists shuttled information between light and matter.


    In this issue (p. 663), Alexei Kuzmich and Dmitri Matsukevich of the Georgia Institute of Technology in Atlanta describe how they store a quantum bit in a cloud of rubidium atoms and induce the cloud to inscribe that information, undamaged, upon a photon. The researchers start with two clouds of rubidium gas. By shooting a laser through both clouds simultaneously, they force the clouds to emit a single photon that is quantum-mechanically entangled with both clouds. (Quantum indeterminacy and the experimental setup make it impossible to say the photon came from one cloud or the other.) The entanglement links the fates of the photon and the clouds; tweaking the photon's polarization alters the quantum state of the clouds. By manipulating the photon, the physicists can inscribe a quantum bit upon the two clouds.

    A few hundred nanoseconds later, the researchers read out the information by shining another laser upon the rubidium samples. The laser induces the clouds to emit another photon—a photon whose polarization contains the information that the researchers had inscribed upon the cloud. That laser-driven retrieval transfers quantum information from matter to light, Kuzmich says.

    Although the procedure is beset by losses due, in part, to the inefficiency of rubidium clouds' absorption of laser light, Kuzmich believes that the method will lead to useful tools for quantum communication. “To be honest, I think that this will be a practical device eventually,” he says. Indeed, he and Matsukevich are hard at work trying to hook up two of the matter-light devices to create a quantum repeater—an amplifier that reverses the inevitable loss of signal strength that occurs when light is sent down a long stretch of fiber-optic cable. Such repeaters would be essential for long-distance quantum communication.

    “I firmly believe that these guys will do it,” says Mølmer. If they do, he says, quantum communications and large-scale quantum computation will be considerably closer than before. “Everyone will benefit.”


    Male Sweep of New Award Raises Questions of Bias

    1. Jeffrey Mervis

    Where are the women? That's what some scientists are asking after the National Institutes of Health (NIH) picked nine men to receive the inaugural Director's Pioneer Award for innovative research (Science, 8 October, p. 220).

    The 5-year, $500,000-a-year awards are part of NIH's “roadmap” for increasing the payoff from the agency's $28 billion budget, and Director Elias Zerhouni has compared the winners to famed U.S. explorers Meriwether Lewis and William Clark for their willingness “to explore uncharted territory.” Within hours of the 29 September announcement, however, some researchers had begun to bristle at the gender imbalance in that first class of biomedical pioneers.

    “It sends a message to women researchers that they are not on an even playing field,” wrote Elizabeth Ivey, president of the Association for Women in Science, in a 1 October letter to Zerhouni. “I hope that you [will] make an effort to correct such a perception.” The American Society for Cell Biology, in a 15 October letter to Zerhouni, commended him for creating the prize but lamented its “demoralizing effect” on the community. Critics noted that men constituted 94% (60 of 64) of the reviewers tapped to help winnow down some 1300 applications for the award and seven of the eight outside scientists on the final review panel, which grilled applicants for an hour before settling on the winners.

    NIH officials estimate that women made up about 20% of the Pioneer applicants. But only about 13% of the 240 who made it through the first cut were women, and only two of the 21 finalists. (In contrast, about 25% of the applicants for NIH's bread-and-butter R01 awards to individual investigators are women, and their success rate is within a percentage point of that of their male counterparts.) “With any elite award, there are so many deserving candidates that it's easy to choose only men,” says Stanford University neuroscientist Ben Barres, who says he was “outraged” by the gender imbalance. “I actually think it's more a matter of neglect than of sexism.”

    Men at work.

    Nine men won the first NIH Director's Pioneer Awards, chosen by panels that included few women.


    The gender of the final applicants did not come up during the discussion, says review panel member Judith Swain, professor of medicine at Stanford. Swain, who called the exercise “the most interesting review panel I've ever been involved in,” says she saw no evidence of “active discrimination.” But she concurs that the demographics of the reviewers and the winners lead to “a disturbing observation.”

    NIH officials are struggling to find the best way to respond to the charges of gender insensitivity. Stephen Straus, head of the National Center for Complementary and Alternative Medicine and team leader for the NIH-wide competition, told Science on the day of the awards that “we gave the gender issue a great deal of thought, but none of the women finalists came close to making the pay line.” A week later, in the first of a series of e-mail exchanges with Barres, Straus remarked that the absence of women was “noted with some surprise” by senior NIH officials and that “we know we can do better” in subsequent rounds. In a later exchange, however, Straus wrote, “I don't believe that NIH can credibly discard its two-level peer review system when nine grants out of the many thousands awarded this year turn out differently than some might wish.”

    NIH is evaluating how it ran the Pioneer program—including how the award was publicized and the demographics of the applicants—before launching the next competition in January. A thorough review is essential, says Arthur Kleinman, a medical anthropologist at Harvard University and chair of the final review panel, who believes NIH needs to do more to reach several groups—minorities and social and behavioral scientists as well as women—not represented in the first batch of winners. “I agree that they need to be more sensitive to diversity,” he says. “But at the same time, I think Zerhouni deserves a lot of credit for even trying something like this.”


    End of Cost Sharing Could Boost Competition

    1. Jeffrey Mervis

    The National Science Foundation (NSF) is leveling the playing field for grant seekers by removing a mandatory cost-sharing requirement for some programs. But the move is expected to result in a more crowded playing field, too, as institutions that couldn't afford the entry fee can now apply.

    Last week the agency's policymaking body, the National Science Board, voted to eliminate cost-sharing rules that, in 2001, affected roughly one-sixth of NSF's 20,000 grants—in particular large centers and major instrumentation programs—and added $534 million to the agency's own spending on research. In some cases, universities had to come up with funding that matched the size of the award.

    The requirements were seen as a way to stretch tax dollars and ensure that an institution's leaders were committed to a proposal, but university officials have long viewed them as a hidden tax on federally funded research. Two years ago, the board told NSF that the amount a university promises to contribute should not affect decisions on which proposals get funded. The new rule goes one step further, by banning mandatory cost-sharing entirely except for a statutory 1% fee.

    “NSF's current policy represented an unfair burden on some institutions that couldn't afford to enter the competition,” says Mark Wrighton, chancellor of Washington University in St. Louis, Missouri, and chair of the board committee that recommended the change. “This will give schools greater flexibility to invest their research dollars.” The board also asked for a report in 2 years on any unintended consequences of the new policy.

    NSF officials began cutting back on cost sharing after the board's 2002 directive, and this month a solicitation for its major research instrumentation program, written before the board's action, drops the requirement. But there's a quid pro quo: Institutions must pick up the full cost of maintenance and operations when a project starts rather than after the grant expires. Even so, Dragana Brzakovic, who manages the instrumentation program, expects success rates to drop from nearly 40% to about 30% as more institutions compete for the awards.


    Policing the Immune System

    1. Ingrid Wickelgren

    In hopes of finding new remedies for ills including cancer and diabetes, scientists are following a band of elusive immune-cell cops whose existence was once hotly debated

    From blood samples, pediatric immunologist Hans Ochs diagnosed five infant boys who all had the same devastating problems. Their immune systems had gone haywire, attacking their gastrointestinal tracts within a few weeks of birth and causing severe, intractable diarrhea. Wayward immune cells also laid siege to each boy's pancreas, producing diabetes around 3 months of age. Within months of birth, the immune onslaught had left several of the infants depleted of red and white blood cells.

    No cure exists for this rare and frequently deadly immune disease dubbed immune dysregulation, polyendocrinopathy, enteropathy, X-linked syndrome (IPEX). But thanks in large part to recent work by Ochs at the University of Washington School of Medicine in Seattle and his colleagues, its cause in most cases is now known. A genetic defect severely impairs, if not abolishes, the body's ability to produce regulatory T cells, a mysterious class of immune cells apparently designed to squelch dangerous immune responses.

    Once dismissed as artifacts of misguided research, regulatory T cells—originally called suppressor T cells—are now white hot among immunologists, thanks to a body of research that over the past 8 years clarified their existence. Hundreds of researchers are flocking to the field, which could lead to novel treatments for immune disorders far more common than IPEX, such as type I diabetes, multiple sclerosis, graft-versus-host disease, and allergy. Studying regulatory T cells may also provide clues to the treatment of cancer; the cells actually seem to protect tumors against immune attack.

    “All anyone is talking about these days is regulatory T cells,” says Ethan Shevach, a cellular immunologist at the National Institute of Allergy and Infectious Diseases (NIAID) in Bethesda, Maryland. Nevertheless, fundamental mysteries remain about this newfound class of immune cells. Their mechanism of action is almost totally opaque. Also unclear is to what extent they play roles in more ordinary human autoimmune diseases that develop later in life, such as diabetes and multiple sclerosis. “Regulatory T cell research is very intriguing but is not yet ready for mass consumption,” Ochs says. “There are still a lot of puzzles.”


    Scientists have finally nabbed the elusive regulatory T cell.


    Even so, many researchers are optimistic that studying regulatory T cells will lead to new therapies. Drugs that seem to target these cells are already being tested in people with cancer and diabetes, and pharmaceutical companies are trying to develop drugs that augment or suppress regulatory T cells for other disorders. “Within 5 years, some clinical application of these cells will be here,” predicts Shimon Sakaguchi, an immunologist at Kyoto University in Japan and a pioneer in the field.

    From fantasy to reality

    The human body makes several types of T cells, including killer T cells, which eradicate infected cells, and helper T cells, which arouse killer T cells and various other immune cells to fight invaders. And for decades, researchers have kicked around the idea that the body also makes a class of T cells that act like a police department's internal affairs unit, keeping tabs on the immune system's cellular cops and cracking down on them if they threaten to spiral out of control. In the early 1970s, the late Yale immunologist Richard Gershon formally proposed the existence of these suppressor cells to explain a form of immune tolerance he observed in a mouse. The idea caught the fancy of immunologists, and numerous teams rushed to identify and characterize these cells.

    The entire concept of suppressor T cells fell into disrepute, however, when no one could verify reports of molecules that supposedly characterized the cells. “It was a kind of fantasy not supported by modern genetics and biochemistry,” recalls immunologist Alexander Rudensky of the University of Washington, Seattle.

    By the mid-1980s, virtually everybody had abandoned the idea of suppressor T cells except Sakaguchi, who continued his quest for the elusive cells. Extending earlier work, Sakaguchi and his colleagues showed, in the 1980s, that removing the thymus of a mouse on day 3 of its life—which depletes the animal of most of its T cells—causes various autoimmune diseases to develop. Inoculating the thymus-free mice with a mixed population of T cells from another mouse prevents those diseases, the researchers found.

    Sakaguchi felt that the autoimmune diseases resulted from a deficit in putative suppressor T cells that were made in the thymus on or after day 3; their absence left unchecked any T cells that had developed earlier. But critics contended that infections could have triggered the autoimmune reaction. Without pinpointing the suppressors, the Kyoto team could not prove its case.

    Finally, in 1995, Sakaguchi and his colleagues reported that they had identified suppressor T cells by the presence on them of a newly identified cell surface protein called CD25 as well as the more ubiquitous surface protein CD4. When they infused a batch of T cells devoid of ones with these markers into mice that lacked their own T cells, the mice developed autoimmune disease. But if they infused the suppressors along with the other T cells, no autoimmune disease appeared. These experiments convincingly showed that a small, specific population of T cells worked to dampen autoimmune reactions.

    Virtually no immunologists read the paper, however, because hardly anybody was interested in suppressor T cells anymore. But NIAID's Shevach did. He was so struck by the finding that he rushed to repeat it in his own laboratory—and succeeded. “I had a religious conversion to believe in regulatory T cells,” he says. That brought others into the fold, as Shevach had been an outspoken skeptic of the idea. Rudensky credits Sakaguchi's 1995 paper as the turning point: “The field took off.”

    In 1998, Shevach's and Sakaguchi's groups independently developed cell culture systems that enabled others to study the suppressive activity of the rodent cells in dishes. In 2001, several research teams, including Don Mason and Fiona Powrie's at the University of Oxford, plucked out CD4+ CD25+ cells in human blood and determined that they halted the proliferation of other T cells, showing that the rodent data had some relevance to humans.

    And last year, the cells were shown to underlie human disease. Three teams of investigators reported that Foxp3, the protein that Ochs and others found to be missing or defective in IPEX patients in 2001, is specifically expressed in regulatory T cells and is essential to their development.

    Mice engineered with a defective Foxp3 gene have a deficit in CD4+ CD25+ T cells and suffer from an IPEX-like disease called scurfy, which can be blocked by an infusion of regulatory T cells at 1 day of age, Rudensky and his colleagues revealed in Nature Immunology. In the same journal, a team led by Fred Ramsdell, formerly at Celltech Research & Development in Bothell, Washington, reported that the expression of Foxp3 by T cells in mice correlates with their ability to suppress immune responses. And transferring Foxp3 into naïve T cells converts them into regulatory cells, the Sakaguchi group showed the same year (Science, 14 February 2003, p. 1057). Together, the studies indicated that a deficiency of regulatory T cells in humans can lead to severe immune dysfunction.

    Fighting asthma.

    Compared to the clear lungs of a normal mouse (left), the lungs of an egg-white-allergic mouse become inflamed when exposed to the allergen (middle). Infusing such a mouse with regulatory T cells before exposure blocks the inflammation (right).


    How suppressor T cells do their job remains a mystery, however. In the test tube, natural regulatory T cells—the type that express CD25 and are made during immune system development—seem to suppress other T cells through direct contact. In living animals, they also may release anti-inflammatory cytokines such as interleukin-10 or transforming growth factor β. So-called adaptive regulatory T cells, which become regulatory only after being stimulated by an antigen and respond only to immune cells targeting that antigen, seem to exert their influence solely by means of cytokines. If that's not complicated enough, Shevach recently reported that regulatory T cells may directly kill the B cells that generate antibodies (Science, 6 August, p. 772).

    Good cop, bad cop

    Despite basic gaps in their understanding of regulatory T cells, researchers are tracking down potential roles for the cells in human disease. David Hafler and his team at Harvard Medical School in Boston reported in April in the Journal of Experimental Medicine that patients with multiple sclerosis seem to have defective regulatory T cells.

    Impotent regulatory T cells may also play a role in allergy and asthma. Allergist Douglas Robinson of Imperial College London and his colleagues isolated natural regulatory T cells from the blood of people with and without allergies. They then exposed the remaining T cells to an allergen. In all of the samples, the allergen (from grass pollen) triggered T cell proliferation and a release of inflammatory molecules, or cytokines, from immune cells. Adding back regulatory T cells from the nonallergic people completely suppressed this inflammatory response, whereas the regulatory T cells from the allergic individuals were far less effective in doing so. The suppression was weaker still from regulatory T cells from patients with hay fever during the pollen season, the researchers reported in February in The Lancet.

    “One implication is that people who get allergic disease do so because their regulatory T cells don't respond,” Robinson says. Boosting the response of these cells, he adds, might help prevent or treat their disease. Boosting the adaptive class of regulatory T cells may also be important. Two years ago, for example, Stanford's Dale Umetsu and his colleagues identified adaptive regulatory T cells that protect against asthma and also inhibit allergic airway inflammation in mice.

    Although regulatory T cells seem to be protective in autoimmune disorders and allergy, they may have a darker side. In the March issue of Immunity, a team led by immunologist Kim Hasenkrug of NIAID's Rocky Mountain Laboratories in Hamilton, Montana, suggests that some viruses, such as those that cause hepatitis and AIDS, may exploit regulatory T cells to dampen the body's antiviral response and allow chronic infections.

    Similarly, regulatory T cells may protect tumors from immune attack. Researchers have shown, for example, that removing such T cells from a cancer-afflicted mouse can cause the rodent to reject a tumor. High levels of regulatory T cells have also been found in samples from several types of human tumors. More recently, tumor immunologist Weiping Zou of Tulane University Health Science Center in New Orleans, Louisiana, and his colleagues linked the quantity of regulatory T cells associated with a tumor to disease severity in cancer patients.

    “Within 5 years, some clinical application of these cells will be here.” —Shimon Sakaguchi, an immunologist at Kyoto University in Japan


    Zou's team isolated and counted the T cells in tumor tissue from 104 ovarian cancer patients and noted that the higher the ratio of regulatory T cells to total T cells in the tumor, the farther the cancer had progressed. Regulatory T cells were also associated with a higher risk of death: The more tumor-associated regulatory T cells, the worse the prognosis. Zou and his colleagues further showed that the regulatory cells recovered from tumor tissue protected tumors in a mouse model of ovarian cancer by inhibiting both the proliferation and potency of tumor-attacking T cells.

    Zou's team also discovered that, as the cancer progressed, a patient's regulatory T cells appeared to migrate progressively away from their normal home in the lymph nodes to the tumor. The investigators determined that tumor cells secrete a chemical, dubbed CCL22, that attracts regulatory T cells. Blocking CCL22 with an antibody stopped regulatory T cells from migrating to the tumor in the mouse model, the team reported in the September 2004 issue of Nature Medicine.

    By extension, disarming these regulatory cells or preventing their migration to the tumor could leave the tumor vulnerable to immune destruction. The Tulane researchers and others are testing a regulatory T cell-killer called Ontak in advanced cancer patients. The drug binds to CD25 and kills the cells with diphtheria toxin. The results so far, Zou says, are “encouraging.”

    Immunologist Steven Rosenberg of the National Cancer Institute in Bethesda has tested another regulatory T cell-blocker in patients with metastatic melanoma. The treatment—an antibody to an essential molecule on the surface of regulatory T cells, called cytotoxic T lymphocyte-associated antigen 4—induced cancer remission in three of 14 treated patients who had end-stage cancer, Rosenberg's team reported last year. More than 2 years later, the patients are still in remission, and Rosenberg's team has now seen a similar remission rate among almost 100 patients.

    “The fact that inhibiting regulatory T cells enabled three patients to undergo cancer regression was very strong evidence that regulatory T cells are inhibiting the immune response against tumors,” says Rosenberg. “This is the first time that getting rid of this brake on the immune system has been shown to have any impact in humans.”

    The study also suggests how tricky it may be to interfere safely with the regulatory T cell system, however. Six of the initial 14 cancer patients, including the three who went into remission, developed autoimmune diseases affecting the intestines, liver, skin, or pituitary gland, although these were all reversible with short-term steroid treatment.

    Expansion plans

    Even in disorders such as type I diabetes, in which regulatory T cells have not been consistently shown to be abnormal in function or number, researchers are exploring them as potential therapy. “Traditionally, immunotherapy is designed to block effector cells or their activities. Now there is the entirely new possibility that we could treat the disease by expanding suppressors,” says Ralph Steinman, an immunologist at Rockefeller University in New York City.

    In June, Steinman's team and, separately, a team led by Jeffrey Bluestone at the University of California, San Francisco (UCSF), reported mouse studies in the Journal of Experimental Medicine that illustrated how this might work. Both research teams plucked out natural regulatory T cells from diabetes-prone mice that made only one type of T cell, one that responds to an antigen on the islet cells of the pancreas. Each team then used different methods to expand the mouse regulatory T cells in lab dishes and found that they could prevent or reverse diabetes when infused into other diabetes-prone mice.

    Tumor bodyguards.

    Regulatory T cells (red and green) interact with tumor-killing T cells (blue) in ovarian tumor tissue.


    A diabetes treatment that is thought to boost T cell regulation has already reached human trials. The treatment is an antibody to CD3, a cell-surface protein tightly associated with the T cell receptor. The antibody was first found to induce long-term remission of diabetes in mice a decade ago. That surprising result contradicted the idea that the CD3 antibody—which was then used to treat organ rejection—worked by inactivating destructive T cells, because the treatment's effects far outlasted the depletion of activated T cells.

    Last year, immunologists Jean-François Bach and Lucienne Chatenoud and their colleagues at Necker Hospital in Paris, along with UCSF's Bluestone, reported in Nature Medicine that the antibody appeared to activate natural regulatory T cells in mice. When diabetes-prone mice were treated with the antibody a month after diabetes onset, they became nondiabetic. But if the mice were also treated with drugs that block regulatory T cells, the diabetes remained. “It's a nice story indicating that, in the mouse, immunoregulation explains the long-term effect of the antibody,” Bach says.

    After initial tests of this antibody approach proved encouraging in a small number of diabetics, Kevan Herold, an endocrinologist at Columbia University School of Medicine in New York City, and his colleagues recently launched a six-center trial of the therapy in 81 diabetic patients. Meanwhile, Chatenoud and her colleagues are about to unveil the results of a multicenter, 80-patient, placebo-controlled trial of the CD3-targeting antibody conducted in Belgium and Germany.

    Boosting regulatory T cell activity might someday also induce drug-free immune tolerance to donor organs. In July, Sakaguchi's team reported removing natural regulatory T cells from a normal mouse and expanding them in cell culture with interleukin-2, a growth promoter, along with an antigen from a donor mouse of a different strain. This generated a population of antigen-specific regulatory cells, which they then infused into so-called nude mice, which lack T cells. The regulatory cell infusion enabled those rodents to accept skin grafts from the donor strain even though they were simultaneously infused with killer and helper T cells. By contrast, nude mice that received only killer and helper T cells—but no regulatory cells—quickly rejected the grafts. “With just a one-time injection of regulatory T cells, we can induce graft-specific tolerance without drugs,” Sakaguchi says.

    In cases in which organ donors—such as living donors—are known in advance, Sakaguchi envisions generating antigen-specific regulatory T cells prior to transplantation of human organs. If the therapy works, he says, it could replace the use of immunosuppressive drugs, which come with a significant risk of infection and cancer.

    A boost from bugs

    The growing understanding of regulatory T cells may eventually shed some light on an immunology-based theory called the hygiene hypothesis. According to this controversial idea, the rise in allergic disorders in recent decades in developed countries results from those countries' increasing cleanliness, which reduces children's exposure to protective microbes. A number of researchers have shown that exposure to parasitic worms called helminths may protect against allergy and asthma, among other immune disorders, largely through the induction of regulatory T cells (Science, 9 July, p. 170).

    Some strains of bacteria have also been shown to be protective—and again regulatory T cells may be involved. Immunologist Christoph Walker of the Novartis Institutes for Biomedical Research in Horsham, U.K., and his colleagues demonstrated that treating mice with killed Mycobacterium vaccae before sensitizing them to egg-white allergen significantly reduced the rodents' inflammatory responses to the allergen, as compared to mice that did not receive the bacteria. Regulatory T cells isolated from bacteria-treated mice could transfer the protection to untreated mice sensitized to the same allergen, demonstrating that the cells mediated the bacteria's protective effects, the team reported in 2002.

    The suppressive response was allergen-specific: The regulatory T cells generated in the egg white-sensitized mice could not dampen the response to cockroach allergen in mice made allergic to cockroaches. “Regulatory T cells generated by mycobacteria treatment may have an essential role in restoring the balance of the immune system to prevent and treat allergic diseases,” the authors wrote in Nature Medicine. Walker's team is now trying to mimic the bacteria's effects with a chemical that stimulates the same receptors on regulatory T cells that the bacteria stimulate.

    But some researchers note that opportunities for rational drug design may be limited by the paucity of knowledge about how regulatory T cells suppress their immune system colleagues. Says Shevach: “We won't know how to enhance the response until we know what it is.”

    Nevertheless, he, Sakaguchi, and others have succeeded in a vital first step. They've at long last convinced fellow immunologists that regulatory T cells exist and are important. “So many people are working on regulatory T cells,” Sakaguchi says. “It's been a pleasant surprise.”


    A Few Good Climate Shifters

    1. Richard A. Kerr

    Meteorologists probing a dauntingly complex atmosphere have found patterns of natural climate change that offer hope for making regional climate predictions

    Look more than a week or two into the future, and none of the atmosphere's detailed behavior—the weather—can be predicted. Climate, on the other hand, is less capricious. All of northern Europe, for example, warms or cools for years or decades at a time. The shifts in atmospheric circulation behind such relative climatic stability once seemed likely to provide a way to predict patterns of regional climate change around the world. But a profusion of patterns discerned by applying different analytical techniques to different data sets—think dozens of blind men and an elephant—soon threatened to swamp the promising field.

    Researchers are now managing to stem the tide. “When I was a graduate student in the late '80s, there were a zillion” patterns of variability, says climate modeler John Fyfe of the University of Victoria, British Columbia. “Of those, only a handful have survived.” Three new studies show that almost all proposed patterns fit into one of three or four globally prominent patterns: the El Niño pattern tied to the tropical Pacific Ocean; two great rings of climate variability, each circling a pole at high latitudes; and a last pattern across much of the northern mid-latitudes. “At one point, it looked like there might be an infinite number” of patterns, says meteorologist Kevin Trenberth of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. “Well, actually, there are relatively few.”

    This emerging simplification of meteorologists' view of atmospheric dynamics “provides a basis to move forward on regional climate change,” says Trenberth. That ability will become increasingly important as the greenhouse intensifies: Policymakers want to know what's going to happen regionally, not just on the global average.


    When Siberian permafrost thaws, buildings can lose their footing and slowly crumble.


    Until recently, natural climate variations were beginning to look as complex and indecipherable as next month's weather. “The climate dynamics literature abounds with patterns of variability,” note meteorologists Roberta Quadrelli and Michael Wallace of the University of Washington, Seattle, in their October Journal of Climate paper. The Northern Hemisphere alone has had at least 17 patterns proposed for it, variously termed teleconnection patterns, oscillations, clusters, seesaws, circulation regimes, and modes. Quadrelli and Wallace believe they've narrowed it down to two “principal patterns of variability,” or more informally, modes.

    Quadrelli and Wallace tried to get as comprehensive a feel for the elephant as they could. They considered the entire Northern Hemisphere outside the tropics during the 4 months of winter, when atmospheric circulation is strongest. They analyzed the longest, most thoroughly vetted data set available, which runs from 1958 to 1999. They started with atmospheric pressure at sea level, but also included a measure of atmospheric pressure up through the lower atmosphere as well as surface air temperature. And they employed a statistical method that is widely used to identify the most common patterns of atmospheric behavior.
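
    The "widely used statistical method" for pulling dominant patterns out of a gridded field is commonly known as empirical orthogonal function (EOF) analysis, which is mathematically equivalent to principal component analysis. A minimal sketch of the idea on invented data (the grid, amplitudes, and noise levels here are all hypothetical, not the reanalysis fields the authors used):

```python
import numpy as np

# Toy EOF (principal component) analysis of the kind applied to winter
# sea-level pressure. All numbers here are invented for illustration.
rng = np.random.default_rng(0)

n_years, n_grid = 42, 200          # 42 winters, 200 grid points
pattern1 = np.sin(np.linspace(0, np.pi, n_grid))      # a seesaw-like mode
pattern2 = np.cos(np.linspace(0, 3 * np.pi, n_grid))  # a wavier second mode

amp1 = rng.normal(0, 3.0, n_years)   # year-to-year amplitude of mode 1
amp2 = rng.normal(0, 1.5, n_years)   # weaker second mode
noise = rng.normal(0, 0.5, (n_years, n_grid))

field = np.outer(amp1, pattern1) + np.outer(amp2, pattern2) + noise
anom = field - field.mean(axis=0)    # remove the climatological mean

# The SVD of the anomaly matrix yields the EOFs (spatial patterns) and
# their principal-component time series in one step; squared singular
# values give the fraction of variance each pattern explains.
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)

print(f"variance explained by first two EOFs: {explained[:2].sum():.2f}")
```

    In this synthetic case the first two EOFs recover nearly all the variance, mirroring the finding that a couple of hemispheric modes account for most of the broad trends in the real pressure record.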

    The results were just two patterns in the Northern Hemisphere. One is the previously recognized Arctic Oscillation, now termed the Northern Annular Mode (NAM). The second is a pattern strongly resembling the long-established Pacific-North American (PNA) pattern. The NAM is an erratic pressure seesaw that raises pressures alternately over the pole and in a ring passing over southern Alaska and central Europe (Science, 9 April 1999, p. 241). These pressure shifts in turn weaken and strengthen the westerly winds there. The fluctuations in the NAM can favor cold outbreaks down through Canada into the lower 48 states, for example.

    The PNA is a pattern of alternating centers of high and low pressure arcing across the North Pacific and North America; in part, it is set up by the Tibetan Plateau and Rocky Mountains jutting into the atmosphere's westerly flow. Its oscillations can shift warmth into Alaska and cool, wet weather into the southeastern United States. These two patterns subsume many of the previously proposed leading modes of the Northern Hemisphere, say Quadrelli and Wallace, including the North Pacific Index, the cold ocean-warm land pattern, the Aleutian-Icelandic seesaw, and the North Atlantic Oscillation.

    Change in the air.

    Wind shifts—perhaps induced by rising greenhouse gases—brought 40 years of warming (oranges) and cooling (blues) to high latitudes.


    Between them, the two modes account for about half the variability of sea level pressure from year to year and on longer time scales, Quadrelli and Wallace find. Half is a lot for meteorologists, who eagerly pursue anything accounting for 10% or more of atmospheric behavior. But the two modes by themselves explain “virtually all” of the broad trends over the 42-year period, they say. There's no need for any others over the long haul.

    In another upcoming paper in the Journal of Climate, meteorologists Monika Rauthe of the Institute for Atmospheric Physics in Kühlungsborn, Germany, and Heiko Paeth of the University of Bonn, Germany, report that these two major modes of pressure variation in turn account for much of the regional variability of winter temperature. Shifting pressure patterns produce wind shifts that pick up heat from the oceans and carry it to new regions. Rauthe and Paeth find that their rendition of the NAM and PNA accounts for 30% to 75% of temperature variations from year to year within swaths of the Northern Hemisphere that are each several thousand kilometers across. The regions include northwestern North America, northern Europe, and north-central Siberia. Precipitation can vary just as much, but over fewer and smaller regions.

    South of northern mid-latitudes, Trenberth, David Stepaniak, and Lesley Smith of NCAR find only two more modes, they will report in Journal of Climate. In their analysis of global, year-round variations of atmospheric mass—a more fundamental measure than pressure—there is no true Southern Hemisphere equivalent of the PNA. The Southern Hemisphere has nothing quite like the towering Tibetan Plateau and Rocky Mountains to create such a pattern. And the Southern Annular Mode presents a far more continual and symmetrical ring than its northern sibling, thanks to a dearth of those disruptive influences that distort the NAM.

    And then there is the tropical Pacific's El Niño. Meteorologists call it the El Niño-Southern Oscillation (ENSO) to include the interaction of atmosphere and ocean that produces cyclic ocean warming (El Niño) and cooling (La Niña) as well as the atmospheric circulation changes that accompany them. Trenberth and his colleagues find that ENSO dominates year-to-year variability in the tropics and mid-latitudes around the globe. It even seems to take such a strong hand in North Pacific variability that—at least on year-to-year and longer time scales—it dominates Quadrelli and Wallace's PNA-like pattern. Over years, decades, and presumably centuries, that would make ENSO and the two annular modes the rulers of the climate change roost. “A very large fraction of large-scale atmospheric variation can be explained by a few basic patterns,” says meteorologist Timothy Palmer of the European Center for Medium-Range Weather Forecasts in Reading, U.K.

    Climate researchers would now like to use these basic patterns to help predict how regional climate will change under the intensifying greenhouse. In their paper, Rauthe and Paeth report that hot spots of particularly intense warming seen in climate model simulations of greenhouse warming are due in large part to the circulation changes of modes. If the greenhouse caused modes to shift their behavior—spending more time at one extreme of a pressure seesaw than the other—that wouldn't by itself amplify global warming. However, the resulting circulation changes might redistribute heat, intensifying warming in some places and moderating it elsewhere, or it could shift storm tracks and redistribute precipitation.

    Rauthe and Paeth find that in an ensemble of seven models, changes in the intensity of modes under rising greenhouse gases account for almost 60% of temperature changes over Northern Hemisphere land. Northwestern North America and northern Siberia would experience the most added warming, whereas northern Africa, the southeastern United States, and far northeastern Canada/western Greenland would warm less than the global average. Smaller regions would see precipitation changes, notably enhanced drying across southern Europe. Given the apparent utility of modes, “there will be more pressure on modelers to look at things from this standpoint,” says Trenberth. “This is the wave of the future.”


    NIMH Takes a New Tack, Upsetting Behavioral Researchers

    1. Constance Holden

    Basic behavioral scientists are feeling the squeeze as Thomas Insel makes a top priority of “translational” research

    When Thomas Insel took over as head of the National Institute of Mental Health (NIMH) in November 2002, he was seen as a reassuring choice to succeed psychiatrist Steven Hyman, who beefed up basic science and promoted large-scale clinical trials. A former NIMH researcher, Insel had sterling credentials, with groundbreaking work on the neurobiology of attachment in voles. But reassuring demeanor aside, he's now rocking the NIMH boat in a way that has some basic researchers sending out SOS signals.

    Early this month Insel put into effect a reorganization, in the works for the past 6 months, intended to move the institute closer to the front lines in battling mental illness through “translational” research—in other words, bringing the fruits of new knowledge to people with disorders such as depression and schizophrenia. Practically, it means that the agency is lowering the priority of basic cognitive or behavioral research unless it has a strong disease component.

    The original Basic Behavioral and Social Science Branch has been broken up: Research that can be tied to brain science is in a new Behavioral Science and Integrative Neuroscience branch. And some nonbiological research—including studies in cognitive science and social psychology—has been parceled out to NIMH divisions with clinical portfolios. But the welcome mat is no longer out for grant applications in some areas of personality, social psychology, animal behavior, theoretical modeling, language, and perception. When Mark Seidenberg of the University of Wisconsin, Madison, applied for funds to continue his research on models of language learning, he says, “I was told the agency no longer supported basic research on language.”

    NIMH has traditionally been the federal agency that supports such research. Alan Kraut of the American Psychological Society (APS) guesses that perhaps $400 million of the institute's $1.4 billion budget is devoted to it. But with budget growth slowing, Insel says he wants to tighten the focus on NIMH's mission. “We're one of the disease-specific institutes,” he asserts, arguing that other institutes should pick up some areas of research NIMH is downgrading.

    Narrowed focus.

    Insel says others should pick up research NIMH no longer funds.


    The move is welcomed by advocates for mentally ill people. The National Alliance for the Mentally Ill (NAMI), which has long pressed for NIMH to keep its eye on major mental illness, is delighted. “It's a start in the right direction,” says former NIMH psychiatrist E. Fuller Torrey, who runs NAMI's research arm, the Stanley Foundation. “They're shifting away from studying how pigeons think.” It's “a real quantum leap,” says James McNulty, former NAMI head and a member of the NIMH advisory council, applauding the agency's transition to “an applied research institute.”

    But among researchers, “there's a lot of angst and anxiety,” says Steven Breckler, executive director for science at the American Psychological Association. Cognitive psychologist Richard Shiffrin of Indiana University, Bloomington, argues that translational research is all very well, but there is still not much to translate, and “gains produced by a few extra dollars for translational research will be far outweighed by the harm it will do to basic research.”

    One researcher whose grant NIMH failed to renew this year is Mahzarin Banaji, a Harvard University social psychologist who examines unconscious mental processes in stereotyping and discrimination. She decries the timing—when “clinical psychologists are uncovering new mental-health uses” of a scale she and her colleagues developed. The scale can aid in studying phobias or probing attitudes of people with depression, she says. Terminating support for this work, says a National Institutes of Health (NIH) official who asked not to be quoted, “means unfortunately, for a topic of grave social importance, no one in the federal government will fund it.”

    Some animal studies are also being de-emphasized. Robert Seyfarth, a well-known psychologist at the University of Pennsylvania in Philadelphia, says his grant almost wasn't renewed, and when it was, funding was drastically cut, just as his team was moving beyond basic research on social behavior in nonhuman primates to research linking social behavior and stress.

    “Some people think I'm out to kill basic behavioral science,” concedes Insel, who says that view is all wrong. Instead, he says he wants basic behavioral scientists to be more aware of the problems NIMH needs to solve. For example, he says, cognitive deficits are a major part of schizophrenia, so “if someone's on the track of an important piece of cognitive science using healthy undergraduate [subjects], we might work with them to begin to study people with schizophrenia.”

    Insel wants to redirect some behavioral research to institutes dealing with relevant issues such as child development, aging, and communication. Others agree with him that NIMH has been carrying more than its share of the burden. The larger issue, they say, is which federal agency should be taking it on. The National Science Foundation doesn't spend much on that type of research, says Kraut of APS. And although some NIH institutes have big behavioral components—for example, on how to get people to stop smoking—Kraut says, “I can't tell you how hard it has been to convey the importance of basic behavioral science in any sophisticated sense.” Kraut and several members of Congress are pushing for the National Institute of General Medical Sciences to take up the slack.

    Basic behavioral research will not be abandoned, Insel hopes. Last March, NIH director Elias Zerhouni set up a working group, chaired by sociologist Linda Waite of the University of Chicago, which is pondering the role of social and behavioral science at NIH. And Insel himself heads another group, under the White House Office of Science and Technology Policy, that is looking at investments in social, behavioral, and economic research throughout the federal government. The NIH body is scheduled to make recommendations in December; Insel's group will weigh in later.


    Hawaii Girds Itself for Arrival of West Nile Virus

    1. Erik Stokstad

    Health officials and wildlife biologists hope vigilant surveillance and rapid response will prevent infected mosquitoes from establishing a beachhead

    On 24 September, officials at the Hawaii Department of Health (DOH) got the news that they'd been dreading for several years: An island bird had tested positive for West Nile virus. Although infected birds are now routine across the continental United States, Hawaii has so far been spared. And it is fighting to stay that way. Immediately after the discovery, the health department launched an assault; all night long a truck fogged the Kahului Airport on Maui, where the bird had been caught, with insecticide. Additional crews with backpack sprayers doused off-road sites to kill any potentially infected mosquitoes.

    State officials breathed a sigh of relief the following week when the case turned out to be a false positive. But they aren't letting down their guard. Should West Nile become established on the islands, virus-ridden mosquitoes could spread the disease year round. And many of the state's remaining endemic birds, already hammered by avian malaria and pox, might go extinct. “The effects could be disastrous,” says ornithologist Peter Marra of the Smithsonian Environmental Research Center in Edgewater, Maryland.

    To avert such a catastrophe, researchers have been scrambling to improve surveillance and eradication plans. Observers on other Pacific islands, which face a similar threat, are hoping to learn from Hawaii's efforts to stamp out the virus as soon as it enters. “We're not just throwing our hands up in the air,” says epidemiologist Shokufeh Ramirez, who coordinates West Nile prevention efforts for the Hawaii DOH.

    On the mainland, West Nile virus has proved unstoppable. After first appearing on the East Coast, in New York in 1999, West Nile virus marched steadily across the country. The virus is transmitted by mosquitoes, which pass it on to birds and humans. Although infection is rarely deadly to people, it kills some bird species such as crows with a vengeance; other infected birds remain healthy enough to fly and spread the virus. Last year, it reached California.

    No barriers.

    Mosquitoes hitching a ride inside airplanes could bring West Nile virus to Hawaii, threatening honeycreepers and other native birds.


    But Hawaii has a chance, if not to keep West Nile virus out, at least to stop it upon arrival. That's because researchers know how it's likely to get there. Rather than infected humans or migratory birds, the most probable culprits are mosquitoes in the cargo holds of planes, concluded A. Marm Kilpatrick of the Consortium for Conservation Medicine at Wildlife Trust in Palisades, New York, and others in a paper published in EcoHealth in May. Based on previous research, they estimated that seven to 70 infected mosquitoes probably reach Hawaii each year. Far less is known about the risks of introduction via shipping containers, some 1200 of which arrive in Hawaiian harbors each day. The number of overseas flights—about 80 a day—also makes prevention difficult. Moreover, airlines have balked at treating their cargo holds with insecticides that kill mosquitoes on contact. The state has made progress on another front: preventing infected poultry and pet birds from entering by mail. In 2002, the U.S. Postal Service prohibited the mailing of most live birds to Hawaii. Quarantine regulations have also been strengthened.

    The health department has focused primarily on monitoring 11 airports and harbors. In 2002, they began checking dead birds by polymerase chain reaction (PCR) for West Nile virus. Last year, they added mosquito traps that are sampled each week and also examined by PCR for the virus.

    At the same time, researchers are trying to figure out just what might happen if West Nile virus manages to evade detection. “Bird biodiversity will probably be severely impacted,” says Jeff Burgett of the U.S. Fish and Wildlife Service in Honolulu, who heads an interagency task force. One reason is that Hawaii's endemic birds have not had a chance to build resistance to West Nile through exposure to related viruses, such as St. Louis encephalitis, that are not present on the islands. Those species that survive only in captive breeding programs, such as the Hawaiian crow, might never be able to return to the wild.

    As a first step to gauge the consequences, biologists with the U.S. Geological Survey (USGS) have sent 20 native Hawaiian honeycreepers (Hemignathus virens) to the survey's National Wildlife Health Center in Madison, Wisconsin. There, veterinarian Erik Hofmeister has injected some of the birds with West Nile virus and is following their health and ability to serve as reservoirs for the virus. He also plans to investigate how efficiently the primary vector in Hawaii, the mosquito Culex quinquefasciatus, can infect these birds.

    A similar experiment should help solve a problem that hampers the effort to spot the virus in dead birds. Hawaii doesn't have the North American birds—crows, magpies, jays—that provide the most obvious warning of the virus. So Hofmeister plans in December to examine which introduced birds in Hawaii, such as mynahs, might be most susceptible to the virus. This will assist efforts to model potential spread of the virus. “It will also tell you which species might be amplifying the virus, and which species you may want to control,” says ecologist Dennis LaPointe of USGS.

    While the health department waits for these results, it is trying to speed its lab testing and streamline the response plan. Meanwhile, DOH and wildlife biologists have their fingers crossed that Hawaii's defenses will be adequate to stave off the virus—forever. “Every year it's going to be knocking on Hawaii's door,” says Peter Daszak of the Consortium for Conservation Medicine at Wildlife Trust.

    Getting the Noise Out of Gene Arrays

    1. Eliot Marshall

    Thousands of papers have reported results obtained using gene arrays, which track the activity of multiple genes simultaneously. But are these results reproducible?

    When Margaret Cam began hunting for genes that are turned up or down in stressed-out pancreas cells a couple of years ago, she wasn't looking for a scientific breakthrough. She was shopping. As director of a support lab at the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), she wanted to test-drive manufactured devices called microarrays or gene arrays that measure gene expression; she had her eye on three different brands. These devices are hot, as they provide panoramic views of the genes that are active in a particular cell or tissue at a particular time. Gene array studies are increasingly being used to explore biological causes and effects and even to diagnose diseases. But array experiments are expensive, and Cam wanted to be sure that her colleagues would get high-quality, repeatable, credible results.

    She was taken aback by what she found. Not only was she unable to pick a clear winner, but she had a hard time figuring out whether any of the arrays produced trustworthy results. As she delved deeper, she found that the devices produced wildly incompatible data, largely because they were measuring different things. Although the samples she tested were all the same—RNAs from a single batch of cells—each brand identified a different set of genes as being highly up- or down-regulated.

    The disharmony appears in a striking illustration in Cam's 2003 paper in Nucleic Acids Research. It shows a Venn diagram of overlapping circles representing the number of genes that were the most or least active on each device. From a set of 185 common genes that Cam selected, only four behaved consistently on all three platforms—“very low concordance,” she said at an August forum in Washington, D.C., run by the Cambridge Healthtech Institute, based in Newton Upper Falls, Massachusetts. Using less rigorous criteria, she found about 30% agreement—but never more than 52% between two brands. “It was nowhere near what we would expect if the probes were assaying for the same genes.”
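
    The kind of comparison behind that Venn diagram is straightforward to express: take each platform's list of most-changed genes and intersect them. A toy sketch with invented gene lists (the names, list sizes, and overlap fractions below are hypothetical, not Cam's data):

```python
from itertools import combinations

# Hypothetical top "most changed" gene lists from three array platforms.
# Real comparisons start from hundreds of genes; these sets are invented.
top_genes = {
    "platform_A": {"G1", "G2", "G3", "G4", "G7", "G9"},
    "platform_B": {"G1", "G3", "G5", "G6", "G8", "G9"},
    "platform_C": {"G2", "G3", "G6", "G7", "G9", "G10"},
}

# Genes flagged consistently by all three platforms (the center of the Venn)
all_three = set.intersection(*top_genes.values())
print("consistent on all three platforms:", sorted(all_three))

# Pairwise agreement, scored here as intersection over union --
# one of several reasonable ways to quantify concordance.
for a, b in combinations(top_genes, 2):
    sa, sb = top_genes[a], top_genes[b]
    print(f"{a} vs {b}: {len(sa & sb) / len(sa | sb):.2f}")
```

    With real data, it is exactly this kind of tally—four genes out of 185 in the three-way intersection, and no pairwise agreement above 52%—that made the discordance so stark.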

    Cam's findings caused “one's jaw to drop,” says Marc Salit, a physical chemist at the National Institute of Standards and Technology (NIST). This was not the first paper to highlight such inconsistencies, but Cam's little diagram had an impact: With support from commercial array makers and academics, Salit is now coordinating an effort at NIST to understand exactly what is measured by these devices.

    A few ex-enthusiasts think the promise of gene arrays may have been oversold, especially for diagnostics. Take Richard Klausner, the former director of the National Cancer Institute (NCI) now at the Bill and Melinda Gates Foundation in Seattle, Washington. “We were naïve” to think that new hypotheses about disease would emerge spontaneously from huge files of gene-expression data, he says, or that “you could go quickly from this new technology to a clinical tool.” His own experience with arrays indicated they were too sensitive and finicky: The more data he gathered on kidney tumor cells, the less significant the results seemed.

    But those who have persevered with gene expression arrays attribute such problems to early growing pains. They claim that experienced labs are already delivering useful clinical information—such as whether a breast cancer patient is likely to require strong chemotherapy—and that new analytical methods will make it possible to combine results from different experiments and devices. Francis Barany of Cornell University's Weill Medical College in New York City insists that arrays work well—if one digs deeply into the underlying biology.

    Hot technology.

    The number of studies using microarrays to analyze genes being turned on and off in concert has exploded in the last decade.



    Digging into the biology is just what Cam did after her experiments produced reams of discordant data. She and colleagues in Marvin Gershengorn's group at NIDDK wanted to pick out a set of key genes active in pancreatic tumor cells undergoing differentiation. From there, they meant to go on to examine how islet cells develop. “We were very surprised,” she recalls, when they couldn't cross-validate results from studies done with Affymetrix, Agilent, and Amersham arrays. So she began pulling the machines apart.

    Cam soon ran into a barrier: Manufacturers weren't eager to share information about the short DNA sequence probes each kit uses to spot gene activity. Each commercial system uses a similar approach. Short bits of DNA from known genes are printed as probes on arrays. When an experimental sample is exposed to the array, RNAs made by genes cling specifically to the probes that have a complementary sequence, triggering a fluorescent signal that can be read by an optical scanner. The more RNA on a probe, the stronger the signal. The activity of thousands of genes can be tracked simultaneously this way.
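
    The hybridization principle at the heart of every platform can be sketched in a few lines: a probe detects a transcript when the probe's sequence is the reverse complement of some stretch of the target. The sequences and function names below are invented for illustration; real probe design also has to worry about splice variants and near matches, which is exactly where the platforms diverged:

```python
# Toy model of array hybridization: a probe "lights up" if it is
# complementary to part of a transcript. Sequences here are invented.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def probe_hits(probe: str, transcript: str) -> bool:
    """True if the probe is complementary to some stretch of the transcript."""
    return revcomp(probe) in transcript

transcript = "ATGGCGTACGTTAGCCTAGGCATCGATT"   # hypothetical gene product
probe = revcomp(transcript[5:17])             # probe designed against bases 5-16

print(probe_hits(probe, transcript))           # the designed probe binds
print(probe_hits("ACGTACGTACGT", transcript))  # an unrelated probe does not
```

    Two vendors' probes can both pass this test for "the same gene" yet target different exons of it—which is why, once splice variants enter the picture, their signals need not agree.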

    Although manufacturers identified which genes the probes targeted, they would not reveal the actual nucleotide sequence of each probe. This made it difficult to know exactly what the probes were picking up. Recalls Zoltan Szallasi of Harvard's Children's Hospital in Boston, “For the first 6 years researchers were actually flying blind.” That changed in 2001, he says, when the companies began sharing data.

    Cam says, “We managed to get partial sequences [of array probes] from Agilent,” along with “full sequences from Affymetrix and Amersham.” Her team analyzed the probe and gene matchup in detail and found that supposedly different probes were responding to pieces of the same gene. Targeting different parts can be a problem because genes often contain many components that can be spliced into variant RNA packages. The result, several experts say, is that probes can over- or underestimate gene activity.

    Sorting out the confusion is tough because the probes have not been designed to be specific to gene-splice variants, and no one has even created a master list of variants. Cam is encouraged that companies are beginning to make arrays specific to different splice variants. “That should reduce the ambiguity.”

    Little overlap.

    Three array systems rated the activity of 185 genes differently in one test.


    Another confounding factor, Szallasi claims, is promiscuous matches: Probes often respond not only to gene products that exactly fit the sequence but also to those that “cross-hybridize” with near matches. “Every manufacturer claims to have avoided this problem, but there must be a reason why microarray probes targeting almost the same region of a given gene give wildly different intensity signals,” he says.

    Moreover, many probes just don't correspond to annotated sequences in the public database, RefSeq, Szallasi says; removing these problematic probes significantly improves study results. But the best way to build confidence in gene array results and novel analytical methods, he argues, is to validate probe-gene matches using the more rigorous and time-consuming polymerase chain reaction methods of sequence testing. Szallasi has been doing just that as part of an effort to help collaborators at Harvard and at Baylor College of Medicine in Houston, Texas, merge their gene expression data sets. He's also been trying to get other researchers in Boston to share information on validated matches.

    Emerging proof

    The difficulty in comparing gene array results, say Szallasi and others, raises questions about how much confidence to have in the thousands of papers already published and whether it will ever be possible to merge existing data and find significant results. Relatively few labs have tried to replicate large gene expression studies, even those using identical test devices, says NCI's Richard Simon, a cancer clinician who directs gene array studies. People don't want to waste hard-to-obtain tissue on such work, and they'd rather not spend money on replicating others' findings. Simon argues that the correct test of comparability in clinical medicine is not “whether you come up with the same set of genes” in two studies, but whether you come up with an accurate and consistent prediction of patient outcomes.

    He and others note that gene arrays have already proved their mettle in two clinical areas: breast cancer and lymphomas. Molecular geneticist Todd Golub of the Broad Institute in Cambridge, Massachusetts, says his group has independently validated gene expression results of Louis Staudt of NCI and Pat Brown of Stanford University that identify subcategories of lymphoma that have relatively poor or good outcomes. And Lyndsay Harris, a breast cancer researcher at Harvard's Dana-Farber Cancer Institute, says many of her colleagues have confidence in gene expression data that identify a high-risk form of breast cancer associated with cells in the basal epithelium, a strategy that Charles Perou, now at the University of North Carolina, Chapel Hill, pioneered.

    Map of discordance.

    An experiment at NIH found that three commercial devices rated different genes as being turned on (red) and turned off (green) in a single batch of pancreatic cells.


    In basic research as well, Golub agrees with Simon that broad themes, not specific genes, should be the focus of comparison studies. He looks for a “biologically robust” response in patterns of gene activity—such as activation of coordinated cell signals—as a sign that two experiments have detected the same result. Spotting a signal in the noise is like “recognizing a face, regardless of whether you're wearing bifocals, or sunglasses, or no glasses.” Eventually, Golub says, biostatistical methods can probably be used to define such “signatures” in a flexible way to recognize different expression patterns as alternative forms of the same result.

    Trials have begun to test some of the newer interpretations of cancer pathology based on gene expression, such as efforts to profile high-risk breast cancer at the Netherlands Cancer Institute (Science, 19 March, p. 1754). But many champions of gene-expression tests contend that they are not yet ready for “prime-time” clinical use.

    Staudt thinks the time will come when “every cancer patient gets a microarray-based diagnosis.” But before then, “we still have to show how reproducible the results are.” He is part of an NCI-sponsored consortium that is attempting to correlate results from his own group, obtained from one type of device (spotted arrays of lengthy cDNAs), with those from a type of mass-produced device (printed arrays of short oligonucleotides) made by Affymetrix. Already, they have achieved “very good prediction” of tumor type in retrospective studies of 500 samples. Now they plan to test the model prospectively.

    Seeking harmony

    Researchers have now persuaded “all the major journals” to use a single format for reporting array data, says Alvis Brazma of the European Bioinformatics Institute in Hinxton, UK, a co-founder of the Microarray Gene Expression Data Society. But eliminating discordance in the hardware may not be so easy, says Ernest Kawasaki, chief of NCI's microarray facility: “If I had all the money in the world, I would say the best thing is to start over from the beginning”—presumably with a set of validated gene expression probes and shared standards.

    That kind of money isn't available, but Salit says NIST recently agreed to spend $1.25 million a year for 5 years to tackle the problem of “discordance.” Salit is coordinating a group that includes microarray manufacturers and a coalition of academics—the External RNA Control Consortium—to develop a set of standards that can be used to calibrate gene arrays and ensure that results can be translated meaningfully from one lab to another. If it succeeds, “the pie is going to get bigger” because “everybody's results will improve.”

  18. Searching for the Genome's Second Code

    1. Elizabeth Pennisi

    The genome has more than one code for specifying life. The hunt for the various types of noncoding DNA that control gene expression is heating up

    Molecular biologists may have sequenced the human genome, but it's going to take molecular cryptographers to crack its complex code. Genes, keystones to the development and functioning of all organisms, can't by themselves explain what makes cows cows and corn corn: The same genes have turned up in organisms as different as, say, mice and jellyfish. Instead, new findings from a variety of researchers have made clear that it's the genome's exquisite control of each gene's activity—and not the genes per se—that matters most.

    “The evolution of the genetic diversity of animal forms is really due to differences in gene regulation,” says Michael Levine, an evolutionary biologist at the University of California (UC), Berkeley. Turning on a gene at a different time, or in a new place, or under new circumstances can cause variations in, say, size, coloration, or behavior. If the outcome of that new regulatory pattern improves an organism's mating success or ability to cope with harsh conditions, then it sets the stage for long-term changes and, possibly, the evolution of new species.

    Unfortunately, “people don't look to regulatory elements as the cause of the variation they see,” says Stephen Proulx of the University of Oregon, Eugene. These elements are “a major part of the [evolution] story that's been overlooked,” Levine says.

    That neglect is now being righted. Although many biologists remain gene-centric, an increasing number are trying to factor in the effects of gene regulation. Researchers are beginning to come up with efficient ways to locate the different regulatory regions scattered along the strands of DNA. Others are piecing together the workings of transcription factors, proteins that interact with regulatory DNA and with each other to guide gene activity. They are homing in on exactly where transcription factors operate along the DNA and what they do to ensure that a gene turns on at the right time and in the right amount.

    Most are tackling the functions of regulatory elements one at a time, although a few are taking more global and bioinformatics approaches (see sidebar on p. 635). At the California Institute of Technology (Caltech) in Pasadena, one group is trying to identify all the regulatory interactions in maturing embryos; their goal is to elucidate how genes and regulatory DNA work together to guide development and also how those interactions change over evolutionary time.

    All this work is making clear that buried in DNA sequence is a regulatory code akin to the genetic code “but infinitely more complicated,” says Michael Eisen, a computational biologist at Lawrence Berkeley National Laboratory in California. Researchers can predict the proteins specified by the genetic code, but, he adds, “we can't predict gene expression by simply looking at the sequence.”

    Manolis Dermitzakis of the Wellcome Trust Sanger Institute in Cambridge, U.K., agrees: “The complexity of the genome is much higher than we have defined for the past 20 years. We have to change our way of thinking.”

    Model organism.

    Fruit flies have played a critical role in the search for stretches of regulatory DNA called enhancers, which control gene expression by binding to one or more transcription factors.


    From genes to regulation

    At the Medical Research Council's Laboratory of Molecular Biology in Cambridge, U.K., Francis Crick—co-discoverer of DNA's structure—Sydney Brenner, and their colleagues took the first steps toward figuring out how genomes work. In 1966, they proved that genes are written in a three-base code: Each triplet, called a codon, specifies a particular amino acid. By combining codons in different ways, the genome encodes instructions for thousands of proteins. This discovery focused the spotlight on genes themselves and the coding regions within them; for decades biologists called the intervening DNA “junk.”
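The triplet code that Crick and Brenner established can be illustrated with a toy translation routine. The abbreviated codon table below holds only the few entries the example uses; the real standard genetic code maps all 64 codons.

```python
# Abbreviated codon table (a handful of entries from the
# standard genetic code, for illustration only).
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GGC": "Gly", "TAA": "STOP",
}

def translate(dna):
    """Read a coding sequence three bases at a time, mapping each
    codon to its amino acid until a stop codon is reached."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTGGCTAA"))  # ['Met', 'Phe', 'Gly']
```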

    Consequently, the notion of gene regulation languished, even when results pointed to its importance. In the early 1970s, Allan Wilson of UC Berkeley and his student, Mary-Claire King, demonstrated that humans and chimps are quite similar in their genes. The key to what makes the two species so different, they suggested, lies in where and when those genes are active.

    But not until 2 years ago did experiments begin to bear out this idea. Wolfgang Enard of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, and his colleagues compared the expression of many genes in tissues from chimps and humans. Certain genes are far more active in the human brain than in the chimp brain, they reported in the 12 April 2002 issue of Science (p. 340), a finding that supports Wilson and King's ideas.

    Enard's 2002 work came on the heels of dozens of other studies suggesting that gene changes are not the be-all and end-all of evolutionary innovation. Instead, researchers increasingly attribute innovation to a variety of types of regulatory DNA, some just recently detected. Certain genes code for the proteins that make up the transcription machinery, which binds to promoters, the DNA right at the beginning of a coding sequence. Other genes code for transcription factors that can be located anywhere in the genome. All affect their target genes by attaching to regulatory DNA—sometimes called modules—that's usually near but not next to a gene. Protein-laden modules that stimulate gene activity are called enhancers; those that dampen activity are called silencers.

    Genome cryptography.

    The regulatory code is encoded in the arrangement of an enhancer's DNA binding sites (A), in the spacing between binding sites (B), or by the loss or gain of one or more of these sites (C).


    As a plethora of meetings and research reports suggests, enhancers are hot. They are small genetic command centers, consisting of stretches of 500 or so bases. Those clusters in turn are peppered with transcription factor binding sites, which can be less than 10 bases long. The target of a particular enhancer—and its effect—depends on the spacing and order of the binding sites within it.

    Sometimes the enhancer simply contains multiple copies of the same binding site and therefore uses the same transcription factor throughout its length. Other times, it has multiple transcription factors, and slight differences among these proteins can cause one gene to turn on and another to turn off.

    Both enhancers and silencers are especially hard to find because they are very small pieces of sequence and, unlike promoters, reside at varying distances from the gene they regulate. “We have a lot of potentially short stretches of DNA where the action is and stretches of DNA that don't matter,” Patrick Phillips, an evolutionary developmental biologist at the University of Oregon, Eugene, points out. Theorists predict that humans could have as many as 100,000 enhancers and silencers, but fewer than 100 are known, says Levine.

    Hot on the enhancer trail

    To understand the role of enhancers in development, Levine is studying their architecture and function in the fruit fly genome. The first challenge he encountered was simply finding the elusive quarry: Several years ago he encouraged his graduate student Michele Markstein (and her computer-savvy parents) to write a computer program that could begin to do just that.

    The trio worked first with a transcription factor, dorsal, which was known to affect a gene called zen. They already knew that the enhancer for zen contained four binding sites for dorsal, packed close together. The researchers used that signature sequence as a probe for finding other enhancers that also had clusters of dorsal binding sites.

    The method worked. Proof positive came when the program pinpointed three previously identified enhancers that control other genes. It also turned up a dozen more clusters containing three or four of dorsal's binding sites. The researchers have since shown that at least two are definitely enhancers. Levine is encouraged: “Sometimes the clustering of a single factor's binding sites is sufficient to find new enhancers,” he says. Indeed, using a similar strategy, Eisen identified a set of enhancers responsible for anterior-posterior development in the fruit fly. The groups published their results 2 years ago.
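The clustering strategy behind Markstein's program can be caricatured as a sliding-window scan: find every occurrence of a factor's binding motif, then flag regions where several occurrences fall close together. The motif string, 500-base window, and minimum site count below are hypothetical stand-ins, not the actual dorsal signature sequence.

```python
def find_site_clusters(sequence, motif, window=500, min_sites=3):
    """Return start positions of candidate enhancer regions: motif
    occurrences with at least `min_sites` occurrences (including
    themselves) inside a downstream window."""
    # Record every exact occurrence of the motif.
    positions = [i for i in range(len(sequence) - len(motif) + 1)
                 if sequence[i:i + len(motif)] == motif]
    clusters = []
    for start in positions:
        # Count sites falling inside a window anchored at this site.
        n = sum(1 for p in positions if start <= p < start + window)
        if n >= min_sites:
            clusters.append(start)
    return clusters
```

Real motif finders score degenerate position-weight matrices rather than exact strings, but the core idea, clustering as a proxy for enhancer function, is the same.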

    That same year, Eisen, Levine, and UC Berkeley's Benjamin Berman teamed up to use this approach, along with other bioinformatics tools, to look for more complex enhancers. Instead of containing repeated binding sites for the same transcription factor, complex enhancers contain binding sites for several different factors, thereby providing precise regulation of gene expression.

    To find these sequences, Berman and his colleagues looked for pieces of DNA with binding sites for five transcription factors. They didn't have a specific enhancer in mind but sought out DNA with those binding sites sitting close enough together to suggest they formed an enhancer.

    The five transcription factors all affect embryonic genes. Initially, Berman's program found several dozen of what seem to be complex enhancers; recently the count has more than doubled. And in pinning down these enhancers, the researchers uncovered almost 50 genes that seem to be controlled by this same set of transcription factors and thus are likely to work together to guide early development.
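Berman's search can be sketched in the same spirit as the single-factor scan, but requiring a site for every factor within one window. The motif strings and window length here are invented for illustration; the real analysis used known binding-site models for the five embryonic factors.

```python
def find_complex_enhancers(sequence, motifs, window=700):
    """Slide a window along `sequence` and report start positions
    where a binding-site motif for every factor in `motifs`
    (a dict of factor name -> motif string) is present."""
    hits = []
    for start in range(0, max(1, len(sequence) - window + 1)):
        chunk = sequence[start:start + window]
        if all(m in chunk for m in motifs.values()):
            hits.append(start)
    return hits
```

Requiring co-occurrence of several different motifs, rather than repeats of one, is what distinguishes the complex-enhancer search from the earlier dorsal-cluster scan.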

    So far, the researchers have confirmed that at least some of these newly identified clusters really are enhancers by testing their activity in transgenic fruit flies. They add DNA consisting of the putative enhancer and a marker gene. If the marker gene shows up in the same place as the enhancer's target gene, then the researchers know they have found a genuine enhancer. These data are showing that when several enhancers have a similar binding site composition, they tend to work together and coordinate the expression of sets of genes.

    One gene, many sizes.

    Enhancers from different species alter the extent of a Hox gene's expression (dark stain), variation that leads to species-specific numbers of thoracic vertebrae.


    Enhancer encryption

    With the first enhancers in hand, Levine and his colleagues were ready to take the next step: to figure out how enhancers orchestrate development and effect the changes underlying evolution. They began to dissect the architecture of these bits of sequence, determining exactly where the transcription factor dorsal attached and whether those locations had anything to do with the enhancer's function. They also tracked down transcription factors that interacted with the same enhancers as dorsal.

    Through these efforts, Levine and UC Berkeley collaborator Albert Erives have been able to decipher another layer of “code” scattered in the arrangement of binding sites within the enhancer. This code is critical to directing patterns of differentiation in embryonic tissue, they reported in the 16 March Proceedings of the National Academy of Sciences.

    Dorsal, whose concentration in the embryo is highest on the ventral side and decreases toward the dorsal side, is key in defining these regions. Gene activity varies along the dorsal-ventral axis, leading to the differentiation of tissue types. High amounts of dorsal lead to mesoderm, a precursor to muscle; low amounts stimulate the development of precursor nervous system tissue.

    When the researchers probed more closely into the enhancer, they found that a binding site for dorsal was always next to the binding site for a transcription factor called twist. Levine found that the close proximity and specific order of the binding sites were conserved in equivalent enhancers in the mosquito and in a different fruit fly species: “They were not randomly arranged,” Levine says.

    These results suggested that the effect of an enhancer on a gene is determined not just by the combination of transcription factors but also by the spacing between the binding sites. Levine thinks that dorsal and twist have to be quite close together for the enhancer to work dorsally, where concentrations of dorsal are low; when far apart, much more dorsal is needed. Thus it seems that the genome can use the same subset of transcription factors to regulate different genes simply by changing the order or spacing of those proteins, or where they bind along the enhancer. With this work, “Levine has gotten inside the mind of enhancers,” says Eisen.

    Enhancers and evolution

    Alterations in the order and spacing of binding sites can also affect how the same gene works across several species, new research is finding, suggesting that it might take only a relatively simple rearrangement to change an enhancer and affect evolution. At a meeting on developmental diversity held in April in Cold Spring Harbor, New York, developmental biologist Cooduvalli Shashikant of Pennsylvania State University, University Park, described his survey of enhancer effects on a gene called Hoxc8. This gene, found in many organisms, helps define the number and shape of thoracic vertebrae. Shashikant suspected that the enhancer, rather than the gene alone, plays a pivotal role in delineating different vertebrae configurations among species. To find out, he and his colleagues analyzed the sequence of the 200-base-pair enhancer that lies just upstream of Hoxc8 in zebrafish, puffer fish, and mice.

    After adding a reporter gene to each enhancer so they could see where it was active, Shashikant's group inserted the combination into mouse embryos. Then they compared the pattern of expression generated with the zebrafish and puffer fish enhancers to that of the mouse enhancer. In all three cases, the activity of the gene was restricted to the back part of the developing spine.

    Shashikant suspects that the subtle differences among the enhancers in each species change the physical boundaries of Hoxc8 expression within the embryo, thereby helping explain why chickens wind up with seven thoracic vertebrae and snakes about 200. Shashikant is also looking at the sequences of this enhancer in other species, including whales and coelacanths, and again he has found changes that probably help define each organism's shape. Sometimes they are simple sequence changes. Other times, missing or additional DNA alters the mix of transcription factors involved. Through evolution, “a lot of tinkering has gone on at the enhancer level,” Shashikant says.

    Enhancing differences.

    Sea urchins and starfish share much of their embryonic genetic circuitry, but their enhancers can vary, altering developmental pathways.


    Evolving embryonic differences

    Similarly, Eric Davidson, a developmental gene regulation biologist at Caltech, has found that a small change in one enhancer's structure, and likely many alterations in all sorts of enhancers, pave the way to the different developmental pathways that make each species distinctive. Five years ago, he and his colleagues embarked on an ambitious project to map all the genetic interactions necessary to build the embryonic sea urchin's endomesoderm, cells that are precursors to the skeleton, gut, and other tissues. They worked gene by gene, determining the expression pattern of each and deducing their functions by knocking them out in developing embryos. From these data and more, they pieced together a computer model of the 50-gene circuitry controlling the embryo's first 30 hours as an initial glob of cells began to differentiate into endomesoderm.

    The circuit is a hairball of proteins such as transcription factors and signaling molecules, and their genes, all connected by means of regulatory DNA into multiple feedback loops. Multiple transcription factors partner with an enhancer to control the activity of other transcription factors. Thus, even though the circuit involves just a few cell types, “it's a very complex network,” says Davidson's Caltech colleague Veronica Hinman.

    Hinman and Davidson have now taken the next step: elucidating the role of gene regulation in helping to define developmental differences in two echinoderms. Hinman has been working out the same genetic circuitry in a starfish. Whereas the starfish and the sea urchin shared a common ancestor about 500 million years ago and still have similar embryos, the two species have long since gone their separate ways. Adult sea urchins look like pincushions, with rounded exoskeletons; starfish are flat, with arms protruding from a central cavity. Hinman has focused on the earliest moments of the starfish's life. Despite the differences in the adults, much of the embryonic genetic circuitry studied so far is almost identical in both species, she reported in 2003.

    Yet subtle variations have had a big impact. For example, there's a five-gene circuit both species share. A key gene in this pathway is otx, and it sets off the circuit in the sea urchin and the starfish. Hinman has found a tiny change in this gene's enhancer: Between the two species, the enhancer varies by just one binding site, for a transcription factor called t-brain. The starfish has this binding site; the sea urchin does not. In the sea urchin, t-brain works in concert with other regulatory genes and sets off the embryo's skeleton-forming circuitry, a genetic pathway absent in the starfish embryo. But because its otx enhancer lacks the t-brain binding site, the sea urchin must rely on a different transcription factor to switch on the otx gene in the five-gene circuit.

    Meanwhile the t-brain binding site on the starfish's otx enhancer keeps otx focused on genes for the incipient gut. Davidson thinks that ancestral echinoderms had a t-brain site on the enhancer for otx, one that disappeared from that enhancer in the sea urchin. “This looks like species-specific jury-rigging,” he points out. “The evolution of body plans happens by changes in the network architecture.”

    Enhancing genome studies

    Through these kinds of studies, researchers are beginning to decode the regulatory genome. If this code can be made clear, they should be able to piece together how organisms diversify, says Nipam Patel, a biologist at UC Berkeley. For example, through Davidson and his colleagues' thorough descriptions of the gene pathways guiding development, they can pin down where enhancer modules have been added or lost. That understanding, in turn, is changing how some researchers make sense of evolution, adds Michael Ludwig of the University of Chicago. It's a vision in which regulatory elements, including enhancers and silencers, are as important as, if not more important than, gene mutations in introducing genetic innovations that may set the stage for speciation. Changes in one type of regulatory element, the transcription factors, can be quite detrimental, as each influences many genes. By contrast, enhancer mutations work locally, affecting just their target genes, and so are less likely to have deleterious effects on the rest of the genome.

    Yet Ludwig and others are the first to admit that they have not cracked this regulatory code. “We need to learn what it is and how this information is written in these sequences,” says Eisen. “At this moment, we still do not have that ability.”

  19. A Fast and Furious Hunt for Gene Regulators

    1. Elizabeth Pennisi

    Genes may be essential, but researchers increasingly recognize the pivotal role that another element of the genome—regulatory DNA—plays in human disease, speciation, and evolution. In many labs, the search to find where these regions are buried is intensifying (see main text). While some researchers are tackling these regions one at a time, others are experimenting with high-speed methods to detect regulatory regions, such as enhancers, en masse and determine what each one does.

    At the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts, Richard Young and Ernest Fraenkel are using microarrays to analyze yeast in an effort to turn up all its promoters, the sites at the beginning of a gene that bind to activating proteins. The team starts by treating a cell and one of its transcription factors so that the factor permanently sticks to its DNA-binding sites, thereby tagging all the promoters that use this factor. Then they use that transcription factor to isolate these sequences from the rest of the genome. After filling a microarray with pieces of yeast DNA whose positions on the genome are known, they add the tagged DNA, which then links to its matching sequence in the microarray, revealing the approximate location of each piece.

    A computer program takes over from there, says Fraenkel, refining the location of the binding sites using similar DNA from other organisms as a guide. In this way the team has been able to pinpoint the promoter-binding sites for each individual transcription factor in yeast simultaneously. Now, Young and Fraenkel are using this technique to find enhancers and regulatory DNA in organisms that have more complex genomes than yeast's. Already they have cornered enhancers in both human liver and pancreatic cells.

    Others are also using comparative genomics techniques to fish out regulatory regions. The idea is straightforward enough: Compare two or more sequenced genomes to identify those places where DNA outside genes is highly similar and presumably functional. The challenge lies in deciding which genomes to compare, explains postdoc Marcelo Nobrega of Lawrence Berkeley National Laboratory (LBNL) in California. If the animals are too closely related, their genomes will share many noncoding sequences that have no connection to regulation. If the organisms are too distant, then even functional regions, including regulatory regions, will be too different to detect.

    Nobrega's LBNL colleague Len Pennacchio thinks he has the answer. When his team compared the human genome to that of a puffer fish, they came up with a whopping 2500 potential enhancers. To test the effectiveness of this method, the LBNL team has been inserting 100 of the putative enhancers, identified on human chromosome 16, into transgenic mouse embryos, which they then analyze for signs of regulatory activity. At a meeting in Cold Spring Harbor, New York, in May, Pennacchio reported that 48 of the 60 enhancers tested to date were real.

    But Ewan Birney and his colleagues at the European Bioinformatics Institute (EBI) in Hinxton, U.K., and the European Molecular Biology Laboratory in Heidelberg, Germany, worry that comparisons alone will yield spurious matches as well as valid ones. “If you look at conservation itself, it doesn't tell you a great deal,” says Birney.

    Testing, testing.

    High-throughput screens for enhancers can turn up false positives, which can be excluded by looking for enhancer function in mouse embryos.


    To refine the comparative approach, his team has written a program that considers only short sequences that are conserved between human and mouse—and only when they are present in higher than usual densities in front of a gene. EBI's computers divide human and mouse genomes into small pieces, say, six or eight bases, then compare them.
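A stripped-down version of that idea: break the human upstream region into short words, then flag windows where an unusually high fraction of those words also occur in the mouse sequence. The word length, window size, and scoring here are assumptions for illustration; EBI's actual program is far more sophisticated.

```python
def kmers(seq, k):
    """All length-k substrings (words) of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def conserved_density(human, mouse, k=8, window=100):
    """For each non-overlapping window of the human sequence, report
    the fraction of its k-mers also found anywhere in the mouse
    sequence -- a crude conservation-density score."""
    mouse_words = kmers(mouse, k)
    densities = []
    for start in range(0, len(human) - window + 1, window):
        words = [human[i:i + k]
                 for i in range(start, start + window - k + 1)]
        shared = sum(1 for w in words if w in mouse_words)
        densities.append(shared / len(words))
    return densities
```

Windows whose density stands well above the genome-wide background would be the candidate regulatory regions; as Birney notes, density above the usual level, not conservation alone, is what carries the signal.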

    In the initial experiments, the researchers asked their computers to pick out a well-known piece of regulatory DNA called the TATA box, which is required for the activation of many genes. The program did just fine, says Birney, which gives him hope that it will also be able to pin down other regulatory regions elsewhere in the genome.

    Birney eventually hopes to tie these data in with forthcoming results from other labs suggesting that gene regulation is controlled by other aspects of the genome as well—such as how chromosomes are twisted around various proteins. And that, he hopes, will enable him to begin to address “how all this is put together as a code” that researchers can use to decipher the true workings of the genome.
