News this Week

Science  11 Mar 2005:
Vol. 307, Issue 5715, pp. 1540
  1. BIODEFENSE

    Unnoticed Amendment Bans Synthesis of Smallpox Virus

    1. Martin Enserink

    With hardly anyone noticing, Congress has slapped new restrictions—and hefty penalties—on one type of study involving the most dreaded pathogen on Earth. By adding a last-minute amendment to a massive intelligence reform bill in October, Representative Pete Sessions (R-TX) has made it illegal for most U.S. researchers to synthesize the smallpox virus, variola, from scratch. But some virologists, who are only now becoming aware of the amendment, say the law is ambiguous on what exactly is banned, and it could be interpreted to include some research on closely related poxviruses.

    By international agreement, only two labs in the world, one in Russia and one in the United States, can store and study variola. U.S. law also criminalizes possession of the virus—along with many other “select agents”—for purposes other than “bona fide” research. But theoretically, nothing has stopped researchers from trying to assemble the virus except their own consciences.

    The new provision, part of the Intelligence Reform and Terrorism Prevention Act that President George W. Bush signed into law on 17 December 2004, had gone unnoticed even by many bioweapons experts. “It's a fascinating development,” says smallpox expert Jonathan Tucker of the Monterey Institute's Center for Nonproliferation Studies in Washington, D.C.

    Made to order?

    It may soon become possible to synthesize variola, the smallpox virus.

    CREDIT: SCIENCE VU/VISUALS UNLIMITED

    Since smallpox was eradicated, the only known variola stocks sit at the Russian State Research Center of Virology and Biotechnology in Koltsovo, Novosibirsk, and the Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia. But advances in DNA synthesis have made it possible to create viruses in the lab; synthesizing a full, working variola virus may be possible within 5 years, predicts Eckard Wimmer of Stony Brook University in New York, who first synthesized the tiny poliovirus 3 years ago (Science, 9 August 2002, p. 1016).

    The primary goal of Sessions's amendment—originally introduced as two separate bills, one sponsored by Senator John Cornyn (R-TX)—was to impose much stiffer penalties on the possession of terror weapons, including shoulder-fired missiles, “dirty” bombs, and variola. Until now, for instance, unregistered possession of a select agent carried a maximum penalty of 10 years in prison; under the new law, the minimum is 25 years for variola. Where the law breaks new ground is by also making it illegal to “produce, engineer, [or] synthesize” variola. (Research carried out under the authority of the Secretary of Health and Human Services, who oversees the CDC, is exempt.)

    It's extremely rare for the federal government to outlaw specific types of research, says Mark Frankel, who directs the Scientific Freedom, Responsibility and Law Program at AAAS, the publisher of Science; the only example he recalls is a 1956 law banning recording or observing jury proceedings, passed in response to certain behavioral studies. To Frankel, the lack of debate about the bill is “worrisome.”

    Pox police.

    Rep. Pete Sessions introduced stiff penalties for making variola.

    Virologists zooming in on the bill's small print, meanwhile, cannot agree on what exactly it outlaws. The text defines variola as “a virus that can cause human smallpox or any derivative of the variola major virus that contains more than 85 percent of the gene sequence” of variola major or minor, the two types of smallpox virus. Many poxviruses, including a vaccine strain called vaccinia, have genomes more than 85% identical to variola major, notes Peter Jahrling, who worked with variola at the U.S. Army Medical Research Institute of Infectious Diseases in Fort Detrick, Maryland; an overzealous interpretation “would put a lot of poxvirologists in jail,” he says.
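
    Whether a related poxvirus falls under that definition is, at bottom, a sequence-identity calculation. The Python sketch below illustrates the idea only: the fragments are invented, the statute does not specify a comparison method, and real genome comparisons rely on alignment tools rather than this position-by-position count.

    ```python
    # Toy illustration of the law's "more than 85 percent of the gene
    # sequence" criterion. Real genome comparisons require alignment
    # (e.g., with BLAST); this sketch assumes two pre-aligned sequences.

    def percent_identity(seq_a: str, seq_b: str) -> float:
        """Percentage of aligned positions at which the sequences match."""
        if len(seq_a) != len(seq_b):
            raise ValueError("sequences must be aligned to equal length")
        matches = sum(a == b for a, b in zip(seq_a, seq_b))
        return 100.0 * matches / len(seq_a)

    # Hypothetical aligned fragments, invented for this example.
    variola_fragment  = "ATGGCTAAAGTTCCG"
    vaccinia_fragment = "ATGGCTAAGGTACCG"

    identity = percent_identity(variola_fragment, vaccinia_fragment)
    print(f"{identity:.1f}% identical")  # above 85% would meet the definition
    ```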

    Bernard Moss of the National Institute of Allergy and Infectious Diseases in Bethesda, Maryland, believes the word “derivative” means that existing orthopoxviruses are allowed, even if they are highly similar to variola. On the other hand, the definition does not seem to prevent researchers from taking another poxvirus and adding genes to make it more like variola. “That seems to leave a bit of a hole,” Moss says. “It's a funny definition, and it should certainly be clarified,” says Paula Traktman of the Medical College of Wisconsin in Milwaukee. A spokesperson for Sessions said that the amendment was “a collaborative effort between the executive and the legislative branches” with “many sources of input” but did not know who had provided the variola definition.

  2. BIODEFENSE

    Report Faults Smallpox Vaccination

    1. Jocelyn Kaiser

    A review of the ill-fated 2003 U.S. smallpox vaccination campaign charges that the Bush Administration diverged from scientists' advice and moved ahead on a major effort without a clear explanation. The report, issued last week by the Institute of Medicine (IOM), also blames external “constraints” on the Centers for Disease Control and Prevention (CDC) for the program falling short of its goals. CDC Director Julie Gerberding denied the charges.

    After the 9/11 attacks and anthrax letters, President George W. Bush in December 2002 announced a plan to vaccinate 500,000 health care workers, and eventually up to 10 million other emergency responders as well as an unspecified number of interested members of the public, against smallpox. But the effort soon foundered, especially after the vaccine caused heart problems in a few people, an unexpected side effect. The program wound down in mid-2003, and ultimately only about 40,000 people were vaccinated.

    Ouch.

    CDC's scientific authority was “constrained” regarding smallpox vaccinations.

    CREDIT: JAMES GATHANY/CDC

    The IOM report* notes that “top officials of the executive branch” departed from the recommendations of CDC's vaccination advisory panel, which initially wanted to vaccinate only 20,000 people and later, under political pressure, raised that to 500,000 (Science, 20 December 2002, p. 2312). The officials offered “only vague explanation” for vaccinating 10 million more workers and the public, even though the vaccine carried known risks, and there was no evidence of an imminent attack. As a result, workers implementing the program and volunteers expected to line up for vaccinations “remained skeptical,” leading to “poor participation,” the report says.

    The campaign was further hindered because CDC's normally open process of communicating scientific rationale to public health departments “seemed constrained by unknown external influences,” the report says. In a strongly worded statement, Gerberding counters that CDC's voice was not “constrained” and that the program “was based on the best scientific advice.”

    The IOM report refrains from calling the effort a failure. It has apparently improved public health preparedness, as shown by the responses to a subsequent monkeypox outbreak and to severe acute respiratory syndrome, says IOM panel chair and biostatistician Brian Strom of the University of Pennsylvania in Philadelphia. But the panel concluded CDC needs to define and measure smallpox preparedness. Above all, Strom says, while national security concerns have to be balanced against scientific information, CDC “or any other agency needs to speak from the science.”

  3. SPACE SCIENCE

    NASA Plans to Turn Off Several Satellites

    1. Andrew Lawler

    NASA intends to stop operating more than a half-dozen existing science probes at the end of this year, including the famed Voyager 1 and 2 spacecraft now racing toward the edge of the solar system. Although space agency officials say no final decisions have been made, the agency's 2006 budget request includes no money for a host of solar and space physics projects that currently cost a total of $23 million annually.

    In a 2003 speech marking the 100th anniversary of the Wright brothers' flight, President George W. Bush praised the Voyager missions, launched in 1977, as a prime example of “our skill and daring” in exploration. “If the U.S. wants to explore, then turning off Voyager is exactly the wrong signal to send,” says William Kurth, a space physicist at the University of Iowa in Iowa City. NASA spokesperson Dolores Beasley says that “Voyager is not canceled,” although no funding is planned beyond 1 October.

    Voyager 1 is currently 95 astronomical units (AUs) from Earth and may have already passed through the termination shock that marks the solar system's boundary with interstellar space. Physicists are eager to understand what happens when the solar wind ceases and deep space begins, and additional data from Voyager 1 and Voyager 2—which is now 76 AUs from Earth—could resolve the debate over whether Voyager 1 has passed that point. NASA spends $2 million a year to operate the two spacecraft, which are thought capable of transmitting data for another 15 years. “It will be a great loss to shut Voyager off,” says Edward Stone, former head of the Jet Propulsion Laboratory in Pasadena, California, which operates the mission.

    So long, Voyager?

    NASA may not have money next year to operate Voyager and several other science missions.

    CREDIT: NASA

    Voyager is not the only casualty in the 2006 budget plan. NASA also has not budgeted money for five other solar physics missions: the 1997 Transition Region and Coronal Explorer, the 1996 Fast Auroral Snapshot Explorer, the Wind mission launched in 1994 to examine the solar wind, the 1996 Polar to examine the upper atmosphere, and the 1992 Geotail to study Earth's magnetic field. The space agency would also stop funding its portion of the 1990 European Ulysses mission to study the sun. In addition, NASA plans to halt funding for the 4-year-old Thermosphere, Ionosphere, Mesosphere, Energetics, and Dynamics mission at the end of 2006, as well as for the U.S. portion of the European Cluster mission to study the solar wind, which last month was extended through 2009.

    Daniel Baker, a solar physicist at the University of Colorado, Boulder, and a member of the National Academies' space studies board, says he is appalled by NASA's decision. He worries that the result will be a lengthy gap in coverage and a dearth of graduate students to seed a new generation of scientists. Margaret Kivelson, a planetary physicist at the University of California, Los Angeles, and also a space studies board member, sees the move as a sign that NASA is willing to sacrifice science projects for Bush's exploration vision to focus on the moon and Mars.

    Beasley says that NASA will review the space missions next month. “Just because the budget says zero [funding] does not mean they will not be getting money,” she added. One congressional aide who has begun hearing from worried scientists says that the space agency shouldn't expect to turn off the probes without a fight.

  4. NEUTRINO PHYSICS

    Fermilab Experiment Shoots the Muon

    1. Charles Seife

    BATAVIA, ILLINOIS—Nobody along the 700-kilometer beamline will notice the trillions of particles zooming underfoot—but scientists are certainly taking notice. Last week, a new experiment at the Fermi National Accelerator Laboratory (Fermilab) began sending neutrinos from an accelerator here to a detector deep underground in a Minnesota iron mine. Physicists working on the detector, known as NuMI/MINOS, have high hopes that the experiment will soon eclipse a similar one in Japan and put the most stringent limits on several properties of the mysterious neutrino.

    “Within a few years of running, we should have of the order of 10,000 events,” says Stan Wojcicki, co-spokesperson of MINOS, referring to particle detections. For comparison, the previous best long-distance neutrino-beam experiment, the Japanese K2K, has seen roughly 100 events in the past 6 years (Science, 2 November 2001, p. 987). “By summer, we may have a result comparable or even better than K2K,” he adds.

    At a ceremony at Fermilab last week, Speaker of the House Dennis Hastert (R-IL) officially launched the experiment. “With the launch of this project, Fermilab has positioned itself for the future,” he said, shortly before pressing a button on a laptop and getting NuMI/MINOS under way.

    Bull's-eye.

    Steel plates in an underground lab in Minnesota are designed to capture neutrinos from Fermilab, 700 kilometers away.

    CREDIT: FERMILAB PHOTO

    NuMI refers to a beam at Fermilab that creates muon neutrinos—nearly massless elementary particles that occasionally change varieties (or “oscillate”) into other flavors of neutrino. To create these neutrinos, scientists divert high-energy protons, which ordinarily feed the Tevatron atom smasher, and send them to a graphite target. The protons hit the graphite, creating pions, which are then focused into a beam by two magnetic horns and release muon neutrinos when they decay. Because neutrinos barely interact with matter, most of the muon neutrinos sail through Earth toward Minnesota and out into space. A few times a day, however, one of them strikes an atom in the MINOS detector—a 6000-ton lump of steel plates with scintillator panels sandwiched in between, shielded from stray particles and cosmic rays by nearly a kilometer of overlying rock. When that happens, the neutrino tends to release a muon, which zooms through a few dozen steel plates before running out of steam. The scintillators flash with light when the muon passes through; by tracking the flashes, scientists can figure out the properties of the neutrino that created it.

    Sometimes the beam from Fermilab brings electron neutrinos or tau neutrinos, the results of oscillations. By comparing the number of muon neutrinos produced at the source with the number that reach the Minnesota mineshaft, physicists can figure out how often the muon neutrinos change flavor. This, in turn, reveals the mass difference between two varieties of neutrino, as well as one “mixing angle,” a value that describes the fundamental makeup of neutrinos (Science, 12 July 2002, p. 184).
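
    In the standard two-flavor picture, the survival probability tying those quantities together is P(νμ→νμ) = 1 − sin²(2θ) sin²(1.267 Δm²L/E). The Python sketch below simply evaluates that textbook formula; the mass splitting and mixing angle are illustrative atmospheric-sector estimates of the era, not MINOS results, and the function name is ours.

    ```python
    import math

    def muon_survival(delta_m2_ev2: float, sin2_2theta: float,
                      baseline_km: float, energy_gev: float) -> float:
        """Two-flavor approximation for P(nu_mu -> nu_mu)."""
        phase = 1.267 * delta_m2_ev2 * baseline_km / energy_gev
        return 1.0 - sin2_2theta * math.sin(phase) ** 2

    # Illustrative inputs: a ~2.5e-3 eV^2 splitting and maximal mixing were
    # typical estimates of the time; 700 km is the article's baseline and
    # 3 GeV a representative beam energy. None of these are MINOS results.
    p = muon_survival(2.5e-3, 1.0, 700.0, 3.0)
    print(f"Muon-neutrino survival probability: {p:.2f}")
    ```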

    Because of the large number of neutrinos produced at Fermilab as well as the bulk and sensitivity of the MINOS detector, physicists believe that NuMI/MINOS will yield orders of magnitude more information about neutrino properties than similar experiments performed in the past. “This is really a new regime in neutrino physics,” says Robert Plunkett, deputy project manager for NuMI. “It's a very hot beam. It has to be to do this.”

    Fermilab's outgoing director, Michael Witherell, says the NuMI/MINOS project, some proposed neutrino follow-ons, and a bid to design and build a huge linear accelerator known as the International Linear Collider (ILC) are the keys to the lab's future. “Neutrinos and the ILC are the headline items,” he says.

  5. GENE THERAPY

    Panel Urges Limits on X-SCID Trials

    1. Jocelyn Kaiser

    A U.S. advisory committee last week recommended limits on gene therapy trials in light of a third case of leukemia in a study in France. The panel suggested that U.S. studies of the same disease, X-linked severe combined immunodeficiency (X-SCID), should enroll only patients for whom conventional treatment has failed. However, trials of related diseases, as well as gene therapy trials using similar retroviral vectors, should continue, the panel said. The third leukemia “doesn't change the sense of unease dramatically,” said chair Mahendra Rao of the National Institutes of Health (NIH).

    Gene therapy trials for SCID have been the field's only success; since 1999 gene therapy has restored the immune systems of at least 17 children with two forms of the disorder. Excitement turned to worry in late 2002, however, when two children developed T-cell leukemia in a trial of X-SCID led by Alain Fischer at the Necker Hospital in Paris; one child died last fall. Although trials put on hold later resumed, a report that a third child in the French trial developed leukemia in January rekindled concerns about the therapy's risks (Science, 18 February, p. 1028).

    This latest leukemia appears to be different from the previous two. Those occurred after a retrovirus carrying a gene called gamma c inserted into the oncogene LMO2 in bone marrow cells in infants less than 3 months old, noted Food and Drug Administration (FDA) official Carolyn Wilson at a meeting of the FDA Cellular, Tissue, and Gene Therapies Advisory Committee. According to data provided by Fischer and French authorities, the third child, who was treated at 9 months old, does not appear to have an LMO2 insertion. Although the vector again apparently landed on an oncogene or oncogenes, the insertions occurred at three sites that have not yet been identified.

    Success story.

    Christopher Reid, a patient in a British X-SCID gene therapy trial.

    CREDIT: GREAT ORMOND STREET HOSPITAL FOR CHILDREN

    The panel also heard other new data, which offered a mixed message. Last September, a monkey died from a leukemialike cancer at NIH, apparently as a result of being treated in 1999 with a retrovirus carrying two marker genes, reported Cynthia Dunbar of NIH. On the other hand, NIH's Utpal Davé described a report last year in Science on a retrovirus-induced mouse leukemia that contained insertions in both LMO2 and gamma c, the gene corrected by the X-SCID therapy (Science, 16 January 2004, p. 333). The two genes seem to “cooperate” in causing cancer, Davé said, suggesting that gene therapy for diseases not involving gamma c—which itself may be oncogenic when expressed by a retrovirus—may be safer.

    Indeed, panelists noted, no leukemia cases have yet been seen in trials of ADA-SCID, which does not involve the gamma c gene. Nor have leukemias appeared in an X-SCID trial in the United Kingdom that has treated 7 patients. However, the French leukemias appeared roughly 33 months after treatment, and the U.K. patients have not reached that point.

    The panel concluded that if two X-SCID trials now on hold in the United States resume, they should enroll only children who have failed bone marrow transplants. “That's going to be a very small number,” said panelist Daniel Salomon of the Scripps Research Institute in La Jolla, California. But the panel suggested FDA could lift its hold on a U.S. trial for ADA-SCID. Researchers will be watching closely to see whether any leukemia cases turn up in the British trial. If not, “that would certainly change things” because it would suggest conditions specific to the French trial are leading to the leukemias, concluded Rao.

  6. INDIA

    Prime Minister Backs NSF-like Funding Body

    1. Pallava Bagla

    NEW DELHI—Indian Prime Minister Manmohan Singh has endorsed the creation of an independent agency to support basic research—with a proposed budget that's more than three times the amount the government is now spending.

    Scientists have long complained about the current process for winning grants, including inflexible rules and funding decisions that take more than a year. Last week Singh attended the first meeting of the new Science Advisory Council to the Prime Minister and embraced its recommendation for a National Science and Engineering Research Foundation with a mandate to “strongly promote and fund research in all fields of science and engineering.” The new foundation “is being patterned on the lines of the acclaimed U.S. National Science Foundation,” says C. N. R. Rao, chair of the council, who has campaigned for more than a decade for such a freestanding body. “A foundation that manages its own accounts and is run by a scientist is the only hope for reversing the rapid decline in Indian science,” he adds.

    A solid foundation.

    Prime Minister Singh is flanked by top science aides Kapil Sibal (left) and C. N. R. Rao (right).

    CREDIT: P. BAGLA

    The council recommended an annual budget of $250 million for the foundation. That amount would dwarf the $72 million now being spent by the Science and Engineering Research Council (SERC), an arm of the Department of Science and Technology (DST). The management and operating structure of the new foundation would be familiar to most U.S. scientists: five research directorates and a part-time body of distinguished scientists setting its overall direction. The council also recommended that the new foundation be responsible for “assessing the overall health of Indian science” (as NSF does with its biennial Indicators report) as well as funding “units of excellence [run by] researchers of exceptional merit” (as NSF does with centers focused on particular research areas).

    An evaluation of the existing structures by the prime minister's council was sharply critical of SERC, which was founded in 1972 and supports the bulk of fundamental research done in India. “Science funding in academic institutions and universities has not kept pace with the growing costs of basic research,” it concludes. Instead, the process has become “mired in bureaucracy, with complex financial procedures inhibiting efficient operation.” Even so, the secretary of DST, Valangiman Subramanian Ramamurthy, says he “has no objections to the new body, since the basic idea is not bad.”

    Science Minister Kapil Sibal has been asked to work out the details, including the fate of SERC. “There is no question of anybody saying no when the prime minister has said ‘Yes, it must be set up,’” says Sibal. The change can't come too soon for Rajendra Kumar Pachauri, director general of The Energy and Resources Institute in New Delhi. “An independent foundation,” he says, “is vital for resuscitating … a moth-eaten” scientific establishment.

  7. PALEOANTHROPOLOGY

    Skeleton of Upright Human Ancestor Discovered in Ethiopia

    1. Ann Gibbons

    Scientists working in the remote badlands of Ethiopia have found the oldest known skeleton of an upright walking hominid, roughly dated to nearly 4 million years ago. The remarkably preserved partial skeleton includes many bones of the pelvis, leg, back, and arms, as a team led by paleoanthropologists Yohannes Haile-Selassie and Bruce Latimer of the Cleveland Museum of Natural History in Ohio announced last week at a press conference in Addis Ababa, Ethiopia.

    The shapes of the top of the lower leg bone and of the pelvis have already convinced the discoverers that this hominid walked on two legs, the traditional hallmark of being a member of the human family rather than an ancestor of apes. “It's a once-in-a-lifetime discovery,” says Haile-Selassie.

    The skeleton so far also includes precisely the anatomical parts below the neck that can allow scientists to distinguish whether it walked like a modern human or in a more primitive manner. “It's a monumentally important skeleton, a real key to understanding hominid origins,” says paleoanthropologist Carol Ward of the University of Missouri, Columbia, who cautions that she has not seen the as-yet-unpublished skeleton. “The bits from the skeleton are exactly the pieces we need to see if we came from something like a chimp or something more primitive.”

    Early walker.

    The owner of this shinbone walked upright in Ethiopia 4 million years ago.

    CREDIT: ANTHONY MITCHELL/AP PHOTO

    The skeleton was found on 10 February near the village of Mille in the central Afar Depression, where a sharp-eyed fossil hunter named Alemayehu Asfaw spotted an elbow bone. Soon team members found the other part of the arm bone, the pelvis, leg bones, ribs, vertebrae, clavicle, and scapula. Extinct pigs found with the skeleton suggest that it lived 3.8 million to 4 million years ago, a critical time when humans were evolving the ability to walk. The team is now dating samples of volcanic rock taken from layers above and below the fossil and studying fragmentary fossils, including leg and toe bones, from 11 other individuals.

    The identity of the new skeleton is still unclear, in part because the specimens are still embedded in matrix and also because most of the known fossils of this age are so fragmentary. There are only four other partial skeletons of human ancestors older than 1 million years. Contenders for the new skeleton's identity include the slightly younger Australopithecus afarensis, whose most famous member is Lucy, a partial skeleton that lived 3.2 million years ago at Hadar, 60 kilometers south of Mille. An older Kenyan species thought to be bipedal, 4.1-million-year-old A. anamensis, is also a possibility. Haile-Selassie says the new skeleton is slightly younger and distinct from the mysterious 4.4-million-year-old Ardipithecus ramidus, known from teeth and a crushed, still unpublished, skeleton that he also found; he adds that the new skeleton may connect the dots between Ardipithecus and later australopithecines, revealing how the human mode of walking evolved. Three even earlier species have been proposed as bipedal hominids but are known only from fragmentary fossils or a skull.

    The discovery of the new skeleton comes at a good time for Haile-Selassie, one of the first black Africans to launch his own fossil-hunting expedition (Science, 29 August 2003, p. 1178). The U.S. National Science Foundation rejected his grant application last year to look for hominids in the localities around Mille. Instead, he and Latimer got foundation funding for a small team of mainly Ethiopian fossil hunters. With a find like this, Haile-Selassie hopes getting future grants will not be a problem. “We want to go out and see if we can find the head and mandible,” he says.

  8. ALZHEIMER'S DISEASE

    Play and Exercise Protect Mouse Brain From Amyloid Buildup

    1. Jean Marx

    As the population ages, finding ways to stave off the debilitating brain degeneration of Alzheimer's disease becomes ever more critical. New results with a mouse model of the condition now provide further support for the idea that “use it or lose it” applies as much to the mind as to the body.

    A leading explanation for Alzheimer's disease blames abnormal buildup of a small protein called β amyloid, which accumulates in pathological structures called plaques in patients' brains. Now, working with mice genetically engineered to produce similar β-amyloid plaques, a research team led by Sam Sisodia of the University of Chicago, Illinois, has found that the β-amyloid buildup can be greatly reduced by a lifestyle change: housing the animals in an enriched environment—one amply stocked with toys and exercise equipment—instead of in standard lab cages equipped with nothing more than food, water, and bedding material.

    The experiments, reported in today's issue of Cell, also provide clues to how an enriched environment might protect against β-amyloid accumulation. Zaven Khachaturian, editor of the journal Alzheimer's and Dementia, calls the work “very provocative. … It opens new ways of getting at the underlying mechanism” of plaque formation.

    Several epidemiological studies have suggested that environmental enrichment, including education and intellectually challenging leisure activities such as reading and playing bridge, diminishes the risk of Alzheimer's disease. Others have pointed to a possible protective role of exercise. But lower activity levels could be an early symptom of the disease rather than a risk factor.

    With mice, though, it's possible to study environmental influences on the earliest stages of plaque formation. Sisodia and his colleagues Orly Lazarow and John Robinson started their experiments when the mice were just 1 month old, many weeks before they normally show symptoms of Alzheimer's disease; the genetically modified animals they used ordinarily develop β-amyloid plaques by about 4.5 months of age. The researchers put seven animals in standard cages and another nine in the enriched environment, where the activities of the mice were closely monitored.

    Fun and games.

    Mice in cages with toys and exercise equipment develop less β amyloid than do ones in standard cages (right).

    CREDITS: ORLY LAZAROW, JOHN ROBINSON, AND SANGRAM S. SISODIA

    After 5 months, the researchers killed both sets of mice and examined their brains. Animals kept in the enriched environment showed “a marked reduction in amyloid burden,” Sisodia says. The decrease appeared to be related to exercise. “The animals that were most active as determined by their time on the running wheels had the least [β-amyloid] burden,” Sisodia adds. He notes, however, that other aspects of the enrichment, such as increased visual stimuli and social interactions, could still account for the reductions.

    The researchers also identified changes in the brain that might explain a lessening of β-amyloid deposition. They saw increased activity of a β-amyloid-degrading enzyme called neprilysin in the brains of the enriched mice, as well as changes in gene expression that could promote neuronal survival and enhance learning and memory.

    In late 2003, Joanna Jankowsky of the California Institute of Technology in Pasadena, David Borchelt of the Johns Hopkins University School of Medicine in Baltimore, Maryland, and their colleagues reported that enriched environments actually increase plaque formation. The reason for the discrepancy is unclear, although the design of the 2003 experiment was different. For one, that study involved only female mice, whereas the Sisodia team used males. The Jankowsky-Borchelt group also had many more animals in their enriched cages and added young mice as they removed older ones. “To me that spells stress,” says David Arendash of the University of South Florida in Tampa, who also studies the effects of enrichment on Alzheimer's mice. That stress might have overcome any beneficial effects of the enhanced environments.

    Sisodia's group didn't test whether the enriched cages improved learning and memory in their animals, although work by others suggests that it may. This was the case in the experiments performed by Arendash. The improvement occurred even though the Tampa team did not see reductions in β-amyloid deposition in their mice. But those animals were very old—16 months at the start of enrichment—and they already had extensive β-amyloid deposition.

    How much these mouse studies of enriched environments relate to Alzheimer's disease in people remains to be seen. Adding another clue, Constantine Lyketsos and his colleagues at the Johns Hopkins Medical Institutions in Baltimore will report in the April issue of the American Journal of Epidemiology that engaging in a variety of physical activities can reduce the risk of developing Alzheimer's disease by as much as 50%, although only in people who did not carry a particular gene variant called APOE4 that increases Alzheimer's risk.

    Lyketsos says that his team's results and Sisodia's provide an “interesting convergence” about the possible effects of physical exercise on Alzheimer's risk. So while you're out running to save your heart, you might also be saving your brain.

  9. NEUROIMAGING

    Brain Scans Raise Privacy Concerns

    1. Steve Olson*
    1. Steve Olson's latest book is Count Down: Six Kids Vie for Glory at the World's Toughest Math Competition.

    Advances in neuroimaging may provide the ability to “read” someone's mind, rightly or wrongly

    If you could find out whether those occasional moments of forgetfulness herald an old age ravaged by Alzheimer's disease, would you want to know? Would you want other people to know?

    What if tests were available that could determine whether a child could benefit from accelerated classes, whether someone on the witness stand were lying, or if a violent criminal were likely to attack again? Should such tests be used?

    None of these tests is available today, and some may never be. But rapid progress in imaging the structure and function of the human brain is forcing neuroscientists and bioethicists to consider the possible consequences of ongoing brain research. The President's Council on Bioethics has launched a series of discussions on neuroimaging and other issues raised by the neurosciences, and the newly dubbed field of neuroethics has received a boost because of concerns about what brain scans might eventually reveal. Much of this remains speculative, because the ethical quandaries posed by new means of imaging the brain will depend on what those technologies eventually can do. But researchers are already talking about a future in which issues of privacy—keeping information to oneself—and confidentiality—preventing the unauthorized release of sensitive information—loom large.

    CREDIT: JUPITER IMAGES

    Triumphs and challenges

    Neuroimaging technologies such as positron emission tomography, functional magnetic resonance imaging, and near infrared spectroscopy have produced wonders in medical clinics and research labs. Physicians have been able to pinpoint damage caused by injuries or illness, and brain scientists have begun to piece together the neural mechanisms involved in perception, cognition, behavior, and emotion.

    But the ability to watch the brain in action raises many questions about when, if ever, society has a right to know what someone is thinking. “If some of these technologies become available, it could change how we live enormously,” says Henry Greely, a law professor at Stanford University in California who has written extensively about the legal and social implications of neuroimaging technologies. “To the extent that small, easy-to-use devices could tell, either voluntarily or surreptitiously, what was going on inside someone's head, that could have enormous uses throughout society—and also what we today would consider abuses.”

    Many ethical issues arise from straightforward extensions of current studies. For example, neuroscientist Turhan Canli and his colleagues at Stony Brook University in New York have been examining the correlations between brain scans and personality. Several years ago they showed that when people classified as extroverts on personality tests viewed smiling faces, they tended to have greater activation of the amygdala, a brain region involved in processing emotions (Science, 21 June 2002, p. 2191), than did less extroverted people. Since then, Canli and his co-workers have drawn similar connections between personality traits and other subcortical and cortical regions. Meanwhile, other researchers have been linking patterns of brain activity to characteristics such as neuroticism, risk aversion, pessimism, persistence, and empathy.

    The links between brain activation patterns and personality are still too tentative to find applications outside the research lab, Canli says. But he points to a number of people who might like to supplement existing sources of information with brain scans, such as school admissions officers, potential employers, or law enforcement personnel. Another worrying possibility, he says, is that a personality assessment could be carried out under cover of a scan ostensibly conducted for other reasons, because the person in the scanner could simply be asked to look at pictures or respond to questions.

    Laid bare.

    Neuroimaging techniques may offer a glimpse into the tumult and pandemonium inside someone's head, as this 16th century print by Mattias Greuter suggests.

    CREDIT: JUPITER IMAGES

    Beyond personality assessment lies the prospect of detecting defects in brain functioning that could contribute to criminal acts. Imaging studies have shown that moral reasoning engages parts of the brain that are not involved in other forms of reasoning, and other studies have found reduced activity in some of the same brain regions among convicted murderers. One goal of “forensic neuroimaging,” says Canli, is to determine whether individuals with a reduced ability to feel empathy, guilt, or remorse about criminal acts exhibit a unique neural signal. If so, this information could be used to monitor individuals at risk of carrying out a criminal act or in sentencing and parole decisions.

    A window on thought

    Privacy issues are an even greater concern with neuroimaging techniques that can detect ongoing thought processes. In one of the most widely reported neuroimaging studies of recent years, Elizabeth Phelps of New York University, Mahzarin Banaji of Harvard University, and their colleagues used behavioral tests to measure the attitudes of a group of European-American research subjects toward African Americans. They then scanned the brains of those subjects while they were viewing unfamiliar African-American faces. Subjects with more negative views of African Americans tended to have greater activation of the amygdala. “I don't think we've gotten to the point where we can say anything about how people will act in the future, but I think we will—it's a matter of time,” Phelps says. Other investigators have been looking for distinctive brain activation patterns associated with sexual preferences, political affiliations, and feelings of religious transcendence.

    Among the most controversial neuroimaging studies have been those focused on deception. Several research groups have claimed that they can detect brain activation patterns indicative of lying, and one commercial company has begun offering a brain test for deception. Whether these techniques are more reliable than existing approaches such as polygraphs has yet to be determined. Still, the Defense Department and CIA are sufficiently interested that they have been investing millions of dollars in neuroimaging technologies that might be used in law enforcement or intelligence. A particular focus of this work: brain scans that might reveal the identities of terrorists.

    The ability to detect deception reliably could have profound consequences for the legal system, Greely points out. The truthfulness or biases of defendants, witnesses, judges, and juries could be assessed. Entirely new legal procedures might be necessary. For example, if people swore that their testimony was truthful, would the state have the right to test those oaths with brain scans?

    Proceed with caution.

    Law professor Henry Greely says the brain-imaging technologies on the horizon have the potential for enormous good—and abuse.

    CREDIT: COURTESY OF H. GREELY

    The need for perspective

    Such scenarios can be chilling, but they also should be viewed with caution, say researchers and ethicists. No one can be sure if any of these possibilities will be realized. For one, current neuroimaging technologies remain expensive and ungainly. “You have to stick someone in a scanner, and they have to be compliant,” says Randy Buckner, a neuroscientist at Washington University in St. Louis, Missouri, who helped develop functional magnetic resonance imaging. “It's presently not useful for rapid screening.” The equation might change if imaging technologies were no bigger than a set of headphones, or if sensing could be done from a distance, but today such devices remain in the realm of science fiction.

    Many questions also surround the validity of brain scans. Some skeptics already refer to neuroimaging as high-tech phrenology, pointing toward poorly designed and impressionistic studies. Others wonder if brain scans can ever match even the accuracy of polygraphs, which use physiological measures of nervousness to detect deception. Polygraph evidence has generally been rejected by all federal courts and state courts except those of New Mexico because of concerns about accuracy. Before neuroimaging could offer useful guidance in the legal system or elsewhere, it would need to be thoroughly tested to see how often brain scans are misleading or incorrect and whether people can train their minds to fool the machines.

    Another fundamental question is whether brain scans necessarily reveal information that is not available in other ways. If brain scans are used to draw correlations between neural activation patterns and personality or behavioral tests, why not just rely on the behavioral tests? “We have other ways of finding out how people think about things,” says Phelps. “Brain imaging brings another measure of that.”

    False accuracy?

    The striking colors and contrast of a brain scan can convey a sense of “objectivity” that may not be warranted, experts caution.

    CREDIT: T. CANLI/STONY BROOK UNIVERSITY

    Neuroimagers agree that any brain scan must be compared to an average level of activity, either for an individual or a group. But brain activation patterns differ from person to person and from one instance to another, so measuring departures from an average inevitably involves considerable judgment. Brain scans represent “statistical inferences rather than absolute truths,” in Canli's words.
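
    A toy calculation makes the point concrete. In the sketch below, every number is invented: a voxel's task signal is converted into a standardized departure (a z-score) from a baseline distribution, and whether it counts as “activation” flips with the analyst's choice of threshold.

    ```python
    import statistics

    # Toy sketch of why a brain "activation" is a statistical inference:
    # each voxel's signal is compared with a baseline distribution, and
    # the z-threshold that declares it "active" is chosen by the analyst.
    # All numbers here are invented for illustration.

    baseline = [100.2, 99.8, 100.5, 99.6, 100.1, 99.9, 100.4, 99.5]
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)

    task_signal = 101.1
    z = (task_signal - mean) / sd

    for threshold in (1.96, 2.58, 3.29):
        verdict = "active" if z > threshold else "not active"
        print(f"z = {z:.2f}, threshold {threshold}: {verdict}")
    ```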

    Indeed, researchers and bioethicists alike say that the greatest threat to individual liberty may come not from the capacity of scanners to reveal hidden thoughts but from the mistaken belief that the results of brain scans are highly accurate. The striking colors and contrasts of a brain scan can seem objective or “scientific,” even when the appearance of the scan is the product of a technician's image processing. “Probably the only thing worse than having people successfully reading your mind with brain imaging is having people unsuccessfully reading your mind with brain imaging and thinking that they can trust that information,” says Martha Farah, who directs the Center for Cognitive Neuroscience at the University of Pennsylvania in Philadelphia.

    Maintaining a sense of perspective is important, say researchers and bioethicists. Despite the remarkable technological advances of recent years, human beings are unlikely to give up their secrets easily. As Canli says, “If we could predict what someone will do with 100% accuracy, it would mean that free will doesn't exist—and I'm not prepared to accept that.”

  10. NEUROIMAGING

    An Image of Disease

    1. Steve Olson*
    1. Steve Olson's latest book is Count Down: Six Kids Vie for Glory at the World's Toughest Math Competition.

    Diagnosing diseases through neuroimaging raises issues posed by other biomedical technologies, but often in startlingly personal ways. Consider what neuroscientists call incidental findings. When subjects receive brain scans as part of a research project, the resulting images sometimes bear unwelcome news. According to Judy Illes, who directs the program in neuroethics at the Stanford Center for Biomedical Ethics, 2% to 8% of research subjects turn out to have tumors, malformations, or other clinically significant neurologic problems that were previously undetected.

    At a meeting at the National Institutes of Health in January, participating clinicians, researchers, and bioethicists agreed that the possibility of incidental findings should be considered when designing a study and obtaining consent from subjects. Another point of agreement: To protect privacy, the research subject or a surrogate should be the first to hear about a problem, not a physician. But participants could not settle on a standard procedure to detect and respond to incidental findings. Some researchers have every brain scan examined by a radiologist for signs of trouble, whereas others refer only those with obvious abnormalities. “There were areas where the different disciplines had different viewpoints, and those were extremely valuable in understanding the problem and identifying appropriate pathways to solving it,” Illes says.

    Incidental findings.

    Research scans sometimes turn up unexpected brain abnormalities, such as this malformation in the right frontal cortex.

    CREDIT: LUCAS CENTER FOR MAGNETIC RESONANCE SPECTROSCOPY AND IMAGING

    Similar issues arise when a brain scan, advertently or inadvertently, reveals a medical condition for which there is no known treatment. For instance, neuroimaging technologies have proven fairly successful in identifying mild to moderate cases of Alzheimer's disease. But in the absence of a cure, a positive diagnosis may be more of a curse than a blessing. “Are there some things we would be better off not knowing about ourselves? Absolutely,” says Martha Farah, the director of the Center for Cognitive Neuroscience at the University of Pennsylvania in Philadelphia.

    Getting a brain scan for early signs of Alzheimer's disease is comparable to being tested for Huntington's disease, an incurable neurologic disorder caused by a defective gene. But most genes are several layers removed from our physical or behavioral traits, bioethicists point out. Brain scans, in contrast, tap into mental processes that relate directly to our personalities, our behaviors, and even our private thoughts.

  11. OPTOELECTRONICS

    New Generation of Minute Lasers Steps Into the Light

    1. Robert F. Service

    Long-awaited long-wavelength Raman lasers built on microchips are primed to take the next strides in merging light beams and electronics

    Microchip-based diode lasers have had a good run. They're at the heart of CD and DVD players, computer optical disc drives, and a host of medical devices. Together, these and other applications add up to a sweet $3.5 billion market. But diode lasers can't do it all. Researchers have struggled to get them to produce the long-wavelength light—ranging from the midinfrared to terahertz frequencies—that is highly sought after for applications from explosives detection to biomedical imaging. Researchers have also had a tough time making the lasers out of silicon, the workhorse of computer technology, an advance that could vastly improve computer processing speeds by enabling chips within computers and local networks to send signals through high-speed glass fibers instead of metal wires. Now a spate of advances could finally help chip-based lasers leap those hurdles.

    In recent months groups at the University of California, Los Angeles (UCLA), and Intel Corp. have reported major strides in making “Raman” lasers out of silicon. Like other lasers, the new silicon-based devices trap light waves, force their peaks and troughs into orderly alignment, and then release them in energetic beams. The one downside is that in order to work, these lasers must be primed by light from another laser. But 2 weeks ago, a group at Harvard University in Cambridge, Massachusetts, reported creating a chip-based Raman laser that works when fed electricity. “Over the past 5 months, this field has exploded,” says Philippe Fauchet, an optics expert at the University of Rochester in New York.

    The lasers take their name from the Indian physicist Chandrasekhara Venkata Raman, who discovered the principle behind them in 1928. When monochromatic light passes through a transparent material, he found, most of the photons emerge with their wavelength unchanged. Others, though, collide with atoms in the material and lose or gain energy, causing them to emerge at a shorter or longer wavelength.
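
    The shift follows from simple frequency bookkeeping: a Stokes-scattered photon emerges at the pump frequency minus the material's vibrational frequency. The sketch below assumes silicon's roughly 15.6-terahertz optical-phonon shift and a 1550-nanometer telecom-band pump; both values are illustrative and are not taken from the papers discussed here.

    ```python
    # Energy bookkeeping for Stokes-shifted Raman light: the scattered
    # photon's frequency is the pump frequency minus the material's
    # vibrational frequency. Silicon's optical-phonon Raman shift
    # (~15.6 THz) and a 1550 nm pump are assumed for illustration.

    C = 299_792_458.0  # speed of light, m/s

    def stokes_wavelength(pump_wavelength_m: float, raman_shift_hz: float) -> float:
        """Wavelength of the longer-wavelength (Stokes) Raman output."""
        pump_freq = C / pump_wavelength_m
        return C / (pump_freq - raman_shift_hz)

    out = stokes_wavelength(1550e-9, 15.6e12)
    print(f"Stokes output: {out * 1e9:.0f} nm")  # roughly 1686 nm
    ```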

    The effect lies at the heart of fiber optic-based commercial devices called Raman amplifiers, which boost longer-wavelength optical signals streaming through glass fibers for long-distance data transmission and telecommunications. The devices work by using an initial high-energy “pump” pulse to prime the fiber so that when photons in a data pulse pass through, they stimulate the release of additional photons at the same energy, amplifying the pulse. By reflecting the growing light pulse back and forth through a transparent fiber, engineers can create a Raman-based fiber-optic laser. But because the Raman effect is so slight in glass fibers, these devices typically require kilometers of fiber to work.

    Chip shot.

    The first continuous-wave silicon laser.

    CREDIT: INTEL CORP.

    The good news is that the Raman effect is 10,000 times stronger in pure silicon than in glass. “We can do in centimeter-sized devices in silicon what is done in kilometers in glass,” says Mario Paniccia, who directs Intel's photonics technology laboratory in Santa Clara, California. At least, that's the theory. Unfortunately, silicon has an appetite for eating laser photons. When an incoming laser pulse—known as the pump pulse—is trained on silicon, silicon atoms can absorb two photons simultaneously. The energy excites one of the atom's electrons, freeing it to roam through the crystal. Such mobile electrons are strong photon absorbers and quickly quench any amplification of laser photons in the material.

    Last fall, UCLA optoelectronics researchers Ozdal Boyraz and Bahram Jalali were the first to overcome this problem and create a silicon-based Raman laser. In the 18 October 2004 issue of Optics Express, the pair reported that to prevent the buildup of excited electrons, they zapped their silicon chip with a staccato of pulses, each lasting just 30 trillionths of a second, or 30 picoseconds. Between pulses they gave the excited electrons time to relax back to their ground state, so they wouldn't reach a level that kept photons from building up in the material. The UCLA device, however, wasn't pure silicon: It also used 8 meters of optical fiber to carry the emerging laser light back to the silicon crystal for additional passes in order to boost the output of the Raman-shifted pulse.

    Three months later, researchers at Intel did away with the optical fiber. In the 20 January issue of Nature, a team led by Paniccia reported creating the first all-silicon-based Raman laser. Like the UCLA device, it relied on pulsing an incoming beam, but mirrors in the silicon bounced the light back and forth without the need for the fiber. The Intel team also added another trick: They routed the light down a path within the chip lined with positive and negative electrodes. When the researchers applied a voltage, charged particles swarmed to the electrodes, sweeping the mobile electrons out of the path of the incoming photons. As a result, the team could blast the silicon chip with a stronger pump pulse to increase the output of the Raman-shifted laser light. Last month in Nature, the Intel team reported another improvement, the first silicon Raman laser that emits a continuous beam of photons. Boyraz and Jalali jumped back into the fray as well, reporting in the 7 February issue of Optics Express that they had incorporated an electric modulator into their optically pumped device to switch their new lasers on and off.

    The string of advances, Fauchet says, sets the stage for a host of innovations, such as silicon-based optoelectronic devices to replace copper wires in speeding short-distance communication between computers, as well as other military, medical, and chemical detection applications. By leveraging the semiconductor industry's decades of experience in fabricating silicon components, the new work could help slash costs for optical components. “It's a potential sea change that allows you to do new things because they are cheap,” Fauchet says.

    The new lasers have their drawbacks. “The major limitation of Raman lasers is that to get a laser you need another [pump] laser,” Fauchet says. “Ideally, you would like to have an electrically pumped laser. That would be the Holy Grail.”

    As if on cue, in the 24 February issue of Nature, researchers led by Federico Capasso of Harvard reported just such a device. Unlike the previous lasers, however, the new one is made from alloys of aluminum, gallium, indium, and arsenic rather than silicon and works in a different manner. Known as a “quantum cascade” (QC) laser, it consists of hundreds of precisely grown semiconductor layers. As electrons pass through the layers, they lose energy at each step, giving up photons, which combine to create the laser beam.

    Capasso and his colleagues at Harvard and Lucent Technologies' Bell Laboratories in Murray Hill, New Jersey, had spent a decade building QC lasers that emit light in the midinfrared range. In hopes of extending their reach to longer, terahertz frequencies, Capasso teamed up with theorist Alexey Belyanin of Texas A&M University in College Station, who had suggested modifying the device by adding new sections that use the Raman effect to shift the initial laser light to a longer wavelength. In essence, the group created a pair of Raman lasers on a single chip: one that converts electricity into an initial pump laser, and a second that shifts the light to longer wavelengths. The new QC Raman lasers turn out beams of infrared light with a wavelength of 9 micrometers. Capasso says his team is working to create similar devices that turn out beams at terahertz frequencies, which are widely sought after for use in detecting explosives and other chemicals. Fauchet notes that the advance doesn't produce the shorter wavelength photons ideal for telecommunications, but “it demonstrates you don't need an external laser to get a Raman laser,” he says.

    No matter which of the new Raman lasers proves most successful, the devices look likely to extend diode lasers' run for a long time to come.

  12. OCEAN DRILLING

    Japan's New Ship Sets Standard as Modern, Floating Laboratory

    1. Dennis Normile

    Scientists expect the Chikyu's massive size and unique capabilities to unlock important secrets that lie underneath the ocean's floor

    NAGASAKI, JAPAN—When was the last time scientists got almost everything they wanted? Japan's new riser drilling ship may eventually turn out to have some flaws and limitations. But as the $550 million Chikyu nears completion, researchers involved in the 18-nation Integrated Ocean Drilling Program (IODP) can hardly contain their excitement. “They've pretty much done it all,” says Richard Murray, a marine geochemist at Boston University and chair of an IODP panel that put together a wish list of instruments for the vessel.

    Last month reporters were invited to tour the Chikyu as it sat in the Mitsubishi Heavy Industries shipyard here, preparing for a series of shakedown cruises beginning this fall. On display is a vessel designed to drill more than twice as deep as previous drill ships, up to 7 kilometers below the sea floor. It can also work in areas with gas or oil deposits that have been off limits for environmental reasons.

    Those capabilities promise a better understanding of key questions such as seismicity beneath the seas, the recycling of oceanic mantle, geologic changes in sea levels, and Earth's climate history. At 210 meters and 57,500 metric tons, the Chikyu is 45% longer and 2.4 times the weight of IODP's current workhorse, the JOIDES Resolution, and it has 60% more laboratory space, spread over four decks. The labs are now being filled with $18 million worth of equipment, some of which has never been installed on a drill ship before. “[Chikyu] will probably be as well-equipped as the best land-based laboratories in the world,” marvels Mike Coffin, a geophysicist at the University of Tokyo's Ocean Research Institute. In addition, Chikyu's living quarters are close to luxurious compared to what researchers and crew endured on the older ship, a converted oil-exploration vessel.

    Designing Chikyu from the hull up to be a research ship “allowed us to plan very smooth handling of the cores,” says Shin'ichi Kuramoto, a seismologist with the ship's owner, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC). The layout ensures that fragile cores will get a minimum of handling before undergoing critical testing and that biological samples will be moved quickly to oxygen-free or cryogenic storage to minimize degradation and contamination.

    Ocean Goliath.

    The Chikyu is 45% longer and displaces more than twice the weight of its predecessor in the global ocean drilling program.

    CREDIT: JAMSTEC

    From the outset, Chikyu was designed “to go deep,” says Asahiko Taira, director general of JAMSTEC's Center for Deep Earth Exploration. That translated into a 4000-meter riser, a tube that encloses the drill pipe and allows the circulation of a heavy drilling mud that lubricates the drill pipe, flushes cuttings from the drilling face, and shores up unstable sediments. The riser and a blowout preventer—a 300-ton device that sits on the sea floor—will prevent oil or gas from fouling the sea if the drill pokes into pressurized deposits.

    Once the cores are extracted, they will be cut into 1.5-meter lengths and then routinely put through several nondestructive analyses never before available on a drill ship. They include a computed tomography (CT) scan, using a standard medical imager. Previously, scientists have used gamma ray scanning to image the surface of the cores. The CT scan will provide a three-dimensional image showing the porosity, microstructures, deformations, and stratigraphy of the cores' key features—data that will shed light on the geological history of the sample. The information will be used to “set a strategy for splitting the core,” Kuramoto says, including selecting the best axis to expose strata or anomalies such as hard rocks suspended in soft sediments.

    Once split, core halves will go through an x-ray fluorescence (XRF) scanner. The technique is just now being introduced to earth sciences, with fewer than a dozen scanners currently available worldwide. “Right now what happens is that people take plugs at 5-cm intervals down the core, and you don't know what you've sampled until you get home and analyze it,” says Boston University's Murray. XRF scanning is nondestructive and produces detailed, continuous data on the core's chemical composition. Murray says determining changes in sedimentary deposits on a millimeter scale “will lead to being able to document changes in climate at very high resolution.”
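    A quick back-of-the-envelope calculation, sketched in Python below, shows what that resolution gain means in practice. The arithmetic is purely illustrative: it simply combines the 5-cm plug interval Murray describes, the millimeter-scale scanning he anticipates, and the 1.5-meter section length mentioned above; none of the numbers are new.

        # Illustrative only: sampling density over a 1.5-meter core section,
        # comparing discrete plugs every 5 cm with continuous XRF scanning
        # at millimeter resolution (figures quoted in this article).
        section_mm = 1500
        plug_interval_mm = 50
        plugs = section_mm // plug_interval_mm        # 30 discrete samples
        xrf_points = section_mm                       # ~1500 readings at 1-mm steps
        print(plugs, xrf_points, xrf_points // plugs) # 30 1500 50 -> 50x denser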

    Cutting-edge technology.

    The Chikyu comes with a 4000-meter tube, called a riser, that encloses the drill pipe and helps it operate in difficult and unstable conditions. It works in combination with a blowout preventer (right), a 300-ton device that will sit on the sea floor, to contain any blowout if the drill pokes into pressurized deposits.

    CREDITS (LEFT TO RIGHT): JAMSTEC; D. NORMILE/SCIENCE

    Another major piece of equipment making its ship debut is a magnetically shielded chamber that blocks out 99% of Earth's magnetic field. Scientists rely on magnetic signatures locked in core samples to decipher details of plate tectonics, date sediments and rocks, and read the historical behavior of Earth's magnetic field. Previously, cores had to be taken to one of a few land-based laboratories for such measurements. JAMSTEC's Taira says soft sedimentary samples often became deformed in transit, which changed the orientation of the magnetic minerals.

    Chikyu's size is a boon to microbiologists, says David Smith, a microbiologist at the University of Rhode Island's Graduate School of Oceanography in Narragansett. He recalls installing the first microbiology lab—an aluminum storage shed of the kind seen in suburban backyards—in 1999, on selected cruises of the Resolution. More recently, he says, “we inherited some space [in the laboratories] and elbowed our way in.” But installing key equipment, such as a radiotracer lab to determine how fast sea-floor microorganisms grow and their metabolic rates, often meant leaving somebody else's experiment behind. That could lead to some tensions on board, says Smith: “You had to basically step on someone else's toes to get this lab onboard, and then you had to go on a cruise with those people.” On Chikyu, Smith notes, a radiotracer lab will be available on every voyage.

    Long-term monitoring will also benefit from Chikyu's heft: its heavy lifting capacity will allow researchers to place larger packages of instruments on the sea floor. Coffin says strategically placed seismometers and instruments to measure fluid flow through rock “could revolutionize our knowledge of the oceanic lithosphere.”

    Chikyu's designers did not forget creature comforts. Those sailing on the JOIDES Resolution slept in bunk beds, with up to four people in a room, and shared bathrooms and showers. In contrast, the Chikyu has single rooms complete with bathrooms, showers, desks, and even Internet connections for each of the roughly 50 scientists and 100 crew members expected to live on the ship for stretches of 4 to 8 weeks. There will be better recreation facilities as well.

    During the shakedown cruises, JAMSTEC and IODP officials are particularly interested in familiarizing the crew and scientists with the riser drilling capabilities and in working the bugs out of a new on-board database system that will display on one screen all the information associated with a particular core sample.

    Scientific drilling is expected to begin in earnest in the summer of 2007. The first target is the seismogenic zone of the Nankai Trough, where the Philippine Sea Plate is being forced beneath the Eurasian Plate. Achieving a better understanding of the process, which has generated some of Japan's most devastating earthquakes, presents a fitting first challenge for the world's most impressive scientific drill ship.

  13. STRUCTURAL BIOLOGY

    Structural Genomics, Round 2

    1. Robert Service

    As NIH plans to extend its high-speed structural biology program for another 5 years, researchers remain divided on how best to allocate its shrinking budget

    Five years ago, facing some opposition, the U.S. National Institutes of Health (NIH) in Bethesda, Maryland, launched an ambitious effort that some have compared in scale and audacity to the Human Genome Project. Its ultimate goal: to obtain the three-dimensional structures of 10,000 proteins in a decade. Like the genome project, this effort, called the Protein Structure Initiative (PSI), could transform our understanding of a vast range of basic biological processes. And just as the genome project attracted debate and dissent in its early days, the initiative split the structural biology community. The effort is now approaching a critical juncture, and the debate is heating up again.

    The project is nearing the end of its pilot phase, a 5-year effort to develop technologies that has begun to transform labor-intensive, step-by-step procedures into a production-line process. Now, the initiative is poised to move into the production phase, dubbed PSI 2. In the next few months, NIH is expected to designate three to five centers, each of which could receive grants of about $12 million a year to crank out protein structures at an unprecedented clip. It will also pick a handful of smaller labs to work on problems that have so far proven difficult to solve, such as how to obtain the structures of proteins embedded in cell membranes. Officials at the National Institute of General Medical Sciences (NIGMS), which is bankrolling the initiative, are reviewing proposals for the two types of grants, and the winners are expected to be announced this summer.

    But, in a debate eerily similar to the one that roiled the genome community a decade ago, structural biologists are divided on how fast to proceed—especially in light of constraints on NIH's budget. The central issue is whether the technology is far enough along to justify the move to mass production, or whether the emphasis should continue to be on technology development.

    Brian Matthews, a physicist at the University of Oregon, Eugene, and chair of PSI's external advisory board, argues that the time is ripe to move ahead in cataloging thousands of new structures. “This information will be broadly applicable to biology and medicine,” he says. Raymond Stevens, a structural biologist at the Scripps Research Institute in La Jolla, California, agrees that “the technology that has come out so far has been truly impressive.” But he has strong reservations about PSI 2's planned emphasis on mass-production of structures. “It's premature to start production centers until better technologies are in place,” Stevens says.

    This is not just an academic debate. The PSI could determine whether a key goal of structural genomics is achievable: the development of computer models to predict the structure of a new protein from its amino acid sequence. The initiative could also provide insights into how proteins interact to choreograph life's most fundamental processes and help researchers identify important new drug targets.

    Pure speed.

    Researchers at the Midwest Center for Structural Genomics use robotic gear to speed protein purification.

    CREDIT: MIDWEST CENTER FOR STRUCTURAL GENOMICS

    Picking up the pace

    In one respect, the scientists who planned the Human Genome Project had it easy. Gene sequencing relies chiefly on one technology: reading out the string of letters in DNA. By contrast, producing protein structures requires mastering eight separate technological steps: cloning the correct gene, overexpressing the gene's protein in bacteria, purifying the protein, coaxing it to form a crystal, screening out the best crystals, bombarding them with x-rays at a synchrotron, collecting the diffraction data as the rays bounce off the protein's atoms, and using those data to work out the protein's precise structure. (Researchers turn out a smaller number of structures using another technique, known as nuclear magnetic resonance spectroscopy.)

    Initially, the nine centers participating in the pilot phase of PSI had trouble dealing with that complexity (Science, 1 November 2002, p. 948). But structural genomics teams have now automated every step. “It took these groups a couple of years to get all the hardware in place,” says Matthews. “But I think [the PSI's first phase] has been very successful.”

    Among the advances is a robot being built at the Joint Center for Structural Genomics (JCSG) in San Diego, California, that can run 400,000 experiments per month to find just the right conditions to coax given proteins to coalesce into high-quality crystals. Synchrotron facilities too have seen vast improvements in robotics. Setting up a crystal for measurement has historically been a cumbersome process, typically taking hours of fine-tuning. JCSG researchers and others have now created robotic systems to carry out this work, enabling data collection on up to 96 crystals without interruption. “That has been a tremendous benefit,” says JCSG chief Ian Wilson, a structural biologist at the Scripps Research Institute.

    As the technologies advanced, the centers accelerated their output. They produced 350 structures in PSI's fourth full year, up from just 77 in the first year, and are on track to complete 500 this year. That pace is still well short of the initial goal of 10,000 structures in 10 years—that goal was little more than an optimistic guess, PSI leaders now say—but it's a big step forward and should be fast enough to accomplish most of the effort's scientific goals. Equally important, says John Norvell, who directs NIGMS's PSI program, the average cost of each structure has dropped dramatically, from $670,000 in the first year—a number inflated by the cost of purchasing and installing robotic gear—to $180,000 in year 4. This year, Norvell expects the cost to drop to about $100,000 per structure. By comparison, he adds, traditional structural biology groups typically spend $250,000 to $300,000 for a structure, although some of the proteins they tackle are far more complicated than those PSI has taken on.

    The types of proteins targeted by PSI are, however, one bone of contention. Traditional structural biology groups tend to go after similar proteins in important families, such as kinases, that participate in many biological pathways. And they often determine the structure of complexes of one protein bound to different molecular targets in order to tease out the details of how the protein functions. As a result, 87% of the structures deposited in the Protein Data Bank, the field's central repository, are closely related to those of other proteins.

    The PSI, however, was set up to acquire structures from as many of the estimated 40,000 different protein families as possible. Indeed, 73% of the structures the PSI centers have solved so far have been “unique,” which by the PSI definition means that at least 30% of the gene sequence encoding a protein does not match the sequence encoding any protein of known structure. The idea behind casting such a broad net is to acquire structures from representatives of each family in the hope that this will enable computer modelers to predict the structures of other family members. Already, the data suggest that there is not as much structural variation between families as many biologists expected (see sidebar).
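    To make that cutoff concrete, here is a minimal Python sketch of how a target might be flagged as unique under the definition above. The function names and toy sequences are invented for illustration; real target selection relies on proper sequence-alignment tools, not this naive position-by-position comparison.

        def fraction_mismatched(seq_a, seq_b):
            # Fraction of positions that differ between two pre-aligned,
            # equal-length gene sequences (a deliberately naive measure).
            if len(seq_a) != len(seq_b):
                raise ValueError("sketch assumes pre-aligned, equal-length sequences")
            return sum(a != b for a, b in zip(seq_a, seq_b)) / len(seq_a)

        def is_unique(target, known_sequences, threshold=0.30):
            # PSI-style call: unique if at least 30% of the target sequence
            # fails to match every sequence with a solved structure.
            return all(fraction_mismatched(target, known) >= threshold
                       for known in known_sequences)

        # Toy check: 3 of 10 bases differ (30%), just meeting the cutoff.
        print(is_unique("ATGGCCATTA", ["ATGTCCGTTC"]))   # True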

    Some structural biologists argue, however, that this approach has limited value, and that the tens of millions of dollars currently going to structural genomics centers would be better spent on traditional structural biology groups. Yale biochemist Thomas Steitz, for example, says that most of the structures PSI groups have produced so far are “irrelevant” to understanding how the proteins work because they are not bound to their targets. The PSI focuses on bacterial rather than eukaryotic proteins, he also complains.

    NIGMS Director Jeremy Berg acknowledges that “tension certainly exists” between traditional structural biologists and structural genomics groups. Although some PSI 2 centers will likely focus on producing structures of protein complexes and eukaryotic proteins, he notes that NIH's structural genomics effort was never set up to go after the same type of information as conventional structural biology. Rather, the goal was to explore the far reaches of the protein landscape. “To my mind the most important message is structural genomics and structural biology are largely complementary and synergistic,” Berg says.

    Numbers game.

    This protein crystallization robot aids structure solving.

    CREDIT: STRUCTURAL GENOMICS OF PATHOGENIC PROTOZOA CONSORTIUM

    Berg and others add that the PSI has already provided numerous important biological insights. For example, the Northeast Structural Genomics Consortium (NESGC) recently solved the structure of a protein that adds a methyl group to ribosomal RNA and in the process confers antibiotic resistance to bacteria. That structure, says NESGC director Guy Montelione, has suggested inhibitory compounds that could revive current antibiotics and spawned a separate research program on the topic. Another structure revealed details of the way plants bind a signaling molecule called salicylic acid, challenging conventional wisdom on the functioning of plants' immune systems. “Not only are we spinning out new science, but new science initiatives,” Montelione says.

    Chapter 2

    What comes next is, however, a matter of debate. NIGMS officials had expected to scale up a handful of the current PSI centers to full-scale production facilities and fund as many as six additional technology centers, each tackling a separate bottleneck.

    A tight NIH budget has already forced NIGMS officials to rethink those plans, however. They had hoped to boost the current PSI budget of $68 million to $75 million next year, the first year of PSI 2. But they are now anticipating a decline in funding, to $64.5 million. The cut is likely to force structural genomics leaders to rein in their goals, and ultimately it could push back the date by which they complete the program. “That's clearly going to be a problem,” Matthews says.

    Those budget cuts will also make it tough for PSI leaders to strike the proper balance between production and technology development in the next phase. Each of the existing pilot centers currently receives some $8 million a year. The plan for PSI 2, Norvell says, had been to spend $12 million a year on each production center. That means five centers would eat up nearly the entire budget for PSI 2's first year, leaving little for technology development. If that happens, “I think we'll regret it in 5 years,” Stevens says.

    Stevens points out that many technical problems remain. For example, even though PSI centers have increased their output of protein structures, their success rate in turning targeted genes into solved structures has remained essentially unchanged. At each stage in determining a structure—cloning the gene, expressing the protein, and so on—researchers take a hit. For example, only 57% of cloned genes are successfully expressed as proteins, and of those, only 28% can be purified. “It's like doing chemical synthesis” that involves numerous steps, says Wilson. “If you have a 90% success rate at each step, that's not going to give you much material out at the end.”
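    Wilson's point about compounding losses is easy to verify with a few lines of Python. In the sketch below, only the 57% and 28% figures come from the article; the uniform 90% per-step rate is Wilson's hypothetical, and the eight-step count refers to the pipeline described earlier.

        # A seemingly good per-step success rate compounds badly over a
        # multistep pipeline: eight steps at 90% each keep only ~43% of targets.
        print(f"{0.90 ** 8:.0%}")            # 43%

        # The two stage rates quoted above are far below 90%, so attrition
        # is severe before crystallization is even attempted: 57% of cloned
        # genes express, and 28% of expressed proteins purify.
        print(f"{0.57 * 0.28:.0%}")          # 16% of cloned genes yield protein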

    In the end, Stevens notes, only 2% to 10% of the proteins targeted by PSI centers wind up as solved structures. In view of this “pretty poor success rate,” Stevens argues that the phase 2 efforts should focus more on technology development. “I think structural genomics can do even better if the technologies are allowed to mature further,” he says.


    Not many of Stevens's colleagues agree. “Clearly we have to capitalize on the production centers we've already invested in,” says Wilson. Thomas Terwilliger, a structural biologist at the Los Alamos National Laboratory in New Mexico and head of the TB Structural Genomics Consortium, adds that the limited success rate isn't a major issue because if one protein in a family doesn't yield a structure, researchers can typically find another one that does. Furthermore, Montelione points out that the new production centers will spend about one-third of their funds on improving the technology. Stevens counters that “there will be so much pressure to produce structures that any technology developments will take a significant back seat to the structure focus.”

    Berg says “it's hard to imagine funding fewer than three of the large-scale [production] centers.” At $12 million apiece, three centers would still leave $28.5 million—more than $4 million for each of the six proposed technology centers. The balance between production and technology development is “still very much up in the air,” says Berg, and will depend on the outcome of the reviews of the grant proposals. The NIGMS advisory committee will decide in May which centers to fund and announce its decision in early July.

    Whatever the outcome, it's now unlikely that the PSI effort will achieve the initial goal of solving 10,000 protein structures by 2010. With budget cutbacks and continued technical challenges, the final tally will probably be somewhere between 4000 and 6000, about the number that PSI leaders now believe computer modelers will need to accurately predict structures of related family members. Still, that means the program will solve structures for only a small fraction of the estimated 40,000 protein families. “This mixed bag of production and technology development will require another cycle, another 5 years to finish the job,” says Montelione. So will there be a PSI 3? That debate is just starting.

  14. STRUCTURAL BIOLOGY

    A Dearth of New Folds

    1. Robert Service

    The Protein Structure Initiative (PSI) has already come up with one surprise: Proteins apparently come in a relatively limited variety of shapes. The initiative is targeting “unique” proteins, ones in which the DNA that encodes them differs markedly from that for proteins with a known structure. Researchers expected that many if not most of those proteins would have structural patterns never seen before, but the vast majority look quite familiar.

    The general shape of a protein once it assumes its three-dimensional (3D) form is known as a fold. So far PSI groups have found that only 12% of their completed structures sport new folds. “The number of folds will be considerably less than previously thought,” says Ian Wilson, a structural biologist at the Scripps Research Institute in La Jolla, California, and head of the Joint Center for Structural Genomics. This means that proteins with vastly different patterns of amino acids adopt similar 3D shapes. That, Wilson says, is critical information for computer modelers working to predict the structures of proteins based only on their DNA sequence.

    Protein landscape.

    This graph reveals how proteins cluster into four structural classes.

    CREDIT: S. H. KIM/BERKELEY STRUCTURAL GENOMICS CENTER

    Researchers are also mapping out how all these unique proteins relate to one another. In a report published online on 10 February by the Proceedings of the National Academy of Sciences, researchers at the University of California, Berkeley, and the Lawrence Berkeley National Laboratory (LBNL) in California compared nearly 2000 different protein structures, calculating the difference in shape between each protein and all of the others in the collection. They then graphed the results, placing similar structures close to one another. They found that the global protein structure landscape is a bit like the cosmos, where galaxies cluster together amid vast regions of emptiness.
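    The Berkeley map is, at heart, a low-dimensional embedding of a matrix of pairwise shape differences. The paper's actual comparison metric and projection method are not spelled out here, so the Python sketch below substitutes classical multidimensional scaling on a made-up distance matrix, purely to illustrate the idea of plotting similar structures near one another.

        import numpy as np

        def classical_mds(D, dims=2):
            # Embed items in `dims` dimensions from a pairwise distance
            # matrix D, so that similar items land near each other.
            n = D.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
            B = -0.5 * J @ (D ** 2) @ J             # double-centered Gram matrix
            eigvals, eigvecs = np.linalg.eigh(B)
            top = np.argsort(eigvals)[::-1][:dims]  # largest eigenvalues first
            return eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))

        # Stand-in data: random "shapes" for five structures, compared pairwise.
        rng = np.random.default_rng(0)
        X = rng.random((5, 3))
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        print(classical_mds(D))   # five map positions; plot to see clustering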

    That map does have sharp features, however, says study author Sung-Hou Kim, an LBNL structural biologist and the head of the Berkeley Structural Genomics Center. It shows the four main classes of protein structures—known as α helices, β strands, and proteins with mixtures of α and β domains called α+β and α/β—as four elongated arms emerging from a common center. The map, Kim says, suggests that much of the protein structure space is empty because proteins with certain shapes are architecturally unstable. That in turn suggests that structural genomics groups are unlikely to find any new structural classes of proteins. Says Kim: “I would be very surprised if they did.”
