News this Week

Science  22 Mar 2002:
Vol. 295, Issue 5563, pp. 2188

    Statistical Analysis Provides Key Links in Milosevic Trial

    1. Richard Stone*
    1. With reporting by Eliot Marshall.

    THE HAGUE, NETHERLANDS—Science took center stage last week in one of the most widely watched war-crimes trials since Nazi leaders were brought to justice after World War II. In testimony spanning 2 days, the International Criminal Tribunal for the Former Yugoslavia (ICTY) heard from the sole scientific expert that the prosecution is expected to call against Slobodan Milosevic, Yugoslavia's president and commander of its armed forces during the conflagrations in the Balkans in the 1990s. Patrick Ball, a statistician with the American Association for the Advancement of Science (AAAS, publisher of Science), testified that evidence his team has gathered is consistent with the hypothesis that Yugoslav forces conducted a systematic campaign of killings and expulsions of ethnic Albanians from Kosovo in the spring of 1999.

    Ball's testimony offered an overview of the Kosovo tragedy while also lowering the level of emotions in court. Until last week, the prosecutors had relied heavily on Kosovar Albanian witnesses, who sometimes broke down on the stand as they described atrocities by Serb forces. In contrast, Ball's testimony, based on statistical analyses, hardly made for courtroom drama. But in a case hampered by the apparent lack of a paper trail linking the Kosovo atrocities to orders given by Milosevic, “the beauty of [Ball's] study is that it actually gets to the truth” of what happened, says David Tolbert, executive director of the American Bar Association's Central and East European Law Initiative (ABA/CEELI).

    Moreover, the use of statistical tools to discern patterns in human rights abuses involving many victims could find wide application in other hot spots, from Bosnia to East Timor, says Tolbert, who served as senior legal adviser and chief of staff at ICTY in the mid-1990s: “In that sense, it's groundbreaking testimony.”

    The first phase of Milosevic's trial for alleged war crimes in the former Yugoslavia deals with events in Kosovo. Between 1 January and 20 June 1999, according to a detailed indictment, Yugoslav and Serb forces “executed a campaign of terror and violence directed at Kosovo Albanian civilians.” During the ethnic cleansing operation, the indictment states, some 800,000 civilians were driven from Kosovo; many were killed on their way to neighboring countries. Milosevic is charged with four counts of crimes against humanity in Kosovo on the basis of his “superior criminal responsibility” for the deportation, murder, and persecution of Kosovar Albanian civilians.

    Grim correspondence.

    Slobodan Milosevic did not challenge a central finding of Patrick Ball's report, that killings in Kosovo peaked in sync with emigration flows in the spring of 1999.


    Ball, deputy director of AAAS's science and human rights program, has had plenty of experience with those sorts of abuses. For more than a decade he has conducted statistical analyses for truth commissions, tribunals, and United Nations missions in countries from Guatemala to Sri Lanka. His work caught the attention of the nonprofit group Human Rights Watch, which invited Ball to Albania in March 1999 to launch an investigation.

    At the time, refugees were flooding into Albania through one point in particular, the village of Morina. That spring, Albanian border guards registered, by name and town of origin, 272,000 individuals who crossed the border at Morina, while international observers, using handheld counters, tallied 404,000. Refugees interviewed by a range of organizations—from ABA/CEELI to the Organization for Security and Cooperation in Europe—told horrific stories of looting, massacres, and other abuses at the hands of the Serb military.

    Milosevic and Serb leaders have blamed the mass exodus on two causes: the activities of the ethnic Albanian Kosovo Liberation Army (KLA) and civilian suffering caused by NATO bombing of Serb troops in Kosovo. But the tribunal prosecutors say Milosevic caused the tragedy by sending into Kosovo troops who attacked both the KLA and civilians.

    On 13 March, prosecutors led Ball through key sections of Exhibit Number 67, a technical report by AAAS and ABA/CEELI that examined these hypothesized causes. Relying mainly on the border records, Ball and his group estimated when the refugee flows began and where they had originated. Ball's team—which included researchers from ABA/CEELI, the University of Chicago, and Carnegie Mellon University—identified three distinct peaks of migration: in late March, mid-April, and early May (see figure).

    Ball's team also investigated killings, finding that the murders peaked in sync with the emigration flows. Validating the information was difficult, he says, requiring a complex statistical approach known as “capture-tag-recapture” used widely to adjust census counts. The team drew data from four sources, including interviews conducted by human rights volunteers and ICTY-ordered exhumations throughout Kosovo. Starting with tens of thousands of reports, the group identified 4400 “unique individuals” who had been killed, leading them to estimate that the death toll among Kosovar Albanians was 10,356. “That piece of work was very impressive,” says Ronald Lee, a demographer at the University of California, Berkeley, who found the report “quite persuasive and done with a great deal of care.”
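    In its simplest two-list form (a hypothetical illustration; Ball's analysis combined four sources and is considerably more elaborate), capture-recapture works as follows: if two independently compiled lists record n1 and n2 victims, and m names appear on both lists, the Lincoln-Petersen estimator puts the total, including deaths documented by neither source, at

    \[ \hat{N} = \frac{n_1 n_2}{m} \]

    Two hypothetical lists of 2000 and 1500 names sharing 300 entries would thus imply a toll of about 10,000; the overlap between lists reveals how much of the whole each list managed to capture.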

    “We made every attempt to be as conservative as possible” in estimating the number of victims, Ball, 36, told the court, “to present a statistical case as favorable as possible to the hypotheses that we ultimately rejected.”

    His team discovered that the timing of the deaths and refugee movements lined up—even when broken down by region—indicating that they had been triggered by “a common cause.” The researchers found that KLA activity, as reported in the Serb press, rarely occurred in Kosovo municipalities at times that could be linked to killings and refugee movements. Nor did these events correlate with NATO air strikes, which occurred after the peak in killings—or did not occur at all—in 20 of 29 municipalities. “If something is to cause something else, then the cause must precede the effect,” Ball noted dryly.

    Yugoslav army activity, on the other hand, ebbed and flowed roughly in sync with refugee movements and killings. Particularly damning was what happened after a 2-day cease-fire that Yugoslav authorities called on the evening of 6 April 1999 to honor Orthodox Easter. “We found a consistent and drastic decline both in refugee movement and people killed,” Ball stated. The findings, he said, contradict Milosevic's claims that NATO bombing or the KLA triggered the disaster in Kosovo, and they are “consistent with the hypothesis that Yugoslav forces were the cause.”

    Milosevic, who has refused to recognize the tribunal's legitimacy and is representing himself in the trial, often seemed distracted during the 2 hours that the prosecutors took to question Ball. But he sprang to life in the cross-examination. Early on, he suggested that the report's conclusions were contrived to satisfy U.S. foreign policy, as the U.S. government had funded the report. Ball rejected this, noting that previous investigations he had undertaken into human rights abuses in El Salvador and Guatemala had been critical of U.S. foreign policy.

    Warned by presiding Judge Richard May to focus on the evidence, Milosevic, after a brief, ironic smile, chastised Ball for ignoring the plight of Serb refugees and suggested that the data from the Albanian border guards had been fabricated. “You have been deceived,” Milosevic said. He also asserted that the three hypotheses Ball examined oversimplified the events in Kosovo.

    “I'm not a politician,” replied Ball, looking directly at Milosevic, unlike many previous witnesses. While acknowledging that his group's statistical approach “does not exclude the possibility that there may be other causes,” Ball reiterated that the three claims tested were those put forward by one side or the other to explain what happened in Kosovo.

    The trial, which is expected to last until early 2004, will undoubtedly detail many more atrocities in Kosovo—and in Bosnia-Herzegovina and Croatia, where Milosevic faces a further 61 counts of war crimes. The prosecution is hoping that Exhibit 67 will at least provide a thread that ties together the events in Kosovo.


    Dam Threatens Ancient Iraqi Sites

    1. Andrew Lawler

    LONDON—Construction has begun on a Tigris River dam that will flood dozens of important archaeological sites in northern Iraq, including the ancient royal capital of Assyria. A senior Iraqi antiquities official attending a scientific meeting here last week pleaded for international help in salvage excavations, but researchers say there may be too little time and too much politics to save more than a fraction of the Assyrian heartland before the floodwaters finish rising in 2007.

    The Makhool Dam, located between Baghdad and Mosul, is expected to alleviate a severe water shortage stemming from Turkish dams upstream, says Muayad Damerji, antiquity adviser to the Ministry of Culture. Those dams have flooded key archaeological sites in Turkey—and more are planned, including one that would submerge the ancient city of Hasankeyf. But the Makhool Dam's consequences will be much more far-reaching: Damerji has identified 65 important sites in the region that must be salvaged in the next 5 years.

    Preeminent among those sites is Ashur, which served as the religious and cultural capital of the Assyrian Empire for half a millennium. Set on a 40-meter-high bluff overlooking the Tigris, Ashur rose to prominence as a trading center during the Old Assyrian period in the early second millennium B.C., says Arnulf Hausleiter, an archaeologist at Berlin's Free University who is digging at the 65-hectare site.

    Before the flood.

    Iraqi archaeologists work at a site that will be inundated by 2007.


    Ashur later served as the spiritual center of the Assyrian Empire, which by the ninth century B.C. had stretched from the borders of Nubia in Africa to the Persian Gulf. The Assyrians apparently bestowed the city's name on their primary god, and generations of rulers were laid to rest near a ziggurat (temple tower) that still stands on the promontory. After the city was sacked in 612 B.C., Ashur and its empire never recovered. “Ashur is Assyria,” says John Russell, an archaeologist at the Massachusetts College of Art in Boston. “If that site is lost, we lose the whole matrix” of Assyrian culture. Adds Georgina Herrmann, a University College London archaeologist: “It's an absolute disaster.”

    Plans for a coffer dam surrounding the site to protect Ashur were abandoned after its projected cost was higher than that of the Makhool Dam itself, says Donny George, research director of the Iraqi State Board of Antiquities, who with a half-dozen Iraqi archaeologists was in London for a conference on Assyria hosted by the British Museum and the British School of Archaeology in Iraq. Even a coffer dam would offer limited protection, George says, because a rising water table would wreak havoc on Ashur's buried mud-brick structures. “We are trying to convince the Ministry of Irrigation to impound less water—about 50% less—so we can save Ashur,” he says. But that must be weighed against providing desperately needed water to farms and cities, says Damerji.

    Flooding will also destroy dozens of more obscure but important sites in Assyria's heartland. For example, little digging has been done at Kar-Tukulti-Ninurta, a city just upstream from Ashur that served as an Assyrian capital in the 13th century B.C. “Who knows what's there?” says Michael Roaf, an archaeologist at the University of Munich. Iraqi experts are hurriedly surveying areas near the dam that would be submerged first; a catalog of endangered sites should be ready soon.

    Damerji has invited foreign assistance in the Makhool effort, but it is unclear how quickly an effective rescue operation could be organized. U.S. and British archaeologists are barred by their own countries from working in Iraq, and researchers from other European countries and Japan are only now returning after a decade-long hiatus (Science, 6 July 2001, p. 38). The Iraqi government, hobbled by sanctions, has little funding for archaeology.

    “Ashur is a site of world significance, and this affects the whole academic community,” says Harriet Crawford, director of the British School of Archaeology in Iraq. The conference organizers intend to issue a statement deploring the destruction of Ashur. But with tensions in the region rising over a possible military campaign to oust Saddam Hussein in the coming months, researchers concerned with humanity's heritage have a tough fight to gain the ears of politicians in Baghdad and beyond.


    Scientists Deplore OK for Sturgeon Catch

    1. Richard Stone

    CAMBRIDGE, U.K.—Marine biologists are livid over an international panel's decision to allow nations to resume fishing beluga sturgeon from the Caspian Sea this year. Quotas were endorsed last week by a policy committee of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). However, some pressure groups are demanding to see the data CITES officials used to conclude that the beluga, prized for its caviar, can withstand commercial harvesting.

    Last June, three of the five nations around the Caspian's shores—Azerbaijan, Kazakhstan, and Russia—agreed to an unprecedented 6-month ban on fishing sturgeon. But in January, the Caspian states proposed sturgeon quotas for 2002 with the expectation that the CITES Secretariat would allow them to resume trade in caviar. The beluga variety can fetch more than $2500 per kilogram.

    The secretariat obliged. In a 6 March statement, CITES Secretary-General Willem Wijnstekers said that all five Caspian governments had demonstrated “stable or, in some cases, increasing” sturgeon numbers through a program to survey and manage stocks. “This breakthrough on sturgeon management marks a dramatic step forward toward transparency and cooperation,” said CITES Deputy Secretary-General Jim Armstrong.

    Looks fishy.

    Scientists have challenged claims that beluga stocks are stable.


    But many experts are shocked by the suggestion that beluga stocks are stable. “It's perplexing that CITES, an organization charged with protecting endangered wildlife, has hung the beluga out to dry,” says marine biologist Ellen Pikitch of the Wildlife Conservation Society in New York City. She and others contend that CITES officials ignored the results last year of a comprehensive survey of Caspian fish stocks by the Caspian Environment Programme, a World Bank and European Union initiative. The survey found few mature sturgeon, prompting a call for a 10-year fishing ban (Science, 18 January, p. 430).

    Opponents of the new quotas want CITES officials to reveal the underlying data mentioned in the 6 March statement. The secretariat “has not provided a rationale to justify its decision nor any scientific evidence to support its estimates of beluga sturgeon numbers,” charges the lobbying group Caviar Emptor. Armstrong could not be reached before Science went to press, and other officials declined to give details.

    Caviar Emptor and other groups want beluga elevated to the CITES Appendix I list, which would ban its export from any signatory nation. The first opportunity for that will come at the November meeting of the CITES parties in Santiago, Chile.


    Women Engineers Marginalized, MIT Report Concludes

    1. Andrew Lawler

    BOSTON—A 1999 report that documented the plight of female researchers at the Massachusetts Institute of Technology (MIT) sparked a heated national debate about the need to improve the status of women scientists in academia. Now a new study of MIT's School of Engineering cites a host of similar barriers, leaving dean Thomas Magnanti to conclude that “MIT is not a hospitable environment” for many women.

    The largest of MIT's schools, engineering also has the lowest percentage of female professors—fewer than 10% of the school's 357 faculty members. Those hired are subject to “a consistent pattern of marginalization,” states the 30-page study, which Magnanti commissioned in 1999 as one of four reviews of individual schools. Women's representation on the faculty is far below their share of the student body, and even among students that share declines from the first degree to the final one (see chart). Unlike the 1999 report on the School of Science, however, the engineering study did not find significant inequities in salary and space based on gender. But there are “more subtle biases” that may be harder to redress, Magnanti says, including a dearth of women faculty members on Ph.D. committees and in senior administrative posts.

    “Simply put, this situation is unacceptable,” he says in a letter accompanying the report, which contains narratives along with some grim statistics. Magnanti also endorses the report's recommendations, which include doubling the percentage of women engineers in a decade, hiring consultants for job searches, and holding workshops to increase gender awareness. “Barriers persist,” he writes, “and all too many of us remain oblivious to them.”

    MIT engineer Lorna Gibson, who chaired the study, says that much of the exclusion “is not malicious; it's unconscious.” Such behavior takes a variety of forms. Gibson recalls that she was typically the one asked to cover for male colleagues on sabbatical. “It was like being a substitute teacher” rather than a valued professor, she says. That attitude changed, however, once she pointed out the disparity. And some women had never been asked to serve on a Ph.D. committee, a situation that Magnanti says he found “stunning.”

    Behind the times.

    The percentage of women earning MIT engineering degrees far outstrips their presence on the engineering faculty.


    The situation is better on the pay front. In late 1995, female engineers requested a salary review, which resulted in significant increases. Additional boosts followed a 2000 review. “The data suggest that salary inequities have occurred in the School of Engineering” but have since been addressed, the report concludes.

    With regard to hiring practices, the engineering faculty has twice as many women as a decade ago, and this year three women accepted faculty positions for 2002. But the growth has been uneven. Between 1990 and 1998, for example, the electrical engineering and computer sciences department, one of the largest, hired 28 men and no women, although it has added three women since then. Between 1981 and 1999, according to the report, nearly three times as many women as men rejected job offers, citing “the difficulty in collaborating with colleagues.”

    Hiring women is one thing; keeping them is another. In the mechanical engineering department, for example, only one of the five women hired between 1987 and 2001 is still at MIT. “We need to make this a more welcoming environment,” Magnanti says. Toward that end, MIT is modifying its family leave and child-care policies. The dean also has agreed to use consultants to search for qualified women and to examine why women reject MIT offers at a higher rate than men do. But Magnanti concedes that doubling the percentage of women in a decade will be “a stretch.”

    Women account for only 15% of MIT's total faculty, and reports from the architecture and management schools found smaller but similar numerical imbalances. There were no signs of salary inequities based on gender in the humanities school, which has the highest percentage of women faculty members.

    Magnanti, along with Gibson and other women engineers, hopes that the report will extend the debate launched by the science report and serve as a model for other academic institutions. “This is not just an MIT problem,” says Gibson. MIT officials also hope to lead the way in fostering diversity among female faculty members, although provost Bob Brown did not offer specific proposals at a recent meeting of minority women scientists.

    The fact that “MIT is saying everybody should pay attention” is an important statement, says Evelynn Hammonds, a science historian who organized the meeting and is the only tenured African-American woman at MIT. There are just four women of color among MIT's 94 tenured women, including one engineering professor.


    African Skull Points to One Human Ancestor

    1. Ann Gibbons

    Almost 1.8 million years ago, a new kind of human appeared on the scene in Africa and Eurasia. It stood as tall as living humans do and had a relatively large brain, slender hips, and a barrel-shaped rib cage. These early humans used stone tools adeptly, scavenged meat on the open savanna, and colonized more than one continent. But anthropologists have been divided for 2 decades about their identity: Were they members of one peripatetic species, Homo erectus, which included later fossils in China and Indonesia? Or did they belong to a different species called H. ergaster?

    A report in this week's issue of Nature offers an answer, based on a million-year-old skull from Ethiopia, that meshes with the judgment of a previous generation. The team of American and Ethiopian researchers has concluded that all of the African and Asian fossils belong together in one species, H. erectus. The debate is more than academic quibbling about classification: The skull shares key features with both the early African and somewhat later Asian and African fossils, and it therefore links them all as interbreeding members of the same wide-ranging species that gave rise to living humans.

    “This fossil is a crucial piece of evidence showing that the splitting of H. erectus into two species is not justified,” says co-author and paleoanthropologist Tim White of the University of California, Berkeley. “This African fossil is so similar to its Asian contemporaries that it's clear H. erectus was a truly successful, widespread species throughout the Old World.” If White and his colleagues are right, there was a single species that spread from Africa to Europe to Asia 1 million years ago, rather than several different species alive at once.

    One, at a million.

    This skull suggests that one human species—not many—lived 1 million years ago.


    But others say it is premature to write a death notice for H. ergaster. “I don't think it takes the wind out of the sails of H. ergaster,” says Bernard Wood of George Washington University in Washington, D.C., who still thinks more than one species was alive 2 million to 1 million years ago. “I'm not at all convinced it is an intermediate,” agrees Jeff Schwartz of the University of Pittsburgh. “To me, it says there was more diversity in these hominids.”

    The idea of H. erectus as the direct ancestor of living humans is a return to a view embraced by most anthropologists until the mid-1980s. That's when several scientists, including Wood, proposed that fossils found in Africa in the 1970s—including hominids that had lived as early as 1.8 million years ago on the shores of Lake Turkana in Kenya— differed from the classic specimens of H. erectus from Java, Indonesia, which appeared between 200,000 and 750,000 years later (Science, 2 March 2001, p. 1735). The Asian fossils, they argued, had generally more robust features and belonged to a separate species. That meant that H. ergaster was the human ancestor—and H. erectus was an Asian dead end, says Philip Rightmire of the State University of New York, Binghamton.

    More than a decade of debate ensued. Then, in 1997, White's graduate student W. Henry Gilbert found a calvaria—a skull without a jaw—in the 1-million-year-old Daka member of the Bouri Formation of Ethiopia. Although gnawed by animals, it was well preserved. Most importantly, it shared features with both Asian and African fossils, including large, projecting brow ridges like those of the Asian H. erectus, says co-author Berhane Asfaw of the Rift Valley Service in Ethiopia.

    The team compared the Bouri fossils with others from Africa, Europe, and Asia and used cladistic methods to rank 22 characters in the skulls, sorting them on an evolutionary tree. The researchers found that the Bouri skull, along with another skull from Olduvai in Tanzania, overlapped extensively with Asian forms and later African fossils. “This clearly shows that the features previously considered to separate the Asian and African forms do not hold,” says Asfaw. That evidence is persuasive for Rightmire and Eric Delson, a paleoanthropologist at Lehman College of the City University of New York. “So, H. erectus is still a pivotal species,” says Delson. “This was the only game in town for a million years.” But Wood and Schwartz continue to think there were other players on the scene, suggesting that the question is far from settled. “I don't think the issue will dry up and go away,” predicts Rightmire.


    The Good, the Bad, and the Anterior Cingulate

    1. Greg Miller

    Making good decisions on the fly is a skill critical for many activities, from navigating freeway traffic to trading stocks on the Internet. Now researchers have linked a key component of this type of decision-making—the split-second evaluation of how well things are going—to a distinct pattern of brain activity.

    On page 2279, psychologists William J. Gehring and Adrian R. Willoughby of the University of Michigan, Ann Arbor, report that electrical activity in the anterior cingulate cortex (ACC)—an area tucked into the crease between the two cerebral hemispheres— registers financial wins and losses as people play a gambling game. The authors believe that this brain activity may represent an immediate emotional reaction to the outcomes. The findings add a twist to theories on the role of the ACC and may provide insight into how decisions are swayed by emotion.

    In recent years, studies by Gehring and others have suggested that the ACC plays a critical role in evaluating the outcomes of one's behaviors. For example, one theory holds that the ACC reacts when people make mistakes. But the new study suggests that the ACC may be doing something even more fundamental: making subjective judgments about whether outcomes are good or bad, even before people are consciously aware of the results of what they've done.

    “This starts to shed light on how subconscious processes can affect our decision-making and starts to provide a bit of the neural basis for that,” says George Bush, a research psychiatrist at Harvard Medical School in Boston.

    Winning big?

    The anterior cingulate cortex can tell good news from bad.


    Gehring and Willoughby used electroencephalogram (EEG) electrodes to monitor the brain activity of people playing a gambling game. The gamblers chose one of two boxes that appeared on the screen of a computer monitor. One box indicated a 5-cent bet, the other a 25-cent bet. After a short delay, the boxes changed color. If the chosen box turned green, the amount bet was added to the person's stash; if it turned red, money was taken away. The color of the other box revealed how the players would have fared had they chosen differently. Win or lose, the EEG trace showed a distinctive dip arising from the medial frontal cortex—a response Gehring and Willoughby call the medial-frontal negativity (MFN). The MFN was more pronounced on loss trials—a difference that was evident within 200 to 300 milliseconds after the outcome of each bet was revealed. “This shows that the brain evaluates things very quickly,” Gehring says.

    The researchers don't see the MFN as simply a reflection of detecting mistakes, because the stronger response showed up even after correct choices, such as taking a 5-cent loss when the alternative was a 25-cent loss. Conversely, the MFN registered a win even when a choice led to the lesser of two gains. That might prompt people to reinterpret some of the studies linking the ACC to error detection, says experimental psychologist Don Tucker of the University of Oregon in Eugene: “You might even begin to think the reason this area responds to errors is because of their emotional significance.”

    Gehring and Willoughby also found that after losing a bet, people were more likely to bet big the next time around. Their MFN response to subsequent losses was enhanced, almost as if each successive loss were more painful. “It's the gambler's fallacy: If you lose money, you think you're due for a win,” Gehring says. “Here's a brain system that's tuned the same way.”

    The findings fit well with studies of people with damage to the ACC and surrounding areas, says neurologist Antoine Bechara of the University of Iowa in Iowa City. These patients make poor decisions in lab tests and in everyday life, Bechara says, because they have difficulties judging the emotional significance of the results of their behaviors.

    The study also represents a step toward the scientific study of human subjectivity, according to experimental psychologist Brian Knutson of Stanford University: “A basic feature of subjectivity is deciding whether things are good or bad. For a long time, scientists have considered that unstudiable.” But as the new study shows, in some cases the difference between good and bad can be caught in a dip on a graph.


    Distant Galaxy Heralds End of Dark Ages

    1. Andrew Watson*
    1. Andrew Watson is a writer in Norwich, U.K.

    After the big bang and the scorching fireball that followed it faded, the infant universe fell into what cosmologists call its Dark Ages. Light returned half a billion years later, when galaxies formed and the first stars ignited. Now a team of astronomers claims to have seen a galaxy—the most distant object ever detected—that pushes back the date when the Dark Ages ended and may imply that they were not so uniformly dark after all. “It's an important paper—as long as the results hold,” says theoretical astrophysicist Abraham Loeb of Harvard University.

    The Dark Ages began when the hot plasma of the fledgling universe cooled and recombined into neutral gas atoms, mainly hydrogen with some helium. This cold gas then slowly amalgamated into the first stars and galaxies. Only after those stars began cooking the opaque neutral hydrogen gas around them into a clear gas of ions did the Dark Ages lift and stars and galaxies become fully visible.

    Any infant galaxy dating from this “epoch of reionization” at the end of the Dark Ages is likely to be at an immense distance and therefore very faint, at the limits of what existing telescopes can view. In an online paper to be published in the 1 April issue of Astrophysical Journal Letters, Esther M. Hu of the University of Hawaii, Manoa, and colleagues describe how they found one such galaxy using a natural image intensifier: a gravitational lens. They trained the 10-meter Keck telescopes atop Mauna Kea, Hawaii, on a cluster of galaxies called Abell 370, about 6 billion light-years away. The gravity of this cluster acted as a lens, bending the light from a more distant galaxy behind it and brightening it by 4.5 times. “If it wasn't for the lens, you'd have to use a 30-meter [telescope] to get this data, which doesn't exist,” says Hyron Spinrad, an astronomer at the University of California, Berkeley.

    Bright spot.

    The most distant known galaxy (arrow) appears to light up the universe's Dark Ages.


    Hu's team looked for a red-glowing galaxy, the telltale signal of a star foundry whose ultraviolet emission has been stretched by redshift as it traversed the universe. The higher an object's redshift is, the farther the light has traveled and the earlier in the universe's history it left its source. The galaxy Hu and her team found has a redshift of 6.56, putting it about 15.5 billion light-years away, so we are seeing it as it was just 780 million years after the big bang. But seeing a star-forging galaxy that early on means that “the end of the Dark Ages lies earlier in time than people had previously thought,” says Hu. “The thought had been that galaxies were put together somewhat later than this time,” says Spinrad, so this new galaxy “is a little bit of a novelty.”
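    For orientation, redshift measures how much cosmic expansion has stretched light in transit, via the textbook relation (a standard definition, not a result of Hu's paper)

    \[ 1 + z = \frac{\lambda_{\mathrm{observed}}}{\lambda_{\mathrm{emitted}}} \]

    At z = 6.56, hydrogen's ultraviolet Lyman-alpha line, emitted at 121.6 nanometers, arrives stretched by a factor of 7.56 to roughly 919 nanometers in the near-infrared, which is why the team hunted for a “red-glowing” source. Translating a redshift into a distance or an age, by contrast, requires assuming particular cosmological parameters, so those figures are model dependent.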

    Loeb is not entirely convinced. It's a “surprising claim,” he says, not only because the galaxy formed so early, but also because stars and quasars must have already cleared a path for its light by sweeping away the opaque neutral hydrogen. That scenario, Loeb says, sits uncomfortably alongside the findings of Xiaohui Fan of the Institute for Advanced Study in Princeton, New Jersey, who reported a quasar at a redshift of 6.28, last year's candidate for the “most distant” prize. Fan and his colleagues believe that missing wavelengths in their quasar's light indicated that there was still neutral hydrogen around at a redshift of 6.10, which would have blotted out the star-forming signature of Hu's more distant galaxy.

    But Hu says both observations may be right. Seeing the signature of star formation in the newly discovered galaxy suggests that enough galaxies already existed to see off the neutral hydrogen around it, in turn implying a largely transparent universe. By contrast, a quasar is so brilliant—about 1000 times brighter than an ordinary galaxy—that it clears away the neutral hydrogen fog around itself. So although the quasar itself may shine through, this says little about the condition of other parts of the universe. And the neutral hydrogen that Fan sees may well be due to cold, dark clouds of neutral hydrogen between us and the quasar, says Hu, rather than evidence of the widespread pre-reionization fog.

    Spinrad is confident that Hu's faint source is a true early galaxy. “I think the result is right,” he says. “The idea that the universe was dark … at that kind of redshift, it can't be a completely correct statement any longer.” Loeb, however, would prefer to wait for more results: “It would be much more convincing if there were more objects of this type.”


    Whisper of Magnetism Tells Molecules Apart

    1. Robert F. Service

    High-energy physicists aren't the only scientists with a lust for power. For decades, chemists have built ever stronger magnets to improve nuclear magnetic resonance (NMR) spectroscopy, a technique that gleans the structure of molecules from the unique magnetic signatures of their component atoms. But generating those high magnetic fields is expensive, which drives up the cost of NMR instruments and of related medical imaging scanners.

    A team led by researchers at Lawrence Berkeley National Laboratory (LBNL) and the University of California (UC), Berkeley, is bucking the trend with a strategy that could pay off for physicians and their patients. On page 2247, the group describes a new way to get detailed chemical information at ultralow magnetic fields. Because NMR forms the basis of magnetic resonance imaging (MRI), the new technique might someday eliminate the massive and costly magnets used in today's medical imaging systems.

    “It's a very elegant piece of work,” says Warren S. Warren, who heads the center for molecular and biomolecular imaging at Princeton University in New Jersey. Allen Garroway, a physicist at the Naval Research Laboratory in Washington, D.C., says that the prospect of low-field medical imaging is “tantalizing” because of the “huge market” for low-cost MRI. But all agree that extending this technique to medical MRI machines still faces significant hurdles.

    In traditional NMR, bigger magnets make it easier to track atoms. Some atomic nuclei behave like tiny bar magnets and align when placed in a magnetic field. In NMR, researchers disrupt that alignment slightly and use the telltale oscillation, or precession, of nuclei around the magnetic axes to identify particular atoms and their positions. The more powerful the external magnetic field is, the more pronounced this “chemical shift” signal becomes. That makes it possible to work out the structure of larger molecules and make use of smaller samples.


    A sensitive detector (background) picks up molecular fingerprints in NMR spectra without high-field magnets.


    But NMR spots other telltale magnetic signatures beyond the chemical shift. In a related effect, called “J-coupling,” for example, electrons around different atoms in a molecule influence one another in ways that split the atoms' spectral signatures from one line into two or more. “It tells you which atoms are bonded to which,” says Alexander Pines, a chemist at UC Berkeley, who led the new study along with physicist John Clarke of UC Berkeley and LBNL. The signal for this effect, it turns out, remains constant as the applied magnetic field drops.
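    In textbook terms (standard NMR relations left implicit in the article, not findings of the new paper), a nucleus with gyromagnetic ratio γ precesses at the Larmor frequency, slightly modified by the shielding σ of its surrounding electrons:

    \[ \nu = \frac{\gamma B_0}{2\pi}(1 - \sigma) \]

    Chemical-shift separations therefore shrink in proportion to the applied field B0, whereas J-coupling splittings are molecular constants measured in hertz. That is why the J-coupling fingerprint survives at ultralow fields; the challenge becomes detecting the correspondingly feeble signal, which is where the SQUID comes in.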

    Pines and his colleagues decided to see whether they could use this effect to identify compounds using only a very small magnet and a simple two-part test. In a test tube, they placed a solution of water and two different test molecules: methanol and phosphoric acid. They then used an ultrasensitive magnetic field detector, called a SQUID, to try to pick up the characteristic spectral-line splitting signature of a phosphorus atom bound to an oxygen that is, in turn, bonded to a hydrogen. Phosphoric acid has this phosphorus-oxygen-hydrogen configuration. But when it's mixed with water, hydrogen atoms quickly drop off and reattach themselves to the acid molecules. Because of that rapid exchange, the NMR detector saw no stable three-atom configuration and registered just a single spectral line.

    Next, the researchers reacted the methanol and phosphoric acid to form trimethylphosphate, a compound that also has the three-part phosphorus-oxygen-hydrogen configuration, but with the hydrogens fixed in place. In this case, the SQUID spotted a phosphorus-oxygen-hydrogen configuration and registered it as a split in the spectral line.

    Pines hopes the work will lead to a low-magnetic-field approach to MRI. But that effort faces at least one very difficult challenge, says Warren. MRI builds images piece by piece, detecting the magnetic spins of hydrogen nuclei in small volumes of a material. Reducing the applied magnetic field makes it harder to pick the spins out of random background noise and could degrade the resolution of a scan. Warren says alternative advanced imaging techniques may solve the problem. If so, J-coupling could revolutionize medical imaging by making the machines, now housed in specialized centers at hospitals, cheap enough for the doctor's office.


    Judge Reverses Decision on Fingerprint Evidence

    1. Adrian Cho*
    1. Adrian Cho is a freelance writer in Boone, North Carolina.

    A federal judge in Philadelphia has changed his mind and decided that fingerprint examiners should have their say in court—even if what they do isn't science.

    In January, Judge Louis H. Pollak of the U.S. District Court for the Eastern District of Pennsylvania found that fingerprint identification didn't meet the U.S. Supreme Court's standards for scientific evidence (Science, 18 January, p. 418). Pollak ruled that fingerprint identification failed three of four criteria set by the high court in its 1993 decision, Daubert v. Merrell Dow Pharmaceuticals. He said that the technique hadn't been scientifically tested, wasn't subject to scientific peer review, and didn't possess a known rate of error. Fingerprinting was “generally accepted” among forensic scientists, he found, but that did not establish its reliability. Pollak said fingerprint examiners could testify in an impending murder trial, but he forbade them from stating whether prints found at the crime scene matched those taken by authorities.

    On second thought.

    Fingerprint identification can continue, Judge Pollak ruled.


    Worried that the ruling would undermine one of their most powerful tools, federal prosecutors persuaded Pollak to reconsider the issue. On 13 March, he ruled that his interpretation of Daubert had been too narrow. Fingerprinting is a form of technical expertise akin to accident reconstruction or art appraisal, he said, so it need not meet the scientific peer-review requirement. And although the method's error rate is unknown, he writes, there is no evidence that it is unacceptably high. In the absence of rigorous testing, he said, limiting the testimony of fingerprint examiners would “make the best the enemy of the good.”

    The new decision is a mixed blessing for practitioners, allowing them to declare matches but implying that they are technicians, not scientists, says James Starrs, a forensic scientist and law professor at George Washington University in Washington, D.C. “It's a blow to their status in the scientific community,” he says. However, Pat Wertheim, a forensic scientist at the Arizona Department of Public Safety in Tucson, says the distinction between scientists and technicians makes little difference in a trial. “For all practical purposes,” Wertheim says, “the 12 people on the jury couldn't care less.”


    Ancient DNA Untangles Evolutionary Paths

    1. Elizabeth Pennisi

    Analyzing ancient DNA for clues to the deep past has had a bad rap: Too many false reports of recovered dinosaur DNA have sullied the field's reputation. Now, that's about to change. Two independent research groups have shown that, when studied correctly, genetic material preserved in cold environments can reveal quite a bit about the past.

    Alan Cooper, a molecular evolutionist at the University of Oxford, United Kingdom, and his colleagues used ancient DNA to reconstruct the migration patterns of subarctic brown bears living up to 60,000 years ago. Similarly, David Lambert's group at Massey University in Palmerston North, New Zealand, examined 6000-year-old DNA from Antarctic penguins to determine the rate at which the birds' genomes evolved.

    The two teams report their findings on pages 2267 and 2270, respectively. “Both these papers begin to chart a new course in ancient DNA studies,” comments Robert Wayne, an evolutionary biologist at the University of California, Los Angeles.

    Researchers first began extracting sequence information from DNA in museum specimens about 15 years ago, gradually moving on to ever older material. Much progress has been made using antique DNA to study genetic variation and to place extinct species in their family trees, but supposed extractions of DNA from dinosaur fossils or million-year-old insects in amber proved to be studies in modern contamination.

    Cooper avoided some common technical pitfalls by studying bones from northern Alaska, the Yukon, and Siberia, where DNA was kept on ice for thousands of years in the permanently frozen soil. In addition to these fresh-frozen samples, he and his colleagues had access to hundreds of bones in museums with their DNA still in intact fragments. To check the work, a second lab reanalyzed the DNA samples.


    Antarctic penguins that have historically stuck to the same breeding sites proved perfect for studies of ancient DNA.


    Cooper's team used the samples to look at the history of an intriguing geographic region. Until 11,000 years ago, the area called Beringia had a land bridge between the Asian and American continents, enabling species to cross back and forth—including humans about 13,000 years ago. The bones “provided an amazing opportunity to look at the genetic record” of several species, such as bears, bison, and lions, at what was “a biological crossroads,” Cooper explains.

    The researchers analyzed DNA from bones of 36 brown bears, using radiocarbon dating to verify the ages of 30 bones. They sequenced two pieces—one 135 and the other 60 bases long—of the mitochondrial genome and grouped the bears according to the degree of similarity in the sequences. The work indicates that the distributions of these ancient genetic groups did not correspond to those of modern populations. This finding serves as a caution to researchers that “what we see in present-day [distributions] is not necessarily true about the past,” says Rob Fleischer, an evolutionary biologist at the Smithsonian Institution in Washington, D.C.

    The data further indicate that brown bears may have disappeared from much of Alaska and the Yukon 35,000 years ago, only to reappear 14,000 years later. Because this reappearance corresponds with the disappearance of a larger animal called the short-faced bear, Cooper suggests that competition between the two species influenced the changes in their distributions.

    Meanwhile, Lambert and colleagues took advantage of frozen remains from a species at the other end of the globe. Adélie penguins' large colonies have existed for thousands of years, with the same birds and their descendants returning to the sites year after year. As a result, “here you have [layers of bones] in colonies that are beautiful” and ripe for DNA studies, says Axel Meyer, an evolutionary biologist at the University of Konstanz, Germany.

    Lambert's team members collected and dated 96 bones from various layers and gathered 300 blood samples from living birds. They and a lab at the University of Auckland, New Zealand, analyzed a rapidly evolving 352-base sequence from the mitochondrial genome, cataloging changes between the ancient and modern samples. From that, they calculated the rate of evolutionary change.

    In the past, studies estimating mutation rates have obtained just a few data points over very short (one or two generations) or very long (perhaps millions of years) time scales. But in this study, the researchers were able to “use the whole history of the past 6000 years and the genealogy of the genes to extrapolate the mutation rate,” says Wayne. Lambert's group puts that rate at about two to seven times faster than previous estimates for other species.
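    The crude arithmetic behind any such estimate (a simplified pairwise sketch with hypothetical numbers; Lambert's team actually fit the genealogy of many dated samples at once) divides the observed substitutions d by the sequence length L and the elapsed time t:

    \[ \mu \approx \frac{d}{L\,t} \]

    For example, two substitutions separating a 6000-year-old bone from modern birds across the 352-base region would imply roughly 2/(352 × 6000), or about 0.95 substitutions per site per million years.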

    Questions remain about both studies, however. Lambert says more bones were needed for Cooper's group to draw its conclusions. And Cooper wishes the DNA for the penguin study went back farther than 6000 years, so the variation would better reflect long-term rates of evolution. Others, including Fleischer, applaud the amount of DNA that Lambert collected but still worry that ancient changes in the penguins' distribution could have distorted the results.

    Nonetheless, says Svante Pääbo, an evolutionary biologist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, “the study of ancient DNA has now advanced to a point where one can study genetic variation within species over thousands of years and [obtain] data that can be trusted.” Lambert's and Cooper's studies may thus push the use of ancient DNA farther along the path to redemption.


    Setbacks for Endostatin

    1. Eliot Marshall

    Just as clinical trials of a widely heralded cancer treatment are about to be expanded, two groups report that they couldn't get it to work, indicating again how fickle and mysterious the compound remains.

    Harvard University's Judah Folkman electrified cancer researchers 5 years ago when he and his colleagues reported on a new compound that could shrink tumors in mice virtually to nothing. A surgeon at Children's Hospital Boston, Folkman had long pursued a strategy of fighting cancer by cutting off the blood supply to tumors, rather than by poisoning patients with toxic drugs. Using a substance called endostatin, the Harvard group obtained dramatic results; clinical trials soon followed. But some other researchers who attempted to follow this lead were unable to find endostatin's seemingly miraculous properties. Now two new studies, published in the April issue of Molecular Therapy, take aim at endostatin again, both reporting that it had no effect on tumors in mice.

    Although these papers are not the first to raise questions about endostatin, they are among the most pointed. One title speaks of “the unfulfilled promise of endostatin” in a type of gene therapy for mice with leukemia. And the other reports that, “despite continuous, high-level secretion of endostatin” in the bloodstream of mice, “we detected neither inhibition of [blood vessel growth] nor anti-tumor activity.” In a companion essay, Molecular Therapy's editor, Fintan Steele, writes that “results from these two groups certainly contradict much of what has appeared in prior publications.” The confusion about which data are reliable prompts Steele to ask whether there is “sufficient basic science to understand what endostatin is and what it does”—and whether it makes sense to expand clinical trials built upon the early reports.

    Folkman and Michael O'Reilly, the researcher in his lab who discovered endostatin, see no reason to pause. Although Folkman acknowledges that some gene transfer experiments such as those reported in Molecular Therapy have not worked out, he says others have been more promising. He and O'Reilly, who is now at the M. D. Anderson Cancer Center in Houston, Texas, also argue that the simpler approach of injecting endostatin directly has yielded positive results in animals that justify expanding clinical trials. So far, fewer than 200 patients have taken part in tests designed to measure safety. No cures were expected, and none have been reported.

    Conscious of endostatin's mixed record, some leaders in this field agree that the picture is not as simple as it seemed 5 years ago. As Robert Kerbel of the University of Toronto says, the pharmacokinetics of compounds designed to prevent blood vessel growth (antiangiogenics) may be “very complex,” and the method of administration can have a “huge impact” on efficacy. Folkman himself views the complexity as intriguing, adding that even negative reports are useful because they may help unravel the mysteries.

    Lapsed believer

    When O'Reilly and Folkman first described endostatin in the 24 January 1997 issue of Cell, it seemed like an ideal anticancer weapon. This naturally produced, nontoxic compound selectively shrank blood vessels and repeatedly caused tumors in mice to shrink to microscopic size. A later paper in Nature, with still more promising results, triggered bold predictions, including a report in The New York Times quoting Nobel laureate James Watson to the effect that Folkman would “cure cancer in 2 years.” This led to front-page stories and turned Folkman into a reluctant hero. He also became the subject of a popular book, Dr. Folkman's War, published last year.

    Since then, Folkman's group has expanded its work to other compounds that inhibit blood vessel growth and explored dozens of ideas for new therapies. Many other groups around the world also have plunged in. A private company—EntreMed Inc. of Rockville, Maryland—obtained rights to manufacture endostatin and since the late 1990s has sponsored small clinical trials. The National Cancer Institute (NCI) provided support too, funding a couple of clinical trials and animal studies on endostatin and other antiangiogenics carried out in well-established laboratories. But the two papers in Molecular Therapy have raised new red flags, including a report from one lab that it couldn't repeat the original 1997 experiment.

    Philippe Leboulch, a contributor to both papers in Molecular Therapy and the senior author of one of them, has turned from endostatin enthusiast to skeptic. A molecular geneticist, Leboulch investigates gene therapy techniques with a joint position at the Massachusetts Institute of Technology (MIT) and Harvard Medical School. He also has a small company, Genetix Pharmaceuticals Inc. in Cambridge, Massachusetts. Inspired by early data from Folkman's lab, he embraced endostatin in 1995.

    Powerful result.

    Endostatin therapy dramatically shrank mouse tumors in a 1997 experiment that raised high hopes for antiangiogenesis treatment.


    When Leboulch first connected with Folkman's team, he says: “We were very excited about collaborating.” One barrier to research in the early days, Leboulch explains, was that endostatin was hard to get. The endostatin for the successful 1997 Folkman lab mouse experiment had been produced in the bacterium Escherichia coli. But output was low, and the product was an insoluble aggregate. The MIT group—including Robert Pawliuk, Thomas Bachelot, and Omar Zurkiya—took a different route: This team spliced the endostatin gene into mouse hematopoietic stem cells, the progenitors of blood cells that live in bone marrow. This looked like a great strategy for getting endostatin expressed continuously and at high levels in animals.

    Gene transfer worked “as we had planned,” recalls Leboulch. “We got very high levels of secretion” in the bloodstream of mice: about 746 nanograms per milliliter (ng/ml), he says. Leboulch estimates that this systemic concentration, on average, was 750% higher than that naturally found in the animals. Eighteen mice received endostatin-expressing stem cells, and 10 received cells that didn't express the protein.

    To look for effects on blood vessel formation, Leboulch collaborated with Yihai Cao of the Karolinska Institute in Stockholm, who is an expert on angiogenesis. Cao compared five endostatin-treated and four control mice and saw no antiangiogenic effect. “In theory, [endostatin gene transfer] should have worked,” says Cao. “I don't see why it didn't.” He speculates that the protein produced by the transplanted gene may have been misfolded—a possibility Leboulch concedes. But no one knows how the active form of endostatin is folded, or whether a change in folding would make a difference.

    Not only did the MIT experiment have no effect on blood vessel growth, but it also failed to control tumors. The MIT researchers injected fibrosarcomas—one of the tumors used in the O'Reilly-Folkman mouse experiment—into mice in different ways to simulate local and metastasized tumors. Again, they found no difference between endostatin-treated mice and controls.

    The second experiment reported in Molecular Therapy—run by a group at the British Columbia Cancer Agency in Vancouver, Canada, including Connie Eaves and Wolfgang Eisterer—took a similar approach. This group targeted a cancer of the blood, acute lymphocytic leukemia (ALL), for endostatin therapy. The Vancouver group withdrew ALL cells from four patients and implanted them into immune-deficient mice. With gene transfer, the researchers also got mice to express relatively high levels of endostatin—180 ng/ml—in the blood. But when they compared the endostatin-producing mice with controls, they found no difference in cancer burden.

    Leboulch says his group took steps to see that the endostatin it produced was as close as possible to the original form. The researchers tested the protein produced by the transplanted gene to ensure that it inhibited endothelial cell proliferation, examined its amino acid sequence, and ran confirmatory antibody checks.

    He also says an earlier version of their paper was turned down for publication by Science because it lacked a “positive control”—a substance illustrating effective tumor control to compare with the endostatin failures. To remedy this, Leboulch tried to repeat the original 1997 mouse experiments. Leboulch's postdoc, Bachelot, asked Folkman's group for samples of the original E. coli precipitate but never received any. So Bachelot made injectable endostatin using the original E. coli recipe. This also produced no effect.

    Quiet celebrity.

    Judah Folkman with Senator Edward Kennedy (D-MA) and on the cover of a recent book.


    Failed experiments such as these often don't get published, but Leboulch says he decided to submit the results partly because his ex-postdocs wanted this work to get out, and partly because “some of my colleagues at Harvard encouraged me to make the data available.” One former Harvard researcher, asking not to be named, grumbles that he and “thousands of postdocs” have had the same disappointing experience. Although Leboulch admires Folkman and endorses his antiangiogenic program, he says: “We think we will get out of this endostatin business.”

    The beat goes on

    The failure of these two experiments points up what Folkman calls “a paradox”: Endostatin delivered to the body by gene therapy appears to be less effective than when the protein is simply injected. Last year, in a paper co-authored by Folkman in the Proceedings of the National Academy of Sciences, Richard Mulligan's group at Harvard compared the potency of five antiangiogenic compounds delivered by modified adenoviruses to mice. Ranked by efficacy, endostatin was at the bottom.

    “The mechanism of this paradox is unknown,” Folkman writes in a comment faxed to Science. The high concentrations of the protein produced by gene therapy, he speculates, might lead to “protein aggregation” that renders endostatin inactive. Mouse receptors might become overloaded at high serum concentrations, although the identity of the receptor is not known. And the gene-produced molecule might be more vulnerable to degradation or metabolic processes.

    Yet even this paradoxical behavior is not consistent. Folkman notes that a gene therapy experiment by Andrew Feldman and Steven Libutti at NCI did produce some promising results. Feldman and Libutti transplanted an endostatin gene into mouse liver tumor cells and implanted the cells into mice. As they reported in the Journal of the National Cancer Institute last year, the implants expressing the highest amounts of endostatin were most strongly inhibited from growing. Although Folkman speculates that high levels of endostatin may overload receptors, Libutti thinks that endostatin concentrations of 1 μg/ml or more—higher than described in either Molecular Therapy report—are needed locally to have an effect.

    To O'Reilly, the fact that some groups have seen at least modest tumor inhibition in gene therapy experiments suggests a simple explanation for the failure of the two studies reported in Molecular Therapy: The proteins produced in both experiments were defective.

    In contrast to gene therapy experiments, Folkman says protein-injection studies have yielded many positive reports. A recent one, co-authored by Folkman, Oliver Kisker, and other Harvard scientists in Cancer Research last October, reports “tumor regression” in immune-deficient mice treated with endostatin delivered continuously by a small implanted osmotic pump.

    The researchers used a soluble, yeast-produced form of human recombinant endostatin, the same material that EntreMed gives patients in its clinical trials. They calculated that the minipumps delivered systemic doses of 200 to 300 ng/ml. Although this is lower than in the Leboulch gene therapy experiment, Folkman notes that this method of delivery was up to 10-fold “more effective” at controlling new blood vessels than periodic injections were in most studies—with the exception of the remarkable effects seen in the 1997 study.

    O'Reilly agrees that it makes sense to investigate all of the discrepancies and puzzles in the results with endostatin so far. But he argues that these investigations should not hold up clinical trials, because “patients with advanced cancer are desperate” and “don't have the luxury of waiting.” EntreMed has received clearance from the U.S. Food and Drug Administration to expand its clinical trials to investigate responses to different doses. Even Leboulch says that clinical trials are now likely to provide the best new information on whether endostatin really works.


    Reseeding Project Offers Aid to Strapped Afghan Farmers

    1. Daniel Charles*
    1. Daniel Charles, author of the recent book Lords of the Harvest, is based in Washington, D.C.

    Drought and war have crippled agriculture in Afghanistan. But an international plant-breeding network is trying to improve a desperate situation

    The people of Afghanistan have survived a generation of political turmoil and civil war, but they now face an even more relentless enemy: famine. Agriculture has been almost wiped out in parts of the war-torn, drought-plagued country, and many farmers lack one of the most basic tools to revive it: seeds for their traditional crops. Help, however, may be on the way, thanks to a fortuitous deposit in a seed bank in nearby Syria.

    A few months ago, researchers at the International Center for Agricultural Research in the Dry Areas (ICARDA) in Aleppo, Syria, dipped into the subzero temperatures of their “gene bank” to retrieve about 200 samples of chickpeas, barley, lentils, and fava beans collected decades ago in the marketplaces and mountainsides of Afghanistan. Transplanted to the red earth of northern Syria, seeds from these botanical remnants of Afghanistan's past are being returned to their native land this month in the first stage of a long process to rebuild the country's agriculture. “Right now the seed situation in Afghanistan is critical,” says Adel El-Beltagy, the center's director-general. “We believe the majority of the country's seed was lost when farmers planted the 2001 crop. When the rains failed for a third year in a row, it put an end to their ability to stay on the land.”

    The picture is not uniformly dire. Even last year, Afghan farmers grew 1.5 million metric tons of wheat. This harvest, half of the amount needed to feed the country's 22 million people, came almost exclusively from irrigated areas where “improved varieties” common across Southwest Asia are grown. There's plenty of such seed in Pakistan and Iran, and ICARDA plans to deliver about 3500 metric tons of it to Afghanistan within the next week.

    Far more difficult, however, will be recreating the diversity of agricultural production in areas that rely on rainfall. Those regions once grew not just wheat but also nutrient-rich legume crops, millets, fruits, and nuts such as pistachios. Plant breeders have long neglected such crops to focus on high-yielding wheat or rice for irrigated land. As a result, farmers continue to rely heavily on traditional varieties, called landraces, that are specifically adapted to their local conditions, and seed of these varieties is in very short supply.

    Geoff Hawtin, director-general of the International Plant Genetic Resources Institute in Rome, traveled throughout Afghanistan during the 1970s collecting seed for use by crop breeders in Afghanistan and elsewhere. “We were setting up breeding programs for the West Asia-North Africa region,” recalls Hawtin, then at the Arid Lands Agricultural Development Project based in Lebanon. “You need genetic variability to breed new varieties of these crops, and Afghanistan was a kind of treasure house of genetic diversity. The furthest thing from our minds was that it would ever be used for this purpose.”

    Fresh start.

    Nasrat Wassimi, standing second from left, works with local farmers to help restore crop diversity in Afghanistan.


    Today's news reports that describe Afghanistan as barren ignore the country's genetic riches in crops such as barley, wheat, and chickpeas. Over many centuries, Afghanistan's farmers tailored these plants to the country's stupendously varied landscape, selecting for taste or yield and, in the process, creating a multitude of local strains. “They knew that on this hillside they'd grow this particular variety, but it didn't grow very well on the other hillside,” Hawtin says. “Or they'd have types for certain cooking preparations.”

    Hawtin's timing was providential, coming just a few years before the Soviet Union invaded Afghanistan and touched off a civil war. Although he and his colleagues left duplicate samples in Afghanistan's national gene bank, the facility was destroyed in 1992. The war and drought have decimated many of the agricultural areas Hawtin visited, and desperate refugees may have eaten the remaining stocks of seed. Indeed, ICARDA, which inherited most of Hawtin's collections, may be the sole repository for some local varieties.

    “We're talking about tiny quantities … grams,” says Willie Erskine, assistant director-general for research at ICARDA. Even with two growing seasons a year, it will take years to create enough seed for widespread planting of these local varieties. In the meantime, ICARDA will distribute tons of seed—from varieties created by its plant breeders—that it can produce quickly in large quantities.

    Some Afghan farmers, however, will have a chance to compare the results of planting such improved varieties with the harvest from traditional varieties preserved in ICARDA's gene banks. Several aid agencies are planning to organize a set of field trials later this year in which Afghan farmers will grow many different varieties side by side. “It'll be a combination of landraces and improved varieties,” says Erskine. “In the end, it's the people there who will decide what they're going to grow.”

    Nasrat Wassimi, a U.S.-trained Afghan scientist, returned to Afghanistan last week to lend a hand to the reseeding effort. In the 1990s, while based in Pakistan, Wassimi coordinated field trials by local farmers of many different crop varieties. “What's amazing is that they were able to do this under such difficult circumstances,” says John Dodds, ICARDA's Washington, D.C., representative. About half of the country's irrigation canals have been destroyed or abandoned, according to some observers; ICARDA hopes that remote-sensing data from satellites will provide a better picture.

    Wassimi left the region 2 years ago and settled with his family in Tucson, Arizona, but he has agreed to spend up to a year helping reestablish agriculture in his native land. He believes that many traditional varieties of crops have survived, with Afghan farmers planting traditional rain-fed varieties in irrigated fields during the drought to ensure a future supply of seeds. But water scarcity and lack of security may be harder obstacles to overcome. “Even some of the rivers have dried up,” he says. “And the people with guns, they're still there.”


    California Tries to Rub Out the Monster of the Lagoon

    1. Jay Withgott*
    1. Jay Withgott writes from San Francisco.

    With the seaweed Caulerpa a global threat, the world is closely watching an eradication effort in California

    SAN DIEGO—On a planet increasingly flooded with invasive species, victories are rare. But a dogged team of scientists and resource managers in California is hoping to beat the odds and triumph over the world's most formidable seaweed.

    The target is Caulerpa taxifolia, an alga native to the tropics. One cold-water strain thrives in aquaria. But that trait also makes it a threat to temperate coastlines around the world. All that's needed to seed an invasion is one saltwater hobbyist or aquarium store owner carelessly dumping algae-laden water.

    Officials in the Mediterranean and southeast Australia have already abandoned any hope of eradicating this strain in their waters. But 21 months after the exotic alga was discovered flourishing in a San Diego County lagoon (Science, 14 July 2000, p. 222), officials are hopeful that California's decision to act quickly will help it succeed where others have failed. At the same time, some scientists feel that those managing the eradication campaign could better fight their foe by trying to learn more about it.

    As invasive organisms go, the aquarium strain of C. taxifolia is an ecologist's nightmare. It crowds out native flora, knocking out the base of the marine food web, diminishing biodiversity, and impairing fisheries. Spreading outward with runners, it grows several centimeters a day and forms a dense green carpet that excludes all other plants. Its toxins deter herbivores that might otherwise keep its growth in check, and it does not die back in winter. It reproduces asexually from even the tiniest fragments of any part of the plant, so a single torn frond can begin a new colony. Its only weakness is an inability to reproduce sexually, as its wild relatives do. Scientists don't know if the cause is genetic or environmental, but they say that this C. taxifolia strain could become unstoppable if it finds a way to disperse eggs over long distances.

    Those characteristics, combined with boat propellers and fishing nets, have allowed the strain to spread from its humble start beneath the Oceanographic Museum of Monaco in 1984 into a monster covering 30,000 hectares of coastal sea floor off six Mediterranean nations. More recently, the global aquarium trade apparently also brought it to six locations near Sydney, Australia, and in June 2000 to biologist Rachel Woodfield's doorstep at the Agua Hedionda lagoon north of San Diego.

    Woodfield consulted experts to help identify what had been encroaching on the native eelgrass, and within days a team of scientists, industry managers, and government officials had coalesced into the Southern California Caulerpa Action Team (SCCAT). Lacking clear federal or state guidelines, they cobbled together funding and set to work. “We didn't want to duplicate the problems the Europeans had [and] get involved in a long, bureaucratic process,” says Robert Hoffman of the U.S. National Oceanic and Atmospheric Administration, one of the partners in the action team.


    Divers pump liquid chlorine into the mud and water to kill the unwanted strain of seaweed, Caulerpa taxifolia.


    The power company that owned the lagoon immediately funded its consulting firm, Merkel & Associates, to send divers into action, running surveys, taking growth measurements, and testing herbicides. Crews used tarps to quarantine patches of the weed and pumped deadly liquid chlorine into the water and mud that anchors the plants. SCCAT also produced and distributed 100,000 copies of an educational brochure describing the danger. In short order, San Diego's City Council banned the sale and possession of Caulerpa.

    But there were limits on the public's commitment to waging a seaweed war. Although recreational restrictions were imposed, SCCAT's proposal to prohibit boats for an unspecified period “went over like a lead balloon,” admits head consultant Keith Merkel: “It's a careful balance. You don't want to alienate people who are supportive.”

    While SCCAT worked to educate aquarium retailers, Susan Frisch and Steven Murray of California State University, Fullerton, surveyed Caulerpa's commercial availability. They found that 52% of Southern California retailers were selling Caulerpa and that 95% were selling “live rock” (rock or coral covered with organisms), some containing Caulerpa fragments. Such data helped convince state legislators to ban nine species of Caulerpa. “If this thing gets out of the bottle, we're going to have a real problem on our coasts,” says Assembly Representative Thomas Harman, whose Huntington Beach district was the second site of infestation.

    So far it's still in the bottle. Over 99% of the original biomass has been treated, and surveys are turning up fewer and fewer new plants. Tests of cores from within the tarps have found no evidence of viability. “Cautious optimism” is the operative phrase.

    But daunting challenges greet the start of the new growing season. One is money. SCCAT's current funding will dry up at year's end, and the team is still looking for the $1.5 million a year needed to sustain the eradication effort.

    SCCAT also needs to resolve internal debate over the role of science in the management process. Its eradication policy barred researchers from obtaining samples and performing fieldwork for fear that researchers might inadvertently fragment the plants and hasten their spread. Susan Williams, an ecologist at the University of California, Davis, and director of the Bodega Bay Marine Laboratory, would have liked to see studies of the effectiveness of eradication methods, including collateral effects of chlorine treatment, as well as ecological and life-history studies. But eradication leaders express few regrets about the way things were done, and some outside observers agree. “The Californians chose to shoot first and ask questions later,” quips Australian phycologist Alan Millar of Sydney's Royal Botanic Garden. “And I think in this case that was necessary.”

    Tension between eradication and research is “a recurring theme in biological control,” says Edwin Grosholz, a University of California, Davis, biologist, who organized a Caulerpa conference last month in San Diego. “But you can eradicate at full speed [while] also learning something about what you've done.”

    Scientists believe that they already know enough to fear other species in the genus. A strain of C. racemosa with similar biological traits and impact on ecosystems is spreading rapidly in parts of the Mediterranean. Of the world's more than 70 Caulerpa species, invasive behavior has been documented in five, Williams says. But few species are well studied, and many are involved in the aquarium trade; Frisch and Murray's survey found 16 species in 26 California pet shops alone.

    For this reason, and because identifying Caulerpa species can challenge taxonomic specialists, let alone enforcement officers, scientists had urged the California legislature to ban the entire genus. But the aquarium industry lobbied successfully to restrict Harman's bill to only nine species. Sure enough, shortly after the ban went into effect last September, inspectors in San Francisco let through a shipment of live rock from Indonesia containing “Caulerpa species,” according to the state Department of Fish and Game. The inspectors apparently were unable to identify the algae to the level required under the law.

    That's not good enough, says French phycologist Alexandre Meinesz of the University of Nice, who first sounded the alarm in 1989. Many countries with temperate coasts, from Japan to South Africa, should be preparing to confront Caulerpa, he says, noting that New Zealand is among the few that so far seem willing to step up to the challenge. Nations hoping to root out the invader will need a model of success, however, leaving all eyes on California.


    New Annual Survey Brings Census Into 21st Century

    1. Constance Holden

    The new American Community Survey will let researchers and public officials get their hands on fresh data annually rather than having to wait 10 years

    The U.S. census offers demographers a wonderful snapshot of the country's population. But snapshots are static pictures of an ever-changing world, and recent history has exposed their limitations. The longest economic boom in U.S. history went bust shortly after the 2000 census was conducted, for example. Then came the 11 September terrorist attacks.

    Starting next year, the entire nation will begin to get feedback about dozens of social indicators on an annual instead of decennial basis. The exercise, called the American Community Survey (ACS), will, when fully implemented, be “the most important innovation [in census history] at least since sampling theory,” says former census director Kenneth Prewitt.

    Begun in 1790, the U.S. census has two purposes. The first is to determine the distribution of seats in the U.S. House of Representatives, based on population. The second—what Prewitt likes to call “the nation's longest continuous science project”—is to probe social and economic conditions through questions about topics such as income, race, education, disabilities, military service, jobs, and commuting time to work. For 150 years, every household received what was known as the long form. That changed in 1940 with the introduction of sampling; in 2000, the long form was sent to one in six households, covering some 30 million people.

    Every piece of data is devoured by one government program or another: in planning transportation systems, zoning, schools, health care facilities, and housing as well as in targeting and budgeting social services. But whereas “business and commerce operate on instantaneous information,” says Prewitt, “the federal government operates on 10-year-old data.” It takes 2 years to process answers from the long form, and by the end of the decade the information can be more than a little stale.

    The Census Bureau wants to jettison the long form forever and substitute the ACS, a continuous pulse-taking of 39,000 local jurisdictions. Using an approach called “rolling samples,” the bureau each month will mail questionnaires to a different sample of households within each jurisdiction. Officials began testing the ACS in 1996. Next year it's expected to go into high gear, with questionnaires going to 250,000 addresses each month, or 3 million a year. By 2010, 30 million addresses will have been sampled, almost doubling the 16 million covered by the long form. Communities of 65,000 or larger will have fresh numbers every year, thanks to sufficiently large samples; smaller areas will take several years to accumulate meaningful data.
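
    The rolling-sample logic is simple enough to sketch in a few lines of code. The toy Python below is illustrative only (not the bureau's actual procedure): it spreads a jurisdiction's address list across monthly panels so that each month surveys a fresh, non-overlapping subset.

        import random

        def rolling_sample(addresses, months=120):
            """Toy rolling sample: assign each address to one monthly panel
            so no address is surveyed twice in a 10-year (120-month) cycle.
            Illustrative only; the Census Bureau's design is more elaborate."""
            shuffled = list(addresses)
            random.shuffle(shuffled)
            panels = [[] for _ in range(months)]
            for i, addr in enumerate(shuffled):
                panels[i % months].append(addr)  # round-robin assignment
            return panels

        # Example: 1,200 addresses yield panels of 10 addresses per month
        panels = rolling_sample(["addr-%d" % i for i in range(1200)])
        print(len(panels[0]))  # -> 10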

    Mapping drug traffic.

    ACS demographic data combined with where drug dealers live (white dots) and work (stars indicate arrests) helps Springfield, Massachusetts, officials tackle youth violence.


    Improved timing isn't the only advantage to the ACS, says Nancy Gordon, associate director for demographic programs at the bureau's headquarters in Suitland, Maryland. Administering the long form has gotten increasingly inefficient and expensive, she says. In 2000, for example, the bureau needed to hastily train nearly 1 million temporary workers to conduct follow-up interviews of people who didn't respond to the original mailing. In contrast, the ACS will be handled by a permanent regional staff of some 2000 to 4000 people who will contact nonresponders first by telephone and then, if necessary, in person.

    The ACS arrives on the wings of 21st century technology that makes the long form superfluous. The heart of a census, says Prewitt, is its address file. It wasn't until the mid-1990s that computer systems could carry out prompt and precise tracking of large populations. To do the job right, the Census Bureau links its master address file with a digital database of geographic features such as roads, railroads, rivers, lakes, and political boundaries, allowing the bureau to crosscheck its addresses against relevant physical features. By 2010, the bureau hopes to make databases compatible with global positioning systems, allowing fieldworkers to consult digital rather than paper maps and to update addresses instantly.

    So far the results of the ACS have been heartening, says Gordon. Although the initial mail response rate of 52% is lower than the 58% for the long form—not surprising because the annual exercises won't be accompanied by the blast of publicity the decennial census generates—follow-ups have raised that to 96%. And the quality of the data is better, says demographer Joe Salvo of the New York City Planning Department. A professional staff skilled at eliciting information from reluctant citizens, he notes, has less need for imputation, the last-minute inferring of missing data.

    The ACS is the linchpin of the Census Bureau's plans for “re-engineering” the census. With the long form accounting for 60% of the bureau's paperwork, Prewitt says that adoption of the ACS will hasten the day when the decennial census can be conducted via postcards and the Internet.

    The ACS still must jump through a few hoops. The next challenge is winning approval of the $219 million it requested in the 2003 fiscal year to do the job right. Although that's a 77% increase over its normal off-year budget, Gordon says the new format won't cost any more in the long run: an estimated $11.25 billion, compared with $11.7 billion for the 2000 census.

    Congressional reaction to the ACS has been largely favorable, although members continue to express concern about the “intrusive” and mandatory nature of the survey. The chair of the House subcommittee that oversees the bureau, Representative Dave Weldon (R-FL), has just asked the General Accounting Office to conduct an “independent investigation” of these issues as well as the ACS's cost-effectiveness.

    In the meantime, Gordon says that local officials have already embraced the ACS. Salvo, for one, says efforts to help neighborhoods and businesses recover from 11 September could have benefited from more comprehensive and up-to-date workforce and employment data. “It's this kind of situation that makes the ACS so attractive to us,” he says. “I really think the era of mass census-taking for the long form data is over.”

    In Springfield, Massachusetts, officials are already using ACS data to improve delivery of health and social services. When cancer registries at two hospitals showed a large number of women with late-stage breast cancer, health officials used the ACS to calculate rates for women over 40 for each police sector. Black or Spanish-speaking women in low-income areas turned out to have the highest rates, a preliminary finding that may help public health workers in designing information and screening campaigns.

    Springfield is also using ACS to tackle teen violence. Combining data on where violent youths were arrested in 1999 with demographic data from the ACS allowed officials to map school dropout rates, work patterns, home ownership, single-parent families, and teens' work and educational status. Amy Pasini, a violence and injury prevention coordinator at Baystate Medical Center in Springfield, says all this will be useful in planning intervention strategies: “Using 10-year-old census data doesn't have much teeth to it. … We see the ACS as a gold mine.”


    Unusual Venture Helps Make the Sky Affordable

    1. Daniel Clery*
    1. With reporting by Ding Yimin in Beijing.

    High-tech, assembly-line techniques are putting professional-quality telescopes within reach for a global scientific community

    LIVERPOOL, U.K.—Henry Ford had the big idea: Build automobiles on a production line, and they will be cheap enough for everyone to afford. The essence of that philosophy is one of the driving forces behind Telescope Technologies Ltd. (TTL), a start-up owned by Liverpool John Moores University (JMU). Its goal is to open up the international market for optical telescopes that can do real science.

    TTL's instruments are not for hobbyists to set up in their back garden. Their mirrors are 2 meters or more across, and they stand more than 7 meters tall. Most professional telescopes of this class are one-off projects, designed and built from scratch. But TTL and its original partners at the Royal Greenwich Observatory (RGO) in Cambridge had a different idea: Come up with a robust design that can be scaled up or down, and make it robotic to reduce operating costs. Price it at a fraction of the normal cost, then build it again and again and again.

    The result, which sells for about $3 million—less than half the cost of a similar one-off instrument—means that groups with a limited budget can now get their hands on a frontline professional telescope. Although TTL's first instrument is still waiting to be installed on the Canary Islands off Africa's Atlantic coast, the company has already completed one for a consortium of Indian universities and is building two for an educational foundation (see map). It has just won an order from China, and it has had more than 100 inquiries, including groups in Indonesia, Africa, and the Indian subcontinent.

    “There are no others in the world like it,” says British software entrepreneur Dill Faulkes, whose foundation is buying two 2-meter TTL telescopes that will be located in Hawaii and Australia. The instruments will be dedicated to students for real-time observations via the Internet. Robotics is the key for Chen Dong of the Yunnan Astronomical Observatory in China, which plans to install a 2.4-meter TTL scope in 2004 atop a remote 3000-meter-high mountain in southwestern China. “This is a great advantage for the [Gaomeigu] observatory,” he says about what will be China's largest telescope. “It will help scientists save time and money and allow them to do international joint observations by clicking a mouse.”

    TTL's headquarters in Liverpool looks just like all the industrial units dotted around these former docklands, except for the sliding panels on the roof to allow factory tests of the telescopes. As three scopes currently take shape on the shop floor, the staff nervously checks a Webcam to monitor progress of the company's prototype instrument, the Liverpool Telescope. This instrument, which will be run by JMU as a U.K. national facility, has been sitting in containers at its mountaintop observatory in the Canary Islands since November as local contractors continue to pour concrete. “It's like standing on stage waiting for the curtain to go up,” says Paul Rees, project manager at TTL. “Anticipation is very high.”

    A new vision.

    This robotic, 2-meter optical telescope can serve students and professionals alike.


    The idea for the company grew out of an effort by Mike Bode, a JMU astrophysicist, to win government support for a robotic telescope. Bode specializes in novae, fleeting events that require rapid reprogramming of telescope time to capture. Bode was frustrated by the long lead times at most large facilities in allocating blocks of observing time. What he wanted was a large telescope that could react immediately to a sudden event or multitask so that, say, one object could be monitored for a few minutes a night over many months in the midst of other observations.

    At about the same time, RGO staff wanted to use knowledge gained from helping design the international 8-meter Gemini telescopes in Hawaii and Chile to build a range of medium-sized scopes. JMU and RGO came together and founded TTL in 1996. A regional development grant of $2.3 million from the European Union helped with the start-up capital, on condition that local suppliers were to be used as much as possible. The design team was to remain at RGO, and assembly was to be carried out in a new plant in Liverpool.

    Then came a near-fatal blow. In 1997, the government closed RGO and consolidated British astronomy facilities in Edinburgh. JMU, with little experience in telescope design, had to decide whether to take on the whole operation. “There was a lot of navel gazing,” says TTL managing director Michael Daly, before JMU decided to take the plunge into running the commercial venture.

    Although the episode caused a year's delay in the project, the decision to move key design staff to Liverpool and locate the whole team under one roof paid handsome dividends when it came to honing the design. Conventional scopes are usually deliberately “overengineered” to eliminate unwanted movement. In contrast, TTL's designers used a detailed computer modeling technique known as finite element analysis to predict how key components would behave. This high-tech approach allowed them to make the structure lighter, which increases its resonant frequency—its natural wobble—and allows the control system to effectively cancel out the effect of the wind, keeping the scope rock steady. Result: The fast-moving telescope can operate in the open air, without a cumbersome dome.
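
    The physics behind that counterintuitive choice can be seen in the textbook formula for a simple oscillator (a drastic simplification of the team's finite element model). For a structure of stiffness k and mass m, the resonant frequency is

        f = \frac{1}{2\pi}\sqrt{\frac{k}{m}}

    Cutting mass while preserving stiffness raises f, so the telescope's natural wobble becomes faster and smaller, and the control loop can sense and cancel it before wind gusts pump energy into large, slow motions.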

    Economies of scale.

    TTL's shop floor in Liverpool is churning out telescopes for customers around the world (bottom).


    Other key technologies include the motors and computers that rotate the telescope. TTL has relied heavily on off-the-shelf equipment from industrial robotics, which reduces costs and makes maintenance simpler. “[Our telescopes] couldn't have been built 10 years ago,” says Rees. “We use less metal and glass but more activation.”

    That painstaking design “really pays off” when first light arrives, says TTL optics designer Patrick Conway. The team took less than an hour to capture its first image of a known star after pointing the newly completed Liverpool Telescope through a hole in the factory roof one night last year. “Other large telescopes have taken weeks to acquire their first identifiable object,” says Conway.

    As well as providing scientists with excellent vision, TTL hopes its robotic scopes will wow them with their ease of use. In TTL's definition, robotics doesn't just mean that an astronomer can control the telescope from some distant and hospitable location. Rather, it means that astronomers can go to bed at night and wake up the next morning with data. Astronomers send observation requests to a “virtual astronomer”: one of eight onsite computers that control the telescope. The virtual astronomer ranks the requests by importance. “Taking astronomers away from the telescope makes it more efficient,” says Daly. TTL estimates that a robotic scope will squeeze almost twice as much observing time into a night.
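
    In software terms, the virtual astronomer behaves like a priority queue. The Python sketch below is a loose analogy, not TTL's control code; the class name and scoring scheme are invented for illustration.

        import heapq

        class VirtualAstronomer:
            """Toy stand-in for TTL's 'virtual astronomer': observation
            requests wait in a priority queue, and the highest-ranked
            request is observed next. Names and scores are invented."""
            def __init__(self):
                self._queue = []

            def submit(self, target, priority):
                # heapq is a min-heap, so negate priority for highest-first
                heapq.heappush(self._queue, (-priority, target))

            def next_observation(self):
                return heapq.heappop(self._queue)[1] if self._queue else None

        va = VirtualAstronomer()
        va.submit("nightly nova monitoring", priority=3)
        va.submit("gamma ray burst afterglow", priority=10)  # time-critical
        va.submit("student request", priority=1)
        print(va.next_observation())  # -> gamma ray burst afterglow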

    Robotic control, combined with the ability to find a new target in an average of 23 seconds, also makes the scopes ideal for the sort of time-critical observations that Bode envisaged. Today's top targets are gamma ray bursts, mysterious high-energy blasts from deep space that reach us about once per day and are only detectable from space. To understand them, astronomers need a fast-moving telescope that can catch the faint and short-lived optical glow associated with a gamma ray burst. An alerting network that allows orbiting gamma ray observatories to steer small robotic telescopes to the targets already exists, but it will get a huge boost from NASA's pending Swift satellite and the Liverpool Telescope, which will be the world's largest robotic scope. “The Liverpool Telescope could then catch one gamma ray burst per week,” says Bode.

    The prototype Liverpool Telescope will also take orders from British schoolchildren. Under another project being developed by JMU, dubbed the National Schools Observatory, 5% of the telescope's observing time will be allocated to schools. Registered groups can request observations from the telescope using a comprehensive Web site stuffed with astronomical information and activities. And thanks to the virtual astronomer, the students won't get pushed aside by “proper” researchers. “We'll get equal footing,” says astronomer Andy Newsam, who is developing the schools' Web site at JMU.

    Faulkes is also planning an ambitious educational project with his two telescopes. The sites were chosen so that U.K. students would have dark sky when they are in the classroom. His plan is to offer 700 schools a chance to control the telescopes themselves, often collaborating with researchers on real science projects. “It's all about understanding science as it happens,” he says.

    Although juggling all these projects is creating anxious moments for the TTL staff, they track precisely with what the company's founders hoped to achieve, namely, to serve a universe of customers. “This is not just a university design exercise, it's a business,” says Daly. “We succeed or fail as a business. If we fail, we close.” Henry Ford would be proud.


    Working Outside the Protein-Synthesis Rules

    1. Josh Gewolb*
    1. Josh Gewolb is a writer in Boston.

    Proteins built in the ribosome are subject to certain restrictions, so researchers are harnessing a nonribosomal system that might one day make new drugs

    Most proteins are built according to a set of rules as strict as any list of “don'ts” posted at a public swimming pool. Only 21 types of amino acids are permitted. Proteins must not loop. Non-amino acids may not be included in the protein.

    Life has gotten by pretty well with these restrictions. They are imposed on all proteins built by a cell's ribosomes, and for higher organisms, that means all proteins. But many bacteria and fungi can turn to an alternative system that allows them to toss aside the standard rules of molecular biology. Bypassing the ribosome, they manufacture some of their most important short proteins using giant enzymes that recognize amino acids and link them directly into chains.

    Because the so-called nonribosomal peptide synthetases (NRPSs) are not bound by the ribosome's rulebook, they are able to produce an array of peptides with unusual properties. Peptides manufactured by NRPSs include some of the most potent pharmaceuticals known, from penicillin to the immunosuppressant cyclosporin. Researchers have known about this alternative protein-construction system for decades, but only recently have they begun to build a solid molecular understanding of how the system works. With this knowledge, they hope to alter the NRPS machinery to make even more effective variants of powerful existing drugs as well as novel drugs built along the same lines. Although the goal of designing drug-factory enzymes is still a long way off, researchers have recently begun to engineer new nonribosomal proteins.

    An alternative system

    Bacteria and fungi use nonribosomal peptides for critical tasks such as killing parasites, communicating with members of their own species, and regulating the movements of ions. The peptides are ideal for these delicate tasks because they resist degradation and are unlikely to be mistaken for other compounds. These properties are a result of their unique composition: Scientists have spotted several hundred different molecular ingredients in various nonribosomally manufactured peptides, most of which the ribosomes have never heard of. These compounds include so-called right-handed amino acids, the rare twins of the standard left-handed model, as well as molecular relatives of amino acids such as acyl acids. But it's not just strange starting materials that make these peptides unconventional. They frequently contain loops, which are almost never found in standard proteins. The eccentric structures of these proteins foil protein-eating enzymes called proteases, which prefer to digest conventional strings of amino acids.

    The NRPSs that build these odd peptides are the largest enzymes known in nature. They're made up of a series of modules, each of which is responsible for adding one particular unit—be it a specific left- or right-handed amino acid or some other compound—to a growing peptide chain.

    Each module in the NRPS consists of three subunits. The first is an adenylation (A) element that recognizes a free-floating amino acid, say, and prepares it for incorporation into the chain. Next are two subunits that attach the new amino acid to its neighbors in the chain, known as the condensation (C) and thiolation (T or PCP) domains. The sequence of modules in an enzyme constitutes a unique blueprint designed to produce a particular protein. The final module of the chain has a special termination (Te) domain that releases the protein, sometimes drawing it into a loop.
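
    The assembly-line logic described above can be mimicked with a simple data structure. The Python below is a cartoon of the biology, not a chemical model; the module substrates are hypothetical placeholders.

        from dataclasses import dataclass

        @dataclass
        class Module:
            substrate: str                    # what the A domain recognizes
            domains: tuple = ("C", "A", "T")  # condensation, adenylation, thiolation

        def synthesize(modules, cyclize=False):
            """Read the peptide straight off the module order (one module,
            one residue); a Te domain on the last module releases the
            chain, optionally as a loop."""
            peptide = tuple(m.substrate for m in modules)
            return ("cyclo",) + peptide if cyclize else peptide

        # A hypothetical three-module synthetase whose Te domain loops the product
        enzyme = [Module("D-Phe"), Module("Pro"), Module("Val")]
        print(synthesize(enzyme, cyclize=True))  # -> ('cyclo', 'D-Phe', 'Pro', 'Val')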

    Who needs the ribosome?

    This mold uses nonribosomal peptide synthetase to make penicillin.


    Most NRPSs consist of four to 10 modules, but the enzymes—and therefore the peptides they produce—can reach up to 50 units in length. NRPSs can be enormous, for an enzyme. The 11-module cyclosporin NRPS, for example, weighs in at a whopping 1700 kilodaltons (kD), compared to about 20 kD for myoglobin, a ribosomally produced protein that helps the body store oxygen.

    Each species of bacterium or fungus that relies on the enzymes carries just one or two different NRPSs, in part because they are so unwieldy. But given the diversity of species that use the alternative protein-building system, researchers expect that a wealth of NRPS diversity must be out there. Figuring out how the modules recognize their target molecules and string them together might allow researchers to build on one of the existing NRPS plans.

    Breaking down the problem

    The immense size of NRPSs has prevented scientists from deducing their structure—a key to understanding how the enzymes work—using the standard tricks. “Some people think they are impossible to crystallize,” says biochemist Hans von Dohren of the Technical University of Berlin in Germany. To get around this problem, researchers have broken the enzymes apart and studied the structures of the subunits. The publication of the Te subunit structure in the March issue of Structure marks the completion of sample structures for all of the subunits except the condensation domain.

    The structure of the A subunit, responsible for the core task of recruiting new amino acids for the growing protein, has been determined in the greatest detail. In 1997, Mohamed Marahiel and his colleagues at the Philipps University of Marburg in Germany reported the structure of an adenylation subunit from Bacillus brevis, the source of the antibiotic gramicidin; this subunit recognizes the amino acid phenylalanine. They found that this subunit has an active site flanked by two domains, each resembling the well-studied luciferase protein that makes fireflies glow. More recently, in work submitted for publication, Marahiel's group has found that a second adenylation subunit from Bacillus subtilis (this one recognizes a carboxyl acid) has a similar structure, suggesting that all A subunits are built according to the same general plan.

    The structural information allowed Marahiel and Marburg biochemist Torsten Stachelhaus to figure out how A domains recognize their substrates. In 1999 the pair combined structural data with sequence information for 160 different A domains to identify 10 critical residues that adenylation subunits use to discriminate among amino acids. By modifying these residues, they were able to predictably alter which compounds an adenylation subunit will recognize, for example, coaxing a phenylalanine subunit into recognizing leucine.
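
    Conceptually, the result amounts to a lookup table: the identities of those 10 residues predict which amino acid an A domain will select. The sketch below illustrates only the idea; the signatures are invented placeholders, not the published code.

        # Invented signatures standing in for the real 10-residue
        # specificity-conferring code of adenylation domains.
        SPECIFICITY_CODE = {
            "sig-phe-01": "phenylalanine",
            "sig-leu-02": "leucine",
        }

        def predict_substrate(signature):
            """Map an A domain's key active-site residues to the amino
            acid it is predicted to select."""
            return SPECIFICITY_CODE.get(signature, "unknown")

        print(predict_substrate("sig-phe-01"))  # -> phenylalanine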

    But modifying NRPS subunits is a tough way to customize NRPSs to make novel products. Far easier is cutting and pasting complete modules to make blueprints for new protein sequences. Marahiel and his colleagues first used such a strategy to alter an NRPS in 1995, replacing the A subunit that recognizes the amino acid leucine, along with the T subunit that stitches it into the growing peptide chain, with A and T subunits dedicated to other amino acids. By swapping these subunits, they switched which amino acid was incorporated into the chain.

    This was a major advance, but researchers were stymied when they tried to tailor other enzymes using the same strategy. It turned out that switching A and T subunits wasn't enough. As biochemist Christopher T. Walsh of Harvard Medical School in Boston discovered, in most cases the A subunit needs to be accompanied by the appropriate C domain, in addition to T. Armed with this information and a sharper understanding of subunit boundaries gleaned from the structural data, Marahiel and colleagues were able to modify more NRPSs, adding various third modules onto several two-module NRPSs, as they reported in the 23 May 2000 Proceedings of the National Academy of Sciences. The study was the “first clear indication that you can swap modules and get them to function,” says Walsh.

    Join the chain.

    This adenylation subunit grabs the amino acid phenylalanine (red).


    On the assembly line

    There are many additional obstacles to altering NRPSs, however. Many bacteria and fungi, some familiar and others exotic, use the enzymes. But scientists do not know how to genetically manipulate most of these organisms–most lab strains stick to ribosomes when they're building proteins. This has held back progress on altering NRPSs, despite the recent biochemical advances. To address the problem, Marahiel and his colleagues produced a modified strain of Bacillus subtilis that readily accepts NRPSs from other bacteria. They deleted a nonessential 26-kilobase-long gene cluster in the bacterium, replacing it, as proof of principle, with a 49-kilobase NRPS gene cluster that produces the antibiotic bacitracin. The gene was taken from the bacterium Bacillus licheniformis, which cannot be genetically manipulated. In September 2001, the team reported that the enzyme was produced correctly, despite their concerns that inserting a large foreign gene would cause genomic instability.

    Other major obstacles remain to be overcome before researchers can engineer novel NRPSs in vivo. For one, many of the unusual amino acid variants in these peptides are made by enzymes not normally present in the cell. Ultimately, researchers will have to insert genes for the enzymes that manufacture these oddball amino acids into the engineered host systems, a challenge that no one has yet reported attempting.

    Yet more complications arise from the variations in structure between NRPSs. In 1998, Guido Grandi at Chiron S.p.A. in Siena, Italy, discovered that a Pseudomonas syringae NRPS that manufactures an antibiotic called syringomycin has an unusual building plan. Modules encoding eight of its nine amino acids are arranged in a line, as is standard, but the ninth amino acid is added by an A subunit located on a different protein. This protein then intermingles with the eighth residue's C and T domains to help them hitch up the ninth amino acid.

    Since then, many other variations on the standard NRPS theme have been discovered. Although these oddities mean that it may be harder for researchers to alter some NRPSs, Walsh says that the variations on the assembly-line structure may still turn out to be “good news” because they mean that NRPSs are more flexible than previously thought.

    Looping the loops

    Topology provides another opportunity for researchers to cook up useful proteins. Whereas cyclic ribosomal peptides are rare, proteins manufactured by NRPSs are commonly looped, a feature that helps them bind their targets securely and decreases their vulnerability to degradation—both desirable properties for pharmaceuticals. Researchers would like to engineer novel looped peptides as well as use NRPS chemistry to turn known linear proteins into loops, which is nearly impossible using conventional chemical tricks. In the 14 September 2000 issue of Nature, Walsh and Marahiel showed that termination domains will cyclize any protein with a simple two-amino acid signature, suggesting that it is easy to manipulate whether a protein gets cyclized. Following up on this study, Walsh and his colleagues successfully used purified Bacillus subtilis Te to cyclize novel peptides that were variations on the theme of the 10-unit peptide that the Te normally loops.
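
    In the toy model sketched earlier, the Walsh-Marahiel finding reduces to a simple rule: if a peptide carries the right two-residue signature, the Te domain closes the loop; otherwise it is released linear. The signature residues below are invented for illustration.

        def te_release(peptide, signature=("D-Phe", "Leu")):
            """Toy Te domain: cyclize any peptide whose first and last
            residues match a simple two-residue signature (invented
            here); otherwise release it as a linear chain."""
            if (peptide[0], peptide[-1]) == signature:
                return ("cyclo",) + tuple(peptide)
            return tuple(peptide)

        print(te_release(("D-Phe", "Pro", "Val", "Orn", "Leu")))  # cyclized
        print(te_release(("Ala", "Gly")))                         # linear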

    Eventually researchers hope to create large pools of looped and unlooped peptides as part of their long-sought goal: screening many variants of potential and successful drugs to look for even more potent medications, a powerful drug-discovery technique. Walsh suspects that such screens may be possible within perhaps 5 years. “It's harder than one might have expected at the beginning, but enough of the rules will be deciphered and controllable” that researchers will be able to borrow techniques from NRPSs to build libraries of candidate drugs, he says. Ultimately, researchers hope that these law-breaking enzymes will rewrite the rules on what proteins can do.
