News this Week

Science  29 Mar 2002:
Vol. 295, Issue 5564, pp. 1969

    DARPA and Jason Divorce in Spat Over Membership

    Ann Finkbeiner*
    *Ann Finkbeiner is a science writer in Baltimore, Maryland.

    For 40 years, a high-prestige, low-profile group of about 40 academic scientists has quietly been advising the U.S. government on some of the most sensitive technical issues of the day. No longer. Last week the Defense Advanced Research Projects Agency (DARPA) announced that over the winter it had ended its sponsorship of the group, known as Jason. The decision, first reported by National Public Radio, was triggered by DARPA's attempt to add three people to Jason's elite ranks. The move, sources told Science, triggered a showdown in which neither side was willing to back down.

    DARPA and Jason are both children of the government's need for scientific advice after the Soviet Union's 1957 launch of Sputnik. But while DARPA is a federal agency, Jason was formed by academic scientists, then and now mostly physicists, who remembered the Manhattan Project and again offered their services to the government. Jason's organization is a little loose, but its membership is strictly controlled: Only Jasons choose other Jasons. The group's name has been the source of endless speculation. But the real story is simple: When physicist Marvin Goldberger, a Jason founder then at Princeton University, told his wife that the government had named the new group Project Sunrise, she said that a more fitting name for problem-solvers would be Jason, the seafaring hero of Greek mythology.

    Some of the group's earliest projects remain current research interests, including remote sensors and antisubmarine warfare. Jason's technical advice has ranged from biomedical imaging to ballistic missile defense to verification of the nuclear test ban treaty. Jasons also laid the groundwork for the laser guide star, which allows the adaptive optics systems on astronomers' ground-based telescopes to rival the Hubble Space Telescope.

    New image?

    Steven E. Koonin says that Jason studies (above) make a “unique and important” contribution to national security.


    Last week, DARPA issued a terse statement thanking the group for its long service but suggesting that it had failed to keep up with the times. During the Cold War, the statement says, Jasons had helped with problems “that were highly physics-oriented.” But more recently, the DARPA statement says, “technology developments moved more toward information technology, [and] the Jasons chose to not lose their physics character to focus on DARPA's current needs.”

    Steven E. Koonin, Jason's current head, takes issue with DARPA's characterization of the group's habits. “We feel we have something unique and important to contribute to national security,” says Koonin, a physicist and provost at the California Institute of Technology in Pasadena, citing recent studies on civilian biodefense, landmine clearing, and Internet security. He also noted that the group, whose members now include chemists, biologists, engineers, and information scientists, is eager to work on counterterrorism projects and hopes to find another home: “We hope we can get back to work ASAP, including studies for DARPA.” So does Ari Patrinos, head of biological and environmental research within the Department of Energy's (DOE's) science office, who has worked with Jason for nearly 13 years. “They're a reality check,” he says. “I have nothing but admiration, respect, and gratitude for their work.”

    The underlying cause of the split, according to those close to the situation, is an apparent disagreement over Jason's membership. Last winter, DARPA proposed three new members, but Jason rejected them, saying they did not meet its criteria for scientific standing. That refusal led DARPA to cancel Jason's $1.5-million-a-year contract, which represents about 40% of the group's overall budget. DARPA's action also blocked the conduit for funding from other government agencies, including DOE, the intelligence community, and the armed services, thus effectively stopping all Jason activity.

    James Decker, deputy director of DOE's Office of Science, which has used the group for many studies, calls the Jasons “diverse people who give very thoughtful advice.” Patrinos says that its advice is often “unfettered and sometimes gruff, but one of the greatest privileges of my job has been the interaction with Jason.” Koonin says Jason is seeking another sponsor within the Department of Defense.

    In the meantime, DARPA's decision has dealt the group a heavy blow. A 2-week winter meeting for conducting short studies was canceled, and the fate of a brief spring planning session and a 6-week summer session is uncertain. “With all of our experience and expertise in counterterrorism,” says Koonin, “I can't believe that we've been sidelined.”


    Cost Overruns Will Hit Research at CERN

    Giselle Weiss*
    *Giselle Weiss is a writer in Allschwil, Switzerland.

    ALLSCHWIL, SWITZERLAND—Last June, CERN, the European particle physics laboratory near Geneva, issued a press release trumpeting “1754 days to the LHC and counting!” Make that at least 2000 days.

    To cope with cost overruns on the Large Hadron Collider (LHC), CERN managers have drafted a preliminary plan that would delay start-up of the massive proton accelerator until sometime in 2007, slash spending on long-term research programs, and require belt-tightening across the lab. The embryonic plan, outlined to CERN's governing council last week, would shift some $300 million from other operations into the LHC and stretch out payments for the facility until 2010.

    CERN's budget troubles came to light in September, when members of the facility's finance committee learned that the LHC—intended to hunt down an elusive particle called the Higgs boson—would cost about 30% more to build than the originally budgeted $1.6 billion (Science, 5 October 2001, p. 29). Excavations, industrial services, and the LHC's 1236 “horribly complicated” superconducting magnets share the blame for the cost overruns.

    CERN director-general Luciano Maiani had known about a budget shortfall for months but failed to disclose it, a move he has publicly regretted and one that “created an enormous amount of bad feelings,” says Walter Hoogland, a scientific delegate to the governing council from the Netherlands. Seeking to reassure CERN's overseers, Maiani in December announced the formation of five internal task forces to look for efficiencies. An external review committee (ERC) was also set up to assess the LHC and CERN's other research programs in light of the LHC funding needs.

    Tunneling for funds.

    CERN director- general Luciano Maiani plans to focus the lab's resources on completing the LHC.


    According to delegates present at last week's closed-door sessions, the good news from the ERC's early deliberations is that the committee sees no technical problems that would prevent delivery of the LHC. On the negative side, says Ian Halliday, chief executive of the United Kingdom's Particle Physics and Astronomy Research Council, the ERC is not yet persuaded that CERN is sufficiently focused. “CERN really needs to be seen to be finding credible solutions by June,” when the governing council next meets and the ERC presents its final report, Halliday told Science. (Maiani was not available for comment last week.)

    Even before the projected $510 million shortfall on the LHC came to light, CERN was coping with cutbacks required by member states when they approved the LHC in 1996. They include a substantial drop in member contributions and a staff reduction of one-third, from 3000 to 2000, a number that will be reached in 2006.

    To help in the current crisis, the Swiss delegation has offered to advance $54 million over 3 years, to be deducted from later contributions. According to CERN spokesperson James Gillies, another $300 million can be saved by “cutting back very drastically on long-term research and development” as well as smaller research activities, and by slashing costs such as office overheads, support for visiting fellows and associates, and industrial services for infrastructure and installation. Maiani urged the council last week to increase CERN's budget to enable limited research and development to continue, and to speed financing of the LHC, but his plea is unlikely to be heeded.

    On the research front, running time at the laboratory's existing proton accelerators will be cut 30% for the foreseeable future. In addition, the Super Proton Synchrotron will be shut down for all of 2005, which in turn will push back the scheduled firing of a neutrino beam 730 kilometers to Gran Sasso, Italy, to 2006. Planning will also be curtailed, although high priority will be given to research and development on the compact linear collider, a “CERN invention,” says Roger Cashmore, CERN's director of research, and the “only way to get to very high-energy electron-positron collisions.”

    Frans Verbeure, a scientific delegate to the council from Belgium, calls the emerging plan a “rather sound proposal.” But Daniel Froidevaux, a physicist who has been at CERN for 25 years, says that the situation at the lab is “more difficult than I've ever seen it before.” Cashmore is hopeful that the lab will emerge from the turmoil a better, leaner, more effective organization. And, he emphasizes, no one doubts that the LHC will be completed.


    Cosmic Ripples Confirm Universe Speeding Up

    Andrew Watson*
    *Andrew Watson is a writer in Norwich, U.K.

    Four years ago, cosmologists astonished their colleagues by announcing that the universe appears to be expanding at ever-increasing speed—and that a mysterious antigravity force must be doing the pushing. Since then, other scientists have scanned space in vain for evidence that the unexpected acceleration might be an illusion. Now an international consortium of astronomers has confirmed the original finding by taking a completely different approach. “A compelling case has been made that the universe is accelerating,” says Max Tegmark, a cosmologist at the University of Pennsylvania in Philadelphia.

    “This is an important piece of work,” says Neil Turok, a cosmologist at the University of Cambridge, U.K. In combination with earlier results, Turok says, the new research adds to the mounting evidence that ordinary matter alone cannot mold space into the geometry that cosmologists believe it has. Instead, many now believe, “dark energy” must be added to the mix—a repulsive force similar to one that Albert Einstein once considered and then forcefully rejected.

    The original finding and the new study reflect the two different strategies that scientists use to map the structure and geometry of far-flung corners of the universe. One is to take a “standard candle”—an object of known brightness—and then calculate its velocity by measuring how much the redshift “stretches” its light as it traverses the universe. The 1998 announcements that the universe's expansion is speeding up (Science, 30 January 1998, p. 651) relied on this technique, using exploding stars dubbed type Ia supernovae as standard candles. Distant supernovae recede more slowly than expected, researchers found, suggesting a cosmic acceleration.
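    The standard-candle reasoning above can be summarized in two textbook relations (the notation is mine, not the article's): known luminosity plus observed flux fixes distance, and redshift fixes recession velocity.

```latex
% Standard-candle sketch (illustrative notation, not from the article).
% An object of known luminosity L observed with flux F lies at distance
d \;=\; \sqrt{\frac{L}{4\pi F}} ,
% while its redshift z gives its recession velocity (for z \ll 1):
v \;\approx\; c\,z .
% Supernovae at a given redshift that appear fainter, hence farther,
% than expected imply the expansion has sped up over cosmic time.
```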

    The new work, by a 27-person team from 14 institutions around the globe, takes the alternative route. “We have used a 'standard ruler' test,” says George Efstathiou of the Institute of Astronomy in Cambridge, who led the study. By starting with something of known size, he explains, cosmologists can calculate how much matter the universe must contain to make it appear the way astronomers see it from Earth.

    The researchers used an esoteric ruler: the size of the lumpiness of the universe. They started relatively close to home, by calculating the variations in clustering within a huge swarm of galaxies surveyed by the Anglo-Australian Observatory in Siding Spring, Australia—the so-called Two Degree Field Galaxy Redshift Survey (2dFGRS). This lumpiness, says Efstathiou, can be traced all the way back to the ripples in the afterglow of the big bang, the cosmic microwave radiation. In recent years, balloon-borne and ground-based detectors have given scientists a good look at those ripples in patches of the sky (Science, 28 April 2000, p. 595; 22 June 2001, p. 2236).

    Cosmic recipe.

    Linking microwave ripples (top) to galaxy clusters revealed “dark energy.”


    “The cosmic microwave background gives us a picture on the sky of the fluctuations as they were when the universe was 400,000 years old,” says Efstathiou. “So the test is, we have a three-dimensional picture of the present day, and we can compare it with the angular picture at 400,000 years.” Working from that information, the team calculated how much matter must be sprinkled through the cosmos to transform the primordial ripples into multigalactic clumps as the universe aged.
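    The “standard ruler” comparison Efstathiou describes rests on the small-angle relation (notation mine, not the article's): a feature of known physical size subtends an angle inversely proportional to its distance, and that distance in turn depends on the cosmic matter content.

```latex
% Standard-ruler sketch (illustrative notation, not from the article).
% A feature of physical size s at angular-diameter distance d_A subtends
\theta \;\approx\; \frac{s}{d_A} ,
% so matching the angular scale of the microwave-background ripples to the
% physical clustering scale in the 2dFGRS volume constrains the
% cosmological parameters on which d_A depends.
```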

    The results, published in the Monthly Notices of the Royal Astronomical Society, confirm earlier findings that the universe is geometrically flat, or subject to the rules of Euclid on a large scale. More important, there is not enough matter to create that flatness. Taken together, the ordinary matter that astronomers can observe and the so-called dark matter that they infer from the pull of its gravity provide just a third of the energy required for a flat universe. “It's the Enron problem,” says Tegmark: “Most of the budget is missing.” The budget deficit, the remaining two-thirds, is what cosmologists call dark energy.
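    In the density-parameter notation cosmologists use (not spelled out in the article), the budget arithmetic reads:

```latex
% Geometric flatness requires the energy densities to sum to the critical value:
\Omega_{\text{matter}} + \Omega_{\Lambda} \;=\; 1 .
% Ordinary plus dark matter supply only about a third of that,
\Omega_{\text{matter}} \;\approx\; \tfrac{1}{3}
\quad\Longrightarrow\quad
\Omega_{\Lambda} \;\approx\; \tfrac{2}{3} ,
% leaving the remaining two-thirds as the "dark energy" of the text.
```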

    “What we are measuring is the energy associated with empty space,” says Efstathiou. And some features of that energy, Tegmark says, are “really weird.” For example, whereas ordinary matter pulls on other matter and attempts to rein in the expansion of the universe, “dark energy has this strange property that it's essentially repulsive, so that it pushes everything away and makes the universe accelerate faster and faster.”

    The new study gives a valuable boost to the earlier results based on supernovae, says Paul Steinhardt, a cosmologist at Princeton University in New Jersey. “We are uncertain how reliable type Ia supernovae are as standard candles when you talk about supernovae that exploded a long, long time ago,” he says. The strategy used by Efstathiou and his colleagues “is much more secure since we have many cross-checks, [although] it is more indirect and less intuitive,” says Steinhardt. Crucially, Tegmark says, the new study “finds the same amount of dark energy as the supernova analysis did, but in a completely independent way.” In an online paper scheduled to appear in Physical Review D, Tegmark and two colleagues provide further backing with a similar study based on a different, smaller galaxy survey.

    Even so, Tegmark says, the case for acceleration fueled by dark energy “certainly hasn't been established beyond any reasonable doubt.” Turok agrees. Data from the forthcoming MAP and Planck satellites and the million-galaxy Sloan Digital Sky Survey should help firm up the case and enable cosmologists to check out some of the assumptions underlying these tests.


    Academic Reform Law Made More Flexible

    Adam Bostanci and Gretchen Vogel

    BERLIN—Bowing to pressure from young researchers, politicians, and the press, Germany's science minister has agreed to amend the recently passed university reform law to give young academics more time to finish their training. The change would remove what is seen as a serious flaw in otherwise needed reforms, and it may lead to a cease-fire in a drawn-out battle over the legislation.

    The law, which took effect on 23 February, is a major overhaul of university employment rules. It attempts to curb the practice of keeping young researchers in an indefinite state of uncertainty by setting a limit of 6 years for students to finish a Ph.D., followed by 6 years to land a permanent position in a university. Several national organizations, including the German research funding agency (DFG) and the German conference of university rectors and presidents (HRK), supported the overall reforms but worried that current students may not have enough time to complete their training. Students and academic staff went on strike at Bielefeld University (Science, 1 February, p. 781), and some protesters called for Research Minister Edelgard Bulmahn to resign.


    Bulmahn gives young German researchers an extension.


    In response, Bulmahn announced on 22 March that she would “clarify” the law by providing an extension to February 2005 for those now in the system who might bump up against either time limit. The clause will be added to a bill that would prevent universities from charging student fees. A ministry spokesperson said that she expects parliament to approve the new legislation by early summer, but others say that the fees may spark a more extended debate. “I doubt that it will be ready by the summer,” says Kurt Kutzler, acting president of the Technical University of Berlin and a vice president of HRK.

    Bulmahn's modification doesn't address a second pressing issue, however. Under the new reforms, researchers who have finished their Ph.D. and Habilitation (a more regulated version of the postdoctoral fellowship) but fail to snag a permanent faculty position within 12 years (15 years for medical students) can be hired only under temporary contracts that comply with Germany's general employment law. But university administrators feel that the law's complex hiring and firing requirements are ill suited to research. They also worry that universities may avoid such contracts because the law gives temporary employees the right to sue an employer for a permanent post.

    “We still need a satisfactory solution for researchers who have finished their training but are waiting for a suitable post as full professor,” says Christoph Gusy, assistant rector of Bielefeld University. Researchers will be watching closely to see whether the new law adds to the strain on this important talent pool.


    Deep Quakes Slow But Very Steady

    Erik Stokstad

    When it comes to quakes, Earth doesn't have much rhythm. Tens of kilometers below the surface, faults rupture with a chaotic unpredictability that has stumped seismologists and sometimes caught cities off guard. On page 2423, however, a team of geophysicists led by Meghan Miller at Central Washington University (CWU) in Ellensburg reports that a strange kind of temblor, called a slow earthquake, goes off silently about every 14 months in the Pacific Northwest.

    Such a rigid schedule is “quite surprising,” says Alan Linde of the Carnegie Institution of Washington in Washington, D.C. Geophysicists don't yet know why the slow earthquakes—so called because of their leisurely pace and the absence of seismic waves—occur so regularly, but they hope to deploy instruments to study them in greater detail and learn more about the boundaries of major tectonic plates called subduction zones. Slow earthquakes could even turn out to herald a season of heightened risk for larger quakes.

    Typical earthquakes announce themselves in a burst of shaking as a jammed and stressed fault suddenly breaks loose. In the Pacific Northwest, the culprit is the Juan de Fuca plate trying to ram its way beneath the edge of North America. Every 500 years or so, the locked fault tears free and generates a major earthquake. Farther down the fault, there's less potential for trouble; because the rocks are hotter and more plastic, the plates are thought to dissipate their energy by slipping slowly and continuously.

    But not always. Last year, Herb Dragert of the Geological Survey of Canada in Sidney, British Columbia, and colleagues found evidence for a relatively sudden pulse of movement farther down the fault than the locked zone. By monitoring the location of Global Positioning System (GPS) stations relative to one another, Dragert's team discovered an unusual pattern of surface deformation in southwest British Columbia and northwest Washington state.

    Most of the time, this region is being shoved northeastward at an average of 8 millimeters per year. Dragert's team noticed that over several weeks in 1999, the stations reversed their direction and moved 2 to 4 mm to the southwest. A numerical model indicated that roughly 35 kilometers below the surface, a 50-kilometer-by-300-kilometer swath of the plates had slipped (Science, 25 May 2001, p. 1525).

    Quiet habit.

    Slow-paced earthquakes regularly release some of the strain built up by the descent of the Juan de Fuca plate, but they don't relieve the locked zone, which generates much bigger quakes.


    Geophysicists have had firm evidence of slow earthquakes in subduction zones since 1995, when Ichiro Kawasaki of Toyama University identified one off Japan, but Dragert's team was the first to find such a quake in the Pacific Northwest transition zone.

    Intrigued by their finding, Miller's team went back and analyzed 10 years of data from these and other GPS stations. The researchers found a total of eight such slow earthquakes in the same general vicinity. Most remarkably, the slow quakes started every 14.5 months, give or take 1 month. “That's incredibly exciting,” says co-author Tim Melbourne, also at CWU. “If you can find a fault that's regularly but intermittently creeping, maybe you can make sense of this in a deterministic way.”

    It's too early to tell what effect the slow earthquakes might have on the risk of a blockbuster shakeup. Each slow event reduces strain locally but not in the locked zone, which is where the big ones break out. The slow earthquakes do have a menacing potential, Dragert warns. “Just as a taut violin string is more likely to snap during sudden rapid tightening,” Dragert says, the locked zone—if close to critical threshold—might rupture when loaded with stress from a slow earthquake. This suggests that the likelihood of a major event may increase during slow-earthquake season.

    But the CWU team points out that a large number of slow temblors may take place between each 500-year great quake. This makes it unlikely that any single slow earthquake is going to be a meaningful precursor. “I'm not convinced that they're a harbinger of disaster,” Miller says. All agree, however, that slow earthquakes are likely to reveal more about how plate boundaries work.

    Miller's team believes the regularity implies that slow earthquakes are a basic way that strain is relieved in this subduction zone. Continuous GPS observations are revealing similar, individual events at other subduction zones, such as Japan and Peru, Miller notes. (Other experts say they'd like to see more examples of periodicity from around the world before calling the mechanism fundamental.)

    To learn more, Miller's team is exploring collaborations that would set down denser arrays of GPS stations with seismometers, strainmeters, and other instruments before a slow earthquake is scheduled to begin. And as continuous GPS networks become more common and more accurate in other parts of the world, the pace will be anything but slow.


    Will Congress Catch EPA's Falling STAR?

    Jeffrey Mervis

    The U.S. Environmental Protection Agency (EPA) has pulled the plug on a popular, one-of-a-kind graduate fellowship program in the environmental sciences. This year, the program drew applications from about 1400 environmental science and engineering students, who now must scramble to find other support.

    Agency officials say the move, announced recently on the agency's Web site, responds to a presidential proposal to end the $10-million-a-year program this fall. An EPA official familiar with the fellowship program, called Science to Achieve Results (STAR), says the agency is “being a good soldier” in assuming that Congress would concur with the president's 2003 budget request and zero out the program.

    Awards for the 2002-03 academic year were due to be announced next month. But rather than making a commitment it might not be able to honor, the official said, EPA will instead send back reviewers' comments and suggest that students look for other sources of funding. The money saved in the 2002 budget will be doled out as needed to complete the multiyear awards for 211 current fellows.

    Environmental groups are beginning to rally support for the program, pointing out that last year EPA director Christine Todd Whitman told Congress it “continues to successfully engage the best academic environmental scientists and engineers.” The program is also important for the field, says David Blockstein, head of the National Council for Science and the Environment. “If the STAR fellowships end, there will be no dedicated funds for graduate fellowships in the environmental sciences.” The demand greatly exceeds the supply, he adds: “Based on my experience as a reviewer, EPA could double the number of awards without diluting the quality.”

    Supporting role.

    Granger Morgan says fellowships bolster EPA's “science-based” rules.


    Terminating the STAR graduate fellowships would undermine EPA's efforts to improve the scientific basis of its regulations, says Granger Morgan, head of the department of engineering and public policy at Carnegie Mellon University (CMU) in Pittsburgh, Pennsylvania, which has had several fellows. “It's been very important in funding top-flight students,” he says, adding that the fellows also develop a relationship with EPA and a better understanding of its mission. “If you want science-based regulation, then you need to invest in the people to do it,” Morgan says.

    In addition to tuition support, the fellowships provide students with an annual stipend of $17,000 and research funding of $5000. The program, begun in 1995, has supported 784 fellows, two-thirds of them at the doctoral level. The fellowships are just one component of a $100-million-a-year STAR research effort, which will continue.

    “It gave me the freedom to pick a research topic, as well as buy a computer and travel to conferences,” says CMU environmental policy graduate student Felicia Wu, who expects to have completed her Ph.D. by the time her STAR fellowship ends in August. Wu, who hopes to work in the public policy arena, says that the 3-year fellowship allowed her to switch to a project on genetically modified corn after funding for a study of drinking-water quality ran out.

    The STAR program is one of two EPA education programs targeted in the 2003 budget request, the other being $9 million for environmental activities with elementary and secondary school students (Science, 8 February, p. 954). The $19 million being spent on them would be shifted to the National Science Foundation.


    Neurons Turn a Blind Eye to Eye Movements

    Marcia Barinaga

    If our mind were to see what our retinas see, the world would seem herky-jerky. That's because our eyes continually dart from place to place, causing an image to jump about on our retinas. The brain smooths the scene by briefly blanking out visual perception when the eyes jump. A simple demonstration illustrates this: Look at one of your eyes in a mirror. Then look at your other eye. Then back to the first. You will not see your eyes move, even though a person watching over your shoulder would easily see the rapid eye movements known as saccades.

    Neuroscientists have long debated the origin of this momentary blindness, known as saccadic suppression. Some argue that the brain does it solely based on information coming from the retina, while others think it uses additional, nonretinal signals coming from brain areas such as those that move the eyes. Now researchers have the first hard evidence for such an “extraretinal” mechanism. On page 2460, Klaus-Peter Hoffmann, Alexander Thiele, and their colleagues at Ruhr University in Bochum, Germany, report that they have identified visual neurons that distinguish between real movements of a scene and the shifts caused by saccades.

    “This is the golden fleece that people have been looking for,” says University of California (UC), Santa Cruz, neuroscientist Bruce Bridgeman, “neurons that respond differentially to a saccade.” The results, he says, prove that extraretinal signals alert the neurons when a saccade occurs.

    Neuroscientists discovered decades ago that experimental subjects—whether people or monkeys—don't usually perceive images during a saccade. Work from many labs traced this inability to a so-called masking effect: Whereas the visual system receives crisp, clear signals before and after an eye movement, explains neuroscientist Robert Wurtz of the National Eye Institute (NEI) in Bethesda, Maryland, “what is received during the eye movement is blurred and of lower contrast.” The blurred signal is swamped by the stronger before-and-after views that come from the retina.

    In 1968, Wurtz made recordings from monkeys' visual neurons that supported the masking model and cast doubt on the role of extraretinal signals in saccadic suppression. He recorded from neurons in monkeys' primary visual cortex, one of the brain's first relay stations for visual information, under two conditions: while the animals were making saccades, or while they were holding their eyes still and the visual scene jerked in a way that mimicked a saccade. If the neurons received extraretinal cues during a saccade, they might respond differently in the two cases. But the neurons responded identically, suggesting that the only information they received came from the retina.

    Invisible to you.

    Neurons in the MT and MST may suppress image movements encountered when the eyes jump.


    Since that time, researchers have characterized visual areas beyond the primary visual cortex. Hoffmann's team tested whether extraretinal signals of saccades may be reaching two of these higher visual areas, the middle temporal (MT) and middle superior temporal (MST) areas. They specialize in detecting motion and so have a strong need to suppress saccade-generated image motion.

    Hoffmann's team recorded from MT and MST neurons in monkeys trained to focus on one spot in a scene projected onto a screen, then to shift their gaze to another spot. At other times the monkeys knew to keep their eyes steady while the scene shifted in a way that replicated its movement across the retina during a saccade. Some neurons responded identically in both cases. But others distinguished between the conditions, firing when the scene moved but not during a saccade. That was the smoking gun: Because the retinal signals were the same in both cases and the only difference was the eye movement, the results suggested that the neurons receive an extraretinal message that the eyes have moved.

    Other neurons responded in an even more remarkable way. Neurons in MT and MST normally register movement in a favored direction. But during saccades, some fired in response to motion in the opposite direction, effectively producing a false report of the direction in which the image moved. For example, during a saccade, a neuron that normally reacted to things moving to the right would instead respond to leftward motion. Hoffmann and his colleagues suggest that these “switching” neurons may cancel out signals from the neurons that respond normally to image movement induced by a saccade.

    The experiment “pretty well nails” the idea that extraretinal signals help the brain suppress vision during saccades, says UC's Bridgeman. NEI's Wurtz agrees but notes that it does not negate the role of masking.

    Neuroscientist Richard Andersen of the California Institute of Technology in Pasadena suggests that MT and MST may have a special need to rely on extraretinal signals. These motion-sensitive areas show little activity when the scene is still, but when the eyes shift they are deluged with motion signals. The areas “need some way of canceling them,” Andersen says, but may lack before-and-after activity strong enough for masking.

    Some researchers question elements of the story, however. John Findlay, who studies eye movements at the University of Durham, United Kingdom, finds it “a little difficult” to accept that the neurons' signals would be similar enough in strength to cancel each other out. Others note that it is difficult to reproduce the way an image moves on the retina during a saccade; they worry that image discrepancies may account for some of the data. But none doubt the Hoffmann team's evidence that an extraretinal signal tells MT and MST neurons of a saccade. The next challenge will be to determine which brain area is sending that signal. Then yet another veil will be lifted from vision researchers' eyes.


    Chromosome End Game Draws a Crowd

    1. Jean Marx

    From small beginnings, research on telomeres—the specialized structures at the chromosome ends—has exploded into the biotech realm

    Even to a biologist, the tiny one-celled creature known as Tetrahymena can seem a bit odd. A ciliate, so called because it's propelled by hairlike projections called cilia, Tetrahymena has not one but two nuclei. And the ciliate life cycle includes a phase during which the chromosomes of the larger of these two nuclei fragment into thousands or even millions of pieces, depending on the species. It was the latter quirk that caught the attention of cell biologist Elizabeth Blackburn.

    In the mid-1970s, Blackburn embarked on a seemingly esoteric study of the ends of chromosomes—of which Tetrahymena and other ciliates offer an abundance. Now Blackburn and Tetrahymena find themselves center stage in one of the hottest areas in both biotechnology and basic cell biology: research into telomeres—the specialized structures that cap chromosome ends.

    Telomeres perform the crucial job of protecting the chromosomes from erosion during cell division. Without them, essential genetic information would be lost. But telomeres are much more than mere passive caps: Researchers have linked them to both aging and cancer. Not surprisingly, these connections have attracted a broad range of researchers—among them an unusually large contingent of women who have emerged as leaders in the field. Telomeres have also enticed a slew of biotech and pharmaceutical firms eager to cash in on antiaging and anticancer drugs, some of which are moving into clinical trials (see sidebar, p. 2350).

    It's a far cry from the field's beginnings, says Blackburn, who won the 2001 General Motors cancer award in basic science for her pioneering work on telomeres. Even as recently as the late 1980s, a meeting on, say, chromosomes might have a single session on the topic. Blackburn, now at the University of California (UC), San Francisco, recalls usually being the last speaker in the last session at Gordon conferences. Now “we have whole meetings on telomeres,” she says.

    The big influx of researchers has upped the level of competition, notes Carol Greider of Johns Hopkins University School of Medicine in Baltimore, Maryland, who entered the field in the mid-1980s as a graduate student in Blackburn's lab. “In the early days, we could think of an experiment and get a graduate student to do it,” she says. “Now we have to worry about whether a company will put 10 postdocs on [the project] and scoop us.” Even so, Greider and others say that having more competitors is, on balance, beneficial.

    A slow start

    Telomeres have long had their followers. Decades ago, they piqued the interest of Nobel Prize-winning geneticists Barbara McClintock and Hermann J. Muller. Their work in the 1930s and 1940s indicated that telomeres prevent chromosomes from fusing end to end, an event that would have catastrophic consequences for the cell. But telomeres were hard to study directly, mainly because the material is so scanty. Human cells, for example, have just 92 telomeres—one on each end of their 46 chromosomes.

    Hot spots.

    The green staining marks the telomeres on these human chromosomes.


    Consequently, telomeres remained little more than a curiosity until Blackburn took up the cause in the mid-1970s. She had just moved to the lab of Joseph Gall, then at Yale University, after completing a Ph.D. with Fred Sanger at the University of Cambridge, U.K. Sanger was a leader in efforts to develop DNA sequencing methods, and Blackburn and Gall were eager to try them out.

    Telomere DNA was a good target because it could be labeled and thus easily isolated. That's when Tetrahymena, with its numerous minichromosomes and rapid life cycle, first entered the picture. “Joe knew the right beast to use,” Blackburn recalls. Indeed, Tetrahymena and other simple organisms proved to be central to telomere research. “Almost all the major advances in the earlier studies have been based on Tetrahymena and yeast,” says Jerry Shay of the University of Texas Southwestern Medical Center in Dallas.

    Gall gets credit for influencing the field in another way, too. “For whatever reason, when there were few women in science, his lab always had a substantial number,” says Virginia Zakian of Princeton University. And as those women established their own labs, they attracted more to the field. “There were a fairly large number of women to start with, and they spawned more,” says Greider.

    Blackburn puts the proportion of women at 50%. “This is a field that doesn't have a gender bias,” says Vicki Lundblad, a prominent telomere researcher at Baylor College of Medicine in Houston. Few other research areas can claim such a high representation of women among their leaders today, let alone 15 to 20 years ago. “Did it help to have women who were mentors? I hope it did,” says Blackburn.

    The search for telomerase

    As Blackburn and Gall laboriously sequenced Tetrahymena telomere DNA, they found that it was “very strange,” recalls Blackburn. Unlike the few viral genomes then sequenced, it consisted of a short sequence—the six nucleotides CCCCAA—repeated some 20 to 70 times. These researchers and others soon realized that the DNA repeats and length variation are a telomere trademark. This is true for telomeres of the yeast Saccharomyces cerevisiae, as Blackburn, who had by then joined the faculty at UC Berkeley, and Jack Szostak at Harvard's Dana-Farber Cancer Institute in Boston found in 1984. And in 1988, Robert Moyzis and colleagues at Los Alamos National Laboratory showed that human telomeres consist of a six-nucleotide repeat: TTAGGG.

    Still, no one knew exactly what telomeres did until 1982, when Blackburn and Szostak confirmed McClintock and Muller's proposal that the telomeres protect the chromosomes. Normally, foreign DNA introduced into yeast is rapidly broken down. But Blackburn and Szostak found that they could prevent that by first attaching Tetrahymena telomeres to the ends of the foreign DNA. Zakian, then at the Fred Hutchinson Cancer Research Center in Seattle, Washington, obtained a similar result with telomere DNA from another ciliate, Oxytricha fallax.

    And both teams found that, while in yeast, the ciliate telomeres grew longer by adding yeast telomeric sequences. “Yeast has the ability to add its DNA to something that looks like a telomere,” Blackburn says. “That got us to thinking that there must be a [telomere-synthesizing] enzyme.”

    Good starters.

    Studies of Tetrahymena, two of which are shown here, touched off the telomerase field.


    Blackburn, by now a tenured professor at UC Berkeley, set out to find it. In 1984, Greider, a new grad student, began searching for the enzyme, now known as telomerase, in extracts of Tetrahymena cells. She picked up the first signs of the enzyme's existence on Christmas Day 1984. One of her gels showed signs of a ladder of telomeric repeats that had been added to a telomere by the elusive enzyme. “I was very excited but immediately started thinking of what could give me a result that looked like telomerase but was really something else,” Greider says. Not until spring did she confirm that she had spotted the activity of a new enzyme that synthesizes telomere DNA and not, for example, a known DNA polymerase.

    One sign that Greider had the right enzyme was her discovery that, unlike ordinary polymerases, telomerase contains an RNA component that is necessary for the enzyme's activity. “The last part of Carol's thesis was to nail down that RNA,” says Blackburn. Greider did, showing that it is the template from which those short repeated telomere sequences are copied.
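The template idea can be made concrete with a toy sketch. The code below is not real biochemistry: the transcription rule is simplified, and the template sequence is illustrative; only the human repeat TTAGGG comes from the article. It shows how repeatedly copying one short RNA template yields the tandem DNA repeats that are a telomere's trademark:

```python
# Toy sketch of telomerase-style elongation: an RNA template is copied
# repeatedly onto a chromosome end as DNA repeats. Sequences and the
# transcription rule are simplified for illustration.

RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def repeat_from_template(rna_template: str) -> str:
    """Reverse-transcribe an RNA template into the DNA repeat it encodes."""
    # Reverse transcription reads the template 3'->5', so the DNA repeat
    # is the reverse complement of the RNA template.
    return "".join(RNA_TO_DNA[base] for base in reversed(rna_template))

def elongate(chromosome_end: str, rna_template: str, n_repeats: int) -> str:
    """Append n_repeats copies of the templated repeat to a chromosome end."""
    return chromosome_end + repeat_from_template(rna_template) * n_repeats

# An illustrative template whose reverse complement is the human telomere
# repeat TTAGGG (the repeat is from the article; the template is made up).
print(elongate("ACGT", "CCCUAA", 3))  # → ACGTTTAGGGTTAGGGTTAGGG
```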

    As Blackburn credits Greider, others salute Blackburn's vision. Thomas Cech, head of the Howard Hughes Medical Institute in Chevy Chase, Maryland, says, “She was so far ahead of her time that it was years before anyone knew if [telomere synthesis] was just something that Tetrahymena did or would have broader significance.” Lundblad agrees: “The field was really dominated by Liz Blackburn's insights. She provided the molecular framework—what a telomere looks like—and then discovered telomerase with Greider.”

    New fields to conquer

    Identification of telomerase provided a solution to a long-standing mystery in cell biology: the so-called end-replication problem. Researchers had known for decades that the polymerase that replicates the DNA before cells divide can't make it all the way to the end of the strand it's copying. Without an enzyme such as telomerase, the DNA would erode with every cell division. Indeed, Titia de Lange, now at Rockefeller University in New York City, says that discovery helped attract her to the field in the early 1980s. “It was intriguing that there was a solution [in telomerase],” she recalls.

    Within a few more years, researchers turned up even more attention-grabbing results—forging the first links between telomeres and aging. Some of this work came from Lundblad, then in Szostak's lab.

    Ordinarily, cultures of yeast and other one-celled organisms can divide indefinitely. But these researchers identified a mutant yeast, called est1 (for ever shorter telomeres 1), in which the telomeres progressively shortened and which also eventually stopped dividing, undergoing what is called replicative senescence. “If you knock out telomere replication, you confer senescence,” Lundblad says. Shortly thereafter, Blackburn's group demonstrated that tinkering with telomere structure also leads to senescence in Tetrahymena. “This told us cells need telomerase for their normal maintenance,” Blackburn says.

    Other researchers, meanwhile, were extending this work to humans. For example, in the 1960s, Leonard Hayflick, then at the Wistar Institute in Philadelphia, had found that human cells, unlike those of one-celled organisms, have a limited capacity to divide in culture. After some 35 to 50 division cycles, they stop, going into replicative senescence. Greider, then at Cold Spring Harbor Laboratory on Long Island; Cal Harley, then at McMaster University in Hamilton, Ontario; and several colleagues wanted to know why. The answer seemed to lie in the telomeres, which they found to shorten with every division of normal human cells—an indication that they provide the clock that tells cells they are old and it's time to stop replicating. In these cells, the telomeres effectively sacrifice themselves to protect the genetic information, but ultimately get so short that division ceases.
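The “mitotic clock” logic described above is simple enough to simulate. In the sketch below every number is invented for illustration (starting length, loss per division, critical threshold); only the 35-to-50-division range it is tuned to land in comes from the article:

```python
# Toy simulation of the telomere "mitotic clock": each division trims the
# telomere by a fixed amount, and the cell senesces once another division
# would drop it below a critical length. All parameters are illustrative.

def divisions_until_senescence(start_bp: int, loss_per_division_bp: int,
                               critical_bp: int) -> int:
    """Count divisions a cell can undergo before hitting the critical length."""
    length, divisions = start_bp, 0
    while length - loss_per_division_bp >= critical_bp:
        length -= loss_per_division_bp
        divisions += 1
    return divisions

# Made-up parameters chosen so the count lands near the article's
# 35-to-50-division range for cultured human cells.
print(divisions_until_senescence(start_bp=10_000,
                                 loss_per_division_bp=100,
                                 critical_bp=5_000))  # → 50
```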


    Elizabeth Blackburn is still a leader in telomere research.


    An even more exciting discovery came in the early 1990s, when researchers—including Greider, Harley, Silvia Bacchetti, also at McMaster, and Shay and his Southwestern colleague Woodring Wright—linked telomerase to cancer. They showed that while the enzyme is not active in normal cells—thus allowing progressive telomere shortening—the gene for the enzyme somehow gets turned back on in cancer cells. Indeed, the enzyme is active in up to 90% of human cancers. That makes telomerase “an important target for cancer therapy,” says cancer researcher Robert Weinberg of the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts.

    Exactly how the gene making telomerase gets turned back on in cancer cells isn't clear, although work by Shay and Wright provides some clues. On their route to becoming cancerous, cells first have to overcome the block to cell division normally imposed by short telomeres. And they can do this, the Texas scientists found, if they have also lost active copies of the p53 and Rb tumor suppressor genes—mutations known to contribute to cancer development.

    The telomeres of cells that then continue to divide get progressively shorter, leading to still more chromosomal abnormalities, apparently including, in rare cases, reactivation of the telomerase gene. And when that happens, the cells can keep on growing, forming a cancerous tumor.

    In the wake of these discoveries, the telomere field exploded. When Greider plotted the number of papers mentioning the word “telomerase,” she found a dramatic increase beginning in 1994, and the plot is still going up (see graph below). The discoveries also gave increased impetus to the search for the gene for the catalytic protein of telomerase, which was still at large—a gap that hampered both a full understanding of how the enzyme works and efforts to develop inhibitors for use in cancer therapy. A ciliate, this time Euplotes aediculatus, again provided the first opening.


    Joachim Lingner, then a postdoc in Cech's lab at the University of Colorado, Boulder, was able to isolate the telomerase protein from Euplotes in 1996. A link to yeast came with the discovery that the protein is the Euplotes equivalent of Est2, a protein made by a gene that Lundblad had identified in yeast as being necessary to maintain telomere length. By 1997, four groups had cloned the gene for the telomerase catalytic protein (also called hTERT). One of the groups included Cech, Harley—now at Geron Corp., a biotech company in Menlo Park, California—and their colleagues; the other three were led by Murray Robinson and Lea Harrington of Amgen Inc. in Thousand Oaks, California, Weinberg, and Roger Reddel of Children's Medical Research Institute in Sydney, Australia.

    Since then, researchers have strengthened the link between telomerase and both aging and cancer. For example, in 1998, Shay, Wright, and their colleagues showed that they could extend the replicative life-span of human cells in culture by introducing the cloned hTERT gene. In addition, independent teams, one led by Kathleen Collins of UC Berkeley and the other by Tom Vulliamy at Hammersmith Hospital in London, have found that mutations that reduce, but don't eliminate, telomerase activity cause some cases of dyskeratosis congenita, a rare genetic disease whose symptoms include some that are consistent with premature aging. Blackburn describes the work as “fascinating. It argues that for a normal human life-span, [telomerase] dose is important.”

    And on the cancer front, Weinberg, William Hahn of the Dana-Farber Cancer Institute, Christopher Counter of Duke University Medical Center in Durham, North Carolina, and their colleagues reported in the 29 July 1999 issue of Nature that they could make human cells cancerous by introducing just a few genes: the T antigens of simian virus 40, which interfere with the tumor-suppressing activities of p53 and Rb; the Ras oncogene; and hTERT. “No one had ever before put in defined genes and gotten a human cancer cell,” Weinberg says. “We were excited.”


    In this complex of telomere DNA (red and green) with two telomere proteins, the free end of DNA is looped back and buried in the complex.


    Just as important, several researchers have demonstrated that they can inhibit tumor cell growth by blocking telomerase activity in the cells. Several teams, including researchers at biotech firms such as Geron and Amgen, as well as pharmaceutical giants such as Novartis and Boehringer Ingelheim, are now pursuing this lead to develop therapies for cancer.

    Unanswered questions

    Although the cancer connection is a huge draw, many researchers are still tackling fundamental questions of telomere biology. One is how telomeric DNA is protected. If telomeres were just bare DNA at the tips of the chromosomes, the cell would recognize them as damaged DNA and “repair” them, producing fused chromosomes and other damaging abnormalities. Telomeres have a couple of ways of avoiding this fate.

    Using electron microscopy, Jack Griffith of the University of North Carolina, Chapel Hill, de Lange of Rockefeller, and their colleagues showed in 1999 that the bare ends of mammalian telomeres bend back and tuck into the double-stranded DNA. Since then, researchers have found similar “t-loops,” as the group called them, in ciliates and trypanosomes. “This tucking in may be how telomeres protect their ends,” de Lange says.

    T-loops don't form on their own, however. Researchers including de Lange, Cech, and Lundblad have identified many proteins that bind to telomeric DNA. Indeed, says Cech, “you wonder how there is even room” for them all. De Lange's group has evidence that two of these proteins, TRF1 and TRF2, are involved in t-loop formation and telomere protection (see p. 2446).

    And about a year ago, Cech and University of Colorado postdoc Peter Baumann identified the telomere end-binding protein, Pot1 (for protection of telomeres 1), present in organisms ranging from fission yeast to mammals (Science, 11 May 2001, pp. 1075 and 1171). Deletion of the Pot1 gene in yeast results in rapid telomere loss, showing that it is needed for telomere maintenance. Because these proteins are essential for telomere maintenance, they, too, might be targets of anticancer therapies.

    Researchers now want to unearth the functions of all those binding proteins. In addition to TRF1, TRF2, and Pot1, these include several proteins known to be involved in DNA repair. “Why would you put proteins involved in end-to-end [DNA] joining at the telomeres?” asks Princeton's Zakian. “It would seem to be the last place you would want them.”

    Other questions concern how the telomeres get uncapped when the DNA replicates and then recapped when that is finished, and how the cell knows that the telomeres have shortened to the point where it's time to senesce or die. “We don't have a clue about what makes a telomere look good or bad to a cell,” Blackburn says. But while there's much work to be done, one thing is certain: There are now plenty of hands willing to tackle it.


    Tackling Cancer at the Telomeres

    1. Jean Marx

    Telomeres, the end bits of chromosomes, can pull off some amazing feats. Not only do they protect the genetic material from loss during cell division, they also serve as a “mitotic clock,” telling cells when it's time to stop dividing and go into senescence. And now, findings about how telomeres are maintained may point the way to new therapies to fight cancer.

    One of many discoveries about telomeres over the past few decades (see main text) is that telomerase, the enzyme that synthesizes the telomere DNA, helps drive the growth of cancer cells. Because telomerase is not present in most normal cells, the enzyme could be an attractive target for therapies aimed at halting tumor growth. “We are really excited,” says Murray Robinson of Amgen Inc. in Thousand Oaks, California, one of the companies trying to develop such therapies. Cancer therapies—if they can be developed—are likely years away, however.

    Researchers are experimenting with two basic strategies. One aims to block the enzyme's activity, and the other uses telomerase as a tag to elicit cancer cell killing in other ways—for example, by enlisting the immune system to destroy the cells that make it.

    As with most searches for new pharmaceuticals, the gold standard would be a small molecule that is easy to make and administer. At first, researchers were optimistic, because telomerase is a reverse transcriptase, an enzyme that copies RNA into DNA. A number of small-molecule inhibitors have been developed against the AIDS virus's reverse transcriptase. But finding one for telomerase has proved harder than expected. Cal Harley of Geron Corp. in Menlo Park, California, recalls that in 1994, “I predicted that we would be in [clinical] trials in 3 to 4 years.”

    Although no small-molecule drugs have made it into clinical trials yet, at least one candidate is on the horizon: an inhibitor described in the 17 December issue of EMBO Journal by Klaus Damm of Boehringer Ingelheim Pharma KG in Biberach, Germany, and colleagues. In tests with human cancer cells in culture, the inhibitor led to telomere shortening and eventually caused the cells to stop dividing. It also inhibited the growth of human tumors transplanted into animals, without any apparent toxic side effects.

    The Boehringer team's drug is directed at the catalytic protein of telomerase; other groups are targeting the enzyme's RNA, which serves as the template for the reverse transcriptase. Some, such as Seiji Kondo of Mount Sinai School of Medicine in New York City and his colleagues, are using an antisense approach to disable the RNA. This involves making an RNA whose sequence is complementary (or antisense) to that of human telomerase RNA and will thus bind to it, blocking its action. To be sure to take the telomerase RNA out of action, the Kondo team also attached the antisense RNA to another molecule that can attract an RNA-destroying enzyme. In test tube and animal studies, this construct can inhibit the growth of malignant gliomas—a particularly deadly type of brain cancer—says Kondo.

    In a similar vein, Geron has developed a small modified nucleic acid that specifically binds to telomerase RNA near the active site of the enzyme. Harley says that this molecule has proved very effective in test tube and animal tests, and “assuming all goes well, we will have it in clinical trials in 2003.”


    A human tumor implanted in a mouse and treated with a telomerase inhibitor plus an angiogenesis inhibitor (right) shrinks more than one treated with the angiogenesis inhibitor alone (left).


    All telomerase inhibitors have a possible drawback, however: They kill slowly, because tumor cells will continue to divide until the telomeres progressively shorten and trigger cell suicide. Several labs are exploring telomere-based immunotherapies that can kill much more quickly.

    All cells display on their surfaces fragments of the proteins they make. This helps “train” the body's immune cells not to attack its own tissues. Robert Vonderheide, then working with Lee Nadler at the Dana-Farber Cancer Institute in Boston, found in 1999 that telomerase-making cancer cells carry fragments of the enzyme on their surfaces and that these can be recognized in cell culture by appropriately activated T cells. Indeed, because telomerase is present in the cells of as many as 90% of human cancers, it may come close to being the long-sought “universal tumor antigen,” says Vonderheide.

    The trick is to get patients' immune cells to recognize the telomerase fragments on tumor cells. One way to do this is to feed an antigenic telomerase peptide to a patient's dendritic cells in culture. Dendritic cells trigger immune responses by “presenting” antigens to T cells. When the cells exposed to the telomerase peptide are then infused back into the patient, they can thus spark an immune attack on telomerase-bearing tumor cells, says Vonderheide, who is now at the Abramson Family Cancer Research Institute of the University of Pennsylvania in Philadelphia.

    Johannes Vieweg, Eli Gilboa, and colleagues at Duke University Medical Center in Durham, North Carolina, are using a similar approach: They introduce an RNA that directs telomerase synthesis into the dendritic cells in culture. These two teams and another led by Steven Rosenberg at the National Cancer Institute in Bethesda, Maryland, are beginning clinical trials of antitelomerase immunotherapy.

    Another telomerase-based therapy that is being explored by several teams, including Kondo's and Harley's, aims to kill cancer cells by introducing genes encoding toxic proteins. Kondo and his colleagues are trying a gene encoding a caspase, an enzyme that brings about cell destruction. To ensure that the caspase is made only in cancer cells, the researchers attach the caspase gene to a regulatory sequence from the telomerase gene. The idea is that this compound gene will be turned on in cancer cells, just as the telomerase gene itself is. Studies with cells in culture and animal tumor models have been encouraging, Kondo reports.

    Still, a lingering concern is that both the immunotherapies and those involving the toxic genes could harm tissues other than their tumor targets. Although most cells don't make telomerase, stem cells, such as those in the bone marrow and the lining of the gastrointestinal tract, do. Researchers suspect they can get around that problem, but the safety and efficacy of telomerase-based therapies remain to be established.


    Microbiologists on the Trail of Polluting Bacteria

    1. David Malakoff

    Government officials hope scientists in the emerging field of bacterial source tracking can help them clean up polluted waterways

    CLARKE COUNTY, VIRGINIA—It was an environmental mugging: Potentially dangerous levels of fecal bacteria were pouring down Page Brook, a picturesque local stream in this mostly rural enclave 100 kilometers west of Washington, D.C. But who was to blame? The leading suspects were human waste from faulty streamside septic systems, dung from cows grazing along the waterway, and droppings from a range of wildlife. But without firm evidence to identify the source, says county official Alison Teeter, it was impossible to know which sentence—new sewage controls for homeowners, fences to keep cows away from the stream, or stronger wildlife management—best fit the crime.

    Thanks to some scientific sleuthing by Charles Hagedorn, however, Page Brook is cleaner today than it was 5 years ago. Hagedorn, a professor at Virginia Polytechnic Institute and State University (Virginia Tech) in Blacksburg, used a newly developed tracking method to indict cows as the primary source of the stream's bacteria. Armed with that information, county officials raised funds for fencing projects that have lowered bacterial counts by up to 98%. Without Hagedorn's 1997 study, Teeter says, “we might have wasted a lot of time and money on measures that didn't address the problem.”

    Court-ordered cleanups of thousands of similarly polluted lakes, rivers, and streams have created a nationwide demand for methods like Hagedorn's to track waterborne bacteria and pathogens back to their origins. But while the cleanup of Page Brook demonstrates the promise of such techniques, the infant science of bacterial source tracking (BST) is beset by growing pains. Researchers have yet to agree on standard methods, and there is rancorous debate over the soundness of certain approaches. One scientist has even challenged the basic biological assumption underpinning the use of one common bacterium for source tracking (see sidebar).

    This week, scientists backed by the Environmental Protection Agency (EPA) released a report that calls for a sustained nationwide study to test, compare, and improve tracking methods. “We need to figure out which methods best answer which questions,” says Bill Keeling of the Virginia Department of Conservation in Richmond. The report follows on the heels of the fledgling field's first national conference.*

    Ending the blame game. For decades, public health officials have measured water quality by monitoring levels of fecal coliforms, bacteria that live in animal guts and survive in the environment when expelled along with feces. Although the coliforms may not cause disease, they can be accompanied by a rogue's gallery of pathogens, including microbes that cause hepatitis, cholera, and gastrointestinal illnesses. As a result, state officials have used them as a marker to routinely close swimming areas, wells, and shellfish beds when coliform counts rise above certain levels.

    Unfortunately, the approach hasn't kept waterways clean. Although modern sewage treatment has eradicated the worst problems, EPA estimates that at least 40,000 kilometers of streams and coastal waters still carry bacterial loads that exceed health standards. Environmentalists have won a string of court victories in the last decade that require states to set goals for reducing bacteria counts. But efforts to set these targets—called Total Maximum Daily Loads (TMDLs)—often have become exercises in finger-pointing, as farmers, ranchers, and homeowners have argued that someone else is to blame. Desperate to calm such conflicts, TMDL specialists asked microbe scientists to come up with objective ways to pinpoint bacteria sources. Despite sparse funding, a handful of researchers have proposed ideas that range from DNA fingerprinting of different species of bacteria to techniques that identify viruses unique to specific animals.

    The most prominent techniques are so-called “library-dependent” methods, which require researchers to match a bacterium found in a waterway to one included in a previously created library of bacteria from known sources. The Page Brook study, for example, used a library-dependent technique called antibiotic resistance analysis (ARA) that was developed by Bruce Wiggins of James Madison University in Harrisonburg, Virginia.

    ARA assumes that the strains of bacteria in people, farm animals, and wildlife respond differently to antibiotics. To catalog these signatures, Hagedorn's team first combed the small watershed for excrement from every potential source, including people, cows, and deer. The researchers then cultured Enterococcus bacteria from the samples, exposed the microbes to a battery of antibiotics, and recorded the results. Later, they found that Enterococcus taken from the stream had signatures that—statistically, at least—matched cataloged signatures produced by cattle-borne bacteria.

    Many genetic approaches also require libraries, with researchers seeking to match the unique DNA or RNA profiles of bacteria from waterways with those from bacteria associated with different sources. For example, over the past 5 years, another Virginia Tech scientist, George Simmons, used a genetic technique called pulsed-field gel electrophoresis to fingerprint bacteria sources in Four Mile Run, an urbanized stream near Washington, D.C. Simmons, now retired, found that it carried bacteria that matched those from several hosts (see graphic), including significant contributions from hard-to-control wild sources such as raccoons and geese.

    Pointing fingers.

    New microbial tracking methods enabled researchers to uncover sources of bacterial pollution in Virginia's Four Mile Run.


    The new EPA report, however, cautions that such findings are typically based on small studies and use methods not yet widely replicated. Various source-tracking methods have been applied to fewer than 125 waterways, and reports in peer-reviewed journals are scarce. The report calls for a four-phase national study to find out what really works, starting with basic experiments to see if single laboratories can replicate their own results. The study would end with a real-world test designed to compare different methods in a single, complex watershed.

    The report is silent on how much such studies might cost and where funding might come from. The U.S. Geological Survey and Orange County, California, are funding small comparison studies, and California is considering a multimillion-dollar effort. Virginia, meanwhile, is the first state to require that BST be used in the development of TMDLs, giving researchers a chance to refine their techniques.

    Testy talk. Such studies could help settle several simmering technical debates. The validity of ARA, for instance, has come under attack from Mansour Samadpour, a University of Washington, Seattle, public health specialist who has pioneered molecular BST techniques and started a company that specializes in them. “ARA doesn't work—it has nothing to offer,” Samadpour told Science. At the recent BST meeting, he presented unpublished data suggesting that a standard ARA method was too crude to reliably sort Escherichia coli bacteria—the most common fecal bacteria—into useful source groups. “It just doesn't provide the resolution you need to see important distinctions,” he says.

    ARA developer Wiggins notes that his method has passed muster in peer-reviewed studies and claims that Samadpour's study compared apples and oranges. “He tested something, but it wasn't ARA as we know it.” Wiggins agrees that his method can be improved, and although it may not match the power of genetic approaches, he says it can be faster and cheaper.

    Other BST researchers, meanwhile, complain that Samadpour has not shared enough information to allow them to evaluate his RNA ribotyping approach. Although Samadpour's company, the Institute for Environmental Health in Seattle, has been involved in more than 80 source-tracking studies, few have been reported in peer-reviewed journals. But Samadpour says he is ready to share the firm's bacteria library, which contains genetic profiles of more than 70,000 strains of E. coli, with other researchers who adopt his methods. Because most don't, however, “What use would they have for [the library]?” he asks.

    Potentially a great deal, suggests the new report. That's because one of the most pressing questions facing BST researchers is whether a library constructed for one study can be used for others. Waterborne bacteria can vary widely by place, season, and time of day, and even diet can shift the dominant strains found within an individual.

    Given this variability, some BST researchers fear that they will have to construct new libraries for every new watershed, driving up expenses. But Samadpour believes that with a big enough library and extensive sampling programs, powerful molecular techniques can do the trick. “For the moment, there is no way around brute-force methods,” he says. He figures that 500,000 E. coli strains, taken from a range of vertebrates, could cover the country if the data were updated regularly. Such a database might cost at least $10 million, he says.

    Such cost estimates are driving researchers toward techniques that wouldn't require expensive libraries. Microbiologist Katharine Field of Oregon State University in Corvallis, for instance, is looking at using certain Bacteroides bacteria, which may carry host-specific genetic markers that vary little from place to place. Other techniques seek to detect widespread, species-specific antibodies that adhere to shed bacteria.

    Fully developing and testing such methods, however, will require researchers to adopt standard methods and share data. In the end, it's likely that more than one method will be needed to track down solutions.


    Is E. coli Distinct Enough to Join the Hunt?

    1. David Malakoff

    Underlying many bacterial source-tracking techniques is the assumption that some strains of bacteria are found only within a single kind or group of animals. But the idea of host specificity may be wrong when it comes to the common fecal bacteria Escherichia coli, argues David Gordon, a microbiologist at the Australian National University in Canberra. E. coli populations are so mercurial, he says, that scientists should think twice before using them for source tracking. But others say that new techniques may still make the bacterium a tracking star.

    E. coli is popular with public health scientists because it is common, is easy to culture, and can be deadly in some variants. Until recently, however, few researchers had studied how populations of the bacterium varied by geography and host.

    The answer, Gordon says in a May 2001 Microbiology review, is that host and place count for little in explaining E. coli's genetic variation. Geographic separation, for instance, could explain just 10% of the differences found in bacteria hosted by rodents in Mexico and Australia; the rest, he says, are due to diet and other factors. Similarly, Australian mammals shared many strains. “There is some host specificity,” he says, “but the patterns aren't strong enough to unambiguously assign a strain to a source.” Other microbes, says Gordon, may produce better results.

    Fuzzy trail.

    E. coli may not be a good bacterium for source trackers.


    Scientists familiar with bacterial source tracking say that Gordon's caution is justified. But powerful genetic fingerprinting techniques may overcome such problems, says Mansour Samadpour, a University of Washington, Seattle, public health specialist who has pioneered molecular source-tracking methods using E. coli. “Sensitive methods can identify differences in bacteria that may otherwise appear identical,” he notes.

    Valerie Jody Harwood, of the University of South Florida in Tampa, urges source trackers to adopt “a strong population biology perspective.” It may be impractical to prove that a specific bacterium came from a specific source, she says. “But we probably can show that one set of strains is more likely to come from one host than another”—or at least determine if they came from a human source. That ability, she says, would be enough to keep E. coli in the source trackers' toolbox.

    FRANCE

    Biologist Wins Battle Over Bureaucratic Fungus

    1. Michael Balter

    Betty Felenbok applied her passion for social justice to vanquish purchasing rules that were strangling French researchers

    ORSAY, FRANCE—As a molecular biologist, Betty Felenbok spends her days trying to understand the soil fungus Aspergillus nidulans. But for many years she has also battled the creeping fungus of French bureaucracy. Felenbok's tireless lobbying efforts have helped loosen stringent purchasing rules that were tying the hands of her colleagues around the country (Science, 15 March, p. 1993). “Betty has done a great job getting us out of this mess,” says molecular biologist Patrick Forterre, a colleague at the Institute of Genetics and Microbiology on the Orsay campus of the University of Paris, where Felenbok, 64, has worked since 1969.

    A respected scientist whose group is trying to decipher the genetic mechanisms that allow A. nidulans to use ethanol as a nutrient, Felenbok is a seasoned campaigner for many scientists' causes. She believes that “science is a social and cultural activity, with an important role to play in society.” She launched her latest drive about 3 years ago, after the finance ministry cracked down on public research agencies such as CNRS, France's giant basic research agency, for ignoring the rules on purchasing lab supplies.

    The rules required agencies to put out competitive bids for items as mundane as test tubes and pipettes at the beginning of each fiscal year and also mandated that all orders over about $600 had to receive special authorization from the agencies' central administration. The crackdown paralyzed many labs. Especially hard hit was the biomedical agency INSERM, which had failed to complete its competitive bidding for that year and whose labs rapidly ran out of supplies (Science, 12 March 1999, p. 1613).

    A longtime member of the executive commission of the National Union of Scientific Researchers (SNCS), Felenbok, along with her colleagues, quickly collected more than 5000 signatures on a petition that opposed what she called “a Kafkaesque domain of irrationality [and] wastage of time and money.” Some scientists even had to hire full-time staff to process their purchase orders. “We were in a state of permanent blockage,” Felenbok says. It still took many months of tireless campaigning to lift some of this burden.

    The stalemate was not entirely the fault of the central government, however. An aide to Research Minister Roger-Gérard Schwartzenberg says that CNRS and other research agencies initially balked at decentralizing their purchasing procedures, a necessary step in giving lab directors greater control over their operations. “The ministry was willing to soften the rules, but not the agencies,” says the aide, who praises Felenbok for getting the research agencies to modify their internal rules. Felenbok agrees that both sides needed to compromise.

    Victory procured.

    Betty Felenbok helped bring together scientists and government officials to make purchasing rules more flexible.


    Partisan politics also lent a hand. On 21 April, Socialist Prime Minister Lionel Jospin will square off against incumbent conservative President Jacques Chirac in the first round of the presidential election. And because French scientists have historically felt that the Socialist party is more sympathetic to research, they are a constituency that Jospin is loath to antagonize. As the date neared, Felenbok says, she began to find increasingly sympathetic ears in Jospin's research and finance ministries.

    The new rules, announced earlier this month, drop the $600 limit and allow lab directors to spend nearly $80,000 annually on any single type of product without special permission. This means researchers won't be twiddling their thumbs while a request for lab supplies works its way through multiple layers of bureaucratic delay. And Felenbok's colleagues say that the victory wouldn't have been possible without her determined effort. “She believes fervently that things can be better,” says Richard D'Ari, a molecular microbiologist at the Jacques Monod Institute in Paris. “And she works heart and soul to bring it about.”

    The roots of Felenbok's social activism go back to her childhood in Nazi-occupied France. Her parents, Polish Jews living in Lithuania, emigrated to France in 1930. But German tanks rolled across the country just 2 years after her birth in 1938, forcing the family to flee Paris for the southern city of Limoges. Felenbok's father joined the resistance and her mother went into hiding, while Felenbok and her sister found refuge in a group home for children. Her immediate family was reunited after the war, although relatives were arrested and died in Auschwitz.

    Her interest in biology was kindled by courses she took at the Sorbonne from Jacques Monod, who would later share a Nobel Prize for his groundbreaking work on gene regulation. “We were witnessing the birth of molecular biology,” she says. “A lot of us got involved, thanks to the enthusiasm that Monod passed on to us.” Felenbok was also fortunate to come of age at a time when budding scientists were in great demand, winning a lifetime position at CNRS at the tender age of 24.

    Felenbok worked on a number of model organisms before turning to Aspergillus, which has thousands of well-characterized mutant forms. Recently, for example, her lab has helped figure out how certain proteins, called transcriptional activators, control the genes that code for alcohol dehydrogenase, an enzyme that breaks down alcohol. Although A. nidulans itself is not pathogenic, other species of Aspergillus can cause serious respiratory illness under certain conditions. Understanding its inner workings could eventually help asthmatics and others suffering from fungal allergies.

    Despite making steady progress on that work, Felenbok will probably be best remembered for fighting the good fight against the French bureaucracy. Says SNCS secretary-general Jacques Fossey: “Her principal quality is to never abandon her convictions, even if she is alone against everyone.”


    Economic Crash Brings Ill Winds for Science

    1. Jocelyn Kaiser

    In the wake of their country's financial meltdown, Argentinean scientists are finding it harder and harder to keep their labs afloat

    BUENOS AIRES—On a warm summer day, biochemists bustle about in air-conditioned rooms crowded with protein sequencers and centrifuges at the University of Buenos Aires (UBA). The scene, not unlike that in any well-funded biology lab elsewhere in the world, seems far removed from the smashed bank windows and government offices marred with graffiti in central Buenos Aires. But this vision of normalcy may not last much longer. The funds that sustain this group's work are dwindling with Argentina's falling peso. “We are asymptotically going to zero,” says UBA biochemist Juan Paul Rossi.

    He's not the only Argentinean scientist staring into the abyss. “I've been in this business more than 35 years, and this is the worst,” says biochemist Eduardo Charreau, the new director of the Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Argentina's main science funding agency. A severe belt tightening is squeezing grants and salaries and scotching bigger efforts as well: Last month, the government aborted a new grants program aimed at luring young Argentinean investigators back from abroad. Even worse, Argentina is bracing for a brain drain that could turn back the clock on its scientific development. “It takes years and a lot of effort and luck to build successful labs and centers, which could be dismantled in a few months,” says UBA ecologist Osvaldo Sala. “It will take a long time to reverse.”

    Science budgets had already suffered during the last 4 years of recession, but they went even further south when Argentina partially defaulted on its debt and was forced to allow the peso to trade freely in currency markets in January. The peso has lost more than 70% of its value since then. And although CONICET, which pays salaries and stipends, and “the Agency”—the Agencia Nacional de Promoción Científica y Tecnológica, which funds research grants—received level funding in the 2002 budget (177 million and 42 million pesos, respectively), they likely won't get the full amount from the Treasury (see graph). The Agency has already suspended this year's grants competition for at least 3 months.

    The devaluation and budget squeeze are a vicious one-two punch. “My salary was $3000 [per month] a year ago, and now it's $645,” says UBA physicist Oscar Martínez. Even researchers with international grants lost money if their funds were not sitting in foreign accounts when the crisis hit.

    Falling fast.

    As Argentinean science budgets slide and purchasing power plummets with the peso's exchange rate, researchers such as those at the biochemistry department at the University of Buenos Aires are bracing for an inevitable slowdown.


    Because of government-imposed limits on withdrawing money from banks and buying imports, most labs, like UBA's, have had to raid cash and supply reserves to continue operating normally, says Sala. In the longer term, Sala hopes that his international grants will keep his group afloat.

    Others aren't so fortunate. Rossi and his colleagues at the Faculty of Medicine, who have only CONICET, UBA, and Agency grants, are now frantically applying to foreign funding sources such as the Wellcome Trust, a British charity. UBA's José María Delfino says he has shelved plans to replace his lab's creaky 30-year-old spectropolarimeter. Journal renewals and meeting fees are looming nightmares.

    Heightening the anxiety for some scientists are the new government's plans to merge CONICET with the Agency, created 5 years ago in part to fund peer-reviewed research. Although CONICET had a modest grants program, the Agency plowed 20 times more grant money into science, says physicist Mario Mariscotti, its first director. Moreover, the grants are much larger: up to 50,000 pesos, compared to 5000 for CONICET. Equally important, the Agency introduced to Argentina rigorous peer review using international reviewers. CONICET's review system is flawed, many scientists claim, because high-level CONICET scientists do the final sorting and tend to steer grants toward their own institutes. The Agency's system, coupled with heftier grants, helped non-CONICET scientists build labs.

    Charreau insists that the plan to merge the two agencies would “combine the best of both systems.” That could include divvying up funds into much smaller grants than those dispersed by the Agency.

    In another sign of retrenchment, the Secretary of Science and Technology, engineer Julio Luna, last month canceled a fellowship program for scientists under 40, saying that the last administration didn't allocate money for it. Of over 400 applicants, most living abroad, 70 winners were told last fall they would receive 3-year stipends of 30,000 pesos (then worth $30,000). “I had spent 2 years trying to come back,” says one recipient, virologist Andrea Gamarnik. Despite the setback, she recently returned to UBA from the University of California, San Francisco, with other support. Luna has promised that the would-be grantees will be given smaller CONICET grants.

    The program's cancellation has fueled fears that the crisis is exacerbating Argentina's inability to retain talent. Although there are no hard data, scientists point to a surge in requests for sabbaticals and letters of recommendation for posts abroad. “It's not [just] the number. It's that it's the best ones” who are leaving, says biochemist Armando Parodi of the Institute for Biotechnology Research at the University of San Martín.

    Few see a clear path to recovery for the scientific community. Charreau thinks Argentina could follow the example of Brazil, which recently began taxing companies to fund research as a way to build its economy (Science, 2 March 2001, p. 1685). But creating new programs may be unrealistic. For now, Charreau says, “we have to try to avoid the scientific base disappearing.”


    Getting the Behavior of Social Insects to Compute

    1. Ben Shouse
    1. Ben Shouse is a writer in New York City.

    By envisioning ant colonies as computer networks, entomologists have begun to unravel complex behavioral patterns

    CAMBRIDGE, U.K.—In a recent AT&T commercial, blue ants scurry through a maze and across a chasm in pursuit of an unknown goal. Only when the cartoon “camera” zooms out does the bigger picture come into view: The streams of ant traffic are forming the familiar lines of AT&T's logo. These fictitious bugs are a tongue-in-cheek illustration of how global order can emerge from what appears to be local chaos—a concept that computer programmers have been cribbing from insect behavior for years to make networks run more efficiently. For example, in a routing program called Ant Colony Optimization, virtual ants lay pheromone trails to direct the flow of data packets without cumbersome central planning.
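    The routing idea can be caricatured in a few lines. This is a deterministic "mean-field" sketch, not Dorigo's actual algorithm: instead of simulating individual random ants, each round a route's share of traffic equals its pheromone share, pheromone evaporates, and deposits scale inversely with route length. The route lengths and parameters are invented for illustration.

```python
# Mean-field sketch of Ant Colony Optimization: pheromone on the shorter
# route is reinforced faster than it evaporates, so traffic converges there
# without any central planner. Lengths and rates below are illustrative.

lengths = {"short": 2.0, "long": 5.0}       # two candidate data routes
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION = 0.9                           # fraction retained per round

for _ in range(100):
    total = sum(pheromone.values())
    share = {r: pheromone[r] / total for r in pheromone}
    for r in pheromone:
        # evaporate, then deposit in proportion to traffic and 1/length
        pheromone[r] = pheromone[r] * EVAPORATION + share[r] / lengths[r]

print(max(pheromone, key=pheromone.get))    # → short
```

    The positive feedback is the whole trick: a marginally better route attracts marginally more deposits, which attracts more traffic, until the colony (or the network) has effectively "decided."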

    Now, reversing roles, a handful of entomologists are cribbing from the computer scientists. They deconstruct many behavioral patterns of social insects and show how these operate much like computer algorithms. “It's not a superficial resemblance,” says Tom Seeley, who studies bee communication at Cornell University in Ithaca, New York.

    For decades, entomologists have known that insect colonies are capable of complex collective action, even though individuals adhere to straightforward routines. When foraging, for example, workers appear to march to a drumbeat that dictates when to turn and when to lay down pheromone to guide other workers. As simple as these rules are, they create an effective dragnet to haul in food as efficiently as possible. In this manner, “ants have been solving problems very skillfully every day of their lives for the last 100 million years,” says Nigel Franks of the University of Bristol, U.K. But it's harder to fathom the roots of more complex problem solving, such as group decisions made without central planning or the management of complicated traffic patterns. To tackle such problems, researchers are redefining their work as a search for evolution's source code.

    Franks was among the first entomologists to turn to computer science for fresh ideas. Beginning in 1990, his work with Chris Tofts, now at Hewlett-Packard Labs in Bristol, helped sweep away old notions that had long clouded entomologists' vision. For example, intellectuals since Aristotle have assumed that an insect's age always determines its task. Early research suggested that older workers toil farthest from the nest, as foragers or soldiers. But Franks and Tofts reverse-engineered a colony and showed that this correlation arises from a simpler rule: Workers do the nearest available job and move outward when they are replaced, usually by a younger ant.
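    That rule is simple enough to simulate. In the sketch below (the task names, the fixed capacity, and the single-file cascade are all invented, not Franks and Tofts's model), no worker ever consults its age, yet ages end up sorted by distance from the nest.

```python
from collections import deque

# "Foraging for work" caricature: each new (young) worker takes the job
# nearest the nest center with a vacancy, and displaced workers move one
# job outward. Age never enters the rule, yet an age-distance correlation
# emerges.

TASKS = ["nursing", "nest work", "guarding", "foraging"]  # inner -> outer
CAPACITY = 5

def add_worker(tasks, age):
    """A newly born worker joins the innermost task; overflow cascades out."""
    tasks[0].appendleft(age)
    for i in range(len(tasks) - 1):
        if len(tasks[i]) > CAPACITY:
            displaced = tasks[i].pop()        # longest-serving worker moves out
            tasks[i + 1].appendleft(displaced)

tasks = [deque() for _ in TASKS]
for age in range(20, 0, -1):                  # oldest workers born first
    add_worker(tasks, age)

for name, members in zip(TASKS, tasks):
    print(name, sorted(members))              # foraging ends up with ages 16-20
```

    The oldest ants finish at the outermost task (foraging) purely as a by-product of replacement, which is exactly the kind of reverse-engineering result the text describes.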

    By thinking like computer programmers and scrutinizing the actions of individual ants, scientists have revealed behavioral rules that govern group problem solving, just as assemblies of simple circuits can perform complex calculations. For example, Stephen Pratt of Princeton University and his colleagues at the University of Bath, U.K., have discovered how colonies of the tiny English ant Leptothorax albipennis reach a consensus on moving to a new nest. Their findings, in press at Behavioral Ecology and Sociobiology, show how the colony integrates information, overcoming slow communication and lack of central information processing.

    The original microchips.

    From the brutish Eciton burchelli (bottom) to the diminutive Leptothorax albipennis (top), ant problem solving resembles the inner workings of computer circuitry.


    After a nest is damaged or the colony grows too large, “recruiter” ants explore potential nest sites. When a recruiter finds a promising site, she (all workers are sterile females) returns to the nest and attracts another recruiter with pheromone, initiating a tandem run to the site. If the second worker likes the site better than others she has seen, she recruits another worker. Most recruiters visit only a few sites, using two rules to integrate data. Rule one: If a site is not so attractive, the recruiter delays initiating a tandem run, so that mediocre sites amass recruiters more slowly than premium sites. Rule two: Once an undetermined threshold number of workers are recruiting for the same site, the recruiters become movers, carrying the entire colony to its new digs. As a result, the colony always agrees on the best site, even if it is discovered later than other sites.
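    The two rules amount to a race to quorum, which can be sketched as a toy model. The site qualities, delays, and quorum size below are hypothetical, and the model simplifies by letting every recruiter at a site bring in one more recruiter per delay period.

```python
# Toy quorum race in the spirit of Pratt's two rules: better sites impose
# shorter delays between tandem runs (rule one), and the first site whose
# recruiter count reaches a threshold wins the colony (rule two). All
# numbers are invented for illustration.

def first_to_quorum(delays, quorum=8, steps=200):
    """Return the site whose recruiter count reaches the quorum first."""
    recruiters = {s: 1 for s in delays}   # one discoverer per site
    timers = dict(delays)                 # time until the next tandem runs
    for _ in range(steps):
        for s in delays:
            timers[s] -= 1
            if timers[s] == 0:
                recruiters[s] *= 2        # every recruiter leads one tandem run
                timers[s] = delays[s]
                if recruiters[s] >= quorum:
                    return s              # quorum reached: colony commits
    return None

# Delay per tandem run (smaller = better site): the premium site amasses
# recruiters faster and hits the quorum first.
print(first_to_quorum({"mediocre_near": 5, "premium_far": 2}))  # → premium_far
```

    Because quality sets the recruitment delay, the better site wins the race even with an equal start, which is the mechanism behind the colony's consensus.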

    Also employing meticulous observations of individuals, Iain Couzin of the University of Leeds, U.K., has uncovered the laws of army ant traffic. Just as computer networks must maintain speed during high data flow, Eciton burchelli must avoid getting knocked off the trail or turned around during foraging expeditions known as “swarm raids.” Up to 200,000 foragers moving in opposite directions must follow the same pheromone trail, so the risk of gridlock is high. At the “Mathematics of Social Insects” meeting in Cambridge, U.K., last December, Couzin and Franks reported that the colony maximizes the flow of ants with two simple turning rules. Outbound workers turn aside sharply from encounters with ants moving in the opposite direction, while inbound workers, laden with food, turn aside more slowly. In computer simulations, these rules keep both streams in line and maintain a steady return of food to the colony. The same ordered traffic patterns appear both in simulations and in real swarm raids: a central lane of returning ants flanked by columns of outbound ants.
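    A crude agent simulation of the two turning rules reproduces the lane structure qualitatively. The swerve magnitudes, the drift back toward the trail, and the encounter model below are all invented, not Couzin and Franks's parameters.

```python
import random

# Each ant holds a lateral offset from the pheromone trail's center line.
# On a head-on encounter, outbound ants swerve strongly and food-laden
# inbound ants only weakly (the two turning rules), and everyone drifts
# back toward the trail. Outbound ants should end up flanking the center.

random.seed(1)
OUTBOUND_SWERVE, INBOUND_SWERVE, DRIFT = 0.5, 0.1, 0.8

ants = [{"dir": +1 if i % 2 else -1,             # +1 outbound, -1 inbound
         "lat": random.uniform(-0.1, 0.1)} for i in range(100)]

for _ in range(50):
    for a in ants:
        b = random.choice(ants)                  # a random passer-by
        if b["dir"] != a["dir"]:                 # head-on encounter
            swerve = OUTBOUND_SWERVE if a["dir"] == +1 else INBOUND_SWERVE
            a["lat"] += swerve if a["lat"] >= b["lat"] else -swerve
        a["lat"] *= DRIFT                        # pulled back to the trail

def mean_offset(direction):
    offs = [abs(a["lat"]) for a in ants if a["dir"] == direction]
    return sum(offs) / len(offs)

# Outbound ants flank; inbound ants hold the central lane.
print(mean_offset(+1) > mean_offset(-1))
```

    The asymmetry in swerve size alone sorts the traffic: inbound, food-laden ants cluster near the center line while outbound ants are pushed to the flanks, matching the three-lane pattern seen in real swarm raids.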

    Seeley admires the precision of such observations and how they make sense colony-wide. “They've got the whole colony, the whole process under a video camera, and they can quantify the activity of individuals,” he says. But although the approach is great for entomology, the payback for computer science could be a ways off. “The process that takes you from the work of biologists to real-world applications is not linear,” notes Marco Dorigo, a computer scientist at the Free University in Brussels who invented Ant Colony Optimization. The “algorithms” these entomologists have found are remarkable, he says, but in the programming world they remain solutions in search of a problem.

    Chemists Look to Follow Biology Lead

    1. Joe Alper*
    1. Joe Alper is a writer in Louisville, Colorado.

    Covalent bonds provide the strongest connection between atoms in a molecule. But chemists are now using more tenuous links to assemble large molecular complexes

    Over the past century, chemists have become lords of the covalent bond. They've invented hundreds of reactions that form and rearrange the ultrastrong coupling that results when two atoms share electrons, and they've used their knowledge to rival nature's ability to create everything from antibiotics to plastics. But it's only in recent years that chemists have begun to take advantage of another wonder of the natural world—the ability to form looser links between small molecules. That mechanism governs the assembly of molecules into living organisms, including sewing DNA strands into the familiar double helix and assembling the protective membrane that houses every cell. It's also the basis of one of the fastest-growing areas of research: supramolecular chemistry. “We're starting to bring supramolecular chemistry to the same level of predictability as traditional organic synthesis,” says Jerry Atwood of the University of Missouri, Columbia.

    This success, say chemists such as Atwood, in time may revolutionize fields ranging from polymer science to tissue engineering. “Supramolecular chemistry is going to allow us to bridge the gap between the macro world and the atomic world, and that will have a tremendous impact on a wide variety of fields, such as diagnostics and microelectronics,” says Chad Mirkin, who heads the Institute for Nanotechnology at Northwestern University in Evanston, Illinois.


    This organic compound mimics the simple structure of a virus by assembling itself along a series of symmetrical planes (green).


    Despite biology's considerable lead in working with loose-knit bonds, scientists have learned plenty since chemists began dissecting the nature of the chemical bond. Starting in the mid-1800s, researchers characterized a variety of types of noncovalent bonds, such as hydrogen bonds, van der Waals forces, π-π interactions, and other energetic influences that attract atoms to one another without causing them to exchange electrons formally.

    Even though these bonds are typically weak by themselves, clusters of them produce a powerful adhesive. Two of nature's favorites are links made with hydrogen bonds—formed when a hydrogen atom is shared by two electron-rich atoms, such as oxygen or nitrogen—and hydrophobic interactions, which occur when oily molecules pack together to limit their exposure to water. These two forces combine to stabilize the double helix of DNA: Hydrogen bonds pull the two strands together, while the hydrophobic rings of the nucleic acid bases stack upon one another to make the strands more rigid. This stacking is further stabilized by the interactions between π-electron clouds that circle above and below these rings.

    Chemists have long understood the basic workings of noncovalent bonds, but they're still getting a handle on manipulating large numbers of them. “Supramolecular chemistry is the science of the noncovalent bond, and this isn't an expertise that we synthetic chemists have really developed,” says Sam Stupp, a materials scientist at Northwestern. “We still don't know many of the rules for how small molecules recognize one another and assemble to make stable, large structures.” But, he adds, “we are getting to the point where we can think about how we would put together large molecular complexes and then actually develop a rational approach to accomplish that synthesis.”

    This more rational approach has been particularly useful in designing molecules capable of assembling themselves. Several groups have begun doing just that by mimicking the way viral coat proteins self-assemble into beautifully symmetric, hollow capsules. These capsules are rugged enough to survive both inside and outside of the body, yet fragile enough to dissolve and release their contents once they burrow their way inside a target cell.

    Atwood, for example, teamed with graduate student Leonard MacGillivray, who has since moved to the Steacie Institute for Molecular Sciences in Ottawa, Canada, to design capsules that can put themselves together and even repair tears. After examining the crystal structure of viruses, the two chemists created small molecules with the correct symmetry and placement of atoms to allow multiple hydrogen bonds to form.

    Using this scheme, last year, Atwood's group created hollow spheres from six identical subunits in which each is held to its neighbor by eight hydrogen bonds. Crystallographic analysis showed that each sphere contained 1510 cubic angstroms of empty space. Such hollow molecules are typically very difficult to create by traditional organic synthesis. More importantly, says Atwood, these large cavity-containing structures self-assemble even when closely related building blocks are present in solution. “So there's not only self-assembly occurring, but also a recognition process that we don't yet understand,” he adds. Atwood believes that someday such capsules could be used to release drugs in the vicinity of their selected targets and help separate large molecules on the basis of size or shape.

    Other groups are looking to DNA for a little guidance in making new polymers. At the European Institute of Chemistry and Biology in Talence, France, for example, Ivan Huc and his colleagues last year created the first synthetic polymer that in solution assembles itself into a double helix. The team started with two closely related subunits—2,6-pyridine dicarboxylic acid and 2,6-diaminopyridine—that they could link together covalently into single strands. In dilute solution, the resulting polymer twists into a helix, which the researchers determined by using nuclear magnetic resonance (NMR) spectroscopy.

    Many polymers, both natural and synthetic, form similar single-stranded helices. Huc's polymer, however, matches up with a second strand, creating a DNA-like double helix, a structure his team confirmed with NMR and crystallographic studies. Still, the two helices had one big difference: The bonding that creates the polymer double helix is the reverse of what happens in DNA.

    In DNA, hydrogen bonds between the base pairs hold the separate strands together while the interaction between electron clouds that hover above and below the ring-shaped bases helps stabilize the helical structure. Meanwhile, in the French team's polymer, interaction between electron clouds around the pyridine rings holds the two strands together, while hydrogen bonds within each strand provide rigidity to the resulting structure.

    Huc's group has since created other double helices using a variety of different pyridine groups as building blocks. Its next goal is to increase the distance between the two strands enough to make supramolecular structures that mimic ion channels. Those structures, in theory, could be used to deliver drugs across the cell membrane.

    Assembly two-step.

    Organic molecules (left) self-assemble into long, helical supramolecules, which then serve as the templates upon which dissolved minerals crystallize (micrograph at right).


    Steven Zimmerman, an organic chemist at the University of Illinois, Urbana-Champaign, says he believes that polymers such as Huc's also may make excellent candidates for novel self-healing materials such as plastics. “Because you have multiple noncovalent interactions holding such a polymer together, you can actually break a few of these bonds without having the material fall apart,” he says. “It would be like having molecular Velcro, where the strength comes from the fact that you have hundreds of attachment points between pieces.” The arrangement, he adds, “gives broken contact points a good chance of reforming before the bulk connection is destroyed.”

    While Huc has been studying the noncovalent attachments between polymer strands, several groups have been examining the connections within strands. Zimmerman, for example, has developed pairs of molecules with various electron-swapping groups that will recognize each other in solution and form multiple hydrogen bonds between them, analogous to the base pairs of DNA but with far greater sticking power.

    In their latest success, to appear in an upcoming issue of the Proceedings of the National Academy of Sciences, Zimmerman's group synthesized four-part building blocks capable of linking to form 12-membered chains that loop into rings. “We were somewhat surprised to get such large, stable structures, but now that we have, we think we will be able to rationally create monomers that will self-assemble to form interesting materials,” says Zimmerman. If successful, he adds, one result could be a new breed of polymers that could be molded in place when still malleable and unbonded and then coaxed to link up and form a rigid structure.

    Beyond polymers and molecular capsules, some chemists see a role for supramolecular chemistry in engineering replacement tissues and creating replacement parts more readily accepted by the body. In particular, chemists imagine a new way to forge links between organic and inorganic materials, a tough nut for materials scientists to crack. “If you don't have to rely on covalent bonds, then there are many avenues to bring organic and inorganic molecules together in a structurally durable manner,” says bioengineer Buddy Ratner, director of the Center for Engineered Biomaterials at the University of Washington, Seattle.

    Last year, for example, Northwestern's Stupp and postdocs Jeffrey Hartgerink and Elia Beniash created two-part organic molecules that automatically assemble themselves into fibers. When placed in a solution of calcium, phosphate, and hydroxide ions, these fibers coax tiny crystallites of hydroxyapatite, the same material that gives bone its toughness, to form on their surfaces. Such self-assembled materials, Stupp believes, open the door to making synthetic bone replacements (Science, 23 November 2001, p. 1635). Stupp's group has extended the work to topping self-assembled organics with other minerals such as cadmium sulfate, which could prove useful as low-resistance wires for molecular-scale electronics.

    Ratner's group is starting with the inorganics and topping them with organics. For example, together with chemist Thomas Boland, who is now at Clemson University in South Carolina, Ratner showed in 1995 just how easily the two could meld. “We made the accidental discovery that if you dipped a piece of gold into a dilute solution of nucleic acid in ethanol, boom, they self-assembled into ordered two-dimensional films that were anchored to the gold. It was that simple,” says Ratner.

    Ratner's group has gone on to develop several systems that self-assemble into large supramolecules capable of coating the metallic surfaces of implantable medical devices. In its most recent work, the Seattle team has developed a method for depositing polyethylene glycol studded with wound-healing proteins onto metallic surfaces. “Repairing a wound is actually a fine example of supramolecular chemistry, and we're trying to encourage that to happen in such a way that the collagen and extracellular matrix components of tissue will form a supramolecular complex with an organic coating on an implant,” explains Ratner. “In essence, we want to build a supramolecular complex on a supramolecular complex, to reconstruct a more natural anatomy around a device.”

    One hundred years ago, Hermann Emil Fischer of Germany won the second Nobel Prize awarded in chemistry for synthesizing complex organic molecules using new covalent-bond-forming reactions. That feat ushered in a golden century for synthetic chemistry. In 1987, the Nobel committee recognized the emergence of supramolecular chemistry by awarding the chemistry prize to Donald Cram, Jean-Marie Lehn, and Charles Pedersen for work on creating small molecular complexes held together by noncovalent forces. In the 15 years since then, researchers have labored to extend the power of organic synthesis from forming single complex molecules to forming entire molecular assemblies. “There's tremendous science to do here,” says Northwestern's Mirkin. “And if we're successful, the impact on materials science and on everyday life will be enormous.”

  16. Can Chemists Assemble a Future for Molecular Electronics?

    1. Robert F. Service

    Researchers hope to create a new technology by finding ways to get molecule-sized circuits to put themselves together

    2001 was a banner year for molecular electronics, the nascent effort to make electronic circuitry from molecule-sized components. Several teams wired up their first molecular-scale logic and memory circuits, a feat that earned them Science's Breakthrough of the Year (Science, 21 December 2001, p. 2442). But the field's pioneers have always had a bigger dream: to make chips useful for applications such as cheap, throwaway electronics that silicon can't touch. To do so, they must find a new way to pattern the millions of transistors, wires, and other devices needed to make complex circuitry. Without that breakthrough, molecular electronics could slide from being one of the hottest fields in nanoscience to a mere curiosity.

    The solution, many researchers believe, lies in the idea of self-assembly, making individual molecular components so that each plugs itself into a structure just where it's needed. After years of making individual devices, researchers are now beginning to tackle that challenge. “One of the next big things is making big advances in the assembly area,” says Charles Lieber, a chemist and molecular electronics expert at Harvard University in Cambridge, Massachusetts.

    On track.

    Nanowires are assembled into crossbars (above). Organic molecules between crossing wires serve as transistors (right).


    The goal of chemically programming millions of molecules evokes everything from smirks to outright scorn from conventional electronics makers, who view its technological prospects as a long shot at best. “Two years ago, everyone thought we were idiots,” admits James Heath, a chemist who heads a molecular electronics effort at the University of California, Los Angeles (UCLA). But new results suggest that researchers looking to self-assemble molecular electronics are beginning to make headway. In January, at a meeting in Phoenix, Arizona, Heath was one of several team leaders presenting progress reports on using self-assembly together with other techniques to help make the most complex molecular electronic circuitry ever constructed. “This isn't at the stage where you can shake something in a beaker and pull it out and presto,” he says. “But there is some self-assembly involved.”

    Heath reported that his UCLA group, along with Pierre Petroff's group at the University of California, Santa Barbara, has developed an assembly process to create a framework for molecule-based circuitry more than 1000 times as dense as what current chip-patterning techniques can achieve. The key step is to create small groups of parallel nanowires stacked atop each other like tiny crossbars. Heath's group, together with that of Fraser Stoddart at UCLA and colleagues at Hewlett-Packard in Palo Alto, California, previously showed that tiny, switchlike organic molecules trapped at the intersections of similar crossbars could act like molecular-scale transistors, and that crossbar circuits, in which each junction contains a layer of molecular switches, could function as a type of rewriteable memory. In that case the wires were much larger, each a couple of micrometers across. The researchers' new technique may enable them to fabricate similar memory circuits at densities that finally take advantage of their ultrasmall molecular switches.

    Heath's team, led by postdoc Nick Melosh, used materials called quantum well superlattices as templates for growing a variety of different types of conducting nanowires. In a superlattice, researchers use a technique called molecular beam epitaxy to essentially spray-paint atomically thin layers of different semiconductors one atop another. Heath's team turned the superlattice on its side and selectively deposited nanowire material onto just one component of the superlattice, thereby translating the atomic control over film thickness into atomic control over nanowire diameter and spacing between neighboring wires. The researchers then used a still-proprietary technique to transfer the parallel nanowires to a substrate. Finally, they repeated the process with a second set of nanowires and deposited them on top of and perpendicular to the first set. The result: a group of about 100 nanowire junctions patterned at a density of about 300 billion junctions per square centimeter.
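    The reported density figure can be sanity-checked with simple geometry. The sketch below is an illustrative back-of-envelope calculation, not a method from the article; the 18-nanometer pitch is an assumption chosen to reproduce the reported number, assuming one junction per cell of a square crossbar grid.

```python
def junction_density_per_cm2(pitch_nm: float) -> float:
    """Junctions per square centimeter for a square crossbar whose wires
    have the given center-to-center pitch in nanometers: one junction
    per pitch-by-pitch cell."""
    pitch_cm = pitch_nm * 1e-7  # 1 nm = 1e-7 cm
    return 1.0 / (pitch_cm ** 2)

# A pitch of roughly 18 nm reproduces the density Heath's team reported:
print(f"{junction_density_per_cm2(18.0):.2e} junctions/cm^2")  # ~3.1e+11
```

    At an 18-nanometer pitch, the grid yields about 3 × 10^11 junctions per square centimeter, consistent with the article's figure of 300 billion.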

    “It's really beautiful stuff,” says Lieber of the new result. But Heath readily admits that his team is far from turning the tiny crossbars into useful devices. “Now we have to figure out how to wire them up,” Heath says. He's looking for a better approach than the painstaking method of connecting the wires one at a time.

    Lieber's Harvard team has also made dramatic progress in fabricating crossbars. For their work, which they described in Phoenix, researchers used a combination of flowing fluids and other techniques to self-assemble similar crossbar structures with nanowires less than 100 nanometers apart. Lieber also declines to offer more details until the work is published. But he says that the field's new focus on using self-assembly to construct complex circuitry opens the door to commercial applications. “I'm more optimistic than I was even 6 to 8 months ago,” Lieber says.

    Other groups are taking biological routes in hopes of achieving similar goals. At Pennsylvania State University, University Park, for example, chemist Christine Keating is working to use DNA to guide the assembly of nanorods in precise arrangements. The approach links one single-stranded DNA sequence to a nanorod and its complement to a surface, allowing the two strands to find each other in solution and knit themselves together. Keating's team hopes to coax circuits to wire themselves up in predesigned arrangements.
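    The recognition rule behind DNA-guided assembly can be sketched in a few lines. This is a toy illustration only; the sequences and function names are hypothetical, and real hybridization also depends on temperature, salt, and tolerance for partial matches. A rod-tethered strand binds a surface-tethered strand only if the two are Watson-Crick complements.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def hybridizes(rod_strand: str, surface_strand: str) -> bool:
    """True if two single strands can pair along their full length."""
    return surface_strand == reverse_complement(rod_strand)

rod = "ATGCGT"
print(hybridizes(rod, "ACGCAT"))  # True: the reverse complement pairs
print(hybridizes(rod, "ATGCGT"))  # False: identical strands do not pair
```

    Because each nanorod carries a distinct sequence, in principle each one docks only at the surface site bearing its complement, which is what lets the strands act as addressable glue.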

    Physicist Ranganathan Shashidhar and colleagues at the Naval Research Laboratory in Washington, D.C., also are taking a page from nature. They're decorating tiny viral particles with metal nanoparticles and switching molecules in the hope that each one will act like an individual device. Then they plan to let the viruses assemble themselves into arrays—something they do automatically—to create complex interconnected circuit networks. The effort remains in the preliminary stages, Shashidhar says, but hopes are high because biology has already solved the self-assembly problem.

    Efforts to use self-assembly for molecular electronics are nowhere near mature enough to be called a technology. “We're still in the land of science here,” says Heath. But researchers are growing more confident that self-assembly can boost molecular electronics to sufficient levels of complexity to result in practical applications. “I do think self-assembly is up to the task,” says Lieber. Even if molecular electronics circuitry never matches the speed and reliability of silicon, those applications will undoubtedly bring an end to the smirks.

  17. Building Better Photonic Crystals

    1. Robert F. Service

    Self-assembly has the potential to transform not only electronics but photonics as well. Researchers are using self-assembly techniques to grow photonic crystals, devices in which a regular array of tiny holes or other features patterned into a material steer light beams exactly where researchers want them to go. By building such beam-steering devices into computer chips, technologists hope to simplify and speed the conversion of digital information from photons—used to carry data over communications networks—into electrons, which process information on chips.

    To make photonic crystals, many researchers currently employ the same lithographic chip-patterning techniques used in advanced electronics. But lithography has trouble making thick three-dimensional (3D) photonic crystals, which are potentially the best performers. Self-assembly, by contrast, has an easier time of making thick devices, and the process can be dirt cheap. But it's hard to create self-assembled devices free from debilitating defects and to add light-guiding features. Recent progress is bringing renewed hope, however. “The field is really beginning to pick up steam,” says Paul Braun, a materials scientist at the University of Illinois, Urbana-Champaign.


    This stack of tiny glass spheres can be used to create a photonic crystal.


    In the 15 November 2001 issue of Nature, for example, David Norris and colleagues at the NEC Research Institute in Princeton, New Jersey, Princeton University, and the A. F. Ioffe Physical-Technical Institute in St. Petersburg, Russia, described a way to make large, defect-free photonic crystals. Norris—now at the University of Minnesota, Twin Cities—simply dipped one end of a vertical silicon wafer into a solvent containing a suspension of tiny glass spheres. As the solvent evaporated, the surface of the liquid slowly traveled down the wafer, coating it with spheres that assembled themselves into a perfect crystalline arrangement.

    To turn this collection of spheres into a photonic crystal, Norris's team exposed it to hot silicon vapor, which slowly diffused into the crevices between the spheres, filling them with silicon. Then they dissolved the spheres with hydrofluoric acid, leaving the silicon dotted with air-filled holes. The silicon network and the air have very different indexes of refraction (a measure of the speed of light in each material), which combine to form an optical filter that transmits only certain wavelengths of light, reflecting all others.
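    The wavelengths such a filter reflects can be estimated with a simple Bragg-type calculation. The sketch below is illustrative only: the sphere size, the 26% backbone fill fraction, the silicon refractive index, and the volume-weighted effective index are crude assumptions for a generic inverse opal, not values taken from the Norris paper.

```python
import math

def stop_band_nm(sphere_diameter_nm: float, n_backbone: float = 3.45,
                 backbone_fill: float = 0.26) -> float:
    """Rough first-order stop-band wavelength (nm) for an fcc inverse
    opal: air spheres (n = 1) in a backbone that fills the interstitial
    volume of the original close-packed sphere template."""
    d_111 = sphere_diameter_nm * math.sqrt(2.0 / 3.0)  # fcc (111) spacing
    n_eff = backbone_fill * n_backbone + (1.0 - backbone_fill) * 1.0
    return 2.0 * d_111 * n_eff  # Bragg condition: lambda = 2 * d * n_eff

print(f"{stop_band_nm(500):.0f} nm")  # prints "1337 nm"
```

    With these assumed numbers, spheres 500 nanometers across put the reflected band in the near infrared; in this crude picture, tuning the sphere size shifts the band, which is why templating with monodisperse spheres gives such direct control over the optical response.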

    Braun calls the NEC work “tremendous,” in part because photonic crystals wind up atop a silicon wafer—just the place to integrate photonic devices with electronic components. So far, these photonic crystals are more like perfect mirrors than devices in themselves. But “the mirror is a starting point,” Braun says. “If you don't have it, the rest is academic.” To make light-steering devices, researchers must doctor the crystals to allow precise wavelengths of light to enter and then maneuver them through the material, a process akin to making a wire for light inside the crystal.

    That's where Braun's group has recently been making progress. In the 4 February 2002 issue of Advanced Materials, Braun, postdoc Wonmok Lee, and graduate student Stephanie Pruzinsky reported using a laser to thread a type of light wire called a waveguide through a photonic crystal. Like the Norris group, they used self-assembly to create a 3D array of tiny spheres. They then filled the space around the spheres with a plastic precursor called a monomer. Finally, they focused a pair of lasers on a spot within the crystal. Where the lasers met, their combined energy cured the monomer into a rigid polymer. By sweeping the focus of the two beams through the crystal, the researchers patterned their waveguide where they wanted it to go. Braun cautions that the polymer light pipe is still untested. But Norris calls it “an intriguing first step.”

    Guiding Light.

    A laser-cured polymer (yellow) creates a path for photons to travel through a photonic crystal.


    Another first recently came from a team led by Peter Mach and Pierre Wiltzius at Bell Laboratories, the research arm of Lucent Technologies in Murray Hill, New Jersey. In the March issue of Physical Review E, they reported progress in converting self-assembled photonic crystals into switches that can redirect light in different directions. This effort started out like the NEC effort, creating a solid framework around air-filled holes. But then the Lucent team, together with team members at Harvard and the University of Pennsylvania, filled the holes with liquid crystals, tiny rod-shaped molecules that swivel in unison in response to an applied electrical field. When the researchers switched on a field, a light beam hitting the crystal diffracted, altering its direction. Eventually, Mach says, such switches may prove useful for creating chip-sized devices capable of routing photonic data for optical networking.

    The recent reports show that self-assembled photonic crystals can be made defect-free, incorporate waveguides, and switch light beams. Now comes the hard part, Braun says: “to combine all three.”
