# News this Week

Science  17 Jun 2005:
Vol. 308, Issue 5729, pp. 1722
1. ITALIAN SCIENCE

# Abstentions Scuttle Drive to Liberalize Italy's Embryo Laws

1. Gretchen Vogel

An attempt to loosen tight restrictions on in vitro fertilization (IVF) in Italy failed when only 26% of the electorate turned out to vote in a referendum on 12 and 13 June, missing a required 50% quorum. The result was precisely what Catholic Church leaders sought; it means Italy will continue to forbid research using embryos from IVF procedures. Italian scientists are still allowed to work with any imported human embryonic stem (hES) cells—although funding is scarce—but they cannot derive new ones.

The Catholic Church campaigned strongly to persuade voters to stay away from the polls. Under Italian law, a referendum is invalid if fewer than half the eligible voters participate. The abrupt fade-out marked the end of a bitterly fought campaign. Referendum opponents accused proponents of overstating the promise of embryo research, comparing it to Nazi experiments. Many scientists and referendum supporters accused the government of dirty tricks and distortions.

Until January, Italy had no laws on IVF treatments or embryo research, enabling the infamous claims of gynecologist Severino Antinori, who said he was trying to create the first cloned human. But a law passed in February 2004 put tight restrictions into effect 6 months ago. It forbids the creation of more than three embryos per IVF attempt, all of which must be implanted in the potential mother, and it outlaws the donation of sperm or eggs. It also imposes a fine of more than $1 million for any attempt at human cloning. The law was controversial from the start: Members of Parliament offered more than 350 amendments. But none were allowed, and the law passed 277 to 222.

Before the law took effect, Italy's far-left Radical Party collected nearly twice as many signatures as the 500,000 required for a petition to put four parts of the law up for review in a referendum—the ban on embryo research; giving legal rights to the human embryo; the ban on gamete donation; and the requirement that only three IVF embryos can be created, all of which must be implanted.

In the weeks leading up to the vote, the campaign intensified, with researchers and patient groups staging several days of hunger strikes to protest what they called an unfair attack on the referendum. Opponents, including Pope Benedict XVI and Catholic bishops, calculated that a combination of voter apathy, summer vacations, and deliberate abstentions would nullify the vote. Scientists were active on both sides, with several researchers, including Angelo Vescovi of the Stem Cell Research Institute at the University of Milan-Bicocca, saying new hES lines were not necessary to advance stem cell research. Vescovi appeared on campaign posters saying he planned not to vote. On the other side, more than 130 scientists from around the world signed a letter urging Italians to change the law.
Elena Cattaneo of the University of Milan, one of a handful of researchers working with hES cells in Italy, says she is bitterly disappointed by the vote. “It is a pity for science,” she says. “The scientific community wasn't able to express itself the way we should have done.” Marco Cappato, a member of the Radical Party and a leading campaigner for the referendum, says he and his colleagues will try to make the law an issue in the next elections, expected next spring, and win enough votes to change it in Parliament.

2. SCIENTIFIC PUBLISHING

# Society Bars Papers From Iranian Authors

1. Yudhijit Bhattacharjee

Six months after scientific societies and publishers won a hard-fought battle with the U.S. government to edit and publish manuscripts from countries under a U.S. trade embargo, the American Institute of Aeronautics and Astronautics (AIAA) has decided to bar such submissions from its journals and conferences. The institute says the ban, which falls hardest on scientists from Iran, is necessary to protect national security. But other scientific associations say the decision is wrong-headed and could actually limit U.S. access to scientific developments in the four embargoed countries: Iran, Cuba, North Korea, and Sudan. The policy, adopted last month, states that the institute, “consistent with U.S. laws and policies, shall not knowingly provide products or services to, or engage in formal, technical information exchange with individuals or entities residing in embargoed nations.” As a result, AIAA pulled 24 Iranian-authored submissions from its seven journals and cancelled presentations by Iranian researchers scheduled to attend an AIAA-sponsored fluid dynamics conference in Toronto. The conference, held 6 to 9 June, was subsequently exempted from the ban after protests by Iranian attendees who had already finalized their travel plans. AIAA's position that the ban is “consistent with U.S.
laws” is incorrect, says Marc Brodsky, executive director of the American Institute of Physics (AIP), which has pushed hard to assure open communication with scientists in the embargoed countries. “There is no law or regulation I know of that requires AIAA to take the actions it has announced,” he says. “Certainly it hurts our security to bury our head in the sand and not learn about what scientists and engineers are doing in the countries the institute has targeted.” In December, the Treasury Department's Office of Foreign Assets Control (OFAC) clarified that publications did not need the government's permission to edit and print papers from anywhere in the world (Science, 24 December 2004, p. 2170). That decision, which reversed an earlier ruling requiring journals to obtain a license in order to edit papers from embargoed countries, came after AIP and other publishers filed a lawsuit against OFAC in October 2004 alleging that the agency was violating freedom of speech. The suit cited a 1988 legal amendment that exempts information from trade embargoes. AIAA did not institute the ban “in response to a specific legal constraint,” institute Executive Director Robert Dickman told Science in an e-mail. Dickman said the board “is balancing our responsibilities as a professional society to foster open exchange in scholarly, scientific, and engineering information with our social responsibility to avoid assisting a nation such as North Korea in its efforts to develop nuclear weapons and the capability to deliver them.” One of the first victims of that policy was Masoud Darbandi, an aerospace engineer at Sharif University of Technology in Tehran, who received a note from Dickman on 26 May that AIAA had withdrawn his paper on high-temperature irradiance from a forthcoming issue of the Journal of Thermophysics and Heat Transfer. “It was a basic science paper, and I am not involved with any military projects,” he says. 
“If the paper had been about a military application, the U.S. would actually benefit from its publication; what better way to get inside information about Iran's military?” The policy has triggered internal dissent, according to some AIAA staff members who requested anonymity. “We're hopeful that it will be reversed,” says one.

3. GEOCHEMISTRY

# New Geochemical Benchmark Changes Everything on Earth

1. Richard A. Kerr

Whether you're navigating the expanse of the Pacific or Earth's 6370-kilometer-deep interior, it's best to have a star to guide you. Geochemists exploring the dark realms of the planet's rocky mantle and iron core depend on the elemental and isotopic composition of magmas, which have risen from the deep interior, and meteorites, which were the building blocks of the rocky planets. Their pole star has been an assumption about the composition of Earth's rock—until now. According to a paper published online this week by Science (www.sciencemag.org/cgi/content/abstract/1113634), for decades researchers have been following the wrong star. New, more-precise measurements of the neodymium isotope composition of meteorites show that geoscientists have been using an incorrect composition of the rocky Earth as a benchmark to infer everything from mantle compositions to the amount of interior heat being generated by radioactive decay. “It's a fantastic paper,” says geochemist Stanley Hart of Woods Hole Oceanographic Institution in Massachusetts. “This is going to change the way we think about the interior of the Earth. There are so many things this is going to impact.” For starters, either Earth was made from only one part of an inexplicably lumpy primordial pudding, or very different rock unlike anything sampled so far settled to the bottom of the mantle early on and has been altering the behavior of the rest of the planet ever since.
A key to the geochemical benchmarking of Earth has been the ratio of the isotope neodymium-142 to neodymium-144 (142Nd/144Nd), which can be used to trace geologic processes. In the early 1980s, researchers measured this ratio in both chondritic meteorites—which are made of the same primordial rock that formed Earth—and a variety of terrestrial rocks. The ratio was the same on Earth and in meteorites, within the error of the analyses. So everyone assumed that the average 142Nd/144Nd ratio of terrestrial rock had not been changed since Earth formed. Any chemical processing in the first several hundred million years—such as partial melting of rock or crystallization of magma—would have tended to separate samarium-146—whose radioactive decay produces neodymium-142—from neodymium-144, altering the 142Nd/144Nd ratio seen today.

No one seriously tested the result until geochemists Maud Boyet and Richard Carlson of the Carnegie Institution of Washington's Department of Terrestrial Magnetism in Washington, D.C., used a markedly improved mass spectrometer to measure the 142Nd/144Nd ratio of a variety of chondrites. Their value fell in the range published 25 years ago, but their much more precise analytical technique revealed that 142Nd/144Nd is 20 parts per million lower in chondrites than in terrestrial rocks. Boyet and Carlson think that the meteorite-Earth difference most likely arose just 30 million years after Earth formed. Some process—perhaps the progressive crystallization of Earth's early “magma ocean”—may have concentrated certain elements in the remaining melted rock, the way frozen seawater loses most of its salt to the sea. Such an “enriched” part of the interior has never shown up in volcanic rocks. If it's there, geochemists have calculated the composition of the rocky Earth from samples of “depleted” rock, with misleading results. Boyet and Carlson suggest that the enriched rock ended up at the bottom of the mantle.
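Isotope geochemists usually quote an offset like the one Boyet and Carlson measured as a deviation from a terrestrial standard in parts per million. A minimal sketch of that arithmetic; the terrestrial 142Nd/144Nd value used below (~1.141837) is an illustrative assumption, not a number given in the article:

```python
def mu142(ratio_sample, ratio_standard):
    """Deviation of a 142Nd/144Nd ratio from a standard, in parts per million."""
    return (ratio_sample / ratio_standard - 1.0) * 1e6

# Illustrative terrestrial standard; a chondrite measuring 20 ppm lower:
terrestrial = 1.141837
chondrite = terrestrial * (1 - 20e-6)
print(round(mu142(chondrite, terrestrial)))  # -> -20
```

A 20 ppm shift is tiny in absolute terms (a change in the fifth decimal place of the ratio), which is why the earlier, less precise measurements could not resolve it.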
To judge by other isotopic and chemical data, this enriched region could make up as much as 30% or as little as 5% of the whole mantle. At the low end, it would be small enough to fit into the 200-kilometer-thick D'' (pronounced “D double prime”) layer, a mysterious region seismologists have spotted at the bottom of the mantle, hard against the molten-iron core. A geochemically enriched bottom layer “explains a lot of things,” says Carlson. One might be Earth's missing heat. The average Earth rock—as previously calculated—didn't contain enough radioactive, heat-generating uranium, thorium, and potassium to account for half of the heat seeping out of the planet. But an enriched layer would contain 43% of all heat-producing elements. From the bottom of the mantle, a hot, enriched layer could help drive the core's geodynamo, which generates Earth's magnetic field. And the hot layer could also produce rising plumes of hot mantle rock that feed volcanic hot spots such as Hawaii and Iceland. “It's a very exciting result,” says geochemist Stein Jacobsen of Harvard University, who co-authored the 1980s papers on the first 142Nd/144Nd measurements. “I don't doubt their measurements at all, and the interpretation is reasonable. But at this early stage, I don't think it's the only one.” Boyet and Carlson acknowledge that the Earth-meteorite difference might have arisen when the planets formed. Perhaps the primeval disk of gas and dust differed in composition from place to place. True enough, says planetary physicist David Stevenson of the California Institute of Technology, but such heterogeneity “is not an attractive idea. There is no obvious physical reason why it would be so.” Wherever the truth lies, “this is going to create an incredible hum and a lot of work,” says Hart.

4. MOLECULAR BIOLOGY

# Nucleosomes Help Guide Yeast Gene Activity

1. Jean Marx

If traffic lights aren't precisely coordinated, accidents and traffic jams result.
Likewise, for an organism to develop and remain healthy, cells need to turn their genes on and off at exactly the right times. Numerous proteins known as transcription factors assist in this task, settling in on regulatory sequences in DNA and activating or inactivating associated genes as needed. But genetic material is more than just strands of DNA. It also includes complexes of proteins arranged with the DNA in beadlike structures called nucleosomes. Recently, studies have suggested that these chromosomal beads also contribute to gene regulation. The latest example comes from Oliver Rando and his colleagues at Harvard University. In a paper published online this week by Science (www.sciencemag.org/cgi/content/abstract/1112178), they describe a technique that has allowed them to determine the exact positions of all the nucleosomes over large stretches—roughly 500 kilobases—of the yeast genome. The work, which cell biologist Bradley Cairns of the University of Utah in Salt Lake City calls a “technical tour de force,” suggests that nucleosomes help direct transcription factors to the regulatory sites of their target genes. For many years, researchers thought that nucleosomes were distributed more or less evenly along a chromosome's DNA. However, examination of a few genes revealed that their promoters, regulatory regions that contain binding sites for transcription factors, lacked nucleosomes. Then, last year, two groups, one led by Jason Lieb of the University of North Carolina, Chapel Hill, and the other by Bradley Bernstein and Stuart Schreiber of Harvard University, reported that this seemed to be the case for promoters throughout the yeast genome. The Rando team has now provided the most detailed look yet at nucleosome positioning in the yeast genome. The researchers first treated yeast DNA with a DNA-digesting enzyme that removes the regions that connect one nucleosome bead with the next. 
The nucleosomal DNA itself survives because it is protected by associated proteins called histones. After isolating the nucleosomal DNA and labeling it with a green fluorescent dye, they mixed it with digested fragments of total genomic yeast DNA labeled with a red fluorescent dye. The researchers then applied the mixture to a microarray chip covered with thousands of overlapping 50-base DNA sequences that covered the length of chromosome 3, a span of nearly 500 kilobases. These probes latched onto the corresponding fluorescently labeled fragments of yeast DNA. By plotting the green-to-red ratio for each spot on the chip, Rando's team could determine the exact position of nucleosomes along chromosome 3. This analysis showed that most—nearly 70%—of nucleosomes occupy the same chromosomal locations in every yeast cell. This was indicated by the fact that the green-to-red ratio peaks indicating their positions were sharp and not smeared out across the DNA. That finding was surprising. “If nucleosomes were left to their own devices, they would be pretty happy [thermodynamically] sitting almost anywhere,” Rando says. In keeping with the previous results, those positions are outside the promoter-containing regulatory sites for most genes. Indeed, nucleosomes usually flanked those sites, creating what Lieb calls “helicopter landing pads” for transcription factors. A question that remains is whether those promoter sites are nucleosome-free all the time, or whether transcription factors push nucleosomes out of the way so they can land. There is some evidence for the first idea. Rando and his colleagues found that the regulatory site for a gene turned on by heat and other stresses lacked nucleosomes even before the stress was applied. The promoter sites may be held open, Rando says, because they contain stretches of adenine-thymine nucleotide pairs, which tend to incorporate poorly into nucleosomes because of their rigidity. 
However, other work, including that of the Lieb and Bernstein-Schreiber teams, indicates that gene activation leads to shifts in nucleosome positioning. Lieb suggests that the Rando team's finding may help explain other phenomena in addition to gene regulation. He points out that chromosome exchanges during meiosis and the insertion of moveable elements called transposons both tend to occur just before the beginning of genes—regions now known to often lack nucleosomes. Nucleosome positioning could “provide a structural basis for a broad range of observations,” Lieb predicts.

5. INTELLECTUAL PROPERTY

# U.S. Patent Reform Begins Long Journey Through Congress

1. Eli Kintisch

Enshrined in the original text of the Constitution, the U.S. patent regime is showing its age. Last week, Representative Lamar Smith (R-TX), an influential House member, introduced a bill intended to bring the system into the 21st century. But his proposed reforms have sharpened a debate between biopharma and other high-tech sectors over whether to strengthen or weaken the power to enforce an issued patent. The road to patent reform is not going to be an easy one. There is general agreement that reform is needed, however. A 2004 National Research Council (NRC) panel identified many “new strains” on the system, including increasingly complex technologies, more applications, and new forms of intellectual property, that are threatening to overwhelm the U.S. Patent and Trademark Office (PTO) as well as the courts. The panel argued that the office needs more examiners to handle a rising workload and that U.S. examiners are more likely to say yes than are their counterparts around the world.
“Inventors are induced to make marginal applications by their likelihood of success, and the resulting flood of applications overwhelms the patent office,” Harvard Business School professor Joshua Lerner wrote in testimony last week before the House Judiciary Committee panel on intellectual property that Smith chairs. Drugmakers and bench researchers alike complain of litigation costs, and lawyers now advise scientists not to read patents—undercutting the system's basic aim of dissemination—so that they can claim ignorance if they are later sued for infringement. Smith's bill, the Patent Reform Act of 2005 (H.R. 2795), draws heavily on 5 years of discussion among patent attorneys, inventors, and academics on needed changes. One point of agreement was that individuals or companies should be able to challenge a patent through an administrative process up to 9 months after the patent is granted. Reformers hope that step—in which the PTO would review the scope and validity of a patent—will speed up the resolution of disputes, reduce the number of suits, and prevent bad patents. Smith's bill would also adopt the international norm of according priority to the first inventor to file an application rather than the first to come up with the invention. Other aspects of the bill are more contentious. The disagreements generally reflect a fundamentally different view of intellectual property in the computer and pharmaceutical industries. Computer hardware and software technologies change so fast that the industry generally considers patent protection less important. But big high-tech companies say patents are being awarded for small changes in software code, and they regularly receive threats of infringement suits that they have sometimes settled for millions of dollars. The computer industry therefore generally wants to make enforcing patents more difficult. 
Biotech companies, whose products can have a market life of many years and bring in billions of dollars in revenues, on the other hand, generally want to strengthen patent protection. An April draft of the bill had biotech lobbyists apoplectic over language regarding injunctions: the way judges stop infringers from selling another's intellectual property. Reflecting the wishes of the computing industry, it would have barred injunctions unless the patent holder could show potential “irreparable harm” to its business. Biotech attorneys argued that would render many patents unenforceable. After marathon negotiations with both sides, Smith softened that language, calling instead for the judge to consider “relevant interests.” But a clause Smith added—one that would essentially require the judge to stay the injunction during an appeal—opened a new line of disagreement. The new language would mean a “fundamental sea change,” said federal judge T. S. Ellis, speaking before a patent-reform conference last week at the National Academy of Sciences. “Nearly every case will be appealed,” predicts Carl Gulbrandsen, head of the Wisconsin Alumni Research Foundation, which manages intellectual property for the University of Wisconsin. “The [university] start-up program will be sorely interfered with.” Attorney Robert Armitage of Eli Lilly commented wryly that the 63-page bill was “one page too long.” Another concession to the computing industry involves a clause that would allow an accused infringer to challenge the validity of the patent administratively years after the patent was issued and thus avoid court costs. Biotech officials believe this so-called second window would significantly weaken their ability to enforce patents. And Yale president Richard Levin, who co-chaired the NRC panel, called it a “terrible idea.” Even if those issues are resolved, a thicket of others has interest groups jostling for position. 
Universities like the idea of first-to-file because it could eventually save on filings overseas if the United States harmonizes with foreign systems. But they worry that large companies will beat them to the patent office. “The current system has given the university time to file,” said Theodore Poehler, vice provost of Johns Hopkins University in Baltimore, Maryland, speaking at the academy conference on behalf of several university consortia. Poehler says Congress should adopt rules that ensure academic scientists could publish results and not “get scooped by their own work or someone they work with.” But Poehler threw his own monkey wrench into the works by suggesting that universities should receive an exemption from infringement suits for academic research using patented material. That idea—a preliminary one, he stresses—bothers companies that sell patented tools such as reagents and could harm universities that license them. Smith knows he has his work cut out for him. After all, the previous attempt, a relatively minor measure introduced in 1991, took 8 years to pass. “We have a ways to go,” he says gamely.

6. INFECTIOUS DISEASES

# Turf War Halts Spain's Foot-and-Mouth Disease Studies

1. Martin Enserink

A bureaucratic turf battle has forced Spain's premier animal disease center to halt all research on an important veterinary infection, foot-and-mouth disease (FMD), scientists say. Studies on the viral disease by four groups at the Research Center for Animal Health (CISA), in Valdeolmos, 40 kilometers from Madrid, stopped last month because the European Union last year issued a directive that shifted FMD studies to a smaller Spanish lab—one that experts say is ill equipped for this work. Virologists have decried the move as a blow to Spanish FMD research and claimed the move was plotted by the Ministry of Agriculture, Fisheries, and Food to wrest control over the FMD portfolio from Education and Science.
But the government has so far given no official rationale for the shift. “Nobody really understands what has been going on,” says virologist Rafael Fernández Muñoz of the Hospital Ramón y Cajal in Madrid. CISA has been doing research on FMD in its extensive biosafety level 3 facilities since the lab was built in 1993; it also became the country's national reference center for FMD. Nobody disputes that it is home to the bulk of the research in this area, says Luís Enjuanes, a virologist at the Universidad Autónoma in Madrid who is not involved in FMD. The lab is part of the National Institute for Agricultural and Food Research and Technology (INIA), administered by the Ministry of Education and Science. But CISA was snubbed by a September 2003 E.U. directive setting guidelines for FMD control. An annex that listed European centers authorized to handle the highly infectious FMD virus mentioned the Central Veterinary Laboratory (LCV) in Algete—a part of the Ministry of Agriculture—as the only authorized lab in Spain. The Spanish government confirmed the move in a November 2004 law. LCV was an odd choice, says Enjuanes, because it specializes in diagnostics and surveillance, and it has inferior facilities and no basic research program. Despite the decree, CISA continued working on FMD until last month, with the tacit approval of the Education and Science Ministry, says former CISA director Esteban Domingo. But it stopped in May, and on 8 June, an official at INIA headquarters sent CISA staff an e-mail declaring that all FMD work was prohibited. E.U. officials compiled the 2003 list based on the recommendations of member countries, including Spain, but the Spanish government has not explained why it agreed to CISA's exclusion. Researchers suspect a bureaucratic motive: They say Spain's Ministry of Agriculture wants to wrest back control of FMD research, which it lost in a reorganization 5 years ago. 
The Brussels directive apparently took the science ministry by surprise, says Francisco Sobrino, a CISA researcher studying FMD. A spokesperson says that the two ministries “are working to find a solution.” Researchers are outraged, however. In an open letter sent to the ministries in March, 135 virologists asked that CISA be allowed to study FMD again, giving Spain two research centers. France and Germany also have two designated FMD labs, notes Enjuanes. Sobrino says the scuffle over FMD shows that science policy lacks coherence. The Spanish Society for Virology has asked both ministries to resolve the issue. “I hope they work it out,” says Muñoz, who chairs that group, “because there really is no justification at all for this.”

7. ASTRONOMY

# Extrasolar Planets Get Smaller and (Possibly) Harder

1. Robert Irion

Planet hunters are edging toward an alien version of home. Astronomers have identified the littlest planet known to orbit a normal star, with a likely mass just 7.5 times that of Earth—half the mass of the previous record-holder. But the finding's uniqueness isn't quite on firm ground: Although its discoverers say the world is “plausibly rocky,” it may be similar to a batch of exoplanets unveiled last year by competing teams from the United States and Europe (Science, 3 September 2004, p. 1382). The planet circles a red dwarf star called Gliese 876, a dim neighbor just 15 light-years from Earth. It gained notice only because of a striking gravitational resonance between two big gaseous bodies that revolve around the star. The inner planet takes 30 days to orbit, whereas the outer one takes twice as long. As astronomers monitored the planets' back-and-forth tugs on Gliese 876—revealed as regular wobbles in the star's light—a slight distortion in the resonant pattern suggested that a third world was in the mix.
Theorists Eugenio Rivera of the University of California (UC), Santa Cruz, and Jack Lissauer of NASA's Ames Research Center in Mountain View, California, created a detailed model of the system. It pointed to a low-mass inner planet speeding around the star in a mere 1.94 days, the shortest exoplanet “year” yet seen. But to be sure, the team's observers monitored the star as often as possible using the 10-meter Keck I Telescope at Mauna Kea, Hawaii, including an intense program last fall with an improved spectrograph to tease out fine details of the wobbles. It took more than 150 observations spread over 8 years to pinpoint the inner planet, says team leader Geoffrey Marcy of UC Berkeley. Without the pas de deux between the more distant gas giants, their little sibling would have eluded detection thus far, says team member Paul Butler of the Carnegie Institution of Washington, D.C. Moreover, the precise gravitational waltz allowed the team to calculate the planetary system's tilt relative to our view. That geometrical measure—known only for a few other “edge-on” systems—yielded a range of six to nine times Earth's mass for the new planet, rather than simply a minimum mass that astronomers calculate for other exoplanets. The group announced its results on 13 June at a National Science Foundation press briefing in Arlington, Virginia. At its distance of just 3 million kilometers from Gliese 876, the new world probably roasts with a surface temperature of 200 to 400 degrees Celsius. The team leans toward a solid nickel-iron rocky body for the hot planet, perhaps with a steamy atmosphere. “But it could be something more akin to a small Neptune,” notes Lissauer, with a thick, vaporous mantle of water, hydrogen, and other gases. “It could go in either of those directions.” Indeed, too much equivocation remains to claim the first terrestrial analog, says Carnegie theorist Alan Boss, who did not participate in the study. 
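Radial-velocity wobbles alone constrain only the product of a planet's mass and the sine of its orbital inclination, which is why astronomers normally report a minimum mass; knowing the system's tilt, as the team did here, turns that minimum into a true mass. A sketch of the conversion, assuming hypothetical numbers (the 5.75 Earth-mass minimum and 50-degree inclination below are illustrative, not the team's published figures):

```python
import math

def true_mass(m_sin_i, inclination_deg):
    """True planet mass from the radial-velocity minimum mass m*sin(i),
    given the orbital inclination in degrees (90 = edge-on)."""
    return m_sin_i / math.sin(math.radians(inclination_deg))

# Hypothetical numbers: a 5.75 Earth-mass minimum at ~50 degrees
# inclination implies a true mass of about 7.5 Earth masses.
print(round(true_mass(5.75, 50.0), 1))  # -> 7.5
```

Because sin(i) is at most 1, the true mass can only be larger than the radial-velocity minimum, which is why an unknown tilt leaves the planet's nature ambiguous.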
At half the mass of the three exoplanets announced last year—with masses 15 to 20 times that of Earth—the new planet probably isn't a distinct beast. “I view these as likely to be members of a single class,” says Boss. “They are possibly rocky worlds.” The question won't remain open too long, he adds. By finding more such planets, astronomers should spot one that crosses directly in front of its star. That would yield a convincing measure of the object's size and density, and thus its composition. Red dwarf stars are appealing targets for another reason, says team member Steven Vogt of UC Santa Cruz: Their energy output is so feeble that rocky planets in 10-day to 20-day orbits might be cool enough for liquid water to exist. “If nature makes those planets, that's where we want to be looking,” Vogt says.

8. GENETICS

# Jumping DNA Mixes It Up in the Developing Brain

1. Greg Miller

We usually attribute the unique temperaments and talents of each person to a mysterious mix of heredity and environment. A provocative new study, however, suggests that a third factor also plays a role in generating individual differences: bits of genetic material that spontaneously hop around the genome of neurons in the developing brain, altering patterns of gene expression. In the 16 June issue of Nature, the discoverers of this phenomenon, led by Fred Gage at the Salk Institute for Biological Studies in La Jolla, California, propose that the process adds a random twist to neural development that ensures that no two brains—not even those of identical twins—are put together in exactly the same way. Other researchers are intrigued by the idea but say much more work is needed to extend the team's findings, which come from experiments on cultured rat cells and genetically engineered mice, to normal animals—let alone people. “It has the potential to open a new paradigm, but it's not there yet,” says Haig Kazazian, a geneticist at the University of Pennsylvania in Philadelphia.
Gage and colleagues, including postdocs Alysson Muotri and Vi Chu, were investigating a major puzzle in developmental neuroscience: how neural stem cells decide whether to turn into neurons or support cells called astrocytes and oligodendrocytes. Searching for genes that guide this decision, the team noticed that rat cells in the process of becoming neurons contained increased levels of RNAs corresponding to L1 retrotransposons, bits of DNA that can jump from one place in the genome to another by first copying themselves into RNA and then reversing the process. In so doing, they can alter the activity of genes they hop into. Intrigued, the team inserted a human version of L1 into rat neural stem cells, along with a marker that would make the cells glow green whenever L1 made a jump. Many cells lit up, and when the researchers took a closer look, they found that L1 jumped into several genes typically expressed by neurons. In some cases, the jump altered gene expression in a way that influenced the stem cells' fate—making them more likely to turn into neurons, for example. In an additional set of experiments, the team created transgenic mice by endowing the rodents with the same human retrotransposon. L1 did its jumping act in a subset of brain cells but not in cells of other tissues, with the exception of the testes and ovaries, where such retrotransposition was already known to occur. In principle, Gage says, this process could generate distinct neural circuitry in each individual by altering the proportion of different neuron types or levels of gene expression in the brain. The next step, he says, will be to prevent L1 from jumping around and see what effect—if any—that has on brain development in mice.
If L1 does prove to have an important role in brain development, that would come as a surprise to many researchers, says Kazazian: “These are genetic parasites as far as we know, and we never thought they might have a function like this.” Indeed, not everyone is willing to bet that they do. Anthony Furano, who studies retrotransposons at the National Institute of Diabetes and Digestive and Kidney Diseases in Bethesda, Maryland, applauds the Salk team's efforts but harbors doubts about the implications. “It seems unlikely that the relatively random changes caused by L1 could be a serious player in producing individual variation,” he says. “But if they prove otherwise, well, then, wouldn't that be something?”

9. BIOMEDICAL POLICY

# House Approves 0.5% Raise for NIH, Comments on Database

1. Jocelyn Kaiser

A House panel this week approved a 0.5% increase in the 2006 budget for the National Institutes of Health (NIH). The $142.3 million increase, to $28.5 billion, matches the Bush Administration's request. House lawmakers also declined to fully support a scientific society in its dispute with NIH over the agency's new chemicals database. The modest budgetary rise falls far short of the projected 2006 biomedical inflation rate of 3.2%, say NIH's supporters. Representative David Obey (D-WI), who voted against the bill, predicted that it would lead to fewer new grants. “We are concerned about the momentum of biomedical research being affected by this,” says Jon Retzlaff, legislative relations director for the Federation of American Societies for Experimental Biology in Bethesda, Maryland. Backers of NIH hope that the corresponding Senate panel, chaired by Senator Arlen Specter (R-PA), will continue its track record of bestowing more generous increases on the agency. The Senate panel could mark up its bill as soon as July.
The House Appropriations Labor/Health and Human Services/Education subcommittee also appears to have listened to NIH in its fight with the American Chemical Society (ACS) over PubChem, a free NIH database on biologically active compounds. ACS contends that PubChem will duplicate its own vast subscription-based chemical database and should therefore include only data generated by NIH's new molecular libraries screening effort. This spring, ACS took its case to the subcommittee's chair, Ralph Regula (R-OH), whose state is home to the headquarters of the ACS database (Science, 6 May, p. 774). Librarian groups and advocates of open-access publishing, meanwhile, blasted ACS's position. According to several sources, subcommittee staffers discussed whether the House bill should order NIH to scale back PubChem. But after last-minute negotiations between ACS and NIH officials, including NIH Director Elias Zerhouni, the subcommittee settled on less restrictive wording. A report accompanying the House bill notes that PubChem's mission is to “house both compound information from the scientific literature as well as” data from the molecular libraries initiative, but it says the committee is “concerned that NIH is replicating” private information services. The report “urges NIH to work with private sector providers to avoid unnecessary duplication and competition.” ACS issued a statement saying it “is very pleased” and expects “to work diligently with NIH toward a collaborative model.” Supporters of PubChem, for their part, see the House language as a victory for NIH. ACS official Brian Dougherty declined to say whether the society will ask the Senate for help. “We're focused on working with NIH,” he says.

10. EARTH SCIENCE

# The Story of O2

1. Richard A. Kerr

Gaseous oxygen is essential to advanced life, but Earth came with no guarantees that oxygen would abound.
Researchers are piecing together life's complex involvement in oxygen's halting 3-billion-year rise.

In the beginning, Earth was devoid of oxygen, and then life arose from nonlife. As that first life evolved over a billion years, it began to produce oxygen, but not enough for the life-energizing gas to appear in the atmosphere. Was green scum all there was to life, all there ever would be? Apparently, yes, unless life and nonlife could somehow work together to oxygenate the planet from the atmosphere to the deep sea. Earth scientists are flocking to the emerging field of astrobiology to tease out the history of oxygen on Earth from a maddeningly subtle and fragmented rock record. The rise of atmospheric oxygen from nothing to abundance, they are finding, came in two big steps about 2 billion years apart. Relatively simple life probably facilitated the first step up and possibly the second, much to its own detriment but to the benefit of more complex life. “The rise of oxygen changed the course of evolution,” says astrobiologist David Catling of the University of Bristol, U.K. “Atmospheric oxygen was a precursor to advanced life on Earth, and, I would argue, to life elsewhere.” With the new interest in 3 billion years of oxygen history, “there's been a great deal of progress,” says geochemist Donald Canfield of the University of Southern Denmark in Odense. “The field has matured; it used to be a hobby area for most people. I credit NASA's [astrobiology funding] for much of that.” An invigorated field is attacking a host of big questions: When did free oxygen first appear in Earth's atmosphere? What made it appear in the first place? What held it back for so long? And what caused the second, delayed surge of oxygen that allowed advanced animals to appear?

## A certain beginning

Historians of oxygen have always agreed on one thing: Earth started out with no free oxygen—that is, diatomic oxygen, or O2. It was all tied up in rock and water.
For half a century, researchers have vacillated over whether the gases that were there favored the formation of life's starting materials (see sidebar, p. 1732). Without free oxygen, in any case, the first life that did appear by perhaps 3.5 billion years ago had to “breathe” elements such as iron, processing them to gain a mere pittance of energy. For decades, scientists have argued about just how long the planet remained anoxic, and thus home to nothing but tiny, simple, slow-living microorganisms. Until recently, the idea that early Earth was anoxic for more than 2 billion years—as advanced primarily by geochemist Heinrich Holland of Harvard University—dominated the field but had not won the day. Its proponents pointed to diverse evidence. Minerals older than about 2.2 billion to 2.4 billion years found in ancient soils, streambeds, and other sediments seemed to show no sign of ever having been exposed to oxygen. There were no “red beds” of sediment stained with rusted iron minerals, for example. But a small, vocal opposition, headed by Holland's former student Hiroshi Ohmoto of Pennsylvania State University (PSU), University Park, had long believed that Earth's atmosphere was oxygenated back as far as geologists can peer. He and other opponents pointed to mineral bits here and there that they believed had been oxidized 3 billion years ago or longer.

## A step up

That teacher-student debate now appears to be resolved in the teacher's favor. Researchers have in hand an unequivocal method for determining the presence or absence of oxygen early in Earth's history. Introduced by geochemist James Farquhar of the University of Maryland, College Park, and his colleagues in 2000, the sulfur isotope method depends on the way sunlight breaks down sulfur dioxide in the atmosphere. These photochemical reactions can shuffle sulfur isotopes in weird ways, without respect to the mass of the isotopes.
But free atmospheric oxygen wipes out such mass-independent fractionation (MIF) before the sulfur reaches Earth's surface, where the odd mix of isotopes could be preserved in sediments. Farquhar and colleagues found MIF of sulfur in rocks older than 2.4 billion years but not in younger rocks, apparently pinning down atmospheric oxygen's first appearance at levels of at least 1 part per million. That discovery—now buttressed by theoretical work and studies of other rocks—pretty much clinches the case for a late “Great Oxidation Event,” as Holland has dubbed it. “Skeptics would have to reinvent physics to counteract Farquhar's results,” says atmospheric physicist James Kasting of PSU. Although Ohmoto and some associates have yet to give in, “there's a strong consensus in the rest of the community,” says Catling. “In the MIF of sulfur, you have a clear signal that something changed at about 2.4 billion years.” And that permanent rise in oxygen to detectable atmospheric levels seems to have spurred evolution. The earliest known fossil of a eukaryote—the term for organisms, from yeast to humans, that have a cell nucleus and usually require oxygen—is about 2 billion years old. The first fossil big enough to be seen without a microscope—the spiral-chained algae Grypania—appeared 1.9 billion years ago.

## Rumors of oxidation

So what about those signs of earlier oxygen that helped fuel years of debate? Some could be markers of ancient “oxygen oases” in which cyanobacteria—oxygen-producing blue-green algae—seem to have been capturing the sun's energy through photosynthesis for hundreds of millions of years before atmospheric levels exceeded the sulfur MIF limit. In rocks from the Hamersley Basin of western Australia, researchers have found signs that such oxygen did manage to permeate at least small parts of the environment—perhaps just a sea-floor skim of microbial scum—while the rest of the world remained anoxic.
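For readers curious how the sulfur test is quantified in practice, geochemists report the mass-independent anomaly as a deviation of the minor-isotope ratio from the mass-dependent power law. The sketch below is a minimal illustration, not the authors' code; the 0.515 exponent and the roughly 0.2-per-mil detection threshold are standard conventions from the isotope literature, not figures stated in this article:

```python
def delta33_cap(d33, d34):
    """Mass-independent sulfur anomaly, Delta-33S, in per mil.

    d33, d34: delta-33S and delta-34S values in per mil.
    Mass-dependent chemistry obeys the ~0.515 power law, so a
    Delta-33S near zero means no photochemical MIF signal survived.
    """
    return d33 - 1000.0 * ((1.0 + d34 / 1000.0) ** 0.515 - 1.0)

# Mass-dependent sample, as in rocks younger than 2.4 billion years:
print(delta33_cap(2.06, 4.0))  # essentially zero per mil

# Hypothetical Archean sample preserving photochemical MIF:
print(delta33_cap(5.0, 4.0))   # a large anomaly, well above ~0.2 per mil
```

In these terms, Farquhar's result is that only rocks older than about 2.4 billion years yield anomalies clearly away from zero, which is why free oxygen must have been scarce before then.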
When oxygen-producing microorganisms die, they leave behind a telltale mix of carbon isotopes biased toward the lighter isotope, as well as distinctive organic molecules such as steranes. At the astrobiology meeting, geochemist Jennifer Eigenbrode of the Carnegie Institution of Washington's Geophysical Laboratory and colleagues reported finding those geochemical fingerprints in Hamersley rocks. The mix of isotopes and biomarkers changed as the researchers traced them into geologically more recent rocks. Eigenbrode said the shifts point to an increasing role for oxygen-dependent ecosystems, perhaps confined to thick films on the sea floor. In such oxygenated islands, eukaryotes may have appeared hundreds of millions of years earlier than their first recognized fossils, giving them that much more time to evolve their more sophisticated lifestyle.

## What was the holdup?

Better records of the history of oxygen don't always make the historian's job easier. They can also make gaps in the timing harder to explain. For example, evidence from sterane and other biomarkers indicates that oxygen-generating cyanobacteria were in business by 2.7 billion years ago or earlier. Yet the Great Oxidation Event didn't come along for another 300 million years. Why the delay? Researchers have suggested several possible reasons. Geobiologist Joseph Kirschvink of the California Institute of Technology in Pasadena and his doctoral student Robert Kopp note that early cyanobacteria didn't produce oxygen. They argue that the type of photosynthesis that liberated the gas did not appear until 2.4 billion years ago. The explanation discounts a number of geochemical indicators, such as steranes, which are commonly thought to require oxygen for their synthesis. Other geoscientists suspect that the supply of oxygen-devouring volcanic gases such as hydrogen might have started petering out by 2.4 billion years ago, finally allowing oxygen levels to rise.
But recent studies of trace metals in ancient rock derived from the deep Earth seem to show that the supply of reducing gases held steady right up to and through the oxidation event. Catling and colleagues have proposed that high levels of methane (CH4) in the early atmosphere greatly increased the rate at which hydrogen “leaked” into space, allowing photosynthetically produced oxygen to oxidize Earth. In 2001, they pointed out that 3 billion years ago methane produced by anoxia-loving bacteria was probably 100 to 1500 times more abundant than it is today. And hydrogen-bearing methane can freely diffuse to the atmosphere's outer fringes where its hydrogen can make the final jump to space; water's hydrogen gets condensed out at lower altitudes. To test the idea, Catling, astrobiologist Mark Claire of the University of Washington, Seattle, and planetary physicist Kevin Zahnle of NASA's Ames Research Center in Mountain View, California, recently programmed methane into a computer model that keeps tabs on oxygen's comings and goings on early Earth. In the model, volcanic gases and reactions with crustal minerals sop up oxygen as fast as cyanobacteria can churn it out. Without abundant methane to carry hydrogen away to space, Earth remains anoxic indefinitely. But with high atmospheric methane, hydrogen losses to space eventually overwhelm the antioxygen forces, and oxygen levels begin to rise. So the lowly methane-generating bacterium may have unwittingly given oxygen—its deadly enemy—a leg up on the road to an oxidizing world.

## The boring billion

Even more puzzling than the 300-million-year run-up to the Great Oxidation Event is what came next. The advent of oxygen ushered in geology's red beds and life's eukaryotes. Then, for a good billion years, the newcomer eukaryotic algae went nowhere evolutionarily, frozen in time as an advanced sort of green scum. And there is growing geochemical evidence that the Great Oxidation Event wasn't actually all that great.
To understand why not, scientists look to the ocean. Doubts about the oxidation event's greatness arose in 1998 when geochemist Canfield first proposed—on the basis of sulfur isotopes—that all of the ocean's waters except the uppermost layer had remained anoxic for more than a billion years after atmospheric oxygen made its first appearance. In 2002, geochemist Ariel Anbar of Arizona State University, Tempe, and paleontologist Andrew Knoll of Harvard University linked Canfield's idea to the history of life. They suggested that what oxygen the atmosphere held during the time of the “Canfield Ocean”—perhaps 1% to 10% of present levels—had actually starved eukaryotic algae and held them back evolutionarily (Science, 16 August 2002, p. 1104). The atmospheric oxygen, they noted, would have weathered sulfur off the land, dosing the ocean with deadly sulfides, which would have chemically removed the iron and molybdenum from seawater. The algae needed these elements to form enzymes essential to taking up nutrients; without them, they were malnourished and therefore evolutionarily listless.

[Timeline: Proterozoic era. 2.7 billion years ago: cyanobacteria. 2.4 billion years ago: Great Oxidation Event. 1.9 billion years ago: Grypania spiralis, the first eukaryote visible to the naked eye, appears after oxidation. 1.4 billion years ago: eukaryotes. 0.6 billion years ago: Ediacara.]

The Canfield Ocean, and thus the Malnourished Earth hypothesis, has since gained ground. Several groups have extended Canfield's sulfur isotope analysis across the midsection of the Proterozoic era (2.5 billion to 0.54 billion years ago), confirming signs of anoxia. But those studies drew on marine rocks deposited from waters that were at least partially isolated from the open ocean, like today's Red Sea. Perhaps their anoxia was not typical of the ocean at large. Last year Anbar, geochemist Gail Arnold, then at the University of Rochester, New York, and colleagues allayed many of those doubts. In Science (2 April 2004, p.
87), they reported on their analysis of molybdenum isotopes preserved in mid-Proterozoic rocks. The ratio of two molybdenum isotopes depends on the amount of oxygen in the ocean. And unlike many other dissolved elements, molybdenum remains in seawater so long before being removed to the sediment that it has a chance to mix throughout the ocean, even its backwaters. Samples from one place in the Proterozoic ocean, then, should reflect the amount of oxygen in the ocean as a whole. Arnold and colleagues found signs that far more of the ocean floor was anoxic 1.4 billion and 1.7 billion years ago than today. So, almost 4 billion years after the world began, the team of life and Earth that boosted oxygen from nothing to detectable levels still had a ways to go.

## Breakout

What locked a billion years of the Proterozoic in geochemical and evolutionary stasis remains a mystery. But the bigger, more enticing mystery may be how oxygen finally rose to something like modern levels toward the end of the Proterozoic, 0.6 billion or 0.7 billion years ago. That's when multicellular animals first appeared, and then large animals such as the enigmatic Ediacara, creatures that must have required higher levels of oxygen. Canfield's sulfur isotopes hint that oxygen levels did in fact rise about then. This second oxidation event is proving far more elusive than the great one. Most scientists agree that it hinged on some major change that started locking up more organic matter in sediments before it could decay. Instead of being consumed in the chemistry of decomposition, oxygen built up in the atmosphere and ocean. As to what triggered the mass carbon burial, explanations fall into two camps: geological and biological.
Proposed geological shifts capable of driving up oxygen levels include a jump in the production of clays able to adsorb organic matter and preserve it beneath the sea floor, and the assembly of a supercontinent whose weathering could stimulate ocean life and subsequent carbon burial by adding nutrients to the seas. Biological shifts include the arrival of lichens on land, which would have accelerated rock weathering as well, and the evolutionary innovation among zooplankton that produced dense, organics-laden fecal pellets able to sink into the deep sea. Before the oxygen historians can sort out the truth from the just-so stories, they'll have to come to grips with the nature of the ancient record of life as well as oxygen. “There's less data the older you go [in the geologic record],” says Kirschvink, “so there's less chance of being proven wrong right off” if you're working 2 billion or 3 billion years in the past. That can make spinning yarns about the early days more fun, but it puts a premium on collecting what data can be retrieved from the oldest rocks.

11. EARTH SCIENCE

# A Better Atmosphere for Life

1. Richard A. Kerr

Thirty years ago, geochemists took away the primordial soup that biologists thought they needed to cook up the first life on Earth. Now, some atmospheric chemists are trying to give it back. They're suggesting that the early Earth could have held onto much more of its volcanic hydrogen—a key ingredient in the recipe for making the organic compounds that may have led to the first life. Creating the primordial organic goo used to be easy. If you combined the methane and ammonia seen in the still-primordial atmosphere of Jupiter, passed lightninglike sparks through the mixture, and added some water, voilà, complex organic compounds such as amino acids formed. But then in the 1970s geochemists spoiled the party by insisting that Earth's earliest atmosphere was nothing like Jupiter's.
Earth's carbon would have been part of oxygen-rich carbon dioxide, and its nitrogen part of inert nitrogen gas, they said. And hydrogen seeping from the planet's interior would have quickly escaped to space. That left chemists with a thin gruel indeed. It had far too much oxygen, which destroys organics, and not enough of the hydrogen that enables carbon atoms to link up to form the complex polymers needed for life. In the lab, such mixtures yielded few organics, and simple compounds at that. Now, atmospheric chemist Feng Tian of the University of Colorado, Boulder, and his colleagues argue that hydrogen on early Earth would have escaped much more slowly than has been assumed (Science, 13 May, p. 1014). Lacking the oxygen that absorbs solar energy, they point out, the outer fringes of the early atmosphere would have been far colder than they are today. With less energy jittering its atoms, much less lightweight hydrogen would have “boiled” away into space. The researchers also figured out how to calculate the rate at which hydrogen would have been lost as wisps of the atmosphere flowed away into space. The mathematics of such supersonic flow had frustrated all previous attempts. Overall, hydrogen would have escaped at 1/100 the rate previously assumed, the group says. Rather than building to concentrations of just 0.1%, hydrogen might have reached 30%. That would make for a far more productive atmosphere than chemists have been coping with for 30 years. “The end result is you drop vast amounts of organic compounds into the ocean to make a soup,” says the group's Brian Toon of the University of Colorado, Boulder. “On the face of it, what they have produced is quite reasonable,” says atmospheric chemist Yuk Yung of the California Institute of Technology in Pasadena. “It's a nice piece of work. It's going to make the biologists a lot happier.” Astrobiologist David Catling of the University of Bristol, U.K., isn't so sure. 
“It would be rather premature,” he says, to shift emphasis back to the prebiotic chemistry of a hydrogen-rich atmosphere and organic-goo-laced ocean. Tian and his colleagues “haven't dealt with all the factors that lead to hydrogen escape,” says Catling. He suspects that a more sophisticated model would show that hydrogen escaped the early Earth at least as fast as it does today. Time will tell whether too many cooks spoil the primordial broth.

12. AMERICAN ASTRONOMICAL SOCIETY MEETING

# A Sprawling Andromeda Galaxy Startles and Puzzles Observers

1. Robert Irion

MINNEAPOLIS, MINNESOTA—A small meeting of about 680 astronomers here 29 May to 2 June featured some big claims about the Andromeda Galaxy and the ingredients of comets. Skywatchers in the Northern Hemisphere can perceive the Andromeda Galaxy, our closest major galactic neighbor, as a faint fuzzy patch. But if our eyes could see the full extent of Andromeda's starry disk, it would span a startling 12 times the width of the full moon on the sky—triple the size of previous estimates. That result, presented at the meeting, throws a spanner into the works of galaxy formation theories. “This gigantic disk certainly came as a bolt from the blue,” says study co-author Nial Tanvir of the University of Hertfordshire, U.K. Tanvir and his colleagues, led by astronomer Rodrigo Ibata of the Strasbourg Observatory in France, based their work on a sweeping scan of Andromeda's outskirts with the 2.5-meter Isaac Newton Telescope on the Canary Islands. The survey unveiled swarms of faint stars around the galaxy's graceful central spiral, a flattened disk that seemed about 50,000 light-years in radius. As in our Milky Way, this stellar disk revolves in a calm and orderly fashion. The team expected that the outlying stars would swoop randomly around the disk in a huge spherical “halo,” as probable remnants of smaller galaxies that Andromeda has devoured.
The survey does reveal clumps and messy patterns of stars, some of which clearly belong to disrupted dwarf galaxies (Science, 20 May, p. 1104). But to the team's amazement, most of the far-flung stars revolve smoothly as a giant extension of Andromeda's classic disk—out to a distance of at least 150,000 light-years. The astronomers traced those motions by analyzing light from about 5000 stars with the 10-meter Keck II Telescope at Mauna Kea, Hawaii. “This is really impressive work, and the evidence of a rotating disk is convincing,” comments astronomer Puragra Guhathakurta of the University of California, Santa Cruz. The disk is no mere sprinkle. It may produce 1/10 of Andromeda's light, says postdoctoral researcher Scott Chapman of the California Institute of Technology (Caltech) in Pasadena, who led the Keck effort. Moreover, because of its vast breadth, the extended disk could account for 30% of the galaxy's angular momentum. “It's a real puzzle why this thing is there,” Chapman says. The gravitational influence of Andromeda's hidden cocoon of dark matter—which probably reaches far beyond the newfound disk—undoubtedly played a role in dictating its shape, he notes. Even so, tiny incoming galaxies should scatter stars in all directions rather than confining them in a thin plane. A single large merger long ago might have spread stars into a disk. Alternatively, shock waves from the merger could have induced stars to form within a preexisting flat whorl of gas around Andromeda. Each scenario has trouble creating a structure as wide and coherent as Ibata's team has found. “I still haven't digested the result,” said Caltech astronomer Wallace Sargent after Chapman's talk. “I didn't expect to see ordered motion so far out.” Andromeda may spring more surprises. In a new paper under review, Guhathakurta and colleagues identify stars in the galaxy's halo as far as 500,000 light-years from Andromeda's core—fully 1/5 of the distance to the Milky Way. 
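The quoted "12 times the width of the full moon" follows from small-angle geometry. A rough sanity check, assuming the commonly cited distance of about 2.5 million light-years to Andromeda and a half-degree full moon (neither figure appears in the article):

```python
import math

DISK_RADIUS_LY = 150_000   # extended disk radius reported by Ibata's team
DISTANCE_LY = 2.5e6        # assumed distance to Andromeda (not in article)
MOON_DEG = 0.5             # assumed angular diameter of the full moon

# Small-angle approximation: angular diameter in radians = size / distance
angular_deg = math.degrees(2 * DISK_RADIUS_LY / DISTANCE_LY)
print(round(angular_deg, 1))             # about 6.9 degrees on the sky
print(round(angular_deg / MOON_DEG, 1))  # about 14 moon-widths
```

The result is close to the quoted factor of 12; the residual gap is within the slack of the assumed distance and moon size.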
Our grand neighbor, it seems, is just a star's throw away.

13. AMERICAN ASTRONOMICAL SOCIETY MEETING

# Snapshots From the Meeting

1. Robert Irion

MINNEAPOLIS, MINNESOTA—Future crash. Two white dwarfs locked in the closest known binary system (Science, 15 March 2002, p. 1997) are spiraling toward each other faster than any other pair of stars. Energy modulations measured by the Chandra X-ray Observatory show that the system's 321-second period is speeding up by 0.012 seconds annually, reported astrophysicist Tod Strohmayer of NASA's Goddard Space Flight Center in Greenbelt, Maryland. With that rapid approach—roughly 2.5 centimeters per hour—the dwarfs will merge about 500,000 years from now. But even today, the binary churns space with gravitational waves that an orbiting mission next decade should spot easily.

Probing Pluto's past. A college sophomore has helped retrace variations in the brightness of Pluto soon after the planet's discovery in 1930. Luke Smith of Louisiana State University in Baton Rouge and colleagues obtained 30 glass photographic plates taken in 1933 and 1934, when astronomers lacked the tools to gauge subtle trends in Pluto's light. The team's analysis—including new images of comparison stars close to Pluto at the time—suggests that the planet was slightly brighter than expected. The extra light may have reflected off frosts that slowly melted as Pluto edged closer to the sun later in the century, Smith reported.

SNEWS button. Astronomers are ready to catch an exploding star in our galaxy. The Supernova Early Warning System (SNEWS) looks for neutrinos—ghostly particles unleashed in torrents by stars when they blow up—spotted simultaneously by four detectors around the globe.
Neutrinos would herald a supernova's blast of light by a few hours, because the shock wave takes time to break free of the collapsing star. SNEWS is patient: It may take several decades to trigger on a supernova, but it should produce less than one false alarm per century, reported Alec Habig of the University of Minnesota, Duluth.

14. AMERICAN ASTRONOMICAL SOCIETY MEETING

# A Rocky Landing for Deep Impact?

1. Robert Irion

MINNEAPOLIS, MINNESOTA—Each time a comet rounds the sun, its body erodes. But a comet's glorious tails of dust and gas are like banks of fog: lots of surface area and little matter. Comets shed much more mass when larger particles sputter away into narrow “debris trails” along their orbits. A new survey of these trails has solidified a growing view that comets consist mainly of rock rather than ice. NASA will put that notion to a stiff test on 4 July when it plows a projectile into a comet for the first time. Pressures from sunlight and the solar wind puff a comet's tiny particles into wide fans. Bigger fragments—millimeters to centimeters across—resist those forces. Instead, they remain in linear tracks in the comet's orbital path “like a line of boxcars,” says doctoral student Michael Kelley of the University of Minnesota, Twin Cities. “The particles drift so slowly away from the nucleus that they take hundreds of years to fill a trail around the sun.” The debris trails pop out in infrared images, because the particles reradiate warmth from the sun. Astronomers first spied a handful of trails in images from the Infrared Astronomy Satellite, which flew in the 1980s. Their abundance led Mark Sykes of the Planetary Science Institute in Tucson, Arizona, to describe comets as “icy mudballs” of mostly rocky material—a twist on the canonical “dirty snowballs” picture of mostly ice.
Now, a team including Sykes, Kelley, and William Reach of the Spitzer Science Center in Pasadena, California, has taken a far more thorough look at debris trails with NASA's Spitzer Space Telescope. Their analysis reveals 21 debris trails out of 29 short-period comets, which loop around the sun in decades or less. The trails are all narrow, including seven extremely fine ribbons that only Spitzer could have detected. To trundle along such confined paths for years, the particles must be “miniasteroids” the size of small pebbles, Reach reported at the meeting. The wispy sprays of fine particles in comet tails don't hold a candle to the mass contained in the bands of pebbles, models indicate. “It's clear that [debris trails are] the main mass-loss mechanism for comets,” Reach says. “Comets cannot be composed of particles with finer grain sizes than these.” Some observers disagree. “It's important to understand the meaning of the word ‘rock,’” says Zdenek Sekanina of NASA's Jet Propulsion Laboratory in Pasadena, an expert on comets and meteoroids. Particles in debris trails probably are agglomerations of dusty grains that clump weakly together before being expelled from the comet rather than strong and coherent nuggets like pebbles, Sekanina says. Even if the crusty surfaces of comets cast rocks into space, fluffier icy matter may lurk inside, says dust scientist Diane Wooden of NASA's Ames Research Center in Mountain View, California. Testing that idea is a prime objective of Deep Impact, now on a collision course with Comet Tempel 1 (Science, 27 May, p. 1247). Reach ventures that the comet contains at least 10 times more rock than ice throughout its volume—and that Deep Impact will crash with a shallow, hard thud.

15. AMERICAN SOCIETY OF GENE THERAPY MEETING

# Retroviral Vectors: A Double-Edged Sword

1. Jocelyn Kaiser

ST. LOUIS, MISSOURI—The eighth annual meeting of the American Society of Gene Therapy, held 1 to 5 June, attracted 1910 researchers who were abuzz over a possible third success for clinical gene therapy. Gene therapy has entered a painful adolescence. In the past 6 years, the technique has restored the immune systems of 22 children with two types of severe combined immunodeficiency disease (SCID), marking the first clear success in the clinic. But these victories were bittersweet: Three of the children in a SCID trial in France later developed leukemia, and one died last year. At the meeting and a satellite retreat,* researchers discussed mounting evidence that using retroviruses to fix broken genes can spur cell growth that may raise the risk of cancer. But this side effect could also explain a new gene-therapy success reported at the meeting, this time in adults. The leukemia in the French trial involved children with SCID caused by the defective X chromosome gene for a protein called common gamma chain (γc). In two of these cases, the retrovirus that carried the corrective gene into the children's blood stem cells inserted its DNA into and activated an oncogene called LMO2, and a single T cell derived from such a stem cell multiplied out of control. In St. Louis, Marina Cavazzana-Calvo of the Necker Hospital in Paris reported that the third leukemia case, which was disclosed in January, appears to involve multiple insertions of the corrective gene, affecting four different oncogenes, including LMO2—suggesting that a similar mechanism is at work. The French group has halted its trial until it can design a safer viral vector. The leukemias also apparently stem from an interaction between LMO2 and γc, so trials for ADA-SCID, which involves a different defective gene, are continuing. Researchers in St.
Louis still had no definitive explanation for why, so far, leukemias have not appeared in seven children treated in an X-SCID trial at the Institute of Child Health in London. That gene-therapy protocol differed in subtle ways from the French group's. Adrian Thrasher, the leader of the U.K. study, noted that “marked” T cell counts in the French patients rose steeply after they received corrected cells, whereas in the U.K. trial, the rise has been more gradual. “I don't know what it means,” he acknowledged. Several gene therapists, including Frederic Bushman of the Salk Institute for Biological Studies in San Diego, California, described evidence that retroviruses tend to insert into active genes, perhaps because the condensed, DNA-containing chromatin opens up in these regions. And Christopher Baum of Hannover Medical School in Germany and Cincinnati Children's Hospital Medical Center in Ohio presented his group's recently published mouse study, which found that stem cells containing insertions near genes involved in cell proliferation keep multiplying until these clones far outnumber other cells (Science, 20 May, p. 1171). Still, promoting cell proliferation might sometimes be a good talent for a retroviral vector. Describing gene therapy's latest success, a German-Swiss team reported using such a virus to restore the health of two adults with the rare inherited disorder chronic granulomatous disease (CGD). People born with CGD lack an enzyme complex, called phagocyte NADPH oxidase, that white blood cells use to make the hydrogen peroxide needed to kill microbial invaders. People with CGD usually die of fungal and bacterial infections by age 30.
In an attempt to correct the defect, molecular virologist Manuel Grez of the Georg-Speyer-Haus Institute for Biomedical Research in Frankfurt and collaborators in Germany and Switzerland extracted blood stem cells from two CGD patients in their mid-20s and, using a vector similar to the one used in the X-SCID trials, inserted a good copy of one gene needed to make NADPH oxidase. Before putting the cells back, they gave the patients chemotherapy to kill some of their bone marrow and allow the corrected cells more room to grow. This step once seemed risky in already-sick patients but has proved crucial to the success of ADA-SCID trials, notes Donald Kohn of Children's Hospital in Los Angeles, California. Unlike in previous CGD trials, a surprisingly large fraction of the patients' circulating white blood cells, more than 20%, carried the corrected gene 3 weeks after the cells were returned. The bigger surprise came after 4 months, when a few cells in each patient went through an “unexpected expansion” in number, until up to 60% of their white blood cells contained the gene, Grez reported. Both patients now produce significant amounts of functional NADPH oxidase. And they are much healthier: A lung lesion in one has healed, and two liver abscesses have disappeared in the other. One has even been able to stop taking prophylactic antibiotics. “There's no question that [the trial has] provided benefit to the patients,” says Harry Malech of the U.S. National Institute of Allergy and Infectious Diseases in Bethesda, Maryland, who conducted two previous CGD trials. Malech and others add a note of caution, however. The clones that multiplied in these CGD patients had the corrected gene inserted in one of three locations near genes involved in cell proliferation. Although that could help explain the unexpected expansions and why the treatment worked so well, it also raises the risk of leukemia.
Grez suggests that's unlikely because the growth of the stem cells eventually leveled off. Still, says Kohn, “it needs to be watched.”

• *2nd Stem Cell Clonality & Genotoxicity Retreat, 1 June 2005.

16. AMERICAN SOCIETY OF GENE THERAPY MEETING

# Smart Vector Restores Liver Enzyme

1. Jocelyn Kaiser

ST. LOUIS, MISSOURI—The American Society of Gene Therapy's 8th annual meeting, held here 1 to 5 June, attracted 1910 researchers, who were abuzz over a possible third success for clinical gene therapy. Stealing from the tool kit of viruses that infect bacteria, researchers have used a new gene-therapy method to cure the metabolic disorder phenylketonuria in mice. The method uses a viral enzyme, rather than a virus itself, to insert a corrective gene into a specific location in a mouse's DNA, making it a potentially safer way of delivering new genes. Many gene-therapy efforts employ a retrovirus to deliver working copies of genes into a subject's body. But because these viruses integrate DNA into a host's genome at many locations, the approach can disrupt existing genes, including those implicated in cancer. That danger recently became clear when three patients in a seemingly successful gene-therapy trial in France later developed leukemia (see previous story). Ideally, researchers would like to place a corrective gene in a specific location, safely away from other genes. Back in 2001, geneticist Michele Calos's group at Stanford University in California invented such a method by turning to a smarter-than-average version of an enzyme called an integrase, which some viruses use to insert their DNA into a host's genome. Retroviruses, such as HIV, have integrases of their own, but those can insert DNA at locations that alter gene expression. Calos, however, found that an integrase from a bacteriophage, a virus that infects bacteria, latches onto only a specific DNA sequence, one that also appears in a few places within mammalian genomes that are considered low risk.
She and her colleagues have recently had some success treating mouse models of hemophilia and other diseases by injecting rodents with a corrective gene and the gene for this phage integrase. Li Chen in Savio Woo's lab at Mount Sinai School of Medicine in New York City has now used a similar strategy to treat a mouse version of phenylketonuria, a disease in which people don't make enough of the liver enzyme phenylalanine hydroxylase (PAH). In such cases, the body cannot convert the amino acid phenylalanine to tyrosine, and the buildup of phenylalanine leads to severe mental retardation. Currently, the main treatment for the condition is a strict diet low in phenylalanine, one that must be started soon after birth. Chen made plasmid loops of DNA containing a gene for a phage integrase and a good copy of the PAH gene and then injected these plasmids into the tail veins of mice with a defect in their PAH genes. After three injections, the mice made more PAH enzyme, and their blood levels of phenylalanine fell to within the normal range—and stayed there for more than 6 weeks, Chen reported at the meeting. Another sign that the gene therapy worked: The mice's fur turned from gray to black, because they now had the tyrosine needed to make the pigment melanin. The phage integrase used by Woo's team differs from the one chosen by the Stanford group, and it targets different points on chromosomes. It may work better in certain tissues because, depending on the genes expressed in a given tissue, different sites may be open or closed to a particular integrase, notes Barrie Carter of Targeted Genetics in Seattle, Washington. However, both groups still need to find more efficient ways of getting the genes into cells. The phage-integrase strategy “is not yet ready for prime time,” says Woo.

17. SPACE EXPLORATION

# Private Mission Aims to Give Solar Sails Their Day in the Sun

1.
Daniel Clery

Assembled by enthusiasts on a shoestring budget, Cosmos 1 could spearhead a new generation of photon-powered spacecraft

In the Star Wars movie Attack of the Clones, the villainous Count Dooku escapes the clutches of the Jedi in a spaceship that unfurls a vast, shiny solar sail to speed its way across space powered by starlight. Next week, something akin to this science-fiction fantasy will take to the skies above Earth. The spacecraft, called Cosmos 1, aims to show that solar sailing is practical in our solar system and beyond. “This is the only technology that can do interstellar flight,” says project director Louis Friedman, who is executive director of the Planetary Society, the nonprofit organization running the mission. Cosmos Studios, a U.S. science entertainment company run by Ann Druyan, widow of Carl Sagan, put up $4 million to bankroll the mission. Russian researchers at the Lavochkin Association and the Space Research Institute in Moscow built the spacecraft, which will be launched from a Russian naval submarine aboard a Volna rocket, a converted ICBM from the Cold War.

“Solar sails have come a long way in the last few years,” says engineer Colin McInnes, a solar sail expert at the University of Strathclyde, U.K. At all the major space agencies, he says, scientists are pushing the technology as the only feasible form of propulsion for certain niche missions. “Now it needs the commitment to go to the next step,” McInnes says. “Cosmos 1 can only do good.”

Friedman got hooked on sails in the 1970s, while working on a team at NASA's Jet Propulsion Laboratory designing a solar sail mission to rendezvous with Halley's comet. That mission never got off the drawing board. But after co-founding the Planetary Society in 1980 with fellow space scientists Sagan and Bruce Murray to promote space exploration, Friedman got a second chance in 2000 thanks to Druyan, a member of the Planetary Society's board of directors. With money from Joe Firmage, a philanthropist who made a fortune in the 1990s Internet boom, Druyan formed Cosmos Studios and made Cosmos 1 its first project. “We could have a real launch, a place in the history of aeronautics, all for the cost of a Manhattan apartment,” she says. Additional funding came from Peter Lewis, an insurance millionaire, and from the society's 100,000 members.

The society suffered a setback in 2001 when a practice run aboard a Volna, designed to test unfurling a sail, failed. The current launch date is 21 June, the summer solstice. If Cosmos 1 reaches its intended 825-kilometer orbit intact and is working properly, ground controllers in Moscow and at the society's headquarters in Pasadena, California, will start the mission. On command, inflatable booms will unfurl eight 15-meter-long triangular sails made of highly reflective Mylar plastic, creating a sail area of 600 square meters—some 1.5 times the size of a basketball court. The team hopes to see whether the pressure of photons bouncing off the sails will boost the craft to a higher orbit. “This has never before been done in controlled flight,” says Friedman.

Researchers at several space agencies are eager to follow in Cosmos's wake. In April, NASA and contractor ATK Space Systems tested a 20-meter solar sail in a huge vacuum chamber at NASA's Plum Brook facility in Sandusky, Ohio, and a second design from the company L'Garde will be tested later this month. A team led by Timothy Van Sant at NASA's Goddard Space Flight Center in Greenbelt, Maryland, is designing a demonstrator mission of a square sail 100 meters across that will compete for a launch slot in 2010.

The European Space Agency (ESA) is also banking on the technology in collaboration with DLR, Germany's space agency. Manfred Leipold of German contractor Kayser-Threde says that the agencies carried out a ground test of a 20-meter sail in 1999, and Leipold has since led a team that completed a design last year for a demonstrator mission in space. The mission is now awaiting a funding decision.

How did solar sails go from science-fiction favorite to must-have technology? The answer is thrust. Conventional spacecraft must carry all their fuel with them, and some missions are just too fuel-hungry. Although solar sails get only a feeble push from photons, it is unremitting and over time can build up to very high velocities. Researchers say sails are ideal for station-keeping: putting a satellite in one spot and using the sail's thrust to keep it in position. The National Oceanic and Atmospheric Administration is interested in using sails to station satellites above Earth's poles, where they can monitor the poorly understood polar climate, watch the moon and auroras around the clock, and even provide a phone link to researchers at the South Pole.
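The “feeble push” arithmetic is easy to sketch. In the back-of-envelope estimate below, only the 600-square-meter sail area comes from the article; the solar constant, perfect reflectivity, and the 100-kilogram spacecraft mass are assumptions for illustration, not mission data.

```python
# Back-of-envelope estimate of solar-sail thrust near Earth.
# Only the 600 m^2 sail area comes from the article; the solar
# constant, perfect reflectivity, and 100 kg mass are assumptions.

SOLAR_FLUX = 1361.0   # W/m^2, solar constant at 1 AU (assumed)
C = 2.998e8           # m/s, speed of light
AREA = 600.0          # m^2, Cosmos 1's sail area
MASS = 100.0          # kg, hypothetical spacecraft mass

# A perfectly reflective sail receives twice the photon momentum flux:
# F = 2 * (flux / c) * area
force = 2 * SOLAR_FLUX / C * AREA   # newtons; a few millinewtons
accel = force / MASS                # m/s^2

# The push is tiny but unremitting: velocity gained in one year
delta_v = accel * 365.25 * 24 * 3600   # m/s

print(f"thrust       ~ {force * 1e3:.1f} mN")
print(f"acceleration ~ {accel:.1e} m/s^2")
print(f"delta-v/year ~ {delta_v / 1e3:.1f} km/s")
```

With these numbers the thrust is only a few millinewtons, yet the accumulated velocity change approaches 2 kilometers per second per year, which is the whole case for sails on long missions.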

Other tempting applications include sending a probe to the sun and then pushing it into a polar orbit to observe the sun's polar regions. “This is very hard to do with conventional propulsion” because getting into a highly inclined orbit requires so much energy, says McInnes. A solar sail may also be the only way to travel for spacecraft headed far in the opposite direction, to the outer reaches of the solar system and into interstellar space. Leipold says ESA is studying such a mission to the heliopause, where the solar wind gives way to interstellar space. Its 250-meter-wide solar sail would boost the craft's velocity to 50 kilometers per second, three times the speed of Voyager 1. NASA too would love to send a craft to deep space; its plans envisage a huge 400-meter-wide sail.

Such grand schemes may hinge on the small group of enthusiasts from the Planetary Society. Cosmos Studios has not been able to attract funding to film the preparations or launch, meaning that, as Druyan notes, “we haven't been able to make a dime off it.” But McInnes says that solar sails should get a huge jump in credibility from the mission. “At meetings I can now put up slides with some real hardware,” he says.