News this Week

Science  21 Nov 2003:
Vol. 302, Issue 5649, pp. 1306

    VA's Controversial Chief of Research Said to Be Leaving

    Jennifer Couzin

    The embattled chief of the Veterans Affairs (VA) Office of Research and Development, Nelda Wray, is leaving her post, according to sources close to the VA. This turn of events could mark the nadir in a period of turmoil within the $400 million enterprise. A physician and researcher formerly at the Houston VA Medical Center in Texas, Wray has been in charge of VA research since January.

    Wray's attempts to alter the VA's methods of peer review and its clinical agenda have met with intense criticism both within and outside the agency. Early in her tenure, Wray argued that VA-funded researchers were unproductive and that the agency's emphasis on basic research—to which nearly half of all research funds are traditionally devoted—was not serving veterans well. She therefore tried to tailor the agency's clinical agenda to the health problems facing veterans. Although some researchers praised Wray for trying to reform the VA's sprawling research system, critics charged that she pushed for too much change, too quickly. Earlier this fall, the department's inspector general (IG) launched an investigation into alleged personnel and funding improprieties in her office.

    Wray was expected to disclose her plans as early as this week, according to sources who spoke on condition of anonymity. VA staffers anticipate that Wray's deputy, Mindy Aisen, will become acting director of the office. Aisen declined to comment. Reached by telephone, Wray responded: “I have nothing to say.”

    The news of Wray's likely departure is the latest surprise for VA employees and hundreds of researchers the agency funds. One of the first signs of trouble came last April, when 17 investigators received word that grants they'd been told were funded had not been approved after all (Science, 25 April, p. 574). At the time, Wray explained that the grantees had been notified prematurely. She said that she was planning to reallocate VA funding, moving money from basic research into clinical outcomes studies, her own specialty, and that the shift necessitated some sacrifices.


    VA headquarters has been wracked by dissension during the term of research chief Nelda Wray.


    Wray began restructuring the VA's $400 million research enterprise soon after (Science, 4 July, p. 24). She also announced changes to the peer-review process, adding a separate score for a scientist's “productivity.” This prompted a firestorm of dissent, including from the Federation of American Societies for Experimental Biology, which sent letters of protest to a congressional committee and VA officials. As pressure intensified, Wray suggested in a conference call with research heads last week that she was abandoning productivity as a separate measure of grant quality and allowing the peer-review system to remain largely unchanged.

    Earlier this fall, the IG launched an investigation into the research office, according to a spokesperson at VA headquarters. VA employees report being questioned by the IG's office about allegations that research projects in Houston were approved without peer review; that Wray's friends were hired to positions within the agency; and that a philanthropic fund intended for VA clinical studies was misused. Government officials say that hundreds of thousands of dollars from that fund—the Baltimore, Maryland-based Friends Research Institute—may have been spent on activities for which the fund was not intended, including a reception for VA senior leadership last summer and an “image consultant” for the research office.

    In addition, the IG has inquired about a $750,000 grant, authorized last summer, to two researchers in Houston with whom Wray frequently collaborated. The IG did not respond to several requests for information on the status of the investigation.

    According to senior VA officials, Wray and her superiors convened on the afternoon of Friday, 14 November. The same day, rumors began to circulate that Wray was departing. Senior VA officials say that morale was crumbling at VA headquarters. Employees tell Science that Wray has an abrasive management style. Many were said to be seeking new jobs.

    The problems, VA researchers agree, came in the way Wray's concepts were implemented across the more than 1000 labs that house VA-funded research. “The shifting of money away from established research programs … and the changes in the established peer-review system [were] perceived as a way to pull money away from medical research,” says John Cowdery, acting chief of staff at the Iowa City VA Medical Center.

    Some say Wray's policies had promise, however. Claude Chemtob, a recently retired VA psychologist at the VA Medical and Regional Office Center in Honolulu, Hawaii, says: “It was enormously courageous to refocus the VA research enterprise” on challenges specific to veterans. But Wray does not appear to have vocal champions in Washington, D.C.


    Venter Cooks Up a Synthetic Genome in Record Time

    Elizabeth Pennisi

    When the U.S. Department of Energy (DOE) announced last week that sequencing maverick J. Craig Venter had taken just 2 weeks to build a viral genome from scratch, Secretary of Energy Spencer Abraham called the work “nothing short of amazing.” He predicted that it could lead to the creation of microbes tailored to deal with pollution or excess carbon dioxide or even to meet future fuel needs. But the $3 million DOE project drew ho-hum reviews from some scientists. “I didn't think it was a big deal,” says Ian Molineux, a molecular biologist at the University of Texas, Austin. And Richard Ebright, a molecular biologist at Rutgers University in Piscataway, New Jersey, agrees: “This is strictly a limited incremental advance over current technologies.”

    The skeptics focus on how hard it will be to go beyond the initial step, while Venter, head of the Institute for Biological Energy Alternatives (IBEA) in Rockville, Maryland, and former president of Celera Genomics, and his backers are proud to have gotten this far. All are in agreement, however, that the experiment demonstrated speed in converting raw ingredients into a functioning virus.

    The genome synthesized by the Venter-led group belongs to a bacterial virus, called a phage; when it was tested under realistic conditions, Venter reported, it infected and killed bacteria just as natural phages would. Because his team stitched together the phage's DNA in just a few weeks instead of years, molecular virologist Eckard Wimmer of the State University of New York, Stony Brook, called the effort “a very smart piece of work.”

    Stir-and-bake genomes.

    Venter's (left) success in building a viral genome drew praise from DOE Secretary Abraham.


    Moreover, Ari Patrinos, who heads the DOE research program that supports Venter's group, argues that this small advance could have large ramifications. He compared the synthesis of the phage genome to early DNA sequencing: “It was rarely used” at first, but when sequencing became fast and accurate, its use exploded. He thinks Venter's “incremental step” may eventually have the same effect on the field.

    Venter's lab isn't the first to stitch together an artificial genome. Molecular biologists have been trying to do this ever since they began sequencing the complete genomes of organisms. Last year, Wimmer and his colleagues assembled the 7000-base poliovirus genome from small pieces of synthesized DNA. And they made headlines when they showed that the virus was active (Science, 9 August 2002, p. 1016). But the task took 3 years to finish.

    This summer, Venter set out to do better. His team included IBEA collaborators Hamilton Smith and Cynthia Pfannkoch, and Clyde Hutchison of the University of North Carolina, Chapel Hill. Like Wimmer, they started with short pieces of DNA, pieced them together by matching up overlapping ends, and eventually generated a complete 5400-base-pair phage genome. Their approach differed from Wimmer's, however. They modified and added steps to speed the sequence's assembly and to make it more accurate. The work is in press in the Proceedings of the National Academy of Sciences. And Venter is convinced that he can build genomes 300,000 bases or longer.

    But even with these improvements, skeptics and supporters aren't sure how well the procedure will work for organisms with larger genomes. “Going from a phage to a microbial genome to having a microbe that's synthetic is a very major step,” says Patrinos. But he thinks it's worth betting on.


    Science Minister Starts From Scratch

    Jeffrey Mervis

    Rashad Omar Mandan knows what Iraqi scientists need to help resurrect their society. “Everything,” says Iraq's new minister of science and technology. “And I'm talking to everyone to see what they can do to help us get back on our feet.”

    Last week Omar visited the United States in search of work for the more than 3500 scientists and engineers under his wing. A member of the Turkoman minority who fled Iraq in 1999 after working for 2 decades in his country's oil ministry, the 56-year-old civil engineer was brought back in September to head a new entity focused on rebuilding the country's shattered infrastructure. Its scientific corps is drawn largely from 14 state-run “companies” that helped design and manufacture Iraq's weapons of mass destruction.

    But this attempt by the Coalition Provisional Authority (CPA) to turn swords into plowshares is starved for cash and barely off the ground. Omar's employees, who receive a nominal salary, will remain idle until they receive outside help. “We need offices, laboratories, and equipment,” Omar says. “We need journals. But we don't have broadband Internet connections, so we need them on CDs. And most of all, we need projects to work on.” He estimates that the ministry needs at least $15 million to get up and running.

    Treasure hunt.

    Iraqi science minister Rashad Omar Mandan seeks funding for his researchers.


    Coincidentally or not, that's also the size of a possible investment by the U.S. State Department, which aims to keep the former weapons scientists from selling their knowledge on the open market. On 3 November officials described the draft of a plan, called the Science, Technology, and Engineering Mentorship Initiative for Iraq (STEM-II), to a gathering of arms control organizations at the Washington, D.C.-based Carnegie Endowment for International Peace.

    “It's one of several ideas that we are considering,” says George Atkinson, science and technology adviser to Secretary of State Colin Powell. “The whole point is to find something that will be successful.” According to the Associated Press (AP), which first reported on the draft, the initiative would pay Iraqis to submit proposals, and then again to carry out work that had passed peer review. The pool of scientists, AP reported, is defined as “that part of the Iraqi [scientific] community committed to peaceful professional activities not associated with weapons of mass destruction.”

    Omar's visit to Washington, D.C., was the first of several he has planned for the coming months. He chatted up representatives of companies, scientific societies, and government agencies, along with anybody else in a position to hire his employees for everything from rebuilding the power grid and upgrading communications to pursuing new technologies for agriculture, energy, and natural resources. The ministry also hopes to provide technical assistance for projects—such as restoring the country's southern marshlands—being taken on by other ministries. In the past, the ministry's scientists have also trained graduate students, although Iraq's universities now report to a separate ministry. If all goes well, says Omar, many of the ministry's scientists will take their ideas into the private sector.

    Omar says that he never expected to be holding a government position after he fled to Dubai. But now that he's home, he's eager to help nurture Iraq's scientific talent. As a CPA adviser to the science ministry described him, “he's just a patriot serving his country.”


    AIDS Vaccine Still Alive as Booster After Second Failure in Thailand

    Jon Cohen

    An experimental AIDS vaccine tested in Thai heroin users has failed to stop the spread of HIV. It's the second time the vaccine, a genetically engineered version of the HIV surface protein gp120 developed by VaxGen of Brisbane, California, has proven ineffective in an efficacy trial. But although gp120 clearly does not work by itself, an ambitious new study in Thailand has just started to assess whether it might provide an effective boost to another AIDS vaccine.

    Controversy has dogged the gp120 vaccine for a decade, and some researchers insist that it does not belong in the new Thai study. The U.S. National Institute of Allergy and Infectious Diseases (NIAID) once championed the gp120 vaccine, then made by the biotech powerhouse Genentech. But experiments in 1993 revealed that the antibodies triggered by this vaccine, or immunogen, stopped only laboratory-grown strains of HIV, not the feisty ones found in the real world.

    In February, VaxGen reported that the vaccine failed in a large-scale efficacy study mainly involving gay men, although a subgroup analysis suggested that gp120 might have helped the few blacks and Asians in the study (Science, 28 February, p. 1290). Last week, in preliminary results from its second trial, VaxGen reported that 106 people given the vaccine and 105 people given dummy shots became infected with HIV. The study, which involved 2546 injecting drug users, was run by Thailand's Bangkok Vaccine Evaluation Group (Science, 19 September, p. 1663). NIAID Director Anthony Fauci called the results “not surprising at all.”

    Falling short.

    This Bangkok hospital served as headquarters for the failed gp120 trial.


    NIAID still plans to include the vaccine in a recently started 16,000-person study in Thailand—the largest, most expensive AIDS vaccine trial ever launched. The $120 million study uses gp120 as a booster shot to another AIDS vaccine, made by Aventis Pasteur, that contains several HIV genes stitched into harmless canarypox. “As a boost, it might be qualitatively quite different,” Fauci says.

    Other AIDS researchers strongly disagree. “I think including gp120 is a waste of time,” says Dennis Burton, who studies HIV antibodies at the Scripps Research Institute in La Jolla, California. “It's the wrong immunogen, and you can't get around that.” Immunologist Michael Lederman, head of the Center for AIDS Research at Case Western Reserve University in Cleveland, Ohio, predicts a “hue and cry” from scientists if the trial goes forward with gp120.

    John McNeil of the Walter Reed Army Institute of Research in Silver Spring, Maryland, who has played a key role in organizing the trial with the Thai Ministry of Public Health, counters that in small human tests, this so-called prime-boost approach has triggered a “much broader” immune response from a less famous player than killer cells or antibodies. Specifically, some evidence suggests that the two vaccines together expand a population of warriors known as HIV-specific CD4 T helper cells.

    McNeil stresses that an international advisory panel, organized under the umbrella of the World Health Organization and the Joint United Nations Programme on HIV/AIDS, in 2002 recommended conducting the prime-boost study even if gp120 failed as a stand-alone vaccine. “We know there are going to be a lot of people who say, ‘This doesn't make a lot of sense,’” says McNeil. “It's a hypothesis. And that's what these vaccine trials test.”


    Nanodevices Make Fresh Strides Toward Reality

    Robert F. Service

    Nanoscientists have proven adept at turning tiny specks of semiconductors and metals into devices such as diodes and transistors and have even wired them into working circuits. But researchers must still vault several other daunting hurdles to compete with today's highly complex computer chips. Among them: finding ways to construct complex circuits without the aid of photolithography, the standard chip-patterning technology that doesn't work at the scale of individual molecules, and steering electronic impulses from large-scale wires down to particular nanoscale devices. Now teams report progress on both fronts.

    On page 1380, biophysicist Erez Braun and colleagues at the Technion-Israel Institute of Technology in Haifa report using a combination of proteins and DNA to direct the synthesis of a carbon nanotube-based transistor, a success that could pave the way for complex circuitry to essentially build itself. Meanwhile, in another paper on page 1377, a team led by Harvard University chemist Charles Lieber reports creating a scheme for feeding electrical impulses to specific locations in a nanocircuit, an essential step for carrying out complex computation.

    Although critics have questioned the field's near-term potential to turn out products (Science, 24 October, p. 556), Cees Dekker, a biophysicist and molecular electronics expert at Delft University of Technology in the Netherlands, says the new studies underscore that basic research in molecular electronics remains vibrant. “Both papers together show the field is progressing. There are strategies to move towards connected networks [of devices]. That's the direction the field should take.”

    Braun, together with physics colleague Uri Sivan, students Kinneret Keren and Rotem Berman, and technician Evgeny Buchstab, wanted to employ biomolecules to assemble a working transistor. Their goal was to use a straw-shaped molecule called a carbon nanotube to carry an electric current between two metal electrodes. They coated nanotubes with streptavidin, a protein that forms a lock-and-key bond with another molecule called biotin. They then used an intricate series of reactions to create a chain of other proteins—capped with biotins—and a short piece of DNA to lasso and lash the nanotubes along the central region of a long DNA strand glued atop a silicon surface. Then, through another pair of reactions, the Technion researchers capped the ends of the long DNA molecules with tiny gold metal pads, which served as electrodes to carry electrical current into and out of the carbon nanotube. Finally, the team used electron beam lithography to pattern wires atop the silicon to connect to the metal pads.

    The result was more than a dozen working nanotransistors, each just a few hundred nanometers in length and a fraction of the size of conventional transistors. “It's a fantastic demonstration,” Dekker says. Braun says the devices are still works in progress: The connections need improvement, and the e-beam technique used to pattern the wires is too slow to be a viable manufacturing technology. But he says his team is already tackling the problems and is trying to scale up the self-assembly technique to make more-complex circuits.

    Lieber's team, meanwhile, set out to address a very different issue: ensuring that information in the form of electrical impulses can be fed into specific transistors in a nanocircuit, an essential step for carrying out computation. The experiment builds on years of work at Lieber's lab to construct circuits from an array of nanowires patterned in a “crossbar” array resembling the lines of a tic-tac-toe board. Previous work had shown that each intersection where two nanowires cross can serve as a transistor, in which an electric potential applied to an “input” wire triggers a corresponding electric pulse in a perpendicular “output” wire. But there's a hitch: Because a single input wire crosses several output wires, it can trigger multiple pulses simultaneously—not what is wanted for computations.

    Lieber, together with his students Zhaohui Zhong and Yi Cui, postdoc Deli Wang, and Marc Bockrath, a physicist at the California Institute of Technology in Pasadena, set out to make the firing more selective. They designed a grid of nanowires that produced inactive transistors at each nanowire crossing, unless the wires at the junction were activated by a specific chemical reaction. Then, using traditional photolithography to direct a light-induced chemical reaction to specific crossbar intersections, they steered impulses to their desired destinations (see figure). The technique, Lieber adds, should also enable them to connect nanosized crossbar circuits to large wires that carry electrical pulses on and off computer chips. Together with the self-assembly technique, these results may give nanoelectronics researchers just the boost they need to begin moving molecular electronics from a basic science to a technology.


    Chemical reactions at selected junctions control where current flows in a nanocircuit.
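    The addressing scheme Lieber's team describes can be caricatured in a few lines of Python. This is a conceptual sketch only: the function name, the set-of-junctions representation, and the example wiring are illustrative, not taken from the paper. The point is simply that if each input/output crossing conducts only when chemically activated, a pulse on one input wire reaches only its selected outputs.

```python
from typing import List, Set, Tuple

# Conceptual sketch (not the published device physics): model the
# crossbar as a set of (input, output) junctions that have been
# chemically activated. All other crossings stay inactive, so a
# pulse on an input wire fires only its activated outputs.
def route_pulse(active_junctions: Set[Tuple[int, int]],
                input_wire: int, n_outputs: int) -> List[int]:
    """Return the output wires that fire when input_wire is pulsed."""
    return [out for out in range(n_outputs)
            if (input_wire, out) in active_junctions]

# Light-directed chemistry lets each input address exactly one output:
junctions = {(0, 2), (1, 0)}
print(route_pulse(junctions, 0, 3))  # [2]
print(route_pulse(junctions, 1, 3))  # [0]
```

Without the activation step, every crossing would conduct and a single input pulse would fan out to all three outputs at once—the very problem the light-induced reactions solve.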


    Mothers' Malaria Appears to Enhance Spread of AIDS Virus

    Jon Cohen

    For the first time, a study of HIV-infected pregnant women has found that coinfection with malaria significantly increased a mother's risk of transmitting the AIDS virus to her baby before or during birth.

    According to a report in the November issue of the journal AIDS, HIV-infected pregnant women in the Rakai district of Uganda had nearly three times the risk of transmitting the AIDS virus to their babies if they concurrently had malaria and if the parasite that causes the disease had infected their placentas. “I was startled by the findings,” says the paper's first author, epidemiologist Heena Brahmbhatt of Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland.

    In January 2002, Brahmbhatt, then a Ph.D. candidate at Hopkins studying mother-to-child transmission (MTCT) of HIV, read a report that an antimalarial drug might reduce HIV transmission to infants through their infected mother's breast milk. Intrigued by the possible impact of the drug, Brahmbhatt asked her adviser, Hopkins epidemiologist Ronald Gray, if she could look for the effect of malaria in data from Gray's well-known Rakai study. The study, conducted in collaboration with several Ugandan research groups, evaluated MTCT in 746 HIV-infected pregnant women and their babies in Uganda's Rakai district from 1994 to 1999.

    Brahmbhatt found 93 cases in which the researchers had ascertained the baby's HIV status and also had preserved the placenta. Of the 15 babies whose mothers had placental malaria, she found that six (40%) became infected. In contrast, HIV spread to only 12 of 78 infants (15%) whose mothers did not have malaria in their placentas. Brahmbhatt emphasizes that the sample size is small, and the findings are not conclusive. Still, the results were statistically significant, leading the authors to conclude that trials are “urgently needed” to evaluate whether giving HIV-infected pregnant women malaria prophylaxis can also reduce the transmission of HIV. “If our observations pan out, there may be a case for much more aggressive malaria suppression in HIV-infected women during pregnancy,” says Gray.
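    The “nearly three times the risk” figure can be checked directly from the counts the article gives. A one-sided Fisher exact test is one reasonable way to gauge significance on such a small 2×2 table (the paper's own statistical analysis may differ; this is a sketch of the arithmetic, not a reconstruction of it):

```python
from math import comb

# 2x2 table from the Rakai analysis: 93 mother-infant pairs,
# 15 with placental malaria, 18 HIV infections overall.
infected_exposed, n_exposed = 6, 15        # placental malaria
infected_unexposed, n_unexposed = 12, 78   # no placental malaria

# Relative risk of mother-to-child transmission
rr = (infected_exposed / n_exposed) / (infected_unexposed / n_unexposed)

# One-sided Fisher exact p-value: the chance of seeing 6 or more
# infected infants among the 15 exposed pairs, given 18 infections
# among 93 pairs overall (a hypergeometric tail sum).
N = n_exposed + n_unexposed                   # 93 pairs
K = infected_exposed + infected_unexposed     # 18 infections total
p = sum(comb(K, k) * comb(N - K, n_exposed - k)
        for k in range(infected_exposed, min(K, n_exposed) + 1)) / comb(N, n_exposed)

print(f"relative risk = {rr:.1f}")  # ~2.6, i.e. "nearly three times"
print(f"one-sided Fisher p = {p:.3f}")
```

The relative risk works out to 2.6, matching the article's characterization, while the small cell counts (six infections among 15 exposed pairs) underline Brahmbhatt's caution that the finding needs confirmation in larger studies.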

    Net gain.

    Protecting pregnant women from malaria may help thwart the spread of HIV from mother to child.


    Few studies have been done on the relationship of placental malaria and the maternal transmission of HIV. In 1998 a preliminary report on HIV-infected women in Kisumu, Kenya, hinted at a similar effect. But a lead investigator in that work, Richard Steketee of the U.S. Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia, says a more thorough analysis showed that, overall, placental malaria did not increase a woman's risk of transmitting HIV to her baby. “The [Rakai and Kisumu] studies don't match up, but that doesn't mean either one is wrong, because the methodology is different,” says Steketee, who heads CDC's malaria epidemiology branch.

    Steketee's results, in press at Emerging Infectious Diseases, offer a clue to the complex interaction of the two diseases in a pregnant woman. In Kisumu, the researchers assessed the parasitic burden in the placentas. Curiously, they found some protection from HIV when a placenta had a low level of parasites, but the risk of MTCT increased when the parasite density rose to high levels. Hopkins's Gray says he and his co-workers now plan to conduct a similar analysis in the placentas they collected.

    Steketee suspects that the intensity of the malaria infection modifies the mother's immunity in different ways, affecting HIV's ability to transmit in utero. “It would be important for someone to look at this again,” says Steketee. “These are not easy studies and they are not cheap, but the impact could be substantial.”


    New Telescope in Hawaii Is Big Step Up for Taiwan

    Dennis Normile

    TOKYO—The official dedication of the Submillimeter Array (SMA), taking place tomorrow atop Mauna Kea in Hawaii, marks the arrival of a new kid on astronomy's most famous block (see p. 1319) in more ways than one. The eight-antenna array is the first interferometric telescope to operate at submillimeter wavelengths, promising unprecedented views of star-forming regions and young galaxies where dust obscures radiation at longer wavelengths. But SMA is also a coming-out party for Taiwanese astronomers and their fledgling institute.

    The Institute of Astronomy and Astrophysics (IAA) of Taiwan's Academia Sinica in Taipei was created 10 years ago, but its scientists didn't have access to a world-class instrument. Academy officials decided that their best bet for reaching the big leagues would be to piggyback on SMA, being built by the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, by offering to add two antennas to the six originally planned. The idea, in the words of IAA's director, Sun Kwok, was to “offer young scientists a good reason to come to Taiwan.” And Nagayoshi Ohashi, a Japanese national and a physics graduate of Nagoya University, is living proof that the strategy has worked.

    Ohashi held a temporary post at the Smithsonian, working on SMA, when IAA came calling. The IAA position “gave me a chance to continue my work” on circumstellar disks, he says. “Because IAA is a young institute, I will have more of a chance to explore my scientific interests and develop my own group.” IAA has also hired scientists from Australia, France, Germany, Vietnam, and South Korea as well as mainland China. “We are not just a Chinese institution,” says Kwok.

    Dish it up.

    The Submillimeter Array atop Mauna Kea, Hawaii, will explore regions where stars and galaxies form.


    James Moran, SMA's director, says that the collaboration with IAA was a win-win situation for both institutions. One reason is cost: The Smithsonian Astrophysical Observatory, which began planning for SMA in 1984, has spent $80 million for the design and construction of six antennas, and IAA has chipped in another $12 million. The partnership has also created a more powerful instrument. Because image quality improves as a function of the number of baselines, or connections between any two antennas, Moran says that adding two antennas has “nearly doubled the efficiency of the array” by boosting the number of connections from 15 to 28. In addition, Taiwanese companies have built some of SMA's key components, and IAA has contributed significant brainpower to the project.
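    Moran's baseline arithmetic is simple combinatorics: each pair of antennas contributes one baseline, so an n-antenna array has n(n-1)/2 of them. A quick check (variable and function names here are illustrative):

```python
from math import comb

# Number of baselines (unique antenna pairs) in an interferometric array
def baselines(n_antennas: int) -> int:
    return comb(n_antennas, 2)  # n * (n - 1) / 2

print(baselines(6))  # 15 — the six antennas originally planned
print(baselines(8))  # 28 — after IAA added two antennas
```

Going from 15 to 28 baselines is an 87% increase, which is why adding just two antennas "nearly doubled" the array's imaging power.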

    IAA is already using SMA as a springboard to bigger and better things. Its scientists are developing a 19-element array on Hawaii's Mauna Loa to probe the age of the universe. “The experience gained on SMA has allowed us to do something on our own,” Kwok says. IAA is getting support from Australian scientists on the project, called the Array for Microwave Background Anisotropy. But this time, IAA is taking the lead.


    Japan 'Hopes' to Avoid Crash of Ill-Fated Mars Probe

    Govert Schilling*
    *Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    The Japanese space agency JAXA has all but given up hope of saving its crippled Mars probe. The only question remaining is whether it will crash ignominiously into the Red Planet next month.

    Launched in July 1998, Nozomi (“Hope”) has suffered numerous setbacks, including a fuel shortage and a damaging solar flare, which have delayed its arrival by 4 years. En route, a malfunctioning power system allowed the craft's rocket propellant to freeze, leaving ground controllers unable to carry out the prolonged firing needed to put Nozomi into its intended orbit.

    As of now, Nozomi is more or less on a collision course with Mars. If controllers can't solve the fuel problem by 9 December, they will use small alternate thrusters to prevent a possible crash of the spacecraft, which wasn't sterilized before launch. Nozomi will then end up drifting uselessly in a wide orbit around the sun.

    Nozomi was designed to explore the upper atmosphere of Mars and its interaction with the solar wind. The probe was to have measured atmospheric atoms that escape into space through a variety of mechanisms, providing clues about possible reservoirs of water and other materials.

    Lost hope.

    Japan's Nozomi spacecraft may crash on Mars or miss the planet completely.


    Don't bet on more martian wreckage just yet. According to Nozomi project manager Hajime Hayakawa, there is only a 1% chance that the craft will plunge through the martian atmosphere and crash on 14 December if Nozomi's rocket motor can't fire. But if that happens, the planet might be contaminated with terrestrial bacteria, because “no sterilization has been done before launch,” says Hayakawa.

    The biohazard risk is also low, according to John Rummel, NASA's planetary protection officer. He cites the extreme dryness of the planet, which is also pelted by blistering levels of ultraviolet radiation. Microorganisms on Nozomi are unlikely to have survived a 5-year trip through interplanetary space, he adds, and those that did would face the partial burn-up of the spacecraft in the martian atmosphere. “It's unlikely that there would be a significant contamination problem,” says Rummel.

    If Nozomi does crash, it won't be the first contaminated debris to reach the martian surface. In 1999, NASA's Mars Climate Orbiter crash-landed on the surface due to an engineering blunder (Science, 1 October 1999, p. 18). Like Nozomi, the orbiter was cleaned but not fully sterilized. And many doubt the claims of Soviet scientists that their Mars landers—the first to touch down in the 1970s—were fully sterilized.


    Has an Impact Done It Again?

    Richard A. Kerr

    Researchers claim to have found more proof of a second major impact that triggered a mass extinction. But impact geologists say the newcomers still don't have the right kind of evidence to indict a second killer.

    When geologists first proposed that the dinosaurs died off just when a meteorite plunged into the Gulf of Mexico 65 million years ago, the only evidence they offered was a geochemical oddity: an abundance of the rare element iridium. The metal must have been carried to Earth by the impact of a large meteorite, its discoverers argued. The scientific community was skeptical, but iridium eventually became one of two prime tools—the other being minerals scarred by the shock of impact—in the search for other extinction-triggering impacts.

    But 20 years later, that first generation of “impact geologists” is still looking for other impacts that also caused catastrophic extinctions. Part of the problem, say a growing number of researchers, is that geologists need more ways to recognize the traces of impacts in the geologic record.

    On page 1388 of this issue, petrologist and geochemist Asish Basu of the University of Rochester, New York, and four colleagues offer one such new marker for an impact-extinction link: tiny bits of meteorite they found in Antarctica. The researchers say that the fragments were created at the geologic instant of the Permian-Triassic (P-T) mass extinction 251 million years ago, the biggest of the Big Five mass extinctions. These meteoritic fragments join other nontraditional impact markers—gas-filled molecular carbon cages called buckyballs and odd metal grains—reported from the same P-T boundary rock. The new work “is clear evidence of an impact at the P-T boundary,” says Basu.

    E.T. messengers?

    Meteoritic fragments such as this one (top) are clearly extraterrestrial and come accompanied by exotic iron grains (bottom). But how could the fragments have survived for a quarter-billion years?


    Impact geologists tend to disagree. The proposed meteorite fragments and gases surely came from beyond Earth. But “things just don't fit cleanly with what we know and think we understand” about impacts, says meteoriticist and impact geologist David Kring of the University of Arizona in Tucson. Problems raised include how the chemically fragile bits of meteorite survived unaltered for a quarter-billion years and why most analysts have such a hard time finding the gas-trapping buckyballs, also called fullerenes. And there are persistent questions about why some of the world's largest volcanic eruptions seem to have gone off at the same geologic moment as great mass extinctions (see sidebar).

    A plethora of putative impact markers

    At the core of Basu and colleagues' claim of “new criteria for future investigations of the P-T and other boundary layers” is their discovery of 40 bits of meteorite 50 to 400 micrometers in diameter at what they take to be the P-T boundary at Graphite Peak in Antarctica. Microscopic inspection of mineral textures and electron microprobe measurements of composition show that the grains must be from a meteorite, says the group, which includes meteoriticist Michail Petaev of Harvard University.

    The fragments are not contamination from the field or the lab, the researchers argue. That's because samples were recovered from 10 to 20 centimeters beneath the rock surface, collected and conveyed to the group by two different fieldworkers, and analyzed in a lab that had never seen meteoritic material before and was therefore devoid of potential contamination. A similarly handled sample from 80 centimeters below the boundary contained no detectable meteoritic fragments.

    Basu and his colleagues also report a second, recently proposed type of impact marker in the same P-T boundary beds that yielded the meteoritic fragments: microscopic bits of nearly pure metallic iron. Their composition is neither terrestrial nor meteoritic, says the group, but they closely resemble particles reported at the reliably defined P-T boundary at Meishan, China, in 2001 by paleontologist Kunio Kaiho of Tohoku University in Sendai, Japan. Kaiho suggested that the iron—which has not been reported elsewhere—had condensed from the vapor of an impact cloud.

    A third nontraditional impact marker—extraterrestrial gases trapped in fullerenes—has also shown up at the Graphite Peak P-T boundary. Geochemists Luann Becker of the University of California, Santa Barbara, and Robert Poreda of the University of Rochester—who are co-authors of the Science paper—reported this year in Astrobiology that they found gas-charged fullerenes there, as they had in Meishan, China (Science, 23 February 2001, p. 1469).

    Intriguing but puzzling

    The latest signs of extraterrestrial debris at the P-T are getting a mixed reception. “I don't have any doubt that” the fragments are meteoritic, says meteoriticist Jeffrey Grossman of the U.S. Geological Survey in Reston, Virginia. “The mineral chemistry is the same, the texture is the same, there is some metal.” So no one is denying that these Antarctic fragments came from outer space. But meteoriticists and impact geologists alike are stunned that tiny, fresh-looking, unaltered fragments of a meteorite should have survived burial for 251 million years. “It's astonishing, it's incredible, it's unbelievable,” says Grossman. Adds Kring, “You have to ask how they got there.”

    Meteoritic minerals such as forsterite and metallic iron are “incredibly unstable at the surface of the Earth,” Kring notes; meteoriticists like to say that “If a meteorite falls in the [rainy U.S.] Pacific Northwest, it's going to be soil the next year.” Basu has no explanation for the extraordinary stability. “There's something peculiar about this deposit that needs to be worked out,” he says.

    Not every meteorite that falls to Earth weathers away without a trace, of course. Chunks of large impactors have in fact been recognized in the geologic record. Geochemist Frank Kyte of the University of California, Los Angeles, convinced colleagues that a few-millimeter piece of totally altered rock he found in 65-million-year-old North Pacific sediments is probably a bit of the dinosaur killer. And geochemist Birger Schmitz of the University of Göteborg in Sweden has found fist-sized meteorites in 480-million-year-old sedimentary rock in southern Sweden, although nothing unaltered was left except traces of the exceptionally resistant mineral chromite. Microscopic, unaltered meteoritic fragments surviving a quarter-billion years “would really be remarkable,” says Schmitz. “I get the gut feeling it's wrong.”

    Schmitz had the same feeling about the essentially unaltered micrometeorites found in samples of 1.4-billion-year-old sandstone from Finland. In their 1998 report in Nature, meteoriticist Alexander Deutsch of the University of Münster in Germany and three colleagues demonstrated that the spherules were without a doubt meteoritic. But they offered no explanation at the time of how the submillimeter objects survived a clearly hostile environment that long.

    The answer, it turned out, is that they hadn't. Deutsch and some of his colleagues from the Nature paper resampled the Finnish sandstone and concluded that it does not contain any micrometeorites whatsoever. They have failed to find any obvious route of contamination in the field or lab, however.

    A failure to replicate

    Impact geologists would like to see Basu's findings replicated before they accept them. But if fullerenes—the other hot alternative impact marker—are any example, that could take a while. Ten years after the first report of fullerenes in a geologic sample, “their origin and occurrence remain elusive,” wrote Peter Buseck of Arizona State University, Tempe, in a review last year. His 1992 paper in Science (10 July 1992, p. 215) was the first to report the detection of natural fullerenes, from a sample of an odd, organic-rich rock from western Russia called shungite. Since 1993, he's had a harder time of it. “We have flat-out failed at finding fullerenes” in anything since, he says. “We've given up. The only one that's been strikingly successful is the Becker group.”

    Becker and Poreda have indeed reported considerable success. They have published detections of fullerenes in three different organic-rich meteorites and in debris from three large impacts—the putative P-T impact (at Meishan, Graphite Peak, and Sasayama in Japan), the Cretaceous-Tertiary (K-T) impact at the end of the dinosaur age (five locations), and the 250-kilometer Sudbury impact crater in Canada. Some analyses have supported the existence of natural fullerenes. Geochemist David Mossman of Mount Allison University in Sackville, Canada, and his colleagues reported in the March issue of Geology that they found fullerenes in a Sudbury sample, although they found none in a shungite that they also examined. And Dieter Heymann of Rice University in Houston, Texas, and his colleagues reported fullerenes at the K-T boundary in a 1994 Science paper (29 July 1994, p. 645).

    But failures at finding fullerenes no doubt outnumber the reported successes. No one but Becker and Poreda has identified fullerenes in meteorites, despite considerable effort, most of it unpublished. Roger Taylor, one of the group at the University of Sussex, U.K., that first made fullerenes in the lab, reported in 2000 that a K-T sample lacked fullerenes at the part-per-trillion level. And geochemist Kenneth Farley of the California Institute of Technology in Pasadena has come up empty after looking for the helium-3 (but not fullerenes) supposedly trapped in fullerenes at the P-T boundary at Meishan and another location in China. “Fullerenes in the geologic record are still awaiting confirmation,” says organic geochemist Iain Gilmour of the Open University in Milton Keynes, U.K., who has had his share of failures in meteorites and at the P-T. “There's not a great incentive for people to chase things and not find them,” he says.

    Why all the conflicting results? Becker points the finger at her critics. “We're not talking about the same samples or doing the same experiment,” she says. “There's no attempt to replicate my results. You have to do the organic chemistry” the way she does in order to produce a clean extract containing the fullerenes.

    Many researchers see a larger problem in the samples being analyzed. No two analysts work on exactly the same boundary sample, despite sometimes receiving parts of a field sample from the same source. But Becker says they will be distributing Graphite Peak P-T samples to a number of different groups. There is also talk, but nothing more, of someone providing samples for a blind test, as happened in the course of the K-T debate.

    Just one impact extinction

    Acceptance of the K-T impact has led to impacts being proposed for most other major extinctions. Just last June, a group of scientists proposed that a comet triggered the Paleocene-Eocene extinction of 55 million years ago, and another presented evidence for the latest of several impacts in the mid-Devonian 380 million years ago. But all depend on iridium anomalies that are too small, on unconvincing shocked minerals, or on impact markers not yet generally accepted.

    Twenty years after the K-T impact gained convincing support, some impact geologists are getting discouraged by their failure to find a second example. “I've tried for 10 years to look for impact layers,” says Schmitz. “I almost ruined my career. I have lots and lots of negative data in my drawers. This is evidence for the uniqueness of the K-T boundary.”

    But Schmitz is not giving up. Basu and colleagues “say the haystack is a needlestack,” he says. “Who knows, maybe they are right,” and bits of killer meteorite will soon be turning up everywhere.


    Extinction by a Whoosh, Not a Bang?

    1. Richard A. Kerr

    Ever since scientists realized that a huge impact gave the dinosaurs their comeuppance 65 million years ago, extraterrestrial whacks have become the leading suspect in almost every mass extinction (see main text). But some say that massively erupting volcanoes are capable of similar mayhem. One new study fingers the largest volcanic eruption ever as the leading cause of the greatest mass extinction 251 million years ago. At the same time, however, in this issue of Science (p. 1392), a second study seems to diminish greatly the role of another great eruption in the death of the dinosaurs.

    The problem with volcanoes as extinction triggers has been the difficulty of pinning down when they spewed their noxious, climate-altering gases. The stunningly exact coincidence of the impact 65 million years ago with the disappearance of numerous creatures in the sea was obvious once scientists recognized impact debris in the geologic record within a hair's breadth of the last fossils of doomed animals. But the huge, although relatively quiet, nonexplosive lava eruptions called flood basalt eruptions didn't leave an obvious mark in the records of mass extinctions. So it's been up to geochronologists to link the two events by dating both eruptions and extinctions.

    Eruption, then impact.

    A declining osmium isotope ratio denotes an earlier volcanic eruption, and a downward spike marks the dinosaur-killer impact.


    What may be the best possible geochronological tie between eruption and extinction appeared in the 10 September issue of Earth and Planetary Science Letters. Geochronologist Sandra Kamo of the University of Toronto and her colleagues reported a new uranium-lead radiometric age for the beginning of the Siberian Traps eruption—the most voluminous known—of 251.7 ± 0.4 million years. According to her second new date, 6.5 kilometers of lava and ash—most of the Siberian Traps production—had erupted by 251.1 ± 0.3 million years ago. And the midpoint of this eruption falls exactly at the uranium-lead age for the great Permian-Triassic mass extinction, as determined in 1998 by Samuel Bowring of the Massachusetts Institute of Technology and his colleagues.

    Similar improvements of radiometric ages for eruptions and extinctions are tightening the links between a number of mass extinctions and flood basalts. But uncertainties of many hundreds of thousands of years remain. Already, a paper in this issue of Science looks to break up another proposed eruption-extinction pair.

    Geochemists Gregory Ravizza of the University of Hawaii, Manoa, and Bernhard Peucker-Ehrenbrink of the Woods Hole Oceanographic Institution in Massachusetts propose a new geologic marker of volcanism: the mix of stable osmium isotopes recorded in ocean sediments. In sediment cores from around the world, they trace a steady decline in the ratio of osmium-187 to osmium-188 starting about 66.0 million years ago, followed by a sharp downward spike 65.5 million years ago. Ravizza and Peucker-Ehrenbrink attribute the initial decline to the Deccan Traps of India. As the traps weathered, they fed low-ratio osmium into the sea. Half a million years later, the asteroid or comet that took out the dinosaurs dumped even more low-ratio osmium, say the geochemists, causing the downward spike.

    The Deccan Traps eruption came half a million years too soon to trigger the extinction. However, the greenhouse gases from the eruption probably set off the warming recorded then, Ravizza says. That warming possibly weakened the biota before the devastating impact. If this timing is correct, one thing is certain: Impacts trigger extinctions, but they don't trigger great eruptions.


    Public Projects Gear Up to Chart the Protein Landscape

    1. Robert F. Service

    Researchers in industry as well as those in the public sector seek the protein equivalent of the human genome sequence

    MONTREAL, CANADA—With the human gene sequence now in hand, researchers have moved on to a new goal: identifying all the body's proteins. The task is massive. Not only does each of the body's 252 cell types harbor its own complement of proteins, but their expression patterns also vary with age, nutrition, health, and disease.

    These difficulties haven't deterred researchers who want to determine the body's complete set of proteins—the proteome, as it's called. Buoyed by hopes that their efforts will identify proteins that can serve as both markers for disease and targets for new drug therapies, the pharmaceutical industry jumped into the research a few years ago, investing hundreds of millions of dollars in high-speed protein-tracking technology (Science, 7 December 2001, p. 2074).

    More recently, the public funding agencies have gotten into the act, launching their own large-scale proteomics projects. They say they had little choice, as the corporate push has left university researchers out in the cold. “Companies don't open their resources for academics,” says Marius Ueffing, a proteomics researcher at the German Society for Proteome Research and the Institute of Human Genetics in Munich.

    To fill that gap, four separate public international proteomics initiatives have been launched over the past year and a half. Three of them, spearheaded by researchers in the United States, Germany, and China, are aimed at tracking down all the proteins in human blood plasma, the brain, and the liver. A fourth effort seeks to create antibodies against thousands of human proteins, a resource that should help researchers devise other protein-tracking tools.

    Additional proteome projects are looming, with kidney, muscle, heart, and saliva among the possible targets. “We are being bombarded by groups that want to have this or that initiative,” says Samir Hanash, president of the Human Proteome Organization (HUPO), an international coordinating group. Once researchers have collected snapshots of the ever-changing proteomes of different tissues and cells, they hope to assemble them into a kind of full-length movie showing the ebb and flow of proteins in the body.

    Speed boost.

    An international effort to churn out 10,000 antibodies to human proteins could enable the development of protein-tracking technology, such as the antibody microarrays produced by this machine.


    In addition to these major efforts, research groups are also pursuing numerous smaller efforts, such as tracking down all the proteins in subcellular structures—including the mitochondria and Golgi—as well as in various microbes (see sidebar). “It's a very, very hot field,” says Fuchu He, director of China's Beijing Institute of Radiation Medicine and head of HUPO's liver proteome project.

    Initial plans and results from many of these projects were on display at HUPO's 2nd Annual World Congress held here 8 to 11 October. Just how they will unfold is uncertain. “We're still early on and testing the waters,” Hanash says.

    Indeed, large-scale programs are likely to be far more difficult to pull off with proteins than with genes. The chemistry is more variable, for one. Some proteins reside in watery environments such as blood, for example, whereas others hide out in the fatty membranes surrounding cells. Proteins range considerably in size, from 5000 to more than 1 million daltons. They differ in their electrical charges. And most challenging of all, most proteins exist only in vanishingly small quantities. “This is a nightmare analytically,” says Thomas Conrads, a biochemist at the National Cancer Institute at Frederick (NCI-Frederick) in Maryland.

    The equipment is more demanding, too. Whereas genome researchers could concentrate their resources on a single technology, sequencing machines, that's not possible in proteomics. “No method right now is able to analyze a complete single proteome,” says Thierry Rabilloud, a proteomics expert at the French Atomic Energy Commission in Grenoble. As a result, proteomics researchers must make hard choices about what to go after. “You have to reduce the proteome to manageable units and define achievable goals,” Hanash says.

    That's where HUPO has stepped in. Launched in 2001, this loosely knit federation of proteome researchers doesn't dole out any research funds itself. That money comes from traditional biomedical funding agencies in participating countries. But HUPO helps set priorities, coordinate research, set standards for handling and processing samples, and arrange for the use of common bioinformatics tools to ensure that researchers can directly compare their results.

    HUPO didn't waste any time picking favorites. The organization quickly targeted blood plasma as its first priority, aiming to discover blood-borne proteins indicative of disease. The $1 million pilot project was launched in April 2002 and currently consists of researchers from 47 labs around the globe, including 28 in the United States.

    For now, HUPO's Plasma Proteome Project (PPP) is focused on comparing the strengths and weaknesses of different protein-hunting technologies, such as two-dimensional gel electrophoresis and liquid chromatography for separating proteins, and various versions of mass spectrometry for identifying them.

    In July, PPP's leaders mailed out standardized plasma samples to participating labs and asked each to use its technique of choice to separate and identify as many proteins as possible. HUPO leaders plan to set recommendations about which technologies are most appropriate for tracking down different subsets of plasma proteins. Initial results, which started coming in last month, suggest that the task won't yield simple answers, however.

    Conrads, for example, described how the NCI-Frederick team had managed to sift out some 1444 plasma proteins using a standard technique—first separating out the high-abundance proteins, dividing the rest of the sample into numerous fractions, and then scanning each one using an electrospray mass spectrometer. That laborious effort ensures that researchers don't see only common proteins such as albumin, which constitutes up to 50% of the total protein in plasma.

    Where's Waldo?

    Most plasma proteins, including potential diagnostic markers and drug targets, are present in only tiny quantities.


    But discarding the high-abundance proteins comes at a cost. When the NCI-Frederick team took a closer look at such proteins, they found that they readily bind a wide variety of low-abundance proteins, acting like molecular sponges. The researchers identified 341 different proteins and peptides bound to albumin alone. Other sets of proteins bind to other highly abundant proteins such as antibodies. “This was rather stunning to us,” Conrads says. “Each one of these proteins is binding different peptides. So there does seem to be some precise interaction.” Although it's still too early to be certain, Conrads says it might be possible to use common proteins to track down the low-abundance proteins and peptides that most groups are interested in as potential diagnostic markers.

    PPP director Gilbert Omenn of the University of Michigan Health System in Ann Arbor says that other techniques will have their own strengths and weaknesses. It's still too early to make meaningful comparisons, he adds. He expects that the final results of the plasma analyses should be in by the end of the year.

    PPP aims to move on to a full-scale plasma project, attempting to correlate changes in the abundance of select proteins with various diseases. There could be a snag, though. The project could cost about $50 million, Omenn says, and he hasn't yet identified a funding source. But he is confident that “there will be a major follow-up.”

    HUPO's Human Brain Proteome Project (HBPP) is also up and running. Begun in April as a pilot project, HBPP builds on a brain proteome project begun in 2000 and backed by $17 million from the German ministry of research. The collaboration aims to sort out technology and standards issues, focusing at first on tracking down the proteins in the substantia nigra and hippocampus, the brain regions that degenerate in Parkinson's and Alzheimer's diseases.

    Project co-leader Helmut Meyer, a protein chemist at the University of Bochum in Germany, says that HBPP researchers hope to find proteins that mark the early disease stages, because most damage to brain cells occurs before the first symptoms show up. If the researchers find these markers they can ask, “Are they already visible when people are 30 or 40 years old?” Meyer says. The HBPP team will also scan cerebrospinal fluid and blood plasma for proteins linked to brain diseases.

    The early results reveal some tantalizing hints of what's to come. At the meeting, HBPP co-director Joachim Klose, a protein scientist at Humboldt University in Berlin, described a series of experiments with mice. Klose and colleagues tracked changes in the abundance of 250 brain proteins as the mice grew from embryos to aging adults. As expected, the researchers found that the overall amount of proteins remained essentially constant from a few days after birth until the animals died.

    On the hunt.

    Proteomics researchers hope that their quest to find thousands of novel proteins will turn up good candidates for new therapeutic drugs such as the so-called Src kinase, which has been implicated in some cancers.


    Even so, the abundance of a large percentage of different proteins changed considerably during the animals' early growth. But the researchers were somewhat surprised to find that nearly 20% of brain proteins continued to change their abundance levels when the animals were in the final stages of life. The results, Klose says, suggest that changes in members of this protein subset could be linked to disease.

    The Human Liver Proteome Project (HLPP), meanwhile, is backed by an initial round of $25 million in funding from the Chinese government for a 3-year pilot study to be completed in 2005. The effort was launched in May and is aimed at setting up the collaborations, standards, and procedures for tallying the thousands of proteins expressed in human liver cells. So far, 79 labs, 37 of them in China, have signed on to the liver proteome effort. The project's leader, Fuchu He, says he expects a full-scale production phase to follow from 2006 to 2010.

    As with the other proteome efforts, the idea is to ultimately link liver-specific proteins to diseases such as hepatitis and liver cancer. According to He, the Chinese government has promised to kick in another $250 million if HLPP makes it into the production phase. This commitment, He says, stems from the fact that liver diseases kill hundreds of thousands of people in China each year.

    Proteomics experts caution, however, that they can't link a change in protein expression to disease from just one or two samples. “If you want sound results, you will have to repeat it five or 10 times,” Klose says. That's not likely to be accomplished with the large proteome projects.

    But confirming a protein linkage to disease should become much easier if HUPO's fourth initiative—to make a vast library of antibodies against human proteins—succeeds. Such antibodies could be used to track a particular protein in many people as a way of confirming its involvement in a disease. In addition, Omenn says, this project will help researchers in each of the other initiatives create protein microarrays and other tests for tracking the ebb and flow of thousands of proteins simultaneously in different tissues.

    Both HBPP and HLPP have dedicated a significant portion of their early funds to antibody production. And a group led by Ueffing is applying to companies and the European Union for an initial round of $12.5 million in research funding on what it hopes will eventually become a $60 million initiative to raise antibodies against 10,000 human proteins. If researchers line up the money, the antibodies produced could make life easier for proteomics researchers.


    A Sharper Focus

    1. Robert F. Service

    The large-scale proteomic surveys coordinated by the Human Proteome Organization (HUPO) have a long way to go to produce their promised medical benefits (see main text). But more-focused proteomic projects may have a more immediate impact. Researchers around the globe are hard at work using proteomic technologies to identify the proteins that make up the mitochondria, the cell's energy powerhouse, and other subcellular structures. A talk given last month at HUPO's annual meeting by University of Montreal cell biologist Michel Desjardins provides an example of how proteomic tools can lead to crucial new insights into cell biology as well as potential medical benefits.

    Over the last few years, Desjardins and colleagues used a standard combination of techniques known as two-dimensional gel electrophoresis and mass spectrometry to separate and identify proteins from phagosomes, transitory saclike structures involved in engulfing and eliminating foreign invaders ranging from dust particles to pathogenic bacteria. A 2001 paper by the scientists in the Journal of Cell Biology revealed that they had identified some 150 proteins.

    When the researchers scanned the literature to find out the functions of these proteins, one mystery stood out. A protein called flotillin was known to be present on the outer cell membrane, where it apparently helps other membrane-bound proteins assemble into so-called lipid rafts. Rafts, which may help coordinate cell-signaling pathways by organizing the membrane proteins that transport substances such as hormones and growth factors into cells, had not previously been spied on the surface of phagosomes. But, Desjardins says, flotillin's presence there suggested that phagosomes might also contain them. A series of traditional cell biology tests confirmed the hypothesis. The work shows that proteomics has “enormous potential” for leading biologists to new molecular insights, says cell biologist Kathryn Howell of the University of Colorado Health Sciences Center in Denver.

    But Desjardins's team went one step further. The researchers knew that certain pathogens, such as the parasite that causes leishmaniasis, are able to survive in phagosomes. So they decided to test whether they do so by disrupting raft formation. And here too they received a pleasant surprise. A series of cell biology studies revealed that a molecule called lipophosphoglycan (LPG) that is abundant on the surface of Leishmania parasites inhibits the formation of lipid rafts. In a way not yet understood, this prevents the organism from being killed in the phagosome.

    For drugmakers interested in fighting Leishmania, which is thought to infect some 2 million people around the world, LPG “presents an ideal target,” Desjardins says. For example, it may be possible to design a small drug molecule that blocks its effects on rafts.


    Tailor-Made Weather Forecasts Help Star-Gazers Look Ahead

    1. Dennis Normile

    At the cluster of observatories atop Hawaii's Mauna Kea, a high-resolution supercomputer model is enabling researchers to fine-tune their observations of the cosmos

    TOKYO—Forecasting weather conditions atop Hawaii's Mauna Kea, recalls astronomer Doug Simons, used to mean “scrounging for the local Big Island newspaper, which had the high and the low and maybe a day-old satellite photograph.” But “rolling the dice” in hopes of optimizing use of a billion dollars' worth of world-class telescopes wasn't nearly good enough for Simons, associate director for instrumentation at the Gemini Observatory, which operates 8-meter telescopes on Mauna Kea and on Cerro Pachón in Chile. What was needed, he and others decided, was detailed, reliable predictions of nightly observing conditions at the Hawaiian site.

    Today, after 5 years of work, astronomers using Gemini and nine other telescopes on Mauna Kea receive what may be the most precisely customized weather forecasts anywhere outside classified military efforts. Using a Japanese supercomputer, a modeling program, and some specialized weather algorithms, scientists at the University of Hawaii at Manoa's Mauna Kea Weather Center issue twice-daily updates on details of particular interest to astronomers—including temperature, high-altitude cloud cover, atmospheric water vapor, and turbulence. “It's paradigm breaking,” crows Gemini director C. Matt Mountain. “These large telescopes cost a dollar a second to run, and you can now use that time very effectively.”

    The creation of this customized service started after Simons, a veteran of Mauna Kea observatories, joined Gemini in the mid-1990s and was asked to improve weather forecasts. The National Weather Service's model essentially treats the Hawaiian Islands as a single 80-km-square data point, and its sea-level forecasts were of little use to astronomers working at 4000 meters, above low-lying clouds. The National Weather Service's Honolulu office couldn't provide much help. But Steven Businger, a university meteorologist based in the same building, took an interest in the project. “I overheard the conversation between Simons and the staff,” he recalls. Before the day was out, the two had drawn up a list of objectives—most of which required access to a supercomputer. Japan's National Astronomical Observatory was building its Subaru Telescope at that time, with a supercomputer among its support facilities. “It was a resource we could share with the community,” says Subaru's director, Hiroshi Karoji.

    To get a forecast specific for Mauna Kea, Businger and his team take the National Weather Service's global model, add data from weather balloons and ground stations, and feed the initial conditions into a version of the fifth-generation Pennsylvania State University-National Center for Atmospheric Research Mesoscale Model, or MM5, tuned to give high-resolution results for Mauna Kea.

    Cool and clear.

    Computer-assisted foresight helps keep Gemini at the right temperature for sharp views of the heavens.


    The model projects the weather over the next 42 hours, computing key variables such as temperature and precipitable water vapor at 6-hour intervals and generating graphics for features such as high-altitude clouds. The information is posted on the weather center's Web page. “Now the [operating team] reads the forecast over dinner and adjusts the plan for any given night,” Simons says. Too much water vapor, and they'll forget about infrared observations. Light clouds are bad for imaging but may allow spectroscopy, and so on.
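The nightly planning Simons describes amounts to a simple decision rule applied to a handful of forecast variables. The sketch below is purely illustrative; the thresholds, mode names, and the `plan_night` function are invented, not the weather center's actual criteria:

```python
# Hypothetical sketch of forecast-driven observation planning.
# Thresholds and observing modes are invented for illustration;
# the Mauna Kea Weather Center's real criteria are more nuanced.

def plan_night(precipitable_water_mm: float, cloud_cover: str) -> str:
    """Pick an observing mode from two forecast variables."""
    if precipitable_water_mm > 3.0:      # too humid for infrared work
        return "skip infrared; schedule optical program"
    if cloud_cover == "light":           # clouds spoil imaging, not spectroscopy
        return "spectroscopy only"
    if cloud_cover == "clear":
        return "imaging and spectroscopy"
    return "close dome"

# Forecasts arrive at 6-hour steps over a 42-hour window.
forecast_hours = list(range(0, 43, 6))   # 0, 6, 12, ... 42
```

Reading such a table over dinner, as the operating team does, is enough to reshuffle a night's instrument queue before the dome ever opens.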

    The Mauna Kea Weather Center debuted in 1999 and proved to be “a real success,” says Robert McLaren, associate director of the Institute for Astronomy at the University of Hawaii, Manoa. Initially a 2-year experiment, it is now a permanent part of the Mauna Kea infrastructure. Ten of the 13 Mauna Kea observatories split the $165,000 annual cost, which supports a forecast meteorologist and a researcher continually tweaking the models and algorithms. “I don't think there is anything like this at any other observatory,” McLaren says.

    The heaviest consumers of the information are optical astronomers, whose large telescopes face the greatest weather-related challenges. To avoid image-distorting heat fluxes, for example, they must keep the temperature of the primary mirror and the enclosure within 1°C or so of the ambient temperature. Operators begin cooling the massive structures—Gemini contains about 300 tons of steel—in the afternoon and once could only guess at the nighttime temperature. “Now we rely on the forecasts,” says Subaru's Karoji.
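The afternoon cooling decision can be framed as back-of-the-envelope arithmetic: how many hours of pre-cooling are needed for the mirror and enclosure to reach the forecast nighttime temperature within the ~1°C tolerance the article cites. The cooling rate and the `cooling_start_hours` helper below are invented for illustration:

```python
# Hypothetical sketch of the afternoon cooling decision.
# The ~1 C tolerance comes from the article; the cooling rate
# and function are invented for illustration.

def cooling_start_hours(mirror_temp_c: float,
                        forecast_night_temp_c: float,
                        cooling_rate_c_per_hour: float = 0.5) -> float:
    """Hours of pre-cooling needed to bring the mirror within
    ~1 C of the forecast nighttime temperature."""
    excess = mirror_temp_c - forecast_night_temp_c - 1.0  # allowed offset
    return max(0.0, excess / cooling_rate_c_per_hour)

# Mirror at 8 C, forecast night of 2 C: 5 C of excess heat to shed
# at 0.5 C/hour means starting the chillers 10 hours ahead.
```

Without a trustworthy nighttime forecast, operators could only guess at the target temperature; with one, the start time falls out of a single division.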

    Subaru and Gemini both make use of several other forecast parameters, including the precipitable water vapor, cloud cover, and “seeing,” a measure of air turbulence, which can distort incoming images. The major benefit of these forecasts, Gemini's Mountain says, is in operating the telescopes more efficiently, or “doing the best possible science for what the conditions are.”

    The biggest remaining obstacle may be teaching astronomers more-flexible work habits. Traditionally, even if the forecasts warn of bad infrared conditions, astronomers tend to work with the scheduled instrument instead of, say, switching from the telescope's midinfrared instrument to its multi-object spectrograph. The model must become more accurate, too. Businger readily admits that a dearth of observational data prevents his team from improving its rough forecasts of atmospheric water vapor and seeing. Help should arrive next year, in the form of a seeing monitor on the Canada-France-Hawaii Telescope that will enable the weather center team to refine its forecast algorithms.

    Even with perfect foresight, Mountain says, some scientists would still balk at having their observing schedules disrupted at the last minute. “There are astronomers who believe that being at the scope is where you really innovate,” he says. But with more of astronomy's breakthroughs coming from analyzing data sets on computers, forecast-driven observing programs seem to be an idea whose time has come. In any case, Mauna Kea researchers agree, the days of scrounging for forecasts in the local newspaper are gone for good.


    Caudate-Over-Heels in Love

    1. Laura Helmuth

    NEW ORLEANS, LOUISIANA—Almost 29,000 brainiacs gathered here 8 to 12 November to discuss neuroscience at all levels, from genes to neurotransmitters to behavior. Fear, sex, synapses, and true love were some of the many crowd pleasers.

    People falling in love use emotion-laden words, like giddy, passionate, and blissful, to describe the sensation. But a brain-imaging study presented at the meeting suggests that the brain experiences the first heady days of love more as a drive or motivation than an emotion. Another study, on the intimacies of romance, found that an orgasm involves mostly the same brain regions in men and women.

    The college students participating in the first study—seven men and 10 women—had been with their “one true love” for between 2 and 17 months. They displayed all the classic, feverish, delusional symptoms: obsessive thinking about their partners, sleeplessness, and euphoria when things are going well. They topped the charts of a lab standard called the Passionate Love Scale, which asks how strongly participants agree with statements such as “Sometimes my body trembles with excitement at the sight of [my partner].”

    These lovebirds then went into a functional magnetic resonance imaging scanner, watched closely by a team including psychologist Arthur Aron of the State University of New York, Stony Brook, neuroscientist Lucy Brown of Albert Einstein College of Medicine in New York City, and anthropologist Helen Fisher of Rutgers University in New Brunswick, New Jersey. The subjects saw pictures of their beloved interspersed with pictures of another familiar, but emotionally neutral, face.

    Regions of the brain involved in the motivation and reward system lit up in response to the loved one, including parts of the caudate nucleus and the ventral tegmental area. The men's and women's brain activity patterns were very similar. One of the areas activated in men but not women had previously been linked to penile turgidity.


    Brain-imaging studies are taking some of the mystery out of love and sex.


    These results differ from those of a previous study of romantic love. It imaged the brains of people who'd been in longer relationships—on average more than 2 years—and found lots of activity in emotional areas such as the insula and anterior cingulate. Fisher and colleagues reexamined their data and found that the subjects in relatively longer-term relationships also activated these emotion centers when viewing their loved ones, suggesting that the brain traces change as love evolves.

    “It's a very thoughtful study,” says cognitive neuroscientist John Gabrieli of Stanford University, pointing out that the researchers controlled for many variables that might have made the findings suspect, such as the attractiveness of the comparison faces.

    The meeting also provided an answer to the classic postcoital question of trashy novels: “How was it for you, dear?” The correct reply is: “The same as it was for you.”

    Earlier this year, neuroscientist Gert Holstege and colleagues at the University of Groningen, the Netherlands, reported the brain regions that glow under a positron emission tomography (PET) scanner when men ejaculate. For the new study, eight female subjects lay in the PET scanner while their male partners manually provided the necessary stimulation. Four reached orgasm and some did so repeatedly, for a total of eight data points. Holstege found that, during orgasm, women's brains show about the same pattern of widespread activity as men's. Compared to clitoral stimulation alone, orgasm caused greater activation in several parts of the brain, including the same reward region tickled by romantic love, the ventral tegmental area.

    The main difference between the sexes was female activation of a midbrain area called the periaqueductal gray. It's the sine qua non of the female sexual response in cats, rats, and hamsters; if it's damaged, the animals don't assume a mating position. Other than that, the brain activity for women “is very much the same as during ejaculation in males,” says Holstege.


    CO Gas Joins Brain Signaling Team

    1. Ingrid Wickelgren


    A team of neuroscientists at Johns Hopkins University in Baltimore, Maryland, made huge waves a dozen years ago when it proved that a gas—nitric oxide (NO)—could relay messages between nerve cells. Unlike all other types of so-called neurotransmitters, a gas can be neither stored inside a neuron nor carefully controlled after its release—and thus NO violated some sacred tenets of neuroscience. Now the same team, led by neuroscientist Solomon Snyder, is pushing to admit another gaseous member to the neurotransmitter club: carbon monoxide (CO).

    The group's work, some of which was described by postdoc Darren Boehning at the meeting, shows that the enzyme that produces CO in nerve cells has a switch that can be flipped “on” every time a neuron fires. In fact, control of the enzyme, called heme oxygenase-2 (HO2), is strikingly similar to control of the one that makes NO. The findings go a long way toward giving CO full neurotransmitter status.

    “The possibility of CO being a neurotransmitter has been greeted with some skepticism because the enzyme that produces CO had not been shown to be regulated,” says Paul Greengard, a neuroscientist at Rockefeller University in New York City. The Snyder lab's recent discoveries, Greengard adds, suggest that CO may be involved in a neural signaling pathway “of major importance.”

    Snyder's team has shown CO to be a major player in the olfactory system and gastrointestinal tract, enabling the perception of odors and ordinary movement of the lower intestine. In addition, CO is required for normal ejaculation in mice. Thus, a drug that blocks CO production in the vas deferens, the duct through which sperm are expelled, is a potential treatment for premature ejaculation, Snyder says.

    Carbon monoxide signaling.

    When a neuron fires, calcium ions move in. This activates the CO-producing enzyme HO2 in two ways. It initiates phosphorylation of the enzyme and flips a calmodulin protein switch.


    Until recently, CO was thought to be nothing more than a waste product of heme oxygenase-1, which breaks down the iron-containing pigment heme in aging red blood cells. Then in the early 1990s, Ajay Verma, a graduate student in the Snyder lab, began looking for other gaseous neurotransmitters to accompany NO. He suspected that a CO neurotransmitter, if such a thing existed, might be produced by a second form of heme oxygenase, HO2, that had been found in brain tissue but whose function was unknown.

    Verma, Snyder, and their colleagues soon found evidence for that idea when they discovered that HO2 is not scattered diffusely through the rat brain but is instead localized to several discrete areas as well as to olfactory neurons. The researchers also saw that HO2 stimulates the same enzyme, guanylyl cyclase, that NO activates in neurons that receive its signal.

    Then in the late 1990s, the Snyder team examined mice in which the HO2 gene had been disabled. The mice had impaired intestinal function: Opaque pellets moved abnormally slowly through their intestines. The males also ejaculated very weakly, suggesting that CO controls ejaculation, complementing NO, which enables erection.

    But the researchers still had no idea how CO production might be regulated. This was a major gap, considering that gases, unlike more ordinary neurotransmitters, cannot be stored for release but must be made on demand when a neuron is stimulated.

    The first clue to CO's regulation emerged 25 September in Neuron. Boehning, Snyder, and their colleagues reported that in a test tube assay, an enzyme called CK2 puts a phosphate group onto HO2, thereby activating it. The same thing happened when they stimulated cultured neurons from rat brain, rat olfactory bulb, and mouse colon. In the brain cultures, CK2 was, in turn, activated by the enzyme protein kinase C (PKC), which phosphorylates it. And neural activity is known to excite PKC by triggering calcium ion entry into the neurons.

    Thinking he had an activation mechanism for HO2, Boehning wasn't looking for another one. But 4 months ago, he was screening proteins to see if any bind HO2 and was stunned to find that calmodulin adheres to it strongly. Calmodulin, he knew, regulates NO production by binding to the enzyme nitric oxide synthase (NOS): When a neuron fires, calcium rushes into the cell and sticks to the calmodulin on NOS. This in turn activates the enzyme. Boehning immediately realized that CO might be made in the same way. “It was a eureka moment,” Boehning says.

    In results reported at the meeting, Boehning found that calcium and calmodulin dramatically activate HO2 in the test tube, whereas inhibitors of calmodulin block HO2 activity. This provides the first strong evidence that calmodulin turns on HO2 in response to fast, millisecond pulses of neuronal activity and the transient calcium ion influxes they trigger. How this relates to the previously described phosphorylation pathway for HO2 activation is unclear, but the researchers suggest that calmodulin's effects may be short-term, with phosphorylation ramping up HO2 activity in the same neurons for seconds to minutes to respond to longer-lasting changes in bodily states.

    Boehning says that more work is needed to confirm calmodulin's role in living tissues and to pin down HO2's functions in the brain. Assuming that can be done and neuroscientists come to accept that CO is a second gaseous neurotransmitter, the current views of neuronal organization in the brain may have to be revised.

    Among other things, the existence of gaseous neurotransmitters challenges the long-held notion that brain neurons are hard-wired to one another, says neurologist Gavril Pasternak of Memorial Sloan-Kettering Cancer Center in New York City. Because gases can diffuse, Pasternak notes, they could, in theory, transmit messages to any of various nearby neurons rather than just a single target. “It really makes you think again about how the brain works,” he says.


    Pills and Games Help Conquer Fear

    1. Greg Miller


    Riding a glass elevator to dizzying heights inside the courtyard of a high-rise hotel is an acrophobe's nightmare. But a virtual reality game that simulates this experience, combined with a drug that revs up certain learning circuits in the brain, is helping people overcome their fear of heights, researchers announced at the meeting.

    Counterintuitively, many neurobiologists see forgetting fearful experiences as a special kind of learning, and they've found considerable overlap in the neural mechanisms underlying these seemingly different processes. In the mid-1990s, for example, neuroscientist Michael Davis and colleagues found that a particular receptor for the neurotransmitter glutamate—the so-called NMDA receptor—plays a critical role in rats' ability both to learn that a flash of light signals an impending electrical shock and to forget their fear of the light after the researchers stop pairing flash with zap. More recently, Davis, now at Emory University in Atlanta, Georgia, and others have found that drugs that enhance NMDA receptor activity in rats help the animals lose their fearfulness more quickly.

    Going up.

    A virtual elevator and a drug help acrophobes learn how to brave heights.


    To see if such drugs could help humans with irrational fears, Davis and colleagues Kerry Ressler and Barbara Rothbaum gave a drug called D-cycloserine (DCS) to people who were terrified of heights. Subjects were given either DCS or a sugar pill (neither they nor the experimenters knew which) just before strapping on virtual reality goggles and taking the elevator as far up as they could bear. They could push themselves even further by walking out on a narrow virtual catwalk suspended high above the ground.

    Although DCS didn't reduce the subjects' self-described level of discomfort immediately, the way an antianxiety drug such as Valium might, subjects who took it were markedly bolder in subsequent virtual reality sessions. Many subjects who got DCS took the elevator all the way to the top floor after just two such sessions, compared with seven or eight sessions for control subjects. And most importantly, Davis says, those who got DCS were twice as likely to drive on high bridges or use an elevator in real life.

    “I think it's great,” says Mark Barad, a psychiatrist and neuroscientist at the University of California, Los Angeles. “This is the first example of a new way of doing psychiatry, by using drugs not to treat the illness but as a way of making therapy work.”


    Neurons Get Connected Via Glia

    1. Greg Miller


    Glial cells once had a reputation as the support staff for neurons, the real movers and shakers in the nervous system. In recent years, however, researchers have gleaned hints that glia, which make up about 90% of the cells in the brain, do more than just maintenance work. New findings presented at the meeting suggest that in the crucial task of building synapses—the contact points that allow neurons to talk to one another—glia may even run the show.

    The study firms up earlier evidence that glial cells called astrocytes instruct neurons to make synapses and identifies the first extracellular signal known to spur synapse formation in the brain. “That's a pretty big deal,” says Michael Ehlers, a neuroscientist at Duke University in Durham, North Carolina.

    Although the signals that establish synapses between motor neurons and muscle fibers are fairly well understood, neuroscientists have searched in vain for such signals within the brain and spinal cord. A clue that glia might be the source of the message came in 2001, when Ben Barres and colleagues at Stanford University reported that lab-grown rat neurons make more synapses in the presence of astrocytes (Science, 26 January 2001, p. 657).

    Construction sites.

    Astrocytes tell neurons to build synapses.


    To hunt for the “build synapses” signal, postdocs Karen Christopherson and Erik Ullian in Barres's lab grew astrocytes in culture and filtered the solution bathing the cells before they added it to neurons. To their surprise, these experiments indicated that the chemical signal was a whopper—more than 300 kilodaltons.

    Combing the literature for jumbo proteins made by astrocytes produced a short list. It included a protein called thrombospondin, which has a variety of biological roles but whose function in the nervous system was obscure. Purified thrombospondin mimicked the synapse-forming effect of astrocyte culture medium, the researchers discovered, and the more they added, the more synapses sprouted. Furthermore, the time course of thrombospondin expression in the developing brain mirrors that of synapse formation, they found, putting the protein in the right place at the right time to hook neurons up to one another.

    It makes sense that a large, multipurpose protein like thrombospondin would turn out to be behind synapse formation, Ehlers says: “It can bind and regulate all sorts of things. You can imagine it bringing together a variety of surface proteins, a variety of components needed to assemble something like a synapse.”

    But although thrombospondin spurs the creation of synapses, the structures aren't physiologically active, the researchers discovered. An unknown second signal from astrocytes—coupled with neural activity—is apparently needed to turn them on.

    This type of activity-dependent modification of synapses is just the sort of mechanism many researchers think underlies learning and memory, says Douglas Fields, a neuroscientist at the National Institutes of Health in Bethesda, Maryland. If it turns out that thrombospondin does play a role in these highly regarded functions, glia may deserve yet another promotion—all the way to the boardroom.


    Birds Can Put Two and Two Together

    1. Laura Helmuth


    Birds aren't born knowing how to sing. Chicks must hear adult songs during a short period soon after birth, or they'll be reduced to the avian equivalent of stammering. A study presented here shows that chicks don't need to hear the whole rendition: Given the pieces, they can put a song together themselves.

    White-crowned sparrows that hatch in late summer hear the songs of nearby males. After a relatively silent winter, in which the adults rarely sing, the young sparrows perform the song they heard as chicks. To do so, neuroscientists assumed, the birds must have neurons that harbor a song “template,” cells that encode a complete version of the song. Such neurons would remember the heard song and thus guide the bird's own singing. But scientists have found only neurons that respond to a portion of the song, usually a few notes, or syllables, at a time.


    Suspecting that birds might not need a complete template after all, neuroethologist Gary Rose and colleagues at the University of Utah in Salt Lake City tested whether birds can build a song out of parts. They played white-crowned sparrow chicks recordings of syllable pairs. For a song with the structure A, B, C, D, E, in which each letter represents a particular whistle, buzz, or trill, the birds heard pairs of syllables in a mixed-up order: CD, DE, BC, and AB. Sometimes the pairs would fit together into a real song; other times they'd add up to a song never heard in the white-crowned sparrow world. Months later, the birds composed the syllables into complete songs, whether the songs matched natural patterns or not.
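The chicks' feat resembles assembling a sequence from overlapping fragments: each pair says only which syllable follows which, yet the full song falls out by chaining them. A minimal sketch of that chaining logic, offered as an analogy and emphatically not a model of the birds' neural mechanism:

```python
# Minimal sketch: rebuild a song from syllable pairs, analogous to
# the sparrows hearing CD, DE, BC, and AB and later singing ABCDE.
# Purely illustrative; not a model of any neural process.

def assemble_song(pairs):
    """Chain pairs (a, b), each meaning 'syllable a is followed by b'."""
    successor = {a: b for a, b in pairs}
    seconds = set(successor.values())
    # The song starts with the one syllable that never follows another.
    start = next(a for a in successor if a not in seconds)
    song = [start]
    while song[-1] in successor:
        song.append(successor[song[-1]])
    return "".join(song)

print(assemble_song([("C", "D"), ("D", "E"), ("B", "C"), ("A", "B")]))  # ABCDE
```

Note that the pairs carry enough information to fix a unique ordering, which is why the birds' output is the same whether or not the assembled song matches a natural one.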

    “It's a cool result,” says Eliot Brenowitz of the University of Washington, Seattle. “It shows that birds can reconstruct the entire song just by paired syllables.” Other bird-song researchers say that understanding the birds' behavior will boost further research into how the song template is stored neurally. “This is the first atomization of the template,” says Eric Fortune of Johns Hopkins University in Baltimore, Maryland.
