News this Week

Science  24 Jan 2003:
Vol. 299, Issue 5606, pp. 486



    Europe's Comet Chaser Put on Hold Following Launcher Failure

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    NOORDWIJK, THE NETHERLANDS—European planetary scientists last week were nearing the end of a hectic, decade-long development effort as they readied a much-anticipated spacecraft for launch. It was set to perform an unprecedented feat: Catch up with a speeding comet, go into orbit around it, and dispatch a lander to its surface. This week, the scientists don't know where they're going—or when.

    The failure of an upgraded version of Europe's Ariane 5 launcher in December forced the European Space Agency (ESA) to cancel the planned January launch of its $1 billion craft Rosetta and to mothball it indefinitely. “It's a sad day,” ESA's director of science, David Southwood, told a press conference in Paris on 15 January. Other science missions will probably be affected, too.

    Researchers had hoped that Rosetta would be spared because it was to be launched on a standard Ariane 5, not the souped-up version that failed. But during a review of the launcher mishap, “doubts arose about the manner in which the overall system was put together,” says Southwood. “[We felt that] the way things are done needs to be reviewed from the bottom up.” That process will almost certainly delay ESA's Smart-1 lunar explorer, scheduled to ride into space as a secondary payload on an Ariane 5 in March of this year.

    Rosetta was to chase after a comet called Wirtanen, accompany it on its journey to the inner solar system, study the evaporation of surface ice and the release of dust, and even send a small lander to touch down on the tiny nucleus. To catch the comet would have required a complex series of maneuvers involving swinging by Mars in August 2005 and by Earth in November 2005 and November 2007. This tortuous route was designed to accelerate Rosetta enough to reach the outer solar system and rendezvous with Wirtanen in late 2011. But the alignment of the planets required the spacecraft to leave Earth no later than the end of this month. Scientists must now select a new target comet and await a suitable launch window.

    If only.

    Rosetta is imagined orbiting a comet, with its lander on the surface.


    Planetary scientist Alan Stern of the Southwest Research Institute in Boulder, Colorado, principal investigator of Rosetta's ultraviolet spectrometer, says the cancellation was prudent. “The science is delayed but preserved,” he says. “A later mission is better than a disaster.” Rosetta's project scientist, Gerhard Schwehm of the European Space and Technology Centre here in Noordwijk, agrees: “In the end, when you see the results of the mission, you'll forget about these little delays.”

    Schwehm expects a final decision on a new mission plan by the end of May. “It's not a straightforward process,” explains Southwood. “We have to find a balance between maximizing scientific return, minimizing technical risk, and finding the necessary financial resources.”

    Orbital dynamicist Yves Langevin of the Institute for Space Astrophysics in Orsay, France, says the options are rather limited. A launch in October 2003 and a swing-by of Venus instead of Mars would still get Rosetta to comet Wirtanen by the end of 2011, he says, but ESA doesn't want to risk taking the spacecraft so close to the sun. If Venus is too hot, the next “Mars window” is in early to mid-2005, says Langevin, but that would require a slightly more powerful rocket. Alternative candidate comets already identified are Howell, Tempel-2, and Churyumov-Gerasimenko, he says.

    Another option is to launch on a standard Ariane 5 in February 2004 and follow a longer, more complicated, but less fuel-consuming trajectory. In both cases, Rosetta would reach its new goal between late 2013 and late 2014, depending on the comet selected.

    Schwehm adds that the new target comet must satisfy a number of criteria: It should be bright enough for detailed support observations from Earth, active enough to be of scientific interest, small enough to allow a soft landing, and, of course, available at the right time. “We will refine our options in the next couple of months,” he says.

    The delay comes at a price, however. Prolonged storage will add tens of millions of dollars to the total project cost, and some delicate parts of the science instruments might have to be replaced. Moreover, physicist Hans Balsiger of the University of Bern, Switzerland, principal investigator of Rosetta's mass spectrometer, says it might be hard to keep scientific teams together. “Our team has already partly dissolved,” he says. “The longer you have to wait, the less people will be around who know what we're doing.”

    Southwood agrees that it's a “very difficult situation. But you don't get into the space business without realizing you've got to face the unexpected.” And he is confident that Rosetta will eventually fly: “We're gonna do it; don't you have any doubt.”


    Go Slow With Smallpox Shots, Panel Says

    1. Martin Enserink

    Last year, the Bush Administration seemed unable to settle on a smallpox vaccination policy. Now, it's moving ahead so quickly that it risks losing sight of essential details, an Institute of Medicine (IOM) panel concluded in a report issued last week.

    The panel's report, itself a 4-week rush job, offers the first scientific analysis of how the Centers for Disease Control and Prevention (CDC) will implement a policy that President George W. Bush announced on 13 December (Science, 20 December 2002, p. 2312). The IOM experts join a growing chorus urging the government to proceed with caution. In particular, says the report, the government must do a better job of explaining the vaccine's risks to the public, monitoring the inevitable side effects that range from rashes to brain damage and death, and providing financial aid for possible victims.

    The first phase of the government effort could start as early as next week with the vaccination of almost 450,000 hospital workers. That round, which could take several months, will be followed immediately by a second wave that targets as many as 10 million other medical workers, firefighters, and police. Independently, the military is vaccinating about half a million service members.

    Although the panel doesn't offer an alternative timetable, it presents a list of issues that should be addressed before vaccination begins. Education of vaccinees, their households, contacts, and the general public is key, it says. There should be more active surveillance for side effects than CDC proposes, as well as a study of people who have chosen not to be vaccinated. In addition, the second round of vaccinations should not start until all these data have been analyzed. “We have to make sure we learn as much from the first phase as possible,” says William Foege, a former CDC director and a panel consultant. The panel suggests using a single, trustworthy individual with a strong scientific background as the campaign's spokesperson.

    Not so fast.

    A panel says more safeguards are needed before civilians follow the military in receiving smallpox shots.


    Without being asked, the IOM group also addressed problems of liability and compensation. The current patchwork of state workers' compensation laws and insurance plans might not cover loss of income or permanent disabilities, the panel cautions, so vaccinees should be made fully aware of their rights before they decide to be inoculated.

    The compensation issue could derail the program if people decide they don't want to take chances, says Martin Myers, a vaccine policy expert at the University of Texas Medical Branch in Galveston, adding that it would be “unfair” not to fully compensate those who advance the national interest by taking the shots. One solution, says Myers, would be to create a new program like the Vaccine Injury Compensation Program, which covers childhood vaccine side effects. This week, two labor unions called for such a federal plan, although the Bush Administration has said it does not intend to create one. “This is an issue [that] Congress could take on,” says George Hardy, executive director of the Association of State and Territorial Health Officials in Washington, D.C.

    Outside experts say the panel made some valuable points but should have been even more outspoken. Given all the hurdles, says Tara O'Toole, director of the Johns Hopkins Center for Civilian Biodefense Strategies in Baltimore, there's no way the Administration can stick to its timetable. “The government doesn't understand how feeble our public health system is,” she says. “They're on another planet.”

    CDC director Julie Gerberding vigorously defended her agency's plans in a press conference shortly after the report was released. She said that several issues raised by the IOM panel had already been addressed. “We have done a heroic job” of getting information out, Gerberding said. Although she acknowledged that there are gaps in compensation plans, “we are certainly not going to delay this program” because of them.


    Private Pact Ends the DNA Data War

    1. Leslie Roberts

    Sequencers and bioinformaticists may have gone in swinging, but they emerged from a 2-day workshop last week in Fort Lauderdale, Florida, well, … almost embracing.

    At the meeting—its proceedings are being guarded as tightly as a state secret—the two camps unexpectedly resolved a rancorous dispute over access to DNA data. The solution seems to be good news for fans of “free and unfettered” access.

    As the debate heated up over the past 2 years, bioinformaticists accused the big genome sequencing centers of hoarding data or trying to restrict their use—in violation of the community's policy of immediate release. The sequencers pleaded self-defense, charging that some bioinformaticists were stealing credit, “scooping” them before they could publish an analysis of their hard-earned findings. A solution proposed a few months ago—a slight restriction on data use—only fanned the flames (Science, 15 November 2002, p. 1312). Just about the only thing on which the two sides could agree was that the existing standoff was serving no one well.

    So hopes were not high going into the meeting, co-sponsored by the Wellcome Trust, a U.K. charity, and the U.S. National Human Genome Research Institute (NHGRI). Amazingly, says Ewan Birney, a computational biologist at the European Bioinformatics Institute in Cambridge, U.K. (and one of 40 or so in the gathering of sequencers, data users, database managers, and journal editors), it was “very productive.” “Extraordinarily successful,” echoed James Battey, director of the National Institute on Deafness and Other Communication Disorders. But beyond such generalities, they and others were not allowed to comment.

    Invoking the archaic “Chatham House Rule,” the Wellcome Trust swore the participants to secrecy—on penalty of what, Science could not learn. Francis Collins, head of NHGRI, was allowed to communicate three sanctioned sentences, including: “We believe this consensus will be warmly welcomed by the scientific community.” Others confirm that this means the community has backed away from imposing restrictions on the use of DNA data.

    The question of when sequencers must release their data has dogged the Human Genome Project from the outset. In a magnanimous and politically savvy gesture in 1996, the publicly funded international consortium crafted the “Bermuda convention,” requiring sequencers to deposit their data immediately in the public database, GenBank.

    Welcome consensus.

    Francis Collins was chosen to speak about an unexpected resolution on data release.


    But as the pace of sequencing sped up, tensions arose. “Scientists can sequence faster and faster, but they don't write papers any faster,” said David Lipman, head of the National Center for Biotechnology Information, which runs GenBank, before the meeting. Meanwhile, bioinformaticists—who make their living analyzing rather than generating data—were ready and waiting. A few “abuses” occurred, in which groups wrote comprehensive analyses of a genome they didn't sequence—sometimes without even acknowledging the producers.

    Quietly, restrictions on data use started cropping up on individual Web sites—including at the Wellcome Trust Sanger Institute in Hinxton, U.K. A few even crept into GenBank before being deleted, Lipman noted last fall. More worrisome, several people said, data sometimes weren't deposited at all or were very slow to appear.

    The task of trying to broker a solution fell to Collins. His proposal to GenBank was, in essence, that anyone should be free to use prepublication data to analyze a single gene or region, but that the centers producing the data should be allowed to reserve the right to publish the initial large-scale or “global” analysis. Proponents of this plan argued that such a statement, attached to the data in GenBank, would stop the backsliding away from immediate release.

    Database managers hit the roof, as did others. “They want to have their cake and eat it too,” several people told Science at the time. The international databases refused to go along. Several critics noted that the big genome centers receive huge amounts of taxpayer money to perform a service for the community; if sequencers don't like the terms of that arrangement, critics said, they should find another line of work.

    Although he said he was “empathetic,” computational biologist Sean Eddy of Washington University in St. Louis, Missouri, considered the proposal unworkable: “If every piece of sequence has different restrictions, if I have to write a letter requesting permission to use it, there will be gridlock.” Of course sequencers deserve credit, he told Science before the meeting, “but it seems narrow that the only way to get rewarded is through an old-style publication.” Draconian measures aren't needed, because “people do know where credit belongs.”

    That view apparently prevailed in Florida. In a late-night session, a subgroup drafted a statement that reportedly reaffirms the community's commitment to the Bermuda rules and promotes the use of moral suasion—not requirements—to ensure that people give credit where it is due. And that means that funding agencies, for instance, must back up the standards. The large-scale sequencers must release their data, whatever the risks. And the users must be respectful of those who produced the data.

    Details will become clear, say various participants, when the Wellcome Trust releases a “considered statement, agreed [to] by the attendees,” within a few weeks.


    Plague of Lies Lands Texas Scientist in Jail

    1. David Malakoff

    A prominent infectious-disease researcher last week discovered just how jumpy the country has become about bioterrorism. He was jailed after allegedly lying about the status of 30 vials of plague bacteria in his laboratory. Bioscientists say the bizarre case, which sparked a massive investigation and captured the White House's attention, highlights the increasingly tense legal climate surrounding researchers working with potential bioweapons.

    Thomas Butler, 61, is a senior researcher at Texas Tech University's Health Sciences Center in Lubbock. He was arrested last week, after FBI agents accused him of reporting the vials missing to cover up the fact that he hadn't documented destroying them, as required by government rules. The vials contained the plague bacterium, Yersinia pestis. The U.S. government has placed Y. pestis on a list of nearly 100 “select agents” that must be registered, tracked, and stored in secure facilities because of their potential use as bioweapons. Under new antiterror laws, most people are barred from possessing the agents, and researchers who mishandle them face stiff criminal penalties.

    Butler has worked with the bacterium for more than 25 years and has dozens of samples from around the world in his lab, according to university officials. On 13 January, say court documents, Butler met with a campus safety officer and “told him that I had noticed for the first time that 30 vials … were missing. I gave him this explanation to demonstrate why I could not account for the [vials].” Actually, the vials “had been accidentally destroyed earlier,” Butler wrote in a statement to the FBI. But he didn't realize his account would prompt “such an extensive investigation.”


    Plague specialist Thomas Butler is arrested for lying about the whereabouts of samples of plague bacteria.


    The next day, more than 60 state and federal investigators descended on the campus after a tip from university officials. White House homeland security czar Tom Ridge called Lubbock's mayor to offer help. Butler repeated his tale to FBI agents but then confessed on 14 January, according to court documents. He was then arrested.

    On 21 January, a federal judge released Butler on $100,000 bail and required him to wear an electronic monitoring anklet. Federal prosecutors have until late next month to seek an indictment. University officials meanwhile have placed him on paid leave, changed the locks on his laboratory, and barred him from campus. If he's found guilty of lying to investigators, Butler could face up to 5 years in jail. Floyd Holder, his attorney, had previously said that Butler intended to plead not guilty to any charges.

    The case underscores the government's concern about bioterror, observers say. It means researchers “have to take [select-agent rules] just as seriously as issues such as human subjects,” says Paul Keim, a microbial geneticist specializing in anthrax and plague at Northern Arizona University in Flagstaff. But another microbiologist, who asked not to be identified, wonders “if the government isn't overreacting [in] making a stupid lie a high crime instead of a misdemeanor.”


    Spanish Researchers Vent Anger Over Handling of Oil Spill

    1. John Bohannon,
    2. Xavier Bosch*
    1. John Bohannon writes from Lyon, France, Xavier Bosch from Barcelona.

    BARCELONA—Ever since the oil tanker Prestige sank in deep water off the Iberian coast on 19 November 2002, the Spanish government has been under fire for its handling of the accident. Now, scientists are adding their voices, en masse, to the din of protest. In a letter on page 511, 422 marine and atmospheric scientists accuse the government of largely ignoring the scientific community in the aftermath of the spill.

    Of all the government's actions, most controversial is its decision to tow the stricken tanker away from shore and sink it rather than guide it into port (Science, 29 November, p. 1695). The government's early assurances that the sunken ship's remaining oil—an estimated 60,000 tons—would solidify in the cold depths have turned out to be spectacularly wrong. According to Spain's National Research Council, roughly 125 tons of oil per day have risen to the surface, apparently because the oil has cooled much more slowly than experts had anticipated. Much of the oil has ended up polluting more than 900 kilometers of Spanish and French coastline, causing an estimated $1 billion in damages. By implying that its handling of the accident has been based on the advice of scientists, the government has tarnished their reputation, the letter's authors contend.

    Outside experts concur that someone's reputation deserves to be sullied. “It is difficult to imagine a worse course of action than the one taken. The location of the wreck is ideally situated to spread oil along the coasts,” says oceanographer Desmond Barton of the United Kingdom's Plymouth Marine Laboratory. “I was amazed,” adds Isabel Ambar, an oceanographer at the University of Lisbon, Portugal. “I could not believe that these decisions were taken based on scientific grounds.”

    Damage control.

    A robotic arm patches a gushing leak on the Prestige (top) as workers last month scoop up muck from a rocky beach in northwestern Spain.


    Spain's science minister, economist Josep Piqué, acknowledged to Science that researchers were not consulted about the decision to sink the vessel. But he says that the government has engaged the scientific community ever since. “We did make contact with scientists 1 day after Prestige sank,” says Piqué. He adds that the government has also established a commission to coordinate scientific efforts in managing the spill, evaluating the damage, and creating a science-based plan of action for future spills. Piqué defends his government's management of scientific input, calling it “an optimization of available resources.”

    Few scientists seem convinced. The government has worked harder at defending itself than managing the crisis, charges one of the letter's lead authors, marine ecologist Antonio Bode of the Spanish Institute of Oceanography in A Coruña. According to Bode, government scientists, including those at his institute, were told in a 15 December mass e-mail not to speak with the press about the Prestige. (He and many others defied the order in penning the letter to Science.) Bode also challenges Piqué's claim that a commission is coordinating a scientific response, noting that his team is studying the spill's effects without any input from Madrid. “The government has no awareness of its researchers,” fumes Bode.

    The scientists' demand for better dialog with their government “is a sensible one,” says Barton. “What was obviously needed was planning prior to the incident and, one would hope, better preparation in the future.”


    Four-Winged Dinos Create a Flutter

    1. Erik Stokstad

    Aerodynamic feathers have turned up where paleontologists least expected them: on the hind limbs of small predatory dinosaurs. The birdlike plumes, discovered in half a dozen fossils from China, surprised paleontologists who believe that flight evolved in running dinosaurs that flapped their arms. Instead, the scientists who studied the finds say, the new specimens suggest that avian flight originated in small dinosaurs that glided from trees.

    Although cautious about interpreting the fossils, paleontologists are giving them an enthusiastic welcome. “It's an incredible discovery,” says Kevin Padian of the University of California, Berkeley.

    The diminutive dinosaurs, described in the current issue of Nature, are dromeosaurs, the group that the majority of paleontologists thinks is most closely related to birds. All six belong to a small club called the microraptors, and one is a member of a new species, Microraptor gui. A team led by Xing Xu and Zhonghe Zhou of the Institute of Vertebrate Paleontology and Paleoanthropology in Beijing collected one of the new specimens in 2001 in Liaoning Province, and the group purchased five others that came from the same area.

    Zhou argues that microraptors lived in trees (Science, 8 December 2000, p. 1871). Part of what makes that plausible is their small size: M. gui's trunk is roughly 15 cm long.

    The feathers of the six animals are arranged for the most part like those of modern birds. The body is covered with downy feathers, the tail has a tuft of longer ones, and the hands each have about a dozen flightlike “primary” feathers. Along the front limb are 18 or so shorter, “secondary” feathers. The hind limbs sport the same pattern of plumage, confirming a hint published last year when another team found a dromeosaur with a single hind-limb feather.

    Easy glider?

    Wings on the hind limbs of the 77-cm-long Microraptor may have helped it soar.


    These rear feathers would make “a perfect airfoil” and probably allowed Microraptor to glide, the authors argue. Dated at about 126 million years old—some 25 million years younger than the oldest known bird, Archaeopteryx—the Chinese microraptors were not themselves the ancestors of birds. But earlier gliding dinosaurs could have been an evolutionary step from flightless dinosaurs to airborne birds, which would have eventually lost the rear wings, Zhou and Xu say. In addition, the rear feathers would have tripped up a cursorial, or running, dinosaur, they argue—making it unlikely that the creatures used a running start to evolve flight. (Other recent evidence, however, favors such a ground-up scenario; see Science, 17 January, p. 329.)

    “The remote ancestors were probably cursorial, but the close ancestors of birds were clearly arboreal,” Zhou says.

    Zhou's argument makes sense to Rick Prum, an ornithologist at the University of Kansas, Lawrence. “I don't think there's any way to interpret it other than as stunning new support for an arboreal stage in the evolution of flight,” he says. “At the moment of the actual taking to the air, dinosaurs are likely to have been four-winged.”

    Others aren't so sure. Padian points out that known dinosaur hips didn't allow legs to extend out laterally from the body, which would be necessary to spread the rear wings.

    Larry Witmer of Ohio University in Athens notes that even if Microraptor was a glider, it may have been a failed experiment among one branch of dromeosaurs; powered flight could have evolved independently elsewhere. “The scientific community needs to chew on this for a while,” Witmer says. The four wings will no doubt provide more than a mouthful.


    Fire Destroys Historic Australian Observatory

    1. Robert Irion

    Explosive wildfires near the Australian capital of Canberra gutted the Mount Stromlo Observatory on 18 January. The facility, Australia's oldest research observatory and home to the country's largest group of astronomers, was largely destroyed, but all staff and students were evacuated safely in advance of the wind-driven flames.

    Researchers saved most of their data and records, says Penny Sackett, director of the Research School in Astronomy and Astrophysics at Australian National University (ANU), Canberra, which runs the observatory. However, fire incinerated five telescopes and a critical instrument workshop, which housed a nearly finished spectrograph for the 8.1-meter Gemini North telescope at Mauna Kea, Hawaii.

    The initial damage estimate of $12 million is likely to climb, Sackett says. “This is really about total devastation,” she says. “Some of the domes are completely obliterated, and the main administration building is a shell.” The flames also torched eight staff residences but spared the visitor's center and two academic buildings.

    Safety officials warned observatory staff late Saturday morning, 18 January, that fires in the wooded mountains north of Mount Stromlo had broken out of control. Fierce winds whipped the flames into a crown fire that raced above the trees. At 2:30 p.m. Saturday, officials ordered an immediate evacuation; the firestorm engulfed the observatory 30 minutes later, Sackett estimates.

    Destroyed were a 1.9-meter research telescope and the historic 1.3-meter Great Melbourne Telescope, built in 1868 and upgraded in the late 1980s to find massive compact halo objects in the outskirts of our galaxy. The telescopes also were key parts of an international program to look for planets around other stars by means of gravitational microlensing, in which the gravity of a small object passing in front of a star briefly amplifies the star's light.
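    The microlensing surveys these telescopes supported rest on the standard point-lens formulas; as a rough sketch (textbook results, not from this article), the lensing scale and the brightening it produces are:

```latex
% Angular Einstein radius of a point lens of mass M, for a lens at
% distance D_L and a background star at distance D_S from the observer:
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_S - D_L}{D_L\,D_S}}

% Magnification of the background star when the lens passes at angular
% separation u (measured in units of \theta_E):
A(u) = \frac{u^2 + 2}{\,u\sqrt{u^2 + 4}\,}
```

    As the lens drifts across the line of sight, u shrinks and then grows again, so A(u) rises and falls smoothly; this transient brightening is the signal the Mount Stromlo telescopes monitored for both halo objects and extrasolar planets.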

    Three other telescopes were lost, including the famed 0.23-meter Oddie refractor, erected on Mount Stromlo in 1911. Tens of thousands of people visited each year for public viewing and education programs at the smaller telescopes, Sackett says.

    The rest of Australia's tight-knit community of astronomers was stunned. “Everyone's initial reaction was shock at the amount of damage sustained,” says Christopher Tinney, head astronomer at the Anglo-Australian Observatory in Epping, New South Wales. “I let loose a few four-letter expletives when I read the reports.”

    Loss of the instrument labs will have the most severe impact on Australian astronomy, Tinney believes. Astronomers planned to ship the $2.5 million Gemini spectrograph to Hawaii in July, and workshop staff members were preparing to build a $3.7 million adaptive-optics imaging system for the Gemini South telescope in Chile. Those plans were saved, Sackett says, but the machining, electronics, and optics facilities were burned.

    ANU vice chancellor Ian Chubb has pledged to rebuild the observatory. The new telescopes and labs will “surpass our previous capabilities,” Sackett says. “It's very hard for us to lose these historic telescopes, but they're gone.”


    Stanford Gets Serious About Space Physics

    1. Robert Irion

    Four consecutive Nobel Prizes in the 1990s put physicists at Stanford University and the nearby Stanford Linear Accelerator Center (SLAC) atop their field. However, weakness in one area marred their portfolio: astrophysics. Last week, Stanford and SLAC, a Department of Energy (DOE) laboratory, filled that hole with a flourish by creating a new astrophysics institute—and luring two of the field's stars to direct it.

    “This makes Stanford and SLAC major players,” says astrophysicist Lynn Cominsky of Sonoma State University in Rohnert Park, California. The institute's mission, she notes, is “right at the intersection where the most exciting research is happening.”

    The name reflects that synergy: the Kavli Institute for Particle Astrophysics and Cosmology. Particles zing through space at vastly higher energies than accelerators on Earth can muster. Satellites and other experiments are harnessing nature's prowess to probe basic mysteries, including the “dark matter” and “dark energy” that rule the universe.

    New digs.

    Roger Blandford (top) and Steven Kahn will lead a new Stanford institute.


    The Kavli Institute—named for benefactor Fred Kavli and the Kavli Foundation of Oxnard, California, which provided a $7.5 million grant—will explore those areas “from electronics to equations,” says theoretical astrophysicist Roger Blandford, the new director. “This is a real opportunity to marshal the resources of a major DOE lab,” adds physicist Steven Kahn, the deputy director.

    Stanford physicists are pinching themselves over the new hires. Kahn, the former chair of physics at Columbia University in New York City, is regarded as the country's top x-ray spectroscopist for astronomy satellites, and Blandford has been a leading light in astrophysics at the California Institute of Technology in Pasadena for 28 years. “It's a bit of a wrench to leave Caltech,” Blandford acknowledges.

    Starting this fall, the pair will recruit seven more faculty colleagues, and they hope to employ 90 researchers and staff within a few years.


    Putin Aims to Turn Science Cities Into Silicon Steppes

    1. Paul Webster*
    1. Paul Webster is a writer in Moscow.

    MOSCOW—The Kremlin hopes to knock a little business sense into Russia's scientific community. Last week, President Vladimir Putin's science council agreed on a plan to anoint a few of the country's struggling science cities as “innovation zones.”

    The move hews to a line that Putin laid down shortly after coming to power: Russian scientists must focus limited funds on applied research. That message was clear last week at a meeting of Putin's Council on Science and High Technologies. According to Alexander Sokolov, deputy director of the science ministry's Center for Science Research and Statistics, Putin spurned pleas for blanket salary raises. Instead, he argued that federally supported science must be integrated with industry. “We aren't rich enough to fund pure science,” Sokolov told Science.

    That philosophy will now be applied directly to Russia's 70-odd science cities, many of which were given wide academic freedom in Soviet years but have withered over the past decade. Some were given a boost 3 years ago when new laws granted tax and subsidy privileges to 10 closed research cities, including the nuclear bomb towns of Sarov and Snezhinsk, and to three civilian centers near Moscow: Obninsk and Dubna, both nuclear physics bastions, and Korolev, devoted to space science. Other science cities do not enjoy such concessions. At the science council meeting last week, Economic Development and Trade Minister German Gref presented a draft bill to create “innovation zones” where subsidies, tax breaks, and seed money from a $3 million government venture fund will spur investment in civilian research. Parliament is expected to pass the bill in March, after which the government will tap a handful of science cities as innovation zones.

    Sitting pretty?

    Dozens of science cities hope to capitalize on new economic incentives similar to those enjoyed by the closed nuclear town of Snezhinsk.


    A raft of research projects expected to be announced next month is designed to dovetail with the cities initiative. Last year, a committee of business and science leaders, chaired by Science Minister Ilya Klebanov, set out to choose nine projects to foster industry-academic collaboration in the innovation zones. The committee is due to announce the winners on 18 February; the government has pledged up to $150 million to get them started. “The idea is to push these cities to innovate,” says committee member Mikhail Alfimov, director of the Russian Foundation for Basic Research. “It's not too late to save them as science cities.”

    Significantly, these two initiatives bypass the Russian Academy of Sciences, the country's main basic research funding body. Observers predict that the academy must reform or face further erosion of its longstanding sway over science policy. “[The members] realize that if they remain only basic researchers, they will lose influence,” says Irina Dezhina, a science policy analyst at the Institute for the Economy in Transition in Moscow. “They are very actively trying to develop innovation activities.” For Russian scientists, it's clearly time to get down to business.


    Key Questions Loom Over Effort to Energize Research

    1. Gretchen Vogel,
    2. Constance Holden

    BERLIN AND WASHINGTON, D.C.—Research chiefs from around the world have agreed to set up a Web-based information clearinghouse on embryonic stem (ES) cells, one of biomedicine's hottest commodities, and to step up efforts to train researchers in the art of working with the cells. However, guidelines on ensuring the quality of ES cell preparations, due out this week after weeks of wrangling, illustrate that scientists are still struggling to draw up a roadmap for navigating the ethically charged field.

    Now that the political controversy surrounding human ES cells is beginning to subside, officials in many countries are eager to help the field take off. At a 7 January meeting in London organized by the U.K. Medical Research Council (MRC), heads of research agencies in eight countries—Australia, Canada, Finland, Israel, Singapore, Sweden, the United Kingdom, and the United States—hashed out ways to boost the pace of human ES cell research. The group decided that easier access to information is essential, says MRC chief executive George Radda. He and his fellow research directors have asked their organizations to join forces on a Web site where scientists will be able to access data on cell lines, regulations on use, and other information such as training courses for handling the sometimes-finicky cells. “The most difficult part of this work is the relatively small number of people who are available to do it,” Radda says.

    The funding agencies also are laying plans for an international stem cell task force. One objective of this dream team—which would consist of as many as 10 of the world's top stem cell researchers—would be to advise agencies on how to ensure the quality and availability of cell lines. Meeting participants will offer nominations; Radda hopes to send out invitations to potential members within a few weeks.

    Judging by the experience of top experts who met at Rockefeller University in New York City in November, the task force could face some difficult issues. The group tried to draw up a list of markers to define human ES cells and set standards for growing them and testing whether they are pluripotent, or capable of becoming any tissue type. But discussions bogged down over a test in which human ES cells would be injected into mouse embryos. This is routinely done with nonhuman ES cells to assess whether they are pluripotent. However, the procedure has never been done with human ES cells, says Janet Rossant of Mount Sinai Hospital in Toronto. She and others worry that human ES cell contributions to a developing embryo could create ethically troublesome mouse-human chimeras.


    MRC chief George Radda led a rap session on how to spur human ES cell research.


    In the end, the group agreed that the procedure would be acceptable as long as any mouse-human hybrid is destroyed early in development, says Rockefeller biologist Ali Hemmati Brivanlou. An early draft of the group's white paper held up the chimera test as the “gold standard” for pluripotency, but many discussants objected. “There are probably many ethical and other problems that should be resolved before we can consider” this approach for human cells, says Rudolf Jaenisch of MIT. The final version is more tempered, referring to the chimera test as “interesting” and “potentially valuable.”

    The white paper was modified in other sections, too. “We didn't want to have a list of ‘thou shalts’ and ‘thou shall nots,’” says Rossant. The report now contains suggestions—but does not propose standards—on a range of issues from molecular markers and animal assays to “infrastructure needs for the field.” Although the group didn't come up with definitive standards, says Harvard's Douglas Melton, “there is broad agreement [on the need to] use measurable, defined criteria to define a stem cell.” Such a definition is essential if an international stem cell repository is to be built, Rossant says.

    A human ES cell bank, containing well-characterized cell lines, still tops most researchers' wish lists. A central repository with a range of cell lines “would be a huge step for the field,” says Nissim Benvenisty of Hebrew University in Jerusalem. Radda says an MRC-led stem cell bank, expected to be up and running later this year (Science, 13 September 2002, p. 1784), might be a first step toward that goal.


    Seeking the Cause of Induced Leukemias in X-SCID Trial

    1. Jocelyn Kaiser

    Details of a second case of cancer in a gene-therapy trial in France, revealed last week, raise the odds that both were therapy induced. In both cases, a retrovirus engineered to shuttle corrective genes into cells inserted itself in or near a cancer-causing gene, apparently triggering uncontrolled cell growth. The risks seem “surprisingly high,” says pediatrician Alain Fischer, who with Marina Cavazzana-Calvo led the trial at the Necker Hospital for Sick Children in Paris.

    The French team has restored the immune systems of nine of 11 boys with X-linked severe combined immunodeficiency disease (X-SCID), making this the first clear success in gene therapy. But the appearance of two cancers, one in September and a second in December, is a major setback for the field (Science, 17 January, p. 320).

    Last week, the U.S. Food and Drug Administration (FDA), which was already reviewing three other SCID trials, put a hold on 27 additional U.S. trials using retroviruses to insert genes into blood stem cells. The National Institutes of Health's (NIH's) Recombinant DNA Advisory Committee (RAC), meanwhile, postponed a planned emergency meeting to allow investigators to gather more data.

    The RAC released information from Fischer's team last week, revealing that in both cases of leukemia, the problem arose from an overproduction of immune cells in the blood. The first case involved the replication of a single γδ T cell, and the second involved an excess of three types of an αβ T cell. The problem in the second patient may have occurred in a young, undifferentiated cell that gave rise to three αβ T cell subclones, says molecular biologist Christof von Kalle of Cincinnati Children's Hospital Medical Center in Ohio, who is collaborating with the French team. Although experts had thought that other factors might have contributed to the first child's leukemia— including a chickenpox infection and a family history of cancer—the second child apparently did not share the same risk factors.

    Searching for answers.

    Christof von Kalle (with research assistant Barb Jensen) probes the molecular events that led to leukemia in two patients.


    Von Kalle's lab has found that, in both patients, the vector inserted itself near the same gene, LMO2, which has been linked to leukemia. “It is a surprise. There's no good explanation at the moment,” says gene-therapy expert Inder Verma of the Salk Institute for Biological Studies in La Jolla, California. The French team's hypothesis is that the gene inserted to correct the X-SCID problem (common γ chain, or γc) may be playing a crucial role. This vector contains a sequence that promotes expression of γc, which in turn boosts the production of T cells. In a cell in which the vector has landed near and also activated the LMO2 gene, LMO2 may be cooperating with γc, giving cell growth an extra kick. If so, even if just one cell in 100,000 carried the insertion in or near the LMO2 gene, it could multiply quickly enough to dominate the T cell population, notes von Kalle.

    This gene insertion may seem “frightening” because it can lead to cancer, says Theodore Friedmann, a gene-therapy researcher at the University of California, San Diego, and chair of the RAC group. But there is also reason for optimism: The technique is producing therapeutic results, and the adverse effects might be “specific to the X-SCID trials,” von Kalle says. Other SCID trials that don't involve the same vector sequences might be safe, he and others suggest. Moreover, it may be possible to modify X-SCID gene therapy to reduce risks. For example, researchers could use a regulated promoter sequence for γc and target fewer cells. In addition, trials that begin therapy when children are older may lower the risk for harmful alterations in immature T cells.

    Researchers are awaiting more data before they attempt to reassess the risks. RAC aims to have more details in hand in early February, when it will hold an open meeting, says Stephen Rose of NIH's Office of Biotechnology Activities. By then, von Kalle expects to have data on precisely where the therapeutic gene inserted itself and whether it activated LMO2. An FDA advisory committee will take up the case in late February, and the American Society of Gene Therapy, which issued a press release supporting the suspension of trials, also expects to undertake a review.


    Tapping the Mind

    1. Ingrid Wickelgren

    Although still largely experimental, devices that decipher brain signals are advancing quickly and allowing some fully paralyzed people to interact with the world

    Amyotrophic lateral sclerosis (ALS) can trap the mind inside an immobile body. It destroys the nerves that control muscles, eventually leaving patients without the ability to speak or even flick their eyes to one side. In the past few years, however, researchers have started to equip a few such “locked-in” patients, including those paralyzed by stroke or other diseases, with communication devices that unlock their minds.

    For decades, science-fiction writers have envisioned computers that communicate directly with the brain. Now a rapidly expanding clique of researchers is making it a reality. A few laboratories started developing these so-called brain-computer interfaces (BCIs) in the 1980s and have been refining them since then (Science, 29 October 1999, p. 888). Now several dozen teams have entered the field. Together they're improving upon early BCI models and coming up with new ways to read brain signals. According to BCI pioneer Jonathan Wolpaw of the Wadsworth Center, part of the New York State Department of Health in Albany, “it's a very exciting time; a lot of people are getting involved.”

    Most BCIs read brain waves, the electrical impulses created by neural activity that can be detected—albeit fuzzily—through the scalp. By diligently controlling their mental activity, patients can choose letters to spell words, guide a cursor, or direct crude robots. But a rival circle of scientists is rapidly advancing a type of BCI that is implanted inside the brain. Such devices tap into the more detailed neural signals relayed by individual neurons. The most sophisticated of these implanted BCIs have recently enabled monkeys to play video games and even manipulate robotic arms. Whether the implanted devices will actually lead to more versatile and workable BCIs than the external type is a matter of fierce debate.

    In the past few years, brain-wave BCI technologies have been advancing rapidly, providing faster spelling, better cursor control, and headway into prosthetics, environmental-control devices, and smart wheelchairs. The advances are fueled in part by cheaper and more sophisticated computer hardware and software, which has given BCI researchers access to portable machines that perform complex mathematical manipulations on the fly. Good old-fashioned funding helps, too: The National Institutes of Health awarded $3.3 million in late 2002 to a partnership headed by the Wadsworth group to further develop software that can test several BCI systems to see which is best for a patient. Researchers can also use the software to build and test their own brain-tapping technologies.

    In addition, the Defense Advanced Research Projects Agency (DARPA) recently awarded a Duke University research team $26 million to improve its implanted BCI technique. A DARPA spokesperson says the agency is interested in technology that might, for example, enable soldiers to push buttons with their brains, giving them speedier control of submarines and aircraft and enabling them to more adeptly manipulate robotic arms that move munitions.

    So far, very few patients have had access to a BCI. A few scattered labs have conducted tests on one or more severely disabled patients, and one research team has tried its BCI technology on a record 11 patients so far. Most of the experimental subjects tested during the development of various BCIs have been healthy individuals. However, advancing technology and software innovations have put BCIs on the cusp of becoming more widely available.

    I think I can.

    With training, people can modulate their brain waves to direct a miniature robot to navigate its way through the rooms of a model house.


    Developing an array of useful applications, in particular, is critical to bringing BCIs past the experimental stage and into regular use in people's homes—the field's next big challenge. “A lot of good work has been done in the past 15 years to build a foundation” for BCIs, says computer scientist Melody Moore of Georgia State University in Atlanta. “Now, it's time to build the house.”

    Tens of thousands stand to benefit from BCI technologies. Initial beneficiaries will be people who are almost totally paralyzed: some ALS patients, who number 30,000 in the United States alone; people with severe forms of cerebral palsy; and patients who have suffered severe strokes or accidents, among others. As BCI technology improves, it is expected to become useful to people who are less severely disabled, such as quadriplegics who want to operate a wheelchair or a robot.

    Surfing brain waves

    In the brain, billions of neurons are continuously sucking in and spewing out ions, creating tiny electrical currents. A detector called an electroencephalogram (EEG) can measure the sum of these subtle sparks, millions at a time, by means of electrodes affixed to the scalp.

    Niels Birbaumer, a psychologist at the University of Tübingen in Germany, was one of the first to find that people can control certain brain waves. He focused on so-called slow cortical potentials: gradual voltage changes that emanate from the brain's exterior section, known as the cerebral cortex, and occur over seconds. In the early 1990s, Birbaumer and his team created a speller that patients learn to control using positive or negative slow waves to choose between two banks of letters. Once selected, a bank splits in two, continuing a process of elimination to reveal the wished-for letter. In March 1999, Birbaumer and his colleagues reported that after 2 months of training for about an hour a day, two ALS patients on respirators learned to write messages at a rate of about two characters per minute.

    Wolpaw trained his eye on another set of brain currents, EEG rhythms with frequencies between 8 and 12 hertz known as mu waves, and beta rhythms, which have about double the frequency of mu rhythms. Both emanate from the part of the brain's surface that mediates sensation and movement, the sensorimotor cortex. Wolpaw and his colleagues, including Wadsworth psychologist Dennis McFarland and program coordinator Theresa Vaughan, developed a system that enables a person to move a cursor up or down by raising or lowering the amplitude of a mu or beta rhythm. Usually a person first learns to do this by imagining moving a hand or other body part up or down. Healthy subjects, the team reported in 1994, could use mu and beta rhythms to direct a cursor—somewhat crudely, but with up to 70% accuracy—in two dimensions to one of four large targets at the corners of a computer screen.

    Since then, the Wadsworth team has improved its techniques for homing in on the desired EEG frequencies, translating those signals into cursor movements, and tuning the BCI to individual users. These advances have given users much more precise control of the cursor. In work now in press, McFarland, Wolpaw, and their colleagues show that college students can use the BCI to nudge a cursor a precise distance up or down to land on one of four icons, a step toward the goal of a brain-wave mouse. “Once you can control a mouse, the whole world of software opens up to you,” Wolpaw says. Wadsworth neuroscientist Irina Goncharova is now leading an effort to test this BCI on patients with mild or moderate ALS at Drexel University Hospital in Philadelphia.

    But both Wolpaw's and Birbaumer's techniques require weeks to months of training to teach a person how to control their brain waves. In contrast, a BCI control technique developed by psychologist Emanuel Donchin, now at the University of South Florida in Tampa, requires almost no training. In the mid-1980s, Donchin and his graduate student Larry Farwell, then at the University of Illinois, Urbana-Champaign, based their prototype BCI on the so-called P300 wave, a brief voltage increase that peaks about 300 milliseconds after the onset of certain surprising or unexpected events.

    Donchin and Farwell devised a grid containing the letters of the alphabet and typing functions such as space and backspace, which appear in rows and columns that flash randomly on the screen. A person focuses on a letter in the grid and mentally indicates “that's it!” whenever the row or column containing the letter is illuminated. This happens in about 1 of every 6 flashes, making the event somewhat surprising and therefore likely to elicit a P300 wave. The computer then identifies the letter by finding the intersection of the row and column that produced the wave.
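The selection step described above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' code: the grid layout, the score values, and the idea of averaging P300 amplitudes per row and column are my assumptions about how such a speller could be reduced to code.

```python
# Toy sketch of a P300 speller's letter-selection logic: each row and
# column of a 6x6 grid flashes repeatedly; the chosen character is the
# intersection of the row and column whose flashes evoked the largest
# average P300 amplitude. Grid contents and scores are invented.

GRID = [
    "ABCDEF",
    "GHIJKL",
    "MNOPQR",
    "STUVWX",
    "YZ1234",
    "56789_",  # "_" standing in for functions such as space/backspace
]

def select_letter(row_scores, col_scores):
    """row_scores/col_scores: mean P300 amplitude per row/column flash."""
    best_row = max(range(6), key=lambda r: row_scores[r])
    best_col = max(range(6), key=lambda c: col_scores[c])
    return GRID[best_row][best_col]

# Strongest responses on row 2 and column 3 -> letter "P"
row_scores = [0.1, 0.2, 0.9, 0.1, 0.3, 0.2]
col_scores = [0.2, 0.1, 0.2, 0.8, 0.1, 0.3]
print(select_letter(row_scores, col_scores))
```

Because each flash yields a noisy response, a real system would average amplitudes over many flash repetitions before picking the maximum.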

    Recent tests on college students indicate that the system could be used to “type” nearly eight characters per minute with 80% accuracy. Donchin and his team are now starting to test their BCI with severely disabled patients.

    Another BCI gives users split-second control over a mobile robot. Instead of analyzing a particular EEG component, such as mu rhythms or slow waves, José del R. Millán at the Dalle Molle Institute for Perceptual Artificial Intelligence in Martigny, Switzerland, developed a BCI that analyzes overall EEG signals at eight scalp locations. It relies on the fact that thinking vastly different thoughts will produce different EEG patterns. Using a neural network algorithm, a computer learns to distinguish among three such thoughts—say, mental arithmetic, visualizing a spinning cube, or imagining arm movements—and is programmed to perform a specific command based on the mental pattern it detects.

    In unpublished work, two healthy individuals this past spring learned to use Millán's BCI to manipulate a pocket-sized wheeled robot, a stand-in for a smart wheelchair. They could make it scoot forward, turn right, turn left, or stop, and thus were able to direct it through a model house with surprising speed. “The striking finding is that subjects can do this with brain control in only 35% more time than it would take if they were simply pressing a key,” Millán says. Millán programmed his BCI to issue commands twice a second, so users can make decisions about where to go on the fly and, say, avoid overshooting an entryway.
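The mapping from mental task to robot command can be pictured with a toy classifier. This is my sketch, not Millán's neural network: the prototype vectors, the nearest-prototype rule, and the command bindings are all invented stand-ins for a trained model.

```python
# Toy stand-in for an EEG pattern classifier: classify an incoming
# feature vector by its nearest learned prototype, then issue the
# robot command bound to that mental task. All values are invented.

COMMANDS = {"arithmetic": "FORWARD", "cube": "LEFT", "arm": "RIGHT"}

# One prototype feature vector per mental task (learned in training)
PROTOTYPES = {
    "arithmetic": [1.0, 0.0, 0.0],
    "cube": [0.0, 1.0, 0.0],
    "arm": [0.0, 0.0, 1.0],
}

def classify(features):
    """Return the command for the prototype closest to `features`."""
    def sq_dist(task):
        return sum((a - b) ** 2
                   for a, b in zip(PROTOTYPES[task], features))
    return COMMANDS[min(PROTOTYPES, key=sq_dist)]

print(classify([0.9, 0.1, 0.2]))  # nearest to "arithmetic"
```

Issuing a fresh classification twice a second, as the article describes, would simply mean running this decision step on each new half-second window of EEG features.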

    Computer communication

    One big drawback of these technologies is that they are incompatible, making them difficult to combine and slowing their development. Most of the systems started out so inflexible that adding a new feature was agonizingly difficult. For instance, Wolpaw and his team, including software engineer Gerwin Schalk, found an EEG signal that appeared when a person made a mistake while using their BCI, a discovery that could be used to design a quick “erase” option for their system. But adding such an option would have meant extensive reprogramming.

    So in February 2000, Wolpaw, Schalk, and McFarland teamed up with Birbaumer and Tübingen software engineer Thilo Hinterberger to build BCI2000, a flexible, universal BCI platform on which various brain waves could be selected and new applications could be easily built. They share it widely, making it easier for newcomers to enter the field. Ultimately, they hope that nurses or family caregivers will use the Windows-based system with minimal training, eliminating the need for scarce experts to accompany BCI software. “We need a worldwide user-friendly system that a reasonably intelligent person can download from the Internet for free,” Birbaumer says.

    Brain dictionary.

    New brain-computer interfaces are helping patients translate thoughts to words more efficiently than earlier models did.


    The basic framework for BCI2000 is now complete, and Wolpaw expects details to be published soon. It has four easily adaptable modules that handle the four essential functions of a BCI. One takes the raw brain signal, amplifies it, and encodes it digitally. Another extracts the desired features of the brain signal, such as a mu rhythm or P300 signal, and translates that signal into a command, such as movement of a cursor in a certain direction. The third controls a device, say, one that navigates the Internet or operates a prosthetic arm. And the fourth allows a user to start and stop the BCI and to specify details, such as the speed, of its operation.
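The four-module design can be sketched as a processing loop. This is my illustration of the architecture the article describes, not BCI2000's actual API; the class names, the amplification factor, and the mean-amplitude feature are all assumptions.

```python
# Minimal sketch of a four-module BCI pipeline: signal acquisition,
# feature extraction/translation, device control, and an operator
# module that runs the loop. Names and signal math are invented.

class SourceModule:
    def acquire(self, raw):
        # amplify and digitize the raw brain signal
        return [x * 10.0 for x in raw]

class SignalProcessing:
    def translate(self, samples):
        # extract a feature (mean amplitude standing in for, say, a
        # mu-rhythm estimate) and turn it into a cursor command
        amplitude = sum(samples) / len(samples)
        return "UP" if amplitude > 0 else "DOWN"

class Application:
    # the controlled device: here, a one-dimensional cursor
    def __init__(self):
        self.cursor_y = 0
    def apply(self, command):
        self.cursor_y += 1 if command == "UP" else -1

class Operator:
    """Starts/stops the loop and could hold settings such as speed."""
    def __init__(self, source, processing, application):
        self.modules = (source, processing, application)
    def step(self, raw):
        source, processing, application = self.modules
        application.apply(processing.translate(source.acquire(raw)))

op = Operator(SourceModule(), SignalProcessing(), Application())
op.step([0.1, 0.3, -0.1])  # mean > 0, so the cursor moves up
print(op.modules[2].cursor_y)
```

The point of the modular split is visible here: swapping in a P300 feature extractor or a wheelchair controller would mean replacing one class without touching the other three.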

    The software has already had an impact on the field. In spring 2002, South Florida's Donchin needed to upgrade his BCI, which was incompatible with state-of-the-art PCs, so he could start testing disabled patients. Donchin met with Wolpaw and Schalk at a BCI conference in Rensselaerville, New York, in June 2002 and described his predicament. Schalk volunteered to do the necessary programming on BCI2000. Within 2 weeks, Schalk managed to get Donchin's BCI up and running, enabling Donchin to bring it to New York City to test it on his first patient (this author's father) in September.

    Georgia State's Moore and her colleagues are using BCI2000 to develop an environmental-control system that allows a user to turn on and off lights, a television set, and a radio with brain waves. They've also built a communication system that lets a person select words from lists of nouns, verbs, and objects and that predicts words and even conversations, potentially providing faster communication than traditional spellers allow. Their Web browser causes a cursor to hop from one Web link to the next in response to altered brain signals.

    So far these prototype applications have been largely tested on simulated brain signals, but Moore has just started testing them on healthy volunteers and will soon include patients with spinal cord injuries or early-stage ALS. Eventually, Moore plans to run all of these applications on a laptop mounted to a smart wheelchair under development that will also be controlled by brain waves.

    Direct line from the brain

    Many researchers believe that BCIs that rely on fuzzy brain-wave signals are of limited value. These devices listen in on the accumulated hums of millions of neurons after they merge and pass through the skull, akin to listening to a crowd in a baseball stadium from the parking lot. EEG-based BCIs “do not extract the actual information in our brains—for example, our concept of a word,” says neuroscientist John Chapin of the State University of New York Health Sciences Center in Brooklyn.

    In contrast, a relatively new generation of BCI researchers implants electrodes inside the brain to pick up the chatter of single neurons, something like eavesdropping on the conversation of a couple inside the stadium from a few seats away. These signals, Chapin and others contend, comprise the actual brain code for movement and thought.

    Neurologist Philip Kennedy, head of Atlanta-based Neural Signals, and his neurosurgeon colleagues Ron Bakay and Princewill Ehirim are so far the only team to create a BCI with electrodes implanted in a human. Their most successful patient, a Georgia dry-wall contractor named Johnny Ray who had suffered a massive stroke that left him almost totally paralyzed, learned to tune his neural signals to operate a cursor, enabling him to spell and hit icons for statements such as “I'm hungry.” (Ray, whom Kennedy calls the first human cyborg, used the BCI for 4 years before his death in spring 2002.)

    Ray communicated through a novel electrode technology Kennedy invented: In the brain, chemicals lining a glass cone coax neurons to grow through the electrode and link to recording wires. This anchors the electrode and allows stable recording. In the current design, Kennedy and his colleagues implant two electrodes into their patients' brains; they are working toward implanting eight at a time. Other research teams are testing devices that extract information from dozens to hundreds of brain cells at a time. More massive arrays, some researchers believe, are the key to advanced prosthetics.

    In the mid-1990s, Chapin, then at the Medical College of Pennsylvania-Hahnemann School of Medicine in Philadelphia, and Miguel Nicolelis, at Duke, threaded 46 hair-thin wires inside a rat's brain and taught the animal to use its thoughts alone to tip a lever and receive a drop of water. A computer depressed the lever whenever the nerve signals picked up by the microwires displayed a pattern like that present when the rat moved the lever with its paw.

    In spring 2000, Nicolelis, Chapin, and their colleagues implanted a more extensive array inside the brains of two owl monkeys. They taught the monkeys to operate a joystick with their hands, maneuvering a cursor, or to reach out with their arms to grab a piece of fruit and bring it to their mouths. A simple formula, the team discovered, could predict from the electrical activity of 100 neurons a monkey's hand position milliseconds later. They translated these natural neuronal patterns into instructions for a robot arm—and watched the robot obediently mimic the monkey's arm movements.
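The "simple formula" the team discovered can be illustrated as a linear decoder: predicted hand position as a weighted sum of neuronal firing rates. The weights and rates below are invented for illustration; in the actual work they would be fit to recorded data from roughly 100 neurons.

```python
# Hedged sketch of a linear neural decoder: one coordinate of hand
# position a moment later is predicted as a weighted sum of firing
# rates. Weights, bias, and rates here are toy values.

def decode_position(firing_rates, weights, bias):
    """Predict one hand-position coordinate from N firing rates."""
    return bias + sum(w * r for w, r in zip(weights, firing_rates))

# Toy example with 4 "neurons" instead of 100
weights = [0.5, -0.2, 0.1, 0.3]
rates = [10.0, 5.0, 20.0, 2.0]
print(decode_position(rates, weights, bias=1.0))  # 7.6
```

Running one such decoder per coordinate (x, y, z) yields a predicted arm trajectory, which is what the team streamed to the robot arm.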

    Such systems have limitations. The monkey has to move its arm to produce the correct brain signal pattern, which won't work for paralyzed people. In addition, the monkey has no idea that its brain signals are controlling a machine, and so it cannot learn to improve its robotic performance.

    Direct connection.

    Monkeys have learned to control robotic arms such as this one via electrodes implanted in their brains.


    Recently, a group led by neuroscientist Andrew Schwartz, formerly at Arizona State University, addressed these issues. They tied monkeys' hands down and had them play a game in which their brain signals directly controlled a cursor. After 2 to 3 weeks of practice, one of the monkeys could hit the correct target nearly every time. The researchers linked observable changes in the firing patterns of 64 neurons to the animal's improved skills, indicating that practicing the brain-wave game was honing the responses of the cells (Science, 7 June 2002, p. 1829). “A monkey can squeeze a lot of information from a minimal neuronal signal,” Schwartz says. He emphasizes that learning, guided by the game's visual feedback, was key to this ability.

    Similarly, in work presented at the 2002 Society for Neuroscience meeting in November, Nicolelis's team, while collecting data from 86 motor cortex neurons, taught a macaque monkey to use a joystick to quickly position a cursor inside a target. The scientists then disconnected the joystick—although the monkey could still handle it—and ran the game off decoded neural signals. The monkey appeared to learn to manipulate the cursor just by thinking and eventually stopped moving its hands altogether.

    Recently, the Duke researchers have added a new robot with a gripper hand into the loop. When the monkey moves the cursor toward the on-screen target, the robot will reach for an object. The monkey will also receive tactile feedback—from small vibrators attached to its skin—to indicate the force with which the gripper grasps the object. The faster the vibrations, the higher the force. The researchers hope that this will enable a monkey to learn to pick up an object without crushing or dropping it. This experiment, says Nicolelis, should have “a tremendous impact on what you do to control prosthetic devices.” Using tactile feedback would be particularly useful in ALS patients, who retain some sensation even after most motor neurons are destroyed.

    In a similar effort, Schwartz, now at the University of Pittsburgh, and his collaborators have hooked a monkey's brain up to a robotic arm. If the monkey is allowed to view the robot and the food on the computer screen, it can get the robot to reach out and retrieve the food.

    In both Schwartz's and Nicolelis's experiments, the monkeys were first trained by practicing the movement with their hands, something paralyzed people cannot do. But Nicolelis is optimistic that this hurdle can be overcome, because people can be trained with verbal instructions. “We hope that we can show a visual trajectory to a human or tell him just to think about executing a movement,” and that thought alone will elicit coherent patterns of neuronal activity in the motor cortex, Nicolelis says.

    Many other hurdles must be overcome before implanting such arrays in humans—not least of them establishing that implanted electrodes are safe. As for stability, the microwires in the macaques were still picking up signals from the vast majority of the initial crop of neurons 1 year later. In Schwartz's case, some of the electrodes have lasted up to 3 years. “But we need to know that's the rule,” Schwartz says.

    Meanwhile, Schwartz and a University of Michigan team are developing electrodes designed to be easier to implant and to interact more securely and safely with natural brain tissue. Brown University's John Donoghue is also working with researchers at Cyberkinetics of Providence, Rhode Island, on a novel silicon array of 100 microelectrodes that he says will make extracting neural signals much easier.

    But some researchers in the field wonder if trying to implant such arrays in the human brain to get motor instructions might be overkill. When operating a prosthetic limb, for instance, the user could just tell it to lift, lower, or open or close its hand—or even grasp an object at a certain location—and let robotics do the rest. “BCIs just need to convey intent,” Wolpaw contends, and not the details of how a brain would coordinate movements. Indeed, neuroscientist Gert Pfurtscheller and his team at the Graz University of Technology in Austria have already demonstrated that a quadriplegic patient fitted with a prosthetic left hand learned to use mental imagery along with a scalp-based BCI to open and close the hand. After 5 months of training, the patient picked up an apple with his new hand and ate it.


    Power to the Paralyzed

    1. Ingrid Wickelgren

    Most amyotrophic lateral sclerosis (ALS) patients do not go on respirators when their breathing muscles deteriorate but instead elect to die. Niels Birbaumer of the University of Tübingen, Germany, attributes that in large part to a widespread misperception that life with ALS is not worth living. But he has found otherwise.

    In an unpublished study, Birbaumer, neuroscientist Andrea Kuebler of Trinity College in Dublin, Ireland, and their colleagues discovered that although three-quarters of the 76 ALS patients Kuebler interviewed had some depressive symptoms, as a group they were significantly less depressed than patients diagnosed with major depression. With symptoms ranging from minor impairment to total paralysis, few had access to brain-computer interfaces (BCIs) to improve communication. Yet 80% rated their quality of life as “good” or “satisfying.”

    Birbaumer, who depends upon these paralyzed patients for his research, has seen how some of them can thrive, in some cases even more so by using his pioneering BCI technology. For example, a 47-year-old former lawyer named Hans-Peter Salzmann with total paralysis from ALS has become so adept with Birbaumer's Thought Translation Device, the only BCI that patients have used for years in their homes, that he has now begun communicating over the Internet. “He is a happy person. Why kill a happy person?” Birbaumer asks.

    Lines of communication.

    Niels Birbaumer (in black) teaches a Peruvian patient to express himself by sending EEG signals to a computer.


    Financial reasons may also prompt some severely disabled patients to decline artificial respiration. U.S. insurance companies often won't foot the hefty bill—which can be up to $900 per day—for the 24-hour nursing care that is typically required. German companies are far more likely to pay, but when Salzmann's insurance company refused, Birbaumer helped his family sue—and win.

    Birbaumer dreams of getting BCI technology to tens of thousands of other patients scattered around the world. He spent August 2002 in Lima, Peru, training a wealthy 58-year-old businessman named Elias Musiris to use a BCI. He's also been flying twice a month to Tel Aviv, seeking to assist victims of terrorism who lie paralyzed in hospitals, unable to talk.

    The Internet may help spread BCI technology. Neuroscientist Gert Pfurtscheller and his team at Graz University of Technology in Austria recently used a net-based telemonitoring system to interact with a 32-year-old German man with severe cerebral palsy, training him to use a BCI spelling device.

    The next step, says Birbaumer, is to develop a BCI system that can be operated by a caregiver without ongoing assistance. “We need to come up with a system that is independent of us,” he says. He hopes BCI2000, a software system that allows users to develop their own BCI or find the most effective current model, will be the answer.


    Through a Lens, Deeply

    1. Robert Irion

    SEATTLE—Pacific Northwest winter skies cleared for 4 days in early January, welcoming more than 2000 astronomers to their biannual meeting.

    Photographers and astronomers love their zoom lenses. Now, a new camera on the Hubble Space Telescope has looked through the best natural lens in space with unprecedented clarity in search of baby galaxies—and has found far fewer than expected.

    The view comes by means of a “gravitational lens,” which arises when light from a distant source passes by a massive object in direct line of sight to Earth. The intervening object's gravity bends and magnifies the source's light. The strongest known lens is Abell 1689, a cluster of galaxies that crams hundreds of times the mass of the Milky Way into a small volume of space more than 2 billion light-years away. Light goes through funhouse-mirror distortions as it traverses the cluster's gravitational dips and bumps. Hubble's recently installed Advanced Camera for Surveys (ACS) targeted Abell 1689 for 13 hours in June 2002, creating an extraordinary map of its eerie arcs and blobs—the imprints of remote galaxies behind Abell 1689.

    The result, which dazzled viewers at the meeting, is “the deepest view of the universe so far,” says astronomer Narciso Benitez of Johns Hopkins University in Baltimore, Maryland. The lens amplifies light from a few galaxies at least 13 billion light-years away—ordinarily beyond Hubble's vision. Judging by the number of slightly closer galaxies, astronomers had expected to see 25 to 30 such remote objects with Abell 1689's boosted view. Instead, the ACS images captured just three, Benitez reported. Most galaxies that existed less than a billion years after the big bang might not have grown bright enough for the lens to expose, he speculates.
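How significant is that shortfall? If the number of detectable galaxies were Poisson-distributed with the expected mean of 25, the chance of catching three or fewer is vanishingly small. The calculation below is our own illustration of that point, not one reported by the team:

```python
import math

def poisson_cdf(k, mean):
    """P(X <= k) for a Poisson random variable with the given mean."""
    return sum(math.exp(-mean) * mean**i / math.factorial(i)
               for i in range(k + 1))

# Expected ~25 remote galaxies, observed only 3.
p = poisson_cdf(3, 25.0)
print(f"{p:.1e}")  # on the order of 4e-08: not a plausible statistical fluke
```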


    Abell 1689, the most powerful gravitational lens, magnifies remote galaxies into distorted blobs and crisp arcs.


    Because even a powerful gravitational lens zooms in on only tiny patches of sky, Benitez and colleagues plan to bolster their statistics by using ACS to explore a half-dozen more lenses. Rich cradles of baby galaxies probably won't appear, an ongoing survey suggests. A team led by astronomer Richard Ellis of the California Institute of Technology in Pasadena has found only a dozen primitive galaxies in eight gravitational lenses examined by another Hubble camera with less sensitivity. “We may be hitting the wall when we can no longer see these feeble little glowworms,” says Ellis, whose team has not yet reported its findings.

    Most light from infant galaxies may have been absorbed by the remnants of neutral hydrogen gas that filled space before the first stars and quasars shone—the so-called dark ages of the universe. Alternatively, the first gasps of star birth in the tiny galaxies may have snuffed out further star formation by heating up gas so much that it could no longer collapse into stars. That process, Ellis says, would have left the galaxies too dim to see until they grew larger as the universe aged.

    However, other ACS images shown at the meeting reveal remote galaxies freckling the sky, a discrepancy that astronomers are hotly debating. Graduate student Haojing Yan of Arizona State University in Tempe and his colleagues found about two dozen extremely faint galaxies in a patch of sky just 1/10th the width of the full moon, revealed by a 7-hour-long ACS exposure. “We think these were the significant contributors to ending the dark ages,” Yan says. Radiation from vigorous star birth in the young galaxies should have ionized the last of the neutral hydrogen, he says, allowing light to stream freely through the universe.

    ACS is the prime tool to resolve the debate, says instrument chief Holland Ford of Johns Hopkins. The camera is 10% to 15% more sensitive than hoped, he notes. However, the electronic pixels in its largest detector are suffering a higher rate of irreparable radiation damage than projected. Astronomer Adam Riess of the Space Telescope Science Institute in Baltimore says the damage—which will knock out about 2% of the camera's imaging surface within 3 years—is “a potential annoyance that will have little scientific impact.” Still, Ford's team will take the most demanding ACS images sooner rather than later.


    A Tsunami of Hot Jupiters?

    1. Robert Irion


    Astronomers have found about 100 planets beyond our solar system, so new ones must smash records to get noticed. The latest, announced at the meeting and scheduled for publication in the 30 January issue of Nature, sets three standards: farthest from Earth, tiniest orbit, and the first to be revealed by a technique that could expose thousands more like it. “This is the beginning of a new wave of extrasolar planets,” says astronomer Sara Seager of the Carnegie Institution of Washington, D.C. “It's a big step forward.”

    All previous planets were inferred by the back-and-forth gravitational tugs they exert on their parent stars. For several years, many teams have tried a more direct method by looking for planetary “transits.” A planet crossing the face of its star, as seen from Earth, will block a bit of the star's light at regular intervals. Astronomers have watched repeated transits by one Jupiter-size body since 1999, but that planet was first detected by means of the wobbling of its star.
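The size of that dip is easy to estimate: to first order, the fractional dimming equals the ratio of the planet's disk area to the star's. A back-of-envelope sketch with rough solar-system radii:

```python
# Transit depth = (R_planet / R_star) ** 2. Radii below are approximate
# solar-system values, used purely for illustration.
R_SUN_KM = 695_700
R_JUPITER_KM = 69_911
R_EARTH_KM = 6_371

def transit_depth(r_planet_km, r_star_km=R_SUN_KM):
    """Fractional dimming when the planet crosses a sunlike star's disk."""
    return (r_planet_km / r_star_km) ** 2

print(f"Jupiter-size: {transit_depth(R_JUPITER_KM):.2%}")  # about 1%
print(f"Earth-size:   {transit_depth(R_EARTH_KM):.4%}")    # about 0.008%
```

A roughly 1% dip from a Jupiter-size planet is within reach of ground-based photometry; an Earth-size dip of roughly 0.008% is not, which is why hunting for other Earths falls to space telescopes.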

    The new object, dubbed OGLE-TR-56b, marks the first time a transit led to a planetary discovery. Its name comes from the Optical Gravitational Lensing Experiment, a survey of light variations in thousands of stars, conducted in Chile by astronomers at Warsaw University in Poland. Astronomer Maciej Konacki of the California Institute of Technology in Pasadena and colleagues used several large telescopes to scrutinize 59 stars that OGLE singled out for a closer look by noting subtle dips in their brightnesses.

    Swifter than Mercury.

    In this artist's conception, planet OGLE-TR-56b roasts just 3 million kilometers from its star.


    Nearly all candidates had binary star partners rather than planets, a “false alarm” problem that complicates transit searches. The 56th star, however, harbored a surprising companion: a planet that dashes around the star once every 29 hours. That puts the planet in a scorching orbit just 1/14th as far from its star as Mercury is from our sun—by far the tightest orbit ever seen. Moreover, the star is about 5000 light-years away, more than 30 times farther than any other sunlike star with a planet. By peering so deeply into space at millions of stars, researchers should unleash “a tsunami of transit discoveries” within several years, predicts team leader Dimitar Sasselov of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts.
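The reported orbit can be sanity-checked with Kepler's third law, a = (G M P^2 / 4 pi^2)^(1/3). Assuming a roughly solar-mass star (the stellar mass is our assumption, not a figure from the article), the 29-hour period implies an orbit of a few million kilometers:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_STAR = 1.989e30      # kg: one solar mass (our assumption)
P = 29 * 3600.0        # orbital period in seconds

# Kepler's third law, solved for the orbital radius.
a = (G * M_STAR * P**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"orbital radius ~ {a / 1e9:.1f} million km")
```

The result, a bit over 3 million kilometers, is consistent with the distance quoted in the image caption.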

    “It's awfully nice to see good evidence for another transiting planet instead of all false alarms,” comments astronomer Timothy Brown of the National Center for Atmospheric Research in Boulder, Colorado. “It reassures us that we'll discover things with transits that would elude any other technique.”

    Brown's optimism is justified, says astronomer Keith Horne of the University of St. Andrews, U.K. Horne's assessment of more than 20 current transit programs suggests that the planetary discovery rate will soon soar by a factor of 10 to 100. Most objects will be gas giants that orbit their stars in 10 days or less, called “hot Jupiters.” The real quarry for future searches will be planets the size of Earth, but stellar eclipses by such bodies are so slight that astronomers can't measure them reliably from the ground. Two planned satellites—NASA's Kepler and the European Space Agency's Eddington—should spy other Earths by decade's end, Horne says.


    Stars Behaving Badly

    1. Robert Irion


    Supernovas may grab the spotlight, but the massive stars that give rise to them put on flashy shows long before they blow up. Violent pulsations blast gas and dust into space for thousands of years in prologues to the stars' explosive deaths. Two observations described at the meeting cast light on these unstable phases—and the dramatic effects they can have on nearby stars.

    In one event, the supergiant star Rho Cassiopeiae shed more mass than astronomers have witnessed in any other stellar eruption observed with modern instruments. A team led by astronomer Alex Lobel of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Massachusetts, used five telescopes in the United States and Europe to monitor the star for a decade. During its latest eruption, a 200-day outburst that peaked in fall 2000, the star brightened and then dimmed dramatically as an ejected shell expanded at supersonic speeds. Temperatures in the star's distended atmosphere plummeted to an unusually chilly 4000 kelvin, allowing molecules such as titanium oxide to form. Analysis of the light absorbed by those molecules shows that the ejected shell contains about 10,000 times the mass of Earth, Lobel says.

    Windy cities.

    Giant disks of gas and dust (top) in the Carina Nebula survive intense radiation from dozens of hot young stars.


    The team's analysis suggests that Rho Cassiopeiae swelled to 700 times the sun's size during its outburst, making it one of the biggest stars known. “At those low temperatures and densities, the atmosphere is extremely elastic,” says astronomer Andrea Dupree, Lobel's CfA colleague. “A perturbation anywhere in the star creates enormous movements.” The team will keep watching the star's fits as it staggers toward a supernova death—a fate less than 50,000 years away, Dupree estimates.

    If such outbursts occur in a stellar nursery, they can influence a whole slew of emerging stars. A prime example is Eta Carinae, a supermassive star whose shell of expelled dust from a 19th century eruption marks the heart of the Carina Nebula. Surprising new images of the nebula have exposed possible homes for nascent planetary systems amid the harsh onslaught of energy and ultraviolet (UV) light from Eta Carinae and its neighbors.

    Earlier Hubble images of the Orion Nebula had revealed scores of dusty cocoons called protoplanetary disks, or “proplyds” (Science, 8 December 2000, p. 1884). But the Carina Nebula is a more violent setting. It hosts about 60 gigantic stars, each as powerful as the single star that irradiates most of Orion. Despite the radiation bath, proplyds dot Carina in large numbers, according to images from a 4-meter telescope at the Cerro Tololo Inter-American Observatory in Chile. “This is an extremely threatening environment to dusty disks, but they somehow survive,” says astronomer Nathan Smith of the University of Colorado, Boulder.

    Smith's team doesn't yet know whether the disks last long enough to make planets, because they clearly get baked by pulses of radiation. Some blobs display cometlike tails pointing away from Eta Carinae, suggesting that the star blasts the nebula most intensely. However, Smith has identified a few dark proplyds that aren't being lit up today. The thick shroud spat out by Eta Carinae during its eruption in the 1840s blocks its worst radiation for now, he believes. “It's as if we had a UV light bulb that suddenly turned off,” Smith says—giving the proplyds a brief respite from their inevitable erosion.


    Propelled by Recent Advances, NMR Moves Into the Fast Lane

    1. Robert F. Service

    A speedy new NMR technique could finally help structural genomics groups achieve their goal of devising factory-style approaches to mapping protein structures at high speeds

    As a tool for mapping the atoms in proteins, nuclear magnetic resonance (NMR) arrived on the scene decades after the workhorse mapping technique, x-ray crystallography. So perhaps it's no surprise that protein structures solved with x-ray crystallography make up 80% of the deposits in the Protein Data Bank, the international protein structure repository. But NMR, which accounts for the other 20%, may be poised to expand its turf.

    In the 29 January issue of the Journal of the American Chemical Society, two chemists—Thomas Szyperski of the State University of New York, Buffalo, and his former postdoc Seho Kim, now at Rutgers University in Piscataway, New Jersey—report a new way to collect and interpret NMR data, which promises to slash the data-collection time of a typical protein-mapping experiment from days to hours. “This is an important method,” says Cheryl Arrowsmith, an NMR expert at the University of Toronto in Canada. The speed boost, Arrowsmith says, will be particularly useful for structural genomics groups working to devise factory-style approaches to mapping protein structures at high speeds.

    Slow speeds have long hobbled NMR. The technique determines molecular structures with the help of powerful magnetic fields, which cause the nuclei of particular atoms to precess like spinning tops, each with a characteristic frequency that depends on its chemical identity. To spot these atomic tops, researchers bombard vials containing millions of copies of a protein in solution with trains of radio-frequency (RF) pulses. The pulses nudge the spinning nuclei, effectively causing tops of the same chemical flavor to whirl in unison. Detectors then record the frequency at which different nuclei are spinning. Nearby atoms also affect the spins in ways that depend on their distance from the original atom and on their chemical identity. A proton attached to a carbon atom, for example, spins at a slightly different rate depending on whether that carbon sits near another carbon atom or a nitrogen atom. NMR experts use this “chemical shift” to help them solve their giant jigsaw puzzle of how neighboring atoms fit together to make up the protein.
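The chain from precessing nuclei to spectral peaks can be sketched numerically: the detected signal is a sum of decaying oscillations, one per nuclear environment, and a Fourier transform turns it into a spectrum with one peak per precession frequency. The frequencies and decay constant below are arbitrary illustrative numbers, not real chemical shifts:

```python
import numpy as np

# Two chemically distinct "tops" precessing at different frequencies.
n, dt = 4096, 1e-4                      # samples and sampling interval (s)
t = np.arange(n) * dt
freqs_hz = [600.0, 950.0]               # illustrative precession frequencies
signal = sum(np.cos(2 * np.pi * f * t) for f in freqs_hz) * np.exp(-t / 0.1)

# Fourier-transforming the time signal yields the spectrum.
spectrum = np.abs(np.fft.rfft(signal))
axis_hz = np.fft.rfftfreq(n, dt)

# The two tallest peaks sit at the two input frequencies.
top2 = sorted(axis_hz[np.argsort(spectrum)[-2:]])
print(top2)
```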

    That's relatively straightforward as long as the number of nuclei being looked at is small. But things get complicated when researchers start trying to map large proteins that harbor dozens of amino acids. In part, that's because the NMR data come out as a series of peaks, each one representing the frequency of rotation of a particular nucleus. When nuclei abound, the peaks in such a spectrum crowd together so tightly that it's impossible for researchers to sort out which peaks correspond to which atoms.

    Speed demon.

    Thomas Szyperski, shown perched beside a 750-megahertz NMR machine, says his new method would slash the amount of time needed to map proteins.


    Technology has eased matters somewhat, as more-powerful NMR magnets and advances such as cryogenically cooling the RF electronics provide better resolution and sharper signals. In recent years, researchers have also devised “multidimensional NMR” techniques to analyze the interactions not just between pairs of atomic neighbors but among threesomes and foursomes as well—extra clues to the giant molecular puzzle. But that information comes at a price: Each time researchers increase the number of neighbors they evaluate—known in NMR parlance as adding dimensions—they must collect an exponentially greater number of spectra. (Analyzing one dimension requires just a single spectrum. For two, researchers typically collect 64 spectra. For three, it's 64 × 64, or 4096 spectra, and for four it's 64 × 64 × 64, or 262,144 spectra.) “The minimum measurement times explode when the dimensions are increased,” Szyperski says. A standard 4D NMR experiment, Arrowsmith says, can take up to a week just to collect data.
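The parenthetical's counts follow a simple power law, made explicit below (64 increments per added indirect dimension is the conventional figure the text itself uses):

```python
# Minimum number of spectra grows exponentially with NMR dimensionality:
# spectra = increments ** (dimensions - 1), with 64 increments per
# indirect dimension as in the text.
def spectra_needed(dimensions, increments=64):
    return increments ** (dimensions - 1)

for d in range(1, 5):
    print(f"{d}D: {spectra_needed(d):>7} spectra")
# 1D: 1, 2D: 64, 3D: 4096, 4D: 262144 -- matching the counts in the text
```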

    Szyperski and Kim solved the problem by inventing a technique called G-matrix Fourier Transform NMR, or GFT NMR. Among other things, the technique not only controls the spacing of the RF pulses that bombard the protein sample, as the more conventional approach does, but it also tracks their phase—that is, whether the RF waves begin at their peaks, troughs, or somewhere in between. “Sampling different phases allows you to look at sums and differences of chemical shifts, and from those you can extract all the chemical-shift information you are collecting with even higher precision than with conventional techniques,” Szyperski says. The upshot, he says, is that researchers can get all the information from a 4D NMR experiment by collecting even fewer spectra than are now needed for 3D experiments. That improvement, he says, should cut data-collection times for larger proteins from days or weeks to hours.
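A grossly simplified numeric sketch of the idea: suppose the jointly sampled experiment delivers the sum and the difference of two chemical shifts; a small linear transform, playing the role of Szyperski and Kim's G-matrix, then recovers the individual shifts from those combinations. The shift values are arbitrary illustrative numbers:

```python
import numpy as np

# Two hypothetical chemical shifts (Hz) we want to determine.
omega1, omega2 = 480.0, 130.0

# What the jointly sampled experiment "measures": linear combinations.
measured = np.array([omega1 + omega2,    # in-phase combination
                     omega1 - omega2])   # anti-phase combination

# Unscramble the combinations back into the individual shifts.
G_inv = np.array([[0.5, 0.5],
                  [0.5, -0.5]])
recovered = G_inv @ measured
print(recovered)  # [480. 130.]
```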

    GFT NMR, Szyperski adds, is likely to prove especially useful in conjunction with new high-field NMR magnets, such as the 7-meter-high machines that operate at 900 megahertz. Those machines, particularly when equipped with low-temperature electronics, have more than enough resolution to solve structures for large proteins. But such experiments still take days to collect enough spectra to solve the protein structure. By cutting the number of spectra needed, GFT NMR “will allow scientists to take full advantage of the highest-field NMR machines,” Szyperski says.

    GFT NMR data are also easier for computers to analyze, an advantage that should make it easier to automate the jigsaw-solving process once data collection is complete, says Gaetano Montelione, an NMR expert who heads the Northeast Structural Genomics Consortium, of which Szyperski's group is a member. “I'm convinced that this is the way to go for automated NMR data analysis,” Montelione says. If so, NMR may yet give x-ray crystallography a run for its money.


    Shedding Light on Avian Iridescence

    1. Elizabeth Pennisi

    TORONTO—About 1100 researchers from all walks of biology met here 4 to 8 January to discuss organisms ranging from hawk moths to crocodiles and swap results.

    Fancy feathers have adorned hats and the garb of royalty for centuries. But some of their most alluring colors, including the peacock's iridescent blue and the mallard's green neck, have puzzled biologists because they are not produced by pigments. Many are created by reflective subsurface patterns, says Richard Prum, an ornithologist at the University of Kansas (KU), Lawrence. But this is “a mechanism that we hadn't really thought about” in detail, notes Thomas Cronin, a vision ecologist at the University of Maryland, Baltimore County. Now Prum has done so—and has an idea of how it works.

    Many feather colors—reds, yellows, and oranges, for example—are produced by pigments. But most blues, violets, and iridescent whites—the so-called structural colors—are not. For more than a century, researchers have thought that birds' noniridescent blue feathers and skin were produced by the same optical tricks that make the sky blue: light bouncing off tiny particles that preferentially scatter the shorter, bluer wavelengths.

    Prum started to look into this phenomenon when he noticed a change of color in a preserved rare Madagascar bird called the velvet asity; the specimen was royal blue instead of the fluorescent green of live birds. When he and his colleagues examined the skin with an electron microscope, “we were astounded by what we found,” he recalls. The skin's collagen protein fibers were lined up in ranks “so regular you could tile your bathroom” with them, he says. Wondering how that might have affected the color change, he decided to investigate the submicroscopic structures of feathers and skin in other species. He found several types of structures. For example, in the sunbird asity, the fiber patterns were not as well organized—but they weren't random, either.

    Why so blue?

    Mathematicians are analyzing surface structures to help explain what makes the colors of some birds so bright.


    Curious about these quasi-ordered patterns, he turned to KU mathematician Rodolfo Torres, who helped develop a Fourier analysis tool to determine optical properties from electron microscope views of thinly sliced sections of feathers and skin. The method reveals what the light-modulating particles in the tissue “look like on a tiny, tiny scale” and translates it into “our perceptual world,” comments Sönke Johnsen, a comparative physiologist at Duke University in Durham, North Carolina. “It's a neat technique.”
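The flavor of such an analysis can be sketched with a two-dimensional Fourier transform: a quasi-ordered array of scatterers produces a ring of power at the spatial frequency of its characteristic spacing. This toy version, run on a synthetic jittered lattice rather than real tissue, is only an illustration of the principle, not the Torres-Prum tool:

```python
import numpy as np

def radial_power_peak(img):
    """Radius (cycles per image width) of the strongest non-DC ring in the
    image's 2D Fourier power spectrum."""
    n = img.shape[0]
    power = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    y, x = np.indices(power.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    counts = np.maximum(np.bincount(r.ravel()), 1)
    ring_mean = np.bincount(r.ravel(), weights=power.ravel()) / counts
    return int(np.argmax(ring_mean[2:n // 2]) + 2)  # skip the DC region

# Synthetic "tissue": a jittered (quasi-ordered) lattice of point scatterers
# with a characteristic spacing of 8 pixels in a 128-pixel field.
rng = np.random.default_rng(0)
n, spacing = 128, 8
img = np.zeros((n, n))
for i in range(0, n, spacing):
    for j in range(0, n, spacing):
        img[(i + rng.integers(-1, 2)) % n, (j + rng.integers(-1, 2)) % n] = 1.0

peak = radial_power_peak(img)
print(peak)  # near 128 / 8 = 16, the lattice's characteristic spatial frequency
```

The location of that ring is what maps the nanostructure's spacing onto the wavelength of light it reflects coherently.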

    The observation is novel, says Cronin. The blue color of the sky is produced by incoherent interference of reflected light waves. But the bright blue of birds occurs when light waves are reflected in phase, producing coherent interference.

    Prum has analyzed skins of almost 30 species of birds from 15 families, he reported at the meeting, and found that different colors are produced by variations in how collagen fibers are arranged. What's more, he has discovered an evolutionary trend in structural color patterns among manakins from South and Central America. In these birds, blue feathers have a partially ordered structure; white ones have a slightly more organized pattern; and those with “iridescent mirror”-like white and tinges of silver and yellow have a closely packed hexagonal array. He concludes that an increase in the orderliness of the pattern yields a purer color, and that the quasi-ordered array appeared first in a common ancestor, followed by the structured hexagonal array. The change might have been driven by females preferring mates with purer colors.

    Prum intends to look more closely at the evolution of these patterns in other species. Already, he is finding similar coloration patterns in blue dragonflies, mandrill baboon faces, and vervet monkey scrotums. Making these connections is thrilling: “To understand why a blue jay is really blue,” he says, “is like understanding the phases of the moon.”


    Seeing the Invisible

    1. Elizabeth Pennisi


    Biologists are increasingly, humbly aware that many animals see far more of the world than humans do. Two talks at the integrative biology meeting, for example, presented new information on the unusual light-sensing capabilities of moths and tiny undersea crustaceans, including mantis shrimp. Almut Kelber of the University of Lund, Sweden, has found that moths see color at night. And Thomas Cronin, a vision ecologist at the University of Maryland, Baltimore County, has demonstrated that marine animals use polarized light to communicate. The work “demonstrates the hidden complexities of visual systems,” says Mason Posner, an integrative biologist at Ashland University in Ohio.

    Kelber never doubted that hawk moths, a large, globally distributed group of day- and night-flying flower hunters, could see color during the day. She had been studying one variety called a hummingbird hawk moth, which forages during the day, and her experiments had demonstrated that it could distinguish yellow, blue, and green. The moths readily chose the right feeder when trained to know by color which one had food. But because of the way humans see, “we just took for granted that at night there are no colors [for the moths], just shades of gray,” she recalls.

    Yet some moths seek out particular flowers at night for food, and Kelber wondered whether their large eyes might give them an edge in seeing in the dark. To find out, she began studying how trained moths distinguished particular colors under different low-light conditions. “It's a tough problem, training these moths to do what you would like them to do,” particularly in the dark, Cronin points out. But Kelber succeeded—and for comparison, she tried the same tests with humans.

    She asked people to pick out spots colored blue from among various shades of gray in three light levels. One level simulated twilight in a forest; a second represented dim moonlight; and the third was the equivalent of a dark, starlit night.

    In twilight, both humans and moths always picked blue out from a series of grays. The moths also excelled in moonlight, but humans could only see the blue when it was among light grays, not dark ones. And moths completely outdid humans at the darkest level, Kelber reported at the meeting and in the 31 October 2002 issue of Nature. “This is the first proof of an animal using color vision for object detection under dim light levels,” she says.

    Night-time hues.

    Even in the dark, hawk moths see color.


    The moths “have excellent color vision at light levels in which we can barely see at all,” says Sönke Johnsen, a comparative physiologist at Duke University in Durham, North Carolina. “It's hard to imagine walking around in a moonlit forest and seeing green leaves and red flowers, yet [moths] are doing it quite well.”

    The mechanism that enables night vision is remarkable, too. Insect “eyes” are actually made of many simple eyes called ommatidia that are packed together. In moths, each ommatidium contains nine photoreceptors that are activated by photons and transmit signals to the brain. Yet at night, even all that “eye power” isn't enough because there are few photons. The only way a moth can see color is to sum together the signals of several receptors or gather photons over a longer time, Kelber suggests.
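Kelber's summing argument is, at bottom, Poisson statistics: pooling receptors (or integrating for longer) multiplies the expected photon count, and because photon noise scales as the square root of the count, signal-to-noise grows as the square root of the pooling factor. A minimal sketch with illustrative counts:

```python
import math

# Photon arrivals are Poisson, so a mean count of C photons carries noise
# sqrt(C), giving SNR = C / sqrt(C) = sqrt(C). Summing N receptors (or
# integrating N times longer) multiplies C by N and boosts SNR by sqrt(N).
# The counts below are illustrative, not measured values.
def photon_snr(mean_count):
    return mean_count / math.sqrt(mean_count)

single = photon_snr(4)        # one receptor catching ~4 photons
pooled = photon_snr(4 * 9)    # nine receptors of an ommatidium summed
print(single, pooled)         # 2.0 6.0 -- a threefold gain from pooling
```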

    Less understood is the ability, discovered more recently, of some animals to make use of polarized light for more than just navigation. Octopuses, shrimp, and other sea creatures appear to manipulate polarized light (waves aligned in a plane) to send messages. “An entire visual channel that we cannot perceive is fundamental to the visual world of these creatures,” says Richard Prum, an ornithologist at the University of Kansas, Lawrence.

    Cronin had long suspected that animals living underwater might harness polarized light because other visual signals can be obscured by the water's depth or turbidity. One of the biggest problems in testing the idea, Johnsen says, was “finding a nice way of looking at polarization underwater.” Cronin used a video camera that records polarized light to capture sea animals' secret code.

    In some mantis shrimp, his videos reveal, reflective appendages can generate polarized light. “It could be that by moving [the leg], they may change the angle of polarization” and alter the signal, Cronin suggests. He thinks they might use the signals in courtship or to give a warning when strangers come close to their burrows. Guarding the mouth of their lair, they display their polarizing patch like a badge of courage. In both cases, they may communicate with one another without alerting other species. “Cronin has established how lowly shrimp and cuttlefish go far beyond our visual systems,” says Prum.


    Lean Times, Lean Gut

    1. Elizabeth Pennisi


    Desperate measures are called for when you have to wait 8 months to find a meal. But that's life for the broad-nosed caiman, a Brazilian crocodile. It fasts most of the year and binges during the few months of the rainy season. J. Matthias Starck, an integrative biologist at the University of Jena, Germany, has discovered that to cope with this lifestyle, the caiman's small intestine shuts down. But it remains ready to fire right back up with the arrival of the next meal, Starck and his colleagues reported.

    This discovery adds to published research on several snake species and unpublished data on turtles and frogs, all showing that some species undergo big changes in intestinal mass between meals. Indeed, “it seems to be a general condition, at least in infrequently feeding animals, that the small intestine isn't always turned on,” says Albert Bennett, a comparative physiologist at the University of California, Irvine.

    Until now, researchers had primarily looked at snakes to learn how fasting reptiles alter their gut morphology and physiology depending on food availability. “They thought that this [adaptation] was specific for sit-and-wait predators,” says Starck. He decided to study a crocodile because its irregular eating schedule is seasonal, not prey-dependent. And because of how crocs are related to their fellow reptiles and to birds, he thought they could help him understand the evolution of this trait. With these results, he says, “we've shown that the ability to up- and down-regulate the small intestine [is] an ancestral feature of tetrapods.”

    Feast and famine.

    Cells lining the python's intestine have tiny projections that expand and shrink depending on food availability.


    Starck and his colleagues used ultrasound to study 38 animals ranging in age from 1 to 3 years and weighing 2 to 18 kilograms, half of which fasted for 12 weeks before being fed. The others received food about twice a week. The ultrasound revealed a dynamic gut. Histologic studies showed that cells lining the intestine are quick to react to the presence or absence of food.

    In lean times, this lining appears compacted. But once the caiman has eaten, the gut surface cells swell and elongate, and the lining smoothes out. Small projections from the cells called microvilli help absorb nutrients from the gut. In fasting animals, the microvilli, and even many cells, seemed to slough off, Starck reported.

    Jean Herve Lignot, a physiologist at the CEPE CNRS University Louis Pasteur in Strasbourg, France, working with Stephen Secor, a physiologist at the University of Alabama, Birmingham, saw a similar trend in detailed light and electron microscopy studies of python intestines. Lignot examined the cells lining the gut almost daily for about 2 weeks after a meal. He saw dramatic changes in the gut, with microvilli expanding and shrinking depending on food availability. And a few hours after a meal, the cells along the gut expanded. There's nothing like a good meal, both teams found, to rouse a reptile's cold guts.