News this Week

Science  06 Oct 2006:
Vol. 314, Issue 5796, pp. 30

    Seeing a 'Plot,' deCODE Sues to Block a DNA Research Center

    Eliot Marshall

    A public brawl has broken out between deCODE Genetics, the Icelandic firm that created a country-wide DNA database to investigate diseases and develop drugs, and a U.S. hospital that recently announced the launch of the world's largest genotyping project focused on children's health. DeCODE has sued to stop the project, claiming it is built on information “stolen” by former deCODE employees.

    In papers filed in a U.S. District Court and made public last week, the company accuses four employees of Children's Hospital of Philadelphia (CHOP) and one consultant—all ex-deCODE workers—of a “plot” to “steal deCODE's most prized assets” to create a commercial rival. Claiming that computer files and data were removed from Iceland, deCODE seeks an immediate court order to prevent CHOP from moving forward with its Center for Applied Genomics, a $39 million project to gather and analyze DNA from 100,000 children (Science, 16 June, p. 1584).


    In a response filed with the Philadelphia court, CHOP denies the allegations and warns that if the court sides with deCODE, it could “paralyze … critical research into the genetic basis for childhood diseases.” The court began a public hearing on the case on 26 September. No decision on an injunction had been announced as Science went to press, and some geneticists outside the fray are watching closely. Rory Collins of the University of Oxford, principal investigator for a massive new U.K. DNA bank, says of the CHOP project, “It would be a shame to lose a key resource such as this. It's hard enough to get the research started and approved; we need as many as we can get.”

    Officials at deCODE and CHOP have declined to comment on specifics of the lawsuit, but court documents indicate that the fight erupted after the deCODE employees moved to CHOP this summer. According to the legal complaint filed by deCODE, Hákon Hákonarson, an M.D.-Ph.D. and former deCODE vice president for business development, discussed a “business plan” with CHOP in September 2005 to create a large DNA data bank and genetics research center in Philadelphia. DeCODE alleges that Hákonarson negotiated a $100,000 “signing bonus” for himself and signed an employment letter with CHOP in December 2005, agreeing to head its fledgling DNA project. Hákonarson submitted his resignation in January and left deCODE in late May. However, the brief says, he did not fully inform deCODE's chief executive, Kári Stefánsson, about his intentions.


    In the intervening months, according to deCODE, Hákonarson “on at least 60 occasions … exceeded his authorization to access deCODE's computer system by attaching removable storage devices” to obtain company research and business information that could be used to help set up CHOP's center. (DeCODE does not contend that any person's DNA sequences were copied.) “Dr. Hákonarson acted as a double agent for CHOP while appearing to remain a loyal employee,” deCODE's brief alleges; it asserts that he recruited three other deCODE employees and a deCODE consultant to CHOP's payroll. In addition to seeking unspecified damages, deCODE asks the court to block the new Philadelphia center from competing with it for grants or business for at least 2 years.

    In its rebuttal brief, CHOP dismisses these “strident allegations” as “simply not true.” The filing acknowledges that “some allegedly confidential information was innocently transferred to Dr. Hákonarson” at a time when “a proposed collaboration between deCODE and [CHOP] was in the making.” But the information was “trivial,” CHOP's brief says. The brief denies that CHOP's genetics research unit would be a commercial rival to deCODE.

    According to CHOP, relations soured when Stefánsson ordered Hákonarson's name removed from an “important” scientific paper. “Hákonarson had been considering leaving deCODE,” the brief says, and told Stefánsson in January 2006 that he would resign—but at Stefánsson's request stayed on. Hákonarson and Stefánsson discussed the idea of deCODE and CHOP collaborating, according to the hospital's brief, but Stefánsson eventually rejected this and “fired” Hákonarson in May after becoming “angered over a review article Dr. Hákonarson published in a medical journal.” CHOP blames the defection of deCODE employees on the firm's boss. “Because of Dr. Stefánsson's tempestuous behavior, the working environment at deCODE could be brutal,” the CHOP brief says.

    According to a deCODE statement, the company “believes it is in a strong position to secure its intellectual property and intends to do so.” CHOP meanwhile asserts in a press release that “the claims against its researchers are without merit.”


    Hopes for Innovation Bill Rest on Lame-Duck Congress

    Jeffrey Mervis

    Members of Congress left town last week to campaign for reelection with little to show from a yearlong push for legislation aimed at bolstering U.S. competitiveness. But rather than being depressed, science advocates are hoping that the best is yet to come. The cause of their optimism: Senate leaders last week introduced a sprawling, bipartisan bill that would authorize $20 billion in new spending over 5 years to strengthen science and math education and expand federal research programs. Advocates hope the Senate will pass the measure, dubbed the American Competitiveness and Innovation Act, when Congress returns for a lame-duck session after the 7 November election. But persuading the House to approve a comprehensive innovation bill before the end of the year could be difficult. A different and much slimmer version has been languishing there for more than 3 months.

    Last week, five influential lawmakers appealed to the community for help in getting the job done, appearing separately during a 1-day meeting at the National Academies that turned into an impromptu pep rally. “We need you to talk to your elected officials, and to the president, and tell them to support this bill,” declared Senator Lamar Alexander (R-TN). The meeting marked the first anniversary of the publication by the academies of a widely acclaimed report—from a panel chaired by Norman Augustine and entitled Rising Above the Gathering Storm—that shaped the Senate bill (Science, 21 October 2005, p. 423).

    A strong lineup.

    From left, NAS President Ralph Cicerone, Norman Augustine, NAE President Bill Wulf, and Senator Lamar Alexander enjoy the National Academies' convocation on competitiveness.


    It should be an easy sell, Senator Jeff Bingaman (D-NM) told the 600 university and corporate research leaders. “It's rare to get people on both sides of the aisle and on both sides of Capitol Hill to be singing from the same hymnal,” said Bingaman, one of 32 co-sponsors of S. 3936, introduced 26 September by Senate Majority Leader Bill Frist (R-TN) and his Democratic counterpart, Senator Harry Reid of Nevada. “But that's the case with this topic and this bill.”

    “This bill contains almost all of the recommendations in the academies' report, and we're ready to move on it,” proclaimed Senator Pete Domenici (R-NM), chair of the Energy and Natural Resources Committee. He proudly ticked off provisions that would authorize doubling the budget for the National Science Foundation over 5 years, doubling the budget for the Department of Energy's Office of Science over 10 years, and creating a slew of programs to train better science and math teachers and to attract more students into science, technology, engineering, and math fields.

    Then he added a caveat. “I also lead one of the appropriations subcommittees—Energy and Water—that has to fund a lot of what is in this bill, and we don't have money for everything. I have promised to do it by taking money from other areas. But I don't know if I will be able to hold on in upcoming negotiations with the White House and my colleagues. So I'm going to close my eyes and do my best.”

    Representative Sherwood Boehlert (R-NY), who chairs the House Science Committee, was equally candid about the hurdles in the House. Boehlert, whose retirement this fall after 24 years in Congress will further drain an already shallow pool of moderate House Republicans, has pushed unsuccessfully for floor debate on two bills that would expand existing research and education programs, H.R. 5356 and H.R. 5358, which in June cleared his committee. “Our bill includes a lot of what's in the Gathering Storm report,” he said, “but, unfortunately, the bill has been stalled by a handful of conservative members who don't want to see any new spending and by some ideological forces in the White House. And the House leadership hasn't been willing to move it ahead.” On the bright side, however, Boehlert said that the existence of a Senate bill, after months of wrangling over its content, improves the odds that “we will be able to work something out during the lame-duck session.”

    That postelection session poses its own problems. Beyond clearing must-pass spending bills and other legislation left over at the tail end of the 2-year congressional term, legislators must reconcile the House and Senate bills. Although they share the same party affiliation, Representative Boehlert and Senator Alexander don't exactly see eye to eye.

    “I'd like to see a more streamlined version [of the 209-page Senate bill]. They need to set some priorities,” says Boehlert. “The Senate bill doesn't have a prayer in the House. But I think we can pass something once we come to our senses.”

    For his part, Alexander predicts that S. 3936 “will go through the Senate quickly” once the Senate reconvenes on 14 November. “Then, the House could just pass our bill, and we'd be done.” He also offered some not-too-subtle advice about winning over House Speaker Dennis Hastert (R-IL), whose district includes Fermi National Accelerator Laboratory.

    “Suppose one of you lives in a district that contains a national lab and which happens to be represented by the Speaker of the House,” Alexander said to laughter from the audience. “Maybe you could make a call.”


    U.S. Needs New Icebreakers, Report Tells Congress

    Jeffrey Mervis

    Despite the United States' growing economic, military, and scientific interests in the polar regions, for the past 2 years the National Science Foundation (NSF) has relied on a rented Russian icebreaker to make sure that the U.S. research stations in Antarctica don't run out of fuel. The need to have a former Cold War archrival protect U.S. strategic interests is more than simple irony, however. A new report by the National Academies says that it's unacceptable—and that the federal government should build and deploy two new icebreakers in the next decade to make sure that the United States has full and timely access to the region. But finding the political will—and the money—to get the job done is another matter.

    The report, from a panel of the Polar Research Board that was chaired by computer engineer Anita Jones of the University of Virginia in Charlottesville, scolds the government for allowing the current four-ship U.S. icebreaking fleet to deteriorate and recommends that the Bush Administration review its recent decision to shift management and fiscal responsibilities for icebreaking from the Coast Guard, where they have historically resided, to NSF, which coordinates U.S. polar research and runs the Antarctic program (Science, 16 December 2005, p. 1753). Jones says the $1.4 billion cost of two new ships is a small price to pay to ensure U.S. access to both polar regions, noting for example that accelerated melting of sea ice has focused new global interest on the Arctic.

    “I'm concerned about the diminishing capacity of our current fleet,” says Representative Don Young (R-AK), chair of the House Transportation Committee, whose Coast Guard and maritime subcommittee held a hearing last week on the academies' report. The Alaskan legislator, who had requested the report, noted that global warming presents an unprecedented opportunity for the United States to develop shipping lanes and natural resources in the Arctic, and that he's been frustrated by the inaction of the Bush Administration in renovating the icebreaking fleet: “Maybe this report will help.”

    One major issue for the panel is the country's ability to maintain open water each winter for oil tankers to resupply McMurdo Station, the largest of the three U.S. stations on the Antarctic continent and the staging area for activity at the South Pole. The mainstays of the U.S. fleet are the Polar Sea and Polar Star, the world's most powerful non-nuclear icebreakers. Both are now at the end of their expected 30-year life spans. Their increasing fragility, combined with unusually thick pack ice in the Antarctic, forced NSF to rent the Russian icebreaker Krasin for several weeks during the past two winters. (The Coast Guard also operates the Healy, a 7-year-old research icebreaker used mainly for Arctic missions, and NSF leases the Nathaniel B. Palmer, which has limited icebreaking capabilities, for Antarctic research cruises.) NSF has lined up a Swedish icebreaker for this winter's Antarctic chores.

    With a little help …

    Russian icebreaker Krasin clears path to NSF base.


    “Congress likes to wait until there's a crisis, and unfortunately, that's what it has come to,” says Representative Frank LoBiondo (R-NJ), chair of the maritime subcommittee. Mead Treadwell, chair of the U.S. Arctic Research Commission and a witness at the hearing, hopes the report will help the scientific community pressure the Bush Administration to address the issue. “The first step would be $30 million in the [president's 2008 budget request] for a design study,” says Treadwell. “If it's not there, then we'll have to start screaming.”


    Method to Silence Genes Earns Loud Praise

    Jennifer Couzin

    Over beer and coffee, in labs and at scientific conferences, the speculation has been intense for years: Who in the RNA interference (RNAi) field, biologists wondered, would win the Nobel Prize, and when? Science's ultimate accolade was considered increasingly inevitable as the gene-silencing method revolutionized genetics, spurred development of new medical treatments, and transformed our understanding of cellular behavior. But, under Nobel rules, the prize can go to no more than three people. Yet many had made seminal contributions to the discovery and understanding of RNAi.

    Early Monday morning, several years earlier than many expected, the guessing game came to an end. Two Americans—Craig Mello of the University of Massachusetts (UMass) Medical School in Worcester and Andrew Fire of Stanford University in Palo Alto, California—learned that they had won this year's $1.37 million Nobel Prize in physiology or medicine. Like many Nobel winners before him, Fire, who was woken by a phone call from Sweden, wondered at first whether he was dreaming or the caller had the wrong number. Although many had predicted that he and Mello would be winners, Fire still felt a “certain amount of disbelief,” he said during a press conference. “We looked at this very, very complex jigsaw puzzle and put in a significant piece,” he said.

    Silence is golden.

    Andrew Fire (left) and Craig Mello learned this week that they'd won the Nobel Prize for their groundbreaking discovery of RNAi's gene-quelling power.


    That piece came in 1998, when the pair, with colleagues, reported in Nature that injecting double-stranded RNA into worms silenced genes. That nailed down the mechanism for seemingly disparate and baffling observations others had made in plants, worms, and even mold over previous years. It also laid the groundwork for subsequent RNAi findings, including the discovery of the phenomenon's natural roles in mammalian cells—guiding early development, for example—and ways to manipulate it artificially. Today, it's thought that one type of RNA, microRNA, depends on RNAi to control upward of a quarter of the human genome.

    Although Fire said at a press conference that, given the strides made by others in the field, “I feel slightly guilty to be here,” Phillip Zamore, who works with Mello at UMass, calls the award “one of the most well-deserved Nobel Prizes ever given.” The Nature paper, Zamore notes, prompted him to leap into the RNAi field at the end of his postdoc 8 years ago, much as it inspired many other young researchers. Mello, he says, “is a scientist's scientist. … He's always stretching me intellectually.”

    Adds Phillip Sharp of the Massachusetts Institute of Technology in Cambridge, who himself won a Nobel 13 years ago for RNA discoveries and now works with RNAi, “The avalanche was started down the hill by this paper.”

    Fire, who at the time was based at the Carnegie Institution of Washington in Baltimore, Maryland, and Mello resolved the mystery of RNAi while experimenting with ways to control levels of a muscle protein in worms by ramping up or down its RNA. Like others, they recorded puzzling results when injecting either “sense” or “antisense” RNA into cells. These single strands of RNA were thought to either boost or reduce levels of messenger RNA that had matching or complementary sequences, but they did not act predictably. When Fire and Mello combined the strands of RNA into a double-stranded helix, they were taken aback by its power to dampen gene expression in their worms reliably and specifically.

    Several years later, Thomas Tuschl, who is now at Rockefeller University in New York City, translated the findings to mammalian cells. Dozens of companies are now seeking to apply RNAi to medical conditions as diverse as cancer and hepatitis, using the method to turn off oncogenes or shut down viral replication, for example. Meanwhile, labs around the world routinely test the function of proteins in cells or generate animal models of diseases by using RNAi to silence genes.

    While RNAi researchers praised Mello and Fire's experiments and deemed them a key turning point for the field, some also voiced quiet disappointment that the committee had decided not to share the prize with a third person. One RNAi researcher, who asked not to be named, even told Science that “plants got screwed.”

    It was observations in plants that first hinted at a mysterious gene-silencing method. But the findings were puzzling, disparate threads that never quite tied together. In the early and middle 1990s, researchers such as David Baulcombe of the Sainsbury Laboratory in Norwich, U.K., determined that adding genes into plants sometimes turned off the endogenous counterparts, a phenomenon then described as cosuppression (Science, 26 May 2000, p. 1370). Baulcombe and others suspected that RNA was behind this gene silencing but were unable to control the outcome of their experiments reliably. Around the same time, Victor Ambros of Dartmouth Medical School in Hanover, New Hampshire, and Gary Ruvkun of Harvard Medical School in Boston identified in worms a natural gene-silencing RNA that would later become known as microRNA.

    At a plant conference in 1995, Ruvkun discussed his worm work, recalls Richard Jorgensen, a plant geneticist at the University of Arizona, Tucson, who made some of the earliest discoveries of RNAi. “We talked about it, and we kind of shrugged our shoulders. We couldn't figure out what the connection was,” he says.

    It took Mello and Fire's results with double-stranded RNA for that light bulb to go on. Baulcombe, whom many cited as also deserving Nobel recognition, graciously acknowledged as much, calling the pair's work “immensely significant.”


    Astrophysicists Lauded for First Baby Picture of the Universe

    Adrian Cho*
    *With reporting by Daniel Clery.

    The 2006 Nobel Prize in physics honors two astrophysicists who first mapped the afterglow of the big bang—the so-called cosmic microwave background (CMB). Using an instrument on NASA's Cosmic Background Explorer (COBE) satellite, John Mather, 60, of NASA's Goddard Space Flight Center in Greenbelt, Maryland, and colleagues measured the precise spectrum of the microwaves. Using another instrument on COBE, George Smoot, 61, of Lawrence Berkeley National Laboratory and the University of California, Berkeley, and colleagues detected slight variations in the temperature of the CMB, signs of the clumping of matter that would produce galaxies.

    First light.

    Astrophysicists John Mather (top) and George Smoot and colleagues probed the details of the afterglow of the big bang.


    The portrait of the universe as a young fireball paved the way for today's studies of how the cosmos evolved, says George Efstathiou, an astrophysicist at the University of Cambridge in the U.K. The spectrum “verified beyond any reasonable doubt that the cosmic microwave background radiation was created very early in the universe's history,” he says, and “the discovery of temperature ripples gave us the first probe of the exotic physics that occurred within 10⁻³⁵ seconds after the big bang.”

    According to the big bang theory, the universe burst into existence and has been expanding and cooling ever since. Almost 14 billion years later, radiation from the primal explosion lingers, although it has cooled to a frigid 2.725 kelvin as its wavelength has stretched to the microwave range. Physicists Arno Penzias and Robert Wilson discovered those microwaves by accident in 1965 and won the Nobel Prize 13 years later. But when NASA launched COBE in 1989, scientists were still puzzling over the CMB's properties.
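
    As a rough check (the arithmetic is ours, not the article's), Wien's displacement law, with its standard constant b ≈ 2.898 × 10⁻³ m·K, ties that 2.725-kelvin temperature to a peak wavelength squarely in the microwave band:

    \[
    \lambda_{\max} \approx \frac{b}{T} = \frac{2.898\times10^{-3}\ \mathrm{m\,K}}{2.725\ \mathrm{K}} \approx 1.06\ \mathrm{mm}.
    \]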

    Mather's team showed that the spectrum of the radiation fit a so-called blackbody spectrum that describes a glowing body in thermal equilibrium. That meant that the radiation was released quickly and that the early universe was very hot and dense, as the big bang theory predicted. “In the beginning, I was just trying to get the job done, and I didn't think about how important [the measurement] was,” Mather says. “But since then, I've realized that it is an essential piece of the history of the universe.”

    Smoot's team found that the temperature of the radiation varied from place to place in the sky by about one part in 100,000—just enough to be detected. “It was close,” Smoot says. “We had about a factor of 2 margin” in sensitivity. The small size of the variations indicated that the gravity from ordinary matter would not suffice to explain the structure of the universe; some form of unobserved dark matter must have sown the seeds for the galaxies and clusters, Smoot says.

    Mather and Smoot were postdocs in the 1970s when they proposed their experiments. “NASA was taking a big chance on us,” Smoot says. Since COBE produced its first results in 1992, the CMB has borne still more fruit. Three years ago, NASA's Wilkinson Microwave Anisotropy Probe satellite obtained a more-detailed map, which revealed the precise age and composition of the universe. And next year, the European Space Agency will launch Planck, a satellite that will map the polarization of the microwaves, which could reveal signs of gravitational waves in the primordial universe. “I think there [are] many things left to learn about the CMB,” Mather says. And perhaps more Nobels to come.


    Speedy Planets Near Galactic Center Show Sun's Region Is No Fluke

    Govert Schilling*
    *Govert Schilling is an astronomy writer in Amersfoort, the Netherlands.

    The planet orbiting the dwarf star SWEEPS-10 is probably lifeless. But if you were born there, you would celebrate your “birthday” every 10 hours and 11 minutes. That's all the time it takes the Jupiter-like planet to complete one revolution. In addition to smashing the old record for shortest-period planet, SWEEPS-10 B—along with 15 other new exoplanets discovered by the Hubble Space Telescope—gives astronomers the first evidence that extrasolar planets are just as abundant in the center of the Milky Way galaxy as they are in the neighborhood of our sun.

    A team led by astronomer Kailash Sahu of the Space Telescope Science Institute in Baltimore, Maryland, used Hubble to monitor 180,000 extremely faint dwarf stars in the core of the Milky Way, some 30,000 light-years away. Most of the 200 or so previously known exoplanets were spotted by periodic Doppler shifts: changes in the wavelength of their stars' light that result from stellar wobbles caused by a planet's gravitational pull. That method limited astronomers to nearby stars, the only ones bright enough to be studied spectroscopically. The new search was based on a different principle: If a star has a close-in planet with an orbit seen edge-on, the planet should periodically block a small part of the star's light. The results of the week-long Sagittarius Window Eclipsing Extrasolar Planet Search (SWEEPS), carried out in February 2004, are published in this week's issue of Nature. The haul of 16 planet candidates includes five that orbit their stars in less than an Earth day.
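
    The article doesn't quantify how much starlight a transit blocks; as an illustrative sketch (our numbers, not the paper's), the fractional dip in brightness is roughly the ratio of the planet's and star's disk areas:

    \[
    \frac{\Delta F}{F} \approx \left(\frac{R_{p}}{R_{*}}\right)^{2},
    \]

    so a Jupiter-sized planet crossing a sun-like star (radius ratio of about 0.1) dims it by roughly 1%, and the dip is proportionally deeper for the smaller dwarf stars that SWEEPS monitored.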

    “It's a very tantalizing result,” says astronomer Don Pollacco of Queen's University Belfast in the U.K., “but you have to be cautious. These stars are so faint that it's incredibly tough to do follow-up observations” to prove that the brightness dips are indeed caused by transiting planets and not by binary companion stars or other effects. Pollacco is the project scientist of SuperWASP, a ground-based search for transiting exoplanets that recently published its first two detections.

    SWEEPS's five ultrashort-period planets all seem to orbit stars that are significantly less massive than the sun. Because low-mass stars are also cooler and less luminous, the new exoplanets are about the same temperature (some 2000 kelvin) as the many “hot Jupiters” that have been found at slightly larger distances from brighter, hotter stars. Planet hunter William Cochran of the University of Texas, Austin, speculates that 2000 K may mark the highest temperature a “gas giant” planet can reach before its atmosphere evaporates, leaving only a small, rocky core.

    Playing with fire.

    Record-setting exoplanets may show how close to a star “gas giants” can orbit and live to tell the tale.


    In the near future, two space missions will also look for transiting exoplanets. The French COROT satellite is scheduled for launch in late November this year, followed by NASA's Kepler mission in 2008. Cochran, who is a co-investigator on Kepler, says the Hubble results are “of significant importance” for the space missions, because they seem to confirm that the statistics of extrasolar planets are the same throughout the galaxy.


    Has Lazy Mixing Spoiled the Primordial Stew?

    Richard A. Kerr

    Last year, geochemists working out how the solar system was cooked up 4.6 billion years ago thought they had found some missing ingredients. A slightly skewed isotopic composition of meteorites suggested that some elements had been dragged into the deep Earth early on, in zones of rock that remain undetected to this day. That “layer cake” model of Earth's interior solved a number of problems for terrestrial geochemists. But two more isotopic studies published online by Science this week indicate that the notion of permanent layering in Earth's depths may rest on shaky assumptions about the chemistry of the early solar system.

    The new papers agree that some isotopes vary in abundance among meteorites (and thus among the asteroids they came from), or between meteorites and Earth. Those variations imply that isotopic composition varied from place to place in the swirling disk of gas and dust that gave rise to planets and asteroids. Last year's paper, by contrast, assumed that the solar nebula was thoroughly mixed (Science, 17 June 2005, p. 1723).

    A planetary mixing bowl.

    Geochemists had assumed that the swirling solar nebula thoroughly stirred in all the planetary ingredients, but meteoritic isotopes now show the mixing was incomplete.


    That paper's authors, geochemists Richard Carlson of the Carnegie Institution of Washington's Department of Terrestrial Magnetism and Maud Boyet, now at University Jean Monnet in St. Etienne, France, found a previously unrecognized difference in the isotopic composition of the element neodymium between meteorites and rocks from Earth. The meteorites were so-called chondrites, little-altered building blocks of the rocky planets. Geochemists have long thought they have the same composition as Earth's rock. So Boyet and Carlson assumed that the newly recognized neodymium gap arose soon after Earth formed, as a result of chemical reactions that separated samarium from other elements. Samarium is a radioactive progenitor of neodymium.

    But in one of this week's papers, geochemists Michael Ranen and Stein Jacobsen of Harvard University report that the solar nebula was isotopically heterogeneous in the first place. The evidence comes from barium, one of the elements that swirled into the solar nebula after being forged by nuclear reactions in a dying star. The authors find more of some barium isotopes in chondritic meteorites than in rocks on Earth even though the barium isotopes—unlike the neodymium isotopes—are not the products of radioactive decay. Somehow, they say, some of the newly minted elements did not get thoroughly mixed into the nebula before the asteroids formed. As a result, the chondritic meteorites cannot be trusted as a benchmark for the starting composition of the whole Earth, they conclude, contrary to Boyet and Carlson's assumption. Earth's initial composition “becomes a more complicated puzzle to figure out,” Jacobsen says.

    The authors of the second online paper—geochemists Rasmus Andreasen and Mukul Sharma of Dartmouth College—also found signs of a heterogeneous solar nebula, but with a twist. They revisited the neodymium and samarium isotopes of chondritic meteorites. They found that the most primitive sort, the carbonaceous chondrites from the far edge of the asteroid belt, contain a mix of neodymium isotopes different from that in ordinary chondrites from the inner part of the belt.

    Carbonaceous chondrites are definite oddballs, Andreasen and Sharma conclude. But they see signs in neodymium and samarium isotopes that Earth and ordinary chondrites grew from the same sort of stuff. The difference in neodymium isotopes that Boyet and Carlson noted could indeed have resulted from the early separation of elements on Earth, they say. Andreasen and Sharma's analyses “bolster our claim” about a layered deep Earth, says Carlson. Sharma agrees.

    The two new isotopic studies agree in one respect. “What was supposed to be a homogeneous stew was not,” says geochemist Gerald Wasserburg, professor emeritus at the California Institute of Technology in Pasadena. “I don't know whether Boyet and Carlson are right or not, [but heterogeneity] threatens all the things one does in that area. Pandora's box is clearly open.”


    Bizarrely, Adding Delay to Delay Produces Synchronization

    Adrian Cho

    Running late? Add more delay, and you can end up right on time—if you happen to be a chaotically varying beam of laser light. When three lasers in a row shine into one another in just the right way, they can forge a connection in which the intensities of the first and last lasers vary in unison, physicists report. That's weird because if the researchers couple only two lasers, the variations of one simply lag those of the other by the amount of time it takes light to pass between them, as anyone might expect. The strange new effect could shed light on how the hemispheres of the brain stay in sync, researchers say.

    “These guys have shown experimentally that this happens,” says Rajarshi Roy, a physicist at the University of Maryland, College Park. “Explaining mathematically why this is possible is an open question.”

    When two lasers shine into each other, their intensities can start to vary randomly. The heart of each laser is a “resonant cavity” in which light begets more light in a process called stimulated emission. Within one laser, light from the other laser can interfere with the light already in the cavity, either increasing or decreasing the overall intensity. That change, in turn, increases or decreases the output of the laser and hence the amount of light beaming back into the other one. Such feedback can trigger chaotic oscillations in the intensities of both.

    Ingo Fischer of the Free University of Brussels, Belgium, and colleagues previously had shown that when two lasers couple, the fluctuations in one always lagged the other. But when the researchers added a third laser to the chain—so that the lasers on the ends shone into the one in the middle and the one in the middle shone into those on the ends (see diagram)—they got a surprise. The laser on one end instantaneously reproduced the variations of the laser on the other end, even as the middle laser trailed behind by 3.65 nanoseconds, the time it took light to travel the 1.1 meters between neighboring lasers, the team reports in the 22 September Physical Review Letters.
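
    As a quick consistency check (the arithmetic is ours, not the paper's), the reported lag matches the light-travel time between neighboring lasers:

    \[
    \tau = \frac{d}{c} \approx \frac{1.1\ \mathrm{m}}{2.998\times10^{8}\ \mathrm{m/s}} \approx 3.67\ \mathrm{ns},
    \]

    in line with the 3.65 nanoseconds quoted above.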

    Tag team.

    When three lasers couple, the outer two (red and green) stay in sync as the middle one lags.


    The effect might conjure up thoughts of faster-than-light communication, but that's not possible, Fischer says. The random variations are produced by the system as a whole, so it is impossible to feed a message into one end of the chain and immediately extract it from the other, he says.

    Although it may not challenge the laws of physics, the experiment could help decipher the synchronization of nerve signals in the brain, says Wolf Singer, a neuroscientist at the Max Planck Institute for Brain Research in Frankfurt, Germany. In 1986, Singer and colleagues showed that networks of neighboring neurons tend to fire at the same time, and 5 years later they showed that such tight synchrony extends to the opposite hemispheres of the brain—even though it takes 6 to 8 milliseconds for nerve impulses to propagate that far.

    Such synchronization may help define individual neural circuits, Singer says, and researchers can already explain how local networks of neurons get in sync. “What is less well understood is how remote sites get synchronized,” Singer says, “and that's where this work may be relevant.”

    Analyzing the effect may not be easy, says Jürgen Kurths, an expert in nonlinear dynamics at the University of Potsdam in Germany. Without the delays, the coupled lasers can be described with a finite number of equations. Add the delays, and “in theory you have an infinite number of equations, so it becomes quite difficult,” Kurths says. Understanding will come, he says, but it may take time.


    Presidential Hopefuls Discover a New Issue: Science

    Martin Enserink

    FLEURANCE (GERS), FRANCE—French scientists can barely believe it. In the run-up to the April-May 2007 presidential elections, research appears to be shaping up as a serious issue. The evidence: A parade of presidential hopefuls traveled to this tiny village 700 kilometers south of Paris last weekend—joining a retreat of the movement known as Sauvons la Recherche (SLR)—to engage scientists in debate and promise them better times.

    True, the two hottest candidates were absent. Nicolas Sarkozy, the populist minister of the interior and the front-runner of the conservative UMP party, had declined the invitation. Media darling Ségolène Royal, the leading Socialist Party (PS) candidate—who would be France's first female president if elected—agreed to come months ago but canceled at the last minute, angering some of the more than 400 participants.

    But that left seven candidates, spanning the political spectrum from the Revolutionary Communist League—a Trotskyist group whose candidate, Olivier Besancenot, is a 32-year-old mailman—to the centrist Union for French Democracy (UDF). The result was a lively, occasionally raucous exercise in democracy, with most candidates fielding tough questions for almost an hour each while being mocked in real time by a cartoonist whose drawings were projected on a giant screen behind them.

    Politicians have reason to court scientists this year. An SLR-led revolt against budget cuts and poor prospects for young researchers brought tens of thousands to the streets and may have helped defeat the governing UMP in regional elections in 2004. In response to the uprising, President Jacques Chirac offered a research reform bill, dubbed the “Pact for Science,” which the National Assembly approved this spring. Touted as an unprecedented shot in the arm for French science, the bill raises the overall research budget about 20% and creates thousands of new jobs. But SLR, which objected to some elements such as the new National Research Agency, says the bill falls short of what's needed (Science, 10 March, p. 1371).

    The candidates who came to the meeting agreed—and they knew how to flatter. “You are one of the two or three keys to the future of France,” said UDF leader François Bayrou, adding that if elected, he would not only offer a 10-year investment program including 5% annual budget growth but also boost the image of science in French society. “Your community has been the victim of unjust and deleterious measures,” said former prime minister and PS candidate Laurent Fabius, who proposed the most detailed package, including a 10% annual budget increase and a program to boost the life sciences, with emphasis on stem cells, antibiotics, and DNA tests.

    On the stump.

    François Bayrou was one of seven presidential candidates who sought support at a meeting of researchers last week.


    The pols did not escape criticism, however. Biologist and former SLR spokesperson Alain Trautmann argued that PS heavyweights such as Fabius and Royal could have done more to improve the science bill during debate on it. And Dominique Voynet, the candidate for the Greens, came under fire for her party's hostility toward nuclear energy and genetic modification. But in the end, the audience liked most of what it heard.

    Whether the researchers will make a difference in the election is anything but certain. Except for Fabius, none of the candidates has a real shot at the presidency, and in the end, science will likely remain an electoral sideshow compared to issues such as the economy, joblessness, and immigration, concedes physicist and SLR president Bertrand Monthubert. Still, the fact that politicians are paying attention is an important step in the right direction, he says: “We have never had an opportunity like this before.”


    House Passes Plan for Drug, Vaccine R&D

    Jocelyn Kaiser

    Hoping to plug holes in the nation's biodefenses, the U.S. House of Representatives last week approved a plan to create a new body to coordinate and finance the development of drugs and vaccines.

    The measure is meant to improve on Project BioShield, a $5.6 billion procurement fund that critics say has been slow to add new countermeasures to the nation's stockpile (Science, 7 July, p. 28). H.R. 5533 creates a Biomedical Advanced Research and Development Authority (BARDA) within the Department of Health and Human Services (HHS) to oversee and fund the development of defenses against bioweapons and natural outbreaks. The bill authorizes $160 million a year in 2007 and 2008 to help companies through the “valley of death”—manufacturing scale-up and clinical trials—before applying to BioShield. It sets up an advisory board with industry members to help identify new threats, and it allows BioShield payments when companies meet “milestones” instead of only when they deliver the final product.

    One controversial provision would exempt HHS from releasing information obtained through BARDA that could “reveal vulnerabilities” to bioweapons. Groups such as the Washington, D.C.-based Center for Arms Control and Non-Proliferation say that's too vague and could keep important research secret.

    The Senate has begun considering a similar bill, S. 2564, sponsored by Richard Burr (R-NC). Supporters are hopeful that it will pass during the Senate's lame-duck session after the November elections. Differences in the two bills—for instance, the Senate version includes antitrust provisions—would have to be reconciled in a conference.

    Meanwhile, HHS promises to spell out details of the existing BioShield program, such as exactly which biological agents HHS is interested in and whether it wants oral or injected vaccines, in an implementation plan in early 2007.


    A Physician-Scientist Takes the Helm of NCI

    Jocelyn Kaiser

    John Niederhuber applies a hands-on approach to running NIH's largest institute—and retains ties to the lab and the cancer clinic

    Back to basics.

    Niederhuber reviews results from his small NCI research lab.


    In the summer of 2005, 67-year-old cancer surgeon John Niederhuber was ready for a new chapter in a career spent hopscotching across the country in academic medicine. His wife had died of breast cancer a few years earlier, and his son would soon head off to college. So when Andrew von Eschenbach, director of the National Cancer Institute (NCI), asked him to join his staff as a deputy director, Niederhuber left his job as a surgery and oncology professor at the University of Wisconsin, sold his house in Madison, and headed to Washington, D.C.

    The idea was to get back into the lab as well as to coordinate NCI's translational and clinical programs. But that plan was scrapped after President George W. Bush asked von Eschenbach to fill a sudden vacancy at the top of the Food and Drug Administration, then nominated him as FDA commissioner. In August, the White House quietly announced that Niederhuber, then acting NCI chief, would become the institute's 13th director. “Why I'm here, I don't know,” he tells Science in an interview in his office suite on the National Institutes of Health (NIH) campus in Bethesda, Maryland. “One of those fates in life has put me here. I'll do my damnedest to do the best job I can … to serve the American people and the research community.”

    Longtime colleagues say such modesty and dedication are hallmarks of the tall, soft-spoken, Ohio-born biomedical research administrator, who by turns has headed a large cancer center, been an outstanding oncologic surgeon, and run an immunology lab. “John understands the politics of cancer, the patient care aspect of cancer, the basic science. He has a unique combination of talents,” says Allen Lichter, who recently stepped down as dean of the University of Michigan Medical School to become chief executive officer of the American Society of Clinical Oncology in Alexandria, Virginia. “I don't think you could find someone who was better prepared.”

    Niederhuber inherits NIH's largest institute, but one whose $4.8 billion budget has been flat 2 years running and is likely to remain so in 2007. One of his first moves has been to scrap the management model favored by von Eschenbach, who considered himself a big-picture CEO representing NCI in the cancer community while entrusting daily management to top-level deputies. Instead, Niederhuber interacts daily with the institute's seven division directors. Together, the team is rating existing programs and looking for cuts throughout the institute, including the $240 million in administrative funds that now go to the director's office. Niederhuber also has declined to embrace one of his predecessor's most visible—and controversial—goals: to eliminate suffering and death from cancer by 2015. Instead, Niederhuber says he prefers to “lessen the burden of cancer”—without setting a target date.

    Some of his moves seem designed to send a message to the cancer community. His decision to set up his own small lab with a staff scientist and two postdocs to study the role of stem cells in tumor growth is seen by NCI intramural scientists as a signal that he's one of them. And a 3-year, $9 million pilot project to bring cutting-edge cancer treatments to community hospitals reflects his strong belief that giving the average patient access to new science will lower mortality rates.

    One von Eschenbach priority that he supports is a collection of big new technology programs such as nanotechnology and bioinformatics. Some researchers say the programs are taking money from the bread-and-butter R01 grants to individual investigators and would like to see an outside review of their value. Instead, Niederhuber has resisted calls to shift funds wholesale to the R01 pool, citing the need to stay in line with NIH-wide policy to keep grant numbers steady rather than have pay lines that vary widely among institutes.

    Friends and colleagues are pleased with his appointment. “I think he's going to be a terrific director at a really difficult time. I hope the outside community appreciates how talented he is,” says Bruce Chabner, clinical director at the Massachusetts General Hospital Cancer Center and a former NCI division director whose board Niederhuber chaired. Robert Wiltrout, head of the institute's $367 million intramural Center for Cancer Research, says, “I think we're very lucky to have found John. He focuses attention on those around him, which builds morale.” And John Mendelsohn, president of the University of Texas M. D. Anderson Cancer Center in Houston, calls Niederhuber “a calming influence. [He is] reaching out and bringing into his deliberations advice from just about all the stakeholders I can think of. We're all ready to roll up our sleeves and work with him.”

    Traveling man

    Niederhuber's roots as a physician-scientist go back to Bethany College, a small liberal arts school in West Virginia, where as an undergraduate he had his own chemistry lab. After picking up a medical degree from Ohio State University, he considered a Ph.D. program, partly to avoid being drafted for the war in Vietnam. When the Army eliminated that exemption, he instead joined as an officer and wound up at the U.S. Army Biological Laboratory at Fort Detrick, Maryland, for 2 years. After a postdoc in immunology at the Karolinska Institute in Sweden, he completed a residency at the University of Michigan and then joined its faculty, garnering an NIH career-development award and then an R01 in immunology while also pioneering pumps that can be implanted under the skin to deliver drugs directly to the liver. His 14 years at Michigan, he says, “gave me an appreciation for why both basic scientists and clinicians get up in the morning.”

    Lured to Johns Hopkins University in Baltimore, Maryland, in 1987, he conducted both clinical and basic research, studying the role that tyrosine kinases—including blk, an oncogene his lab discovered—play in cell growth. But after 4 years, he headed west to join Stanford University School of Medicine Dean David Korn in building its surgery department. Dealing with sensitive tasks such as merging Stanford's faculty practice plan with its hospital took a toll, however, and after Korn left in 1995, Niederhuber headed to the University of Wisconsin to “get back into the cancer business.”

    At Wisconsin, he oversaw a challenging merger of basic and clinical cancer centers while coping with a recurrence of his wife's breast cancer; she died in 2001. After clashing with the medical school dean over raising money for the cancer center, Niederhuber stepped down as director by “mutual decision” and let his R01 lapse. Then last year, he answered the call from von Eschenbach, with whom he was already working as chair of the presidentially appointed National Cancer Advisory Board.

    His ascension to the directorship wasn't a shoo-in, however. Last spring, 62 prominent cancer researchers urged the White House to conduct a broad search after von Eschenbach was tapped for the FDA post (Science, 21 April, p. 357). Several candidates were interviewed, although no official search committee was formed. One of them, Joseph Pagano, the 74-year-old former director of the University of North Carolina's Lineberger Comprehensive Cancer Center in Chapel Hill, said he told his interviewers that the president should appoint someone younger. Pagano likes Niederhuber's “excellent scientific intellect” and grounding in basic science, something that he says von Eschenbach lacked. “I think he's a very good man.”

    Niederhuber, for his part, says he feels he's been warmly welcomed by both the basic and clinical cancer communities. “I haven't sensed that I have to prove anything,” he says.


    Budgets, Patients, Managing Conflicts

    Jocelyn Kaiser

    In a 21 September interview with Science, John Niederhuber talked about major issues facing the National Cancer Institute (NCI).



    My style is more to work directly with the division leadership. We've moved our executive committee meetings to almost once a week from twice a month before. … We have great leadership across the divisions at this time. I'm not going to have that layer of what's called the senior management team. …

    Intramural and extramural [division directors are rating each other's programs by anonymous ballot]. Yesterday, we decided there are certain concepts we wouldn't take forward to the BSA [Board of Scientific Advisors] … [because] we couldn't really give them a high enough priority. We're also trying to say, is there something that's outlived its usefulness. This may also be an extramural activity which … we can begin to plan how to phase out. … I have been tremendously impressed with how well the division leadership have been willing to work together in a very collegial way to make collective decisions, hard decisions.


    This is a very stressful time, budgetwise. I know full well, because I've lived through those periods of single-digit pay lines [the funding cutoff for grant applications, as ranked by peer review] in my own lab. … A big part of my responsibility is to help the leadership of NCI manage that [congressional] appropriation as effectively as we can.

    [But regarding efforts to raise the pay line], a lot of that decision is made at the NIH level. We don't want to have one institute having a 5% success rate and another one 20%. At NIH, we want to try to minimize that variability as much as we can. And so we try to make corporate-level decisions and targets. Some of those are to try to maintain the same number of competing awards that we had in the previous year, just as an example. … We would probably … try to keep the size of those grants about the same.


    I feel sad in many ways about Tom [Walsh, an NCI expert on fungal diseases who violated rules on reporting outside income] and some of the other individuals. I think these are very well-meaning people. Certainly, Tom couldn't be a harder worker in terms of delivering outstanding patient care. He's as committed as anyone to NCI, NIH. …

    But you know, there are rules and regulations, and none of us, no matter how good we are, are above those rules and regulations. … It's always easy to look for excuses, and maybe one of the excuses is the environment perhaps was not paying as close attention as it should have. There is sometimes a little bit of this absent-minded professor issue. … Even in academia and universities, many of us have struggled with this.


    I have been talking a lot about a continuum in this process of research and science that leads to impacting on patient outcome. … What I call a chemical space, … a biologic space, and a translational space. One of the things we're going to work hard on is the integration of those spaces. I see that integration process driven by our investment and work in technology development. …

    The community [cancer centers] program is important as an underpinning of that because one of the greatest challenges for us in the future will be getting science to patients where they live. … If we work in this continuum, we can do a lot to drive down the cost of drug discovery. … I think you're going to see that dramatically change with the technology developments and investments that we'll make.


    It's exciting for me to get the lab going again. … Monday morning, we have a very formal almost 3 hours, then I'll pop over at the end of the day every now and then. … The team here knows that that [scheduled lab] time can't be violated, even by the president.


    Picking Apart the Causes of Mysterious Dementias

    Jean Marx

    Researchers are identifying the genetic and biochemical underpinnings of frontotemporal lobar dementias, incapacitating and ultimately fatal conditions

    As we grow older, we face a host of diseases that can rob us of our mental vigor. Alzheimer's is the best known of these neurodegenerative diseases, but others can be just as devastating. Take the group of diseases collectively known as the frontotemporal lobar dementias (FTLDs).


    In FTLD cells, ubiquitin (green) and TDP-43 (red) colocalize (yellow).


    As the name suggests, these conditions stem from degeneration in the frontal and temporal lobes of the brain, areas that control behavior, emotions, and language. The symptoms, which usually develop when people are in their 50s or early 60s, include language difficulties and inappropriate behavior. Patients with early-stage FTLD typically can't control their impulses and may shoplift, overeat, and show excessive interest in sex. A decline in personal hygiene is also a frequent symptom.

    Once thought to be rare, FTLDs are gaining new respect as a significant cause of dementia. Good estimates of their prevalence are hard to come by. But Andrew Kertesz, an FTLD expert at the University of Western Ontario in London, Canada, says that about 12% of patients treated at dementia clinics have FTLDs. He notes, however, that this could be an underestimate because the conditions may be misdiagnosed as psychiatric disorders, at least in their early stages, or as Alzheimer's disease.

    Several recent reports, including one on page 130 of this issue, are beginning to identify the genetic and biochemical abnormalities underlying FTLDs. Neurobiologist Michael Hutton of the Mayo Clinic College of Medicine in Jacksonville, Florida, is impressed with the speed of the discoveries. “It's like compressing 10 years of Alzheimer's disease research into 6 months,” he says.

    In addition, some of the work has solidified a connection between FTLDs and another neurodegenerative disorder, amyotrophic lateral sclerosis (ALS), better known as Lou Gehrig's disease. It suggests that the underlying pathology may be similar in both sets of diseases. If so, therapies directed at one might also work on the others. Because neither FTLDs nor ALS can be treated now, both are ultimately fatal.

    Pick's disease

    Alzheimer's disease deserves its widespread notoriety, but the lesser-known FTLDs have a longer history. The Czech neurologist Arnold Pick described the first case in 1892, 16 years before Alois Alzheimer discovered the dementia that bears his name.

    For decades, cases of dementia caused by frontotemporal lobe damage were known simply as Pick's disease, but neurologists gradually learned that the condition is very heterogeneous. “Even in the same family, siblings have different symptoms,” says neurobiologist Virginia Lee of the University of Pennsylvania School of Medicine in Philadelphia.

    The variation in FTLD symptoms may reflect differences in brain pathology that neurobiologists are finding. As with Alzheimer's disease, ALS, and many other neurodegenerative diseases, FTLDs are characterized by the presence of abnormal protein deposits in affected neurons. Beginning about 15 years ago, researchers showed that in about half the people with FTLD, the deposits contain the protein tau, which has also been implicated in Alzheimer's pathology. Eight years ago, several teams provided genetic evidence that mutations in the tau gene itself can produce this particular form of FTLD, which includes the classic Pick's disease.

    The identities of the proteins in the inclusions of the remaining half of FTLD cases have been harder to pin down. Researchers did find, however, that the inclusions in these tau-negative cases are tagged with ubiquitin, a small protein that the cell uses to mark proteins for destruction.

    Even within this subclass of FTLDs, researchers are now finding pathological variations. Using antibodies that recognize ubiquitin, Lee, John Trojanowski, also at the University of Pennsylvania School of Medicine, and their colleagues examined the structure and distribution patterns of the inclusions in brains taken from such FTLD patients at autopsy. They found three distinct patterns, indicating that the cases could be subdivided into subtypes. (The findings appear in this week's American Journal of Pathology.)

    Brain damage.

    Compared to a normal brain (bottom images), a brain from someone with FTLD shows shrinkage of the frontal and temporal lobes (upper left) and an enlarged ventricle (upper right).


    Now, the Lee-Trojanowski team has used two of the same antibodies to identify the first protein, beyond ubiquitin, in the tau-free inclusions. As described on page 130, the researchers found a protein called TDP-43 that, Lee says, “was present in all the brains of patients with all subtypes” of FTLD with ubiquitin-positive, tau-free inclusions. What's more, Lee and her colleagues detected TDP-43 in the ubiquitinated inclusions within the neurons of ALS patients.

    FTLDs and ALS had already been linked clinically. ALS is best known as a destroyer of the body's motoneurons, ultimately leading to death as the muscles lose all ability to contract. But many people with ALS also develop dementia. Conversely, people with FTLDs frequently develop motoneuron disease similar to that of ALS.

    The identification of TDP-43 in both FTLD and ALS inclusions now provides a biochemical link between the two conditions. They “could be different parts of a spectrum of diseases,” says Ian Mackenzie of the University of British Columbia in Vancouver, Canada, who is a co-author of the Lee paper.

    Neurobiologists are hopeful that the TDP-43 finding can help unravel the underlying causes of both neurodegenerative diseases. Robert Brown, an ALS expert at Harvard's Massachusetts General Hospital in Boston, points out that the appearance of threads of ubiquitinated protein in neurons is “one of the earliest signs of pathology in ALS, and we had no hint at all of what the abnormal protein is.” The Lee team's observation, he adds, “means that we have some clue of what distinguishes ALS neurons as being abnormal.”

    At present, however, TDP-43's function in neurons and elsewhere is pretty much a mystery; PubMed contains only 12 references to the protein. There are some indications that it functions in the nucleus, perhaps as a regulator of gene expression or as a structural protein or both. And in the current work, the Lee team found that levels of nuclear TDP-43 are much lower in FTLD neurons than in normal ones; the protein is instead concentrated in inclusions, which are located in the cytoplasm. It remains unclear whether the FTLD neurons die because the inclusions are somehow toxic or because TDP-43's nuclear function has been lost.

    Help from genetics

    Both FTLD and ALS are known to run in families, and researchers are making progress in identifying the genes at fault in these hereditary cases. One such discovery, from John Collinge and colleagues at University College London in the United Kingdom, buttresses the link between FTLD and ALS. Last year, the team reported that mutations in a gene called CHMP2B cause FTLD in a Danish family. In work reported this summer in Neurology, the researchers have now linked CHMP2B mutations to two cases of ALS. “It supports the view that [FTLD and ALS] may have common etiologies,” Collinge says.

    So far, CHMP2B mutations appear to be a rare cause of FTLD, but the past 6 months have seen the identification of two additional FTLD genes—perhaps another indication of the dementia's heterogeneity. In the June issue of the Journal of Neuropathology & Experimental Neurology, Virginia Kimonis of Children's Hospital Boston and colleagues link a subset of cases to mutations in the gene for the valosin-containing protein (VCP). And in work published online by Nature in July, two independent teams, one led by Mackenzie and Hutton and the other by Christine van Broeckhoven of the University of Antwerp, Belgium, connect a different subset to mutations in the gene for a protein called progranulin. Since then, at least three more groups have weighed in with progranulin mutations in FTLD cases.

    Neither VCP nor progranulin appears to be located in FTLD inclusions. In the case of progranulin, there's a good explanation, says Mackenzie: the mutations identified so far abolish production of the protein, leaving no mutant progranulin to be deposited.

    At present, the supposition is that loss or alterations in protein function caused by the various mutated FTLD genes trigger changes in neurons that somehow lead to formation of the abnormal inclusions containing ubiquitinated TDP-43. Although no one yet knows how that happens, the nature of the proteins normally encoded by the mutant genes points to the possibility that their malfunction leads to an abnormal buildup of proteins that have to be removed—a process for which ubiquitin addition is the first step.

    VCP and CHMP2B, for example, are directly involved in protein folding and trafficking in the cell. As for progranulin, it is a growth factor that might be needed for neuronal maintenance, although it is also a promoter of blood vessel formation. Failure to maintain a normal blood supply to cells can produce oxidative stress, a known inducer of the so-called unfolded protein response, which has been linked to neurodegeneration (Science, 15 September, p. 1564).

    A similar mechanism may also come into play in ALS. This spring, Brown and Orla Hardiman of the Royal College of Surgeons in Ireland, in Dublin, identified mutations in the angiogenin (ANG) gene as a cause of ALS in Irish and Scottish populations. ANG, like progranulin, encourages blood vessel growth. And previously identified ALS-causing mutations in the SOD1 gene may lead to oxidative stress, too.

    Neurobiologists clearly have a lot to do before they can explain the causes of tau-negative FTLDs and ALS, but the flurry of recent advances has made them hopeful. “We've got a genetic cause, and we know what protein is accumulating,” Hutton says. “Now we have to connect the dots.” Pick would be proud.


    Priorities Needed for Nano-Risk Research and Development

    1. Robert F. Service

    Nanotechnology observers are split over the best way to ensure that the up-and-coming industry remains safe for both people and the environment

    A broad array of nanotechnology experts agree that the United States needs to spend more money on understanding potential health and environmental dangers of exposure to materials engineered on the scale of a few clumps of atoms. But just how that research should be prioritized and organized is a topic of increasingly fierce debate.

    The potential adverse impacts of nanotechnology sprang to the fore again at a sometimes-contentious hearing of the U.S. House Science Committee on 21 September. At the hearing, leaders of the Nanotechnology Environmental and Health Implications (NEHI) working group—an interagency panel that coordinates federal funding on health and environmental risks of nanotechnology—released a long-overdue report outlining research needed to buttress regulation of products in the field. But critics both inside and outside Congress blasted the report as a jumbled wish list. “The government needs to establish a clear, prioritized research agenda and fund it adequately. We still haven't done that, and time is a-wasting,” says committee chair Sherwood Boehlert (R-NY).

    There is certainly plenty riding on how nanotechnology is regulated. More than 200 nanotechnology products are already on the market, including sunscreens and cosmetics, lightweight bicycle frames, and car wax, and they accounted for more than $32 billion in sales last year. A recent market survey by Lux Research, a nanotechnology research and advisory firm in New York City, predicts that by 2014, a whopping $2.6 trillion worth of manufactured goods will incorporate nanotechnology. “The nanotechnology industry, which has enormous economic potential, will be stymied if the risks of nanotechnology are not clearly addressed and understood,” Boehlert says.

    That is already happening, says Lux Research Vice President Matthew Nordan. At the hearing, Nordan said that Lux has learned through its private consulting work that some Fortune 500 companies are already backing out of nanotechnology research because of real and perceived risks of nanomaterials and uncertainties over how they would be regulated. Venture-capital funders and insurers have also pulled their services for some clients for the same reason, Nordan says, although he didn't offer specifics.

    To stem this tide, Nordan and other experts argue that nanotoxicology research funding should be increased dramatically. According to figures from the U.S. National Nanotechnology Initiative, federal agencies currently spend a combined $38.5 million annually on environmental, health, and safety research on nanotechnology. Last year, however, researchers at the Woodrow Wilson International Center for Scholars' Project on Emerging Nanotechnologies in Washington, D.C., concluded that only $11 million went to “highly relevant” research focused on understanding and dealing with the risks of nanomaterials (see table). At a congressional hearing last year, nongovernmental experts called for raising funds for such studies to between $50 million and $100 million a year (Science, 9 December 2005, p. 1609). Both the NEHI report and another report released on 25 September by the National Research Council echoed calls for expanding research in the field.


    Little nano-risk research is conducted by agencies that oversee health and environmental regulations.

    Chorus line.

    Two new reports call for increased funding for nano-safety studies.

    But there is far less agreement on how that money should be spent and coordinated. “Nanotech [environmental health and safety] research in government agencies, academic institutions, and industry is being performed in an ad hoc fashion according to individual priorities,” Nordan says. That scattershot approach has left broad gaps between what the agencies are pursuing and what is needed to tune regulations to products already on the market, argues Andrew Maynard, chief scientist of the Wilson Center's Project on Emerging Nanotechnologies. For example, Maynard says, carbon-based nanomaterials are incorporated into only about one-third of nanotech products. Yet the vast majority of nanotoxicology studies focus on those materials, while ignoring broad classes of other materials already on the market.

    To avoid such discrepancies, agencies need a more centralized top-down research approach, Nordan and Maynard argue. “What is missing is not [an] ingredients list but a specific game plan for accomplishing this research,” Nordan says.

    But NEHI leaders and other agency brass say a federal priorities list is coming and maintain that the current coordination scheme is the best way to implement it. “The coordination that is taking place is working,” argues Celia Merzbacher, a member of the President's Office of Science and Technology Policy as well as the co-chair of the Nanoscale Science, Engineering, and Technology (NSET) Subcommittee of the National Science and Technology Council. Furthermore, she adds, “our approach achieves the buy-in of the agencies.” In addition to the NEHI working group, she notes, there is already a full-time National Nanotechnology Coordination Office (NNCO) within NSET. “We don't need another coordination office,” she says. But Nordan counters that coordinating bodies such as NEHI and NNCO have no authority to mandate priorities and can't allocate funding.

    With the strongest calls for reform still coming from outside government, a shakeup of nanotechnology research looks unlikely anytime soon. At the hearing, Boehlert did argue that “current coordinating mechanisms clearly are inadequate.” But Boehlert is retiring at the end of his current term on 31 December. And it remains to be seen whether other congressional science leaders will emerge to pick up the baton.


    Power to the (Poor) People

    1. Robert F. Service

    The estimated 25% of the world's people without electricity face a nearly inescapable cycle of poverty, poor health, and lack of education. Unfortunately, large-scale centralized power plants are expensive, and efforts to tap widespread natural low-temperature heat sources such as geysers, hot springs, and peat bogs to run generators have largely proved ineffective. But new work offers a different way to squeeze at least a modest amount of electricity from those hot spots.

    At the meeting, Roman Boulatov, a chemist at the University of Illinois, Urbana-Champaign (UIUC), reported a new device akin to a fuel cell that uses reactions of charged compounds to create electricity. But whereas a fuel cell mines energy from chemical fuel such as hydrogen gas, the new device—called a thermally regenerative solution concentration cell (TRSCC)—is recharged using freely available heat. The TRSCC is only 1% efficient at turning heat to electric power, so it won't be charging any major cities. But because it can be made from cheap materials, it could be a godsend to those who have few other options. “Clearly, this is an interesting idea,” says Paul Kenis, a fuel cell expert at UIUC, who is unaffiliated with Boulatov's work. “It may be a solution for developing countries or remote areas of the world that lack access to the power grid.”

    Boulatov made the device along with Michal Lahav when both were postdoctoral assistants in the group of Harvard University chemist George Whitesides. Like a fuel cell, it consists of two compartments—each with an electrode submerged in water—separated by a semipermeable membrane. In Boulatov's TRSCC, the side containing the positively charged anode is spiked with a high concentration of sodium iodide (NaI), which in solution dissociates into a mixture of positively charged sodium ions (Na+) and negatively charged iodide ions (I−). The other chamber contains the same salt at a concentration thousands of times lower. Both chambers also contain a small amount of neutral iodine (I2). The large difference in I− concentration drives the solutions to equilibrate. But because the I− ions can't move through the membrane, the equilibration can proceed only through reactions at the electrodes. In the chamber with the high I− concentration, the positively charged anode strips an electron from each of two iodide ions, generating an I2 molecule. The electrons then travel through a wire, where they can be used to do work en route to the other chamber. When they reach the negatively charged cathode, they combine with a molecule of I2, which splits into two I− ions. Meanwhile, positively charged sodium ions move through the membrane to maintain the electrical neutrality of each solution.

    Energy solution.

    Free heat can regenerate iodide (I−) ions, which can be stripped of their electrons to create a current.


    On its own, this reaction would quickly balance out the amount of iodide and iodine in each chamber, and the cell would go dead. To keep it running, the researchers heat the high-I− chamber. Water and I2 evaporate from the solution and then, by design, condense in the second chamber. As the level of solution in the cathode chamber rises, it flows over a barrier between the two chambers, carrying with it regenerated I− as well as Na+ to replenish NaI on that side. The upshot is that the added heat maintains a difference in concentration of I− between the chambers, thereby keeping the reaction going. In initial tests, Boulatov says his TRSCC has kept running for 350 hours.
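
    The electricity here comes from what is essentially a concentration cell, so its ideal open-circuit voltage can be estimated with the Nernst equation. The sketch below is a back-of-the-envelope illustration under textbook assumptions (ideal solutions, equal I2 activity in both chambers, and a hypothetical 1000-fold difference in NaI concentration), not a calculation from Boulatov's work; it simply shows why a single chamber pair delivers well under a volt.

    # Minimal sketch: ideal open-circuit voltage of an iodide concentration
    # cell, estimated with the Nernst equation. The concentrations are
    # illustrative assumptions, not values from the TRSCC work.
    import math

    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol

    def cell_voltage(c_high, c_low, temperature=298.0):
        """Ideal EMF when I- is transferred from the concentrated chamber to
        the dilute one (one electron flows per iodide ion transferred)."""
        return (R * temperature / F) * math.log(c_high / c_low)

    # Hypothetical example: 1.0 M NaI on the anode side, 0.001 M on the other.
    emf = cell_voltage(1.0, 0.001)
    print(f"Ideal voltage per cell: {emf * 1000:.0f} mV")   # about 180 mV

    # A single pair of chambers yields well under a volt, which is why several
    # dozen would have to be wired in series to power simple appliances.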

    If several dozen such chambers were set up side by side, they could provide enough power at a high enough voltage to run simple machines such as telephones, water purifiers, and refrigerators, Boulatov says. That could begin to help large numbers of people climb out of hopeless poverty.


    Cut-and-Copy Approach Clones Nanotubes

    1. Robert F. Service

    When Rice University chemist Richard Smalley died of cancer last October, nanotechnology lost one of its most adept practitioners and inspiring visionaries. High on Smalley's wish list for the field was a way to produce just one type of carbon nanotube at a time.

    Nanotubes are made from sheets of carbon atoms rolled up into tubes. But just how those sheets roll up makes some tubes behave like metals and others like semiconductors. That difference is critical for researchers hoping to use nanotubes to make ultrasmall transistors and sensors, as well as long wires and cables. Conventional schemes for making the purest form of nanotubes, called single-walled nanotubes (SWNTs), create blends of all the possible varieties. So Smalley emphasized the need to isolate just one electronic flavor.

    At a 4-day symposium dedicated to Smalley, who shared the 1996 Nobel Prize in chemistry for his discovery of fullerenes, a collaboration of four research groups at Rice in Houston, Texas, reported discovering a way to clone just a single electronic type of nanotube, leaving the others out. If the new scheme can be scaled up to produce industrial quantities, it could open up a wide variety of electronics applications of carbon nanotubes. “It's very promising,” says Jie Liu, a chemist at Duke University in Durham, North Carolina. However, he cautions that the Rice team still has a way to go to prove that the process does indeed produce just a single type of tube.

    The teams took their cue from a technique that uses tiny nano-sized catalyst particles to seed the growth of tubes. About 5% of catalyst particles in a reactor give rise to new tubes, from hundreds of nanometers to micrometers long, says Andrew Barron, a Rice University chemist, whose group, along with those of James Tour, Ed Billups, and Smalley's own former group, carried out the new work. Barron and his colleagues set out to improve the “seeding” step by making better use of the small percentage of tubes that do grow well.

    Variety pack.

    A new technique amplifies a single electronic flavor of nanotubes.


    They started by isolating bunches of SWNTs with different electronic properties. They then cut the tubes into tiny straws, most of which were about 40 nanometers long, and reacted them with a compound that left carboxylic acid groups attached to the ends of the tubes. They added the tubes to a vacuum chamber with catalytic iron oxide nanoparticles, which readily adhere to the carboxylic acids. Once the catalysts were attached, they removed the intervening carboxylic acids by heating the tubes, either alone or with hydrogen. Finally, they added a carbon gas feedstock, such as carbon monoxide, methane, or acetylene, at 900°C. At that temperature, the feedstock gases break apart, and carbon atoms insert themselves between the catalyst particles and the starting portion of the tube, extending the tubes to long lengths.

    Before-and-after images of the tubes using Raman fluorescence, a technique sensitive to the arrangement of carbon atoms in the tubes, didn't detect any change in the type of tubes in the mix or in their relative abundances, Barron says. Still, the technique works only with semiconducting tubes, not the metallic variety, so the Rice team is working to verify that all the tubes are indeed clones.

    If they are, and if the technique can be scaled up, that would be a boon to researchers looking to weave millions of SWNTs together to make long, thick power cables with minute electrical resistance. Well-sorted nanotubes could also make possible a new generation of sensors and tiny nanoelectronic devices. Those would go a long way toward fulfilling Smalley's vision.


    Snapshots From the Meeting

    1. Robert F. Service

    First nanotube textile. Not a bad way to get your career in science off the ground. Plano, Texas, high school student Diane Chen is shown here weaving the first ever carbon nanotube-based textile. For a summer research project, Chen used a conventional wooden loom strung with polyester yarn and wove in 10-ply yarn made from multiwalled carbon nanotubes synthesized in the lab of Raymond Baughman, a materials scientist at the University of Texas, Dallas. Although Baughman said his team has yet to measure the properties of the fingernail-sized swatch of fabric, nanotube textiles are expected to be extremely strong and flexible and have the potential to be used for energy storage, sensors, and even artificial muscles.


    Evolving better hydrogenases. Bacteria figured out the route to clean, green energy more than a billion years ago. Escherichia coli and many other microbes harbor enzymes called hydrogenases that use iron and nickel to split water to generate hydrogen gas, from which they then extract energy. Researchers would love to be able to employ such catalysts on a massive scale to generate fuel to power the hydrogen economy in the future. One big challenge is that hydrogenases don't work in the presence of oxygen, a trait that makes them hard to exploit industrially.

    Researchers at Stanford University in California, however, have begun tackling that problem. They've devised a “cell-free” way to synthesize large numbers of mutant proteins—in this case hydrogenases—and rapidly screen them for catalytic activity. So far, they've turned out some 40,000 mutant hydrogenases, a small percentage of which are able to tolerate the presence of oxygen. However, their catalytic rate is still low. But if scientists find one that's both oxygen-tolerant and highly active, it could help solve one of the biggest stumbling blocks on the road to abundant carbon-free energy.

    Scribble, scribble. Ever since researchers at IBM first used a scanning probe microscope to write their company logo in xenon atoms back in 1990, critics have complained that the probe's single tip made the technique far too slow. Not any more. Chad Mirkin and colleagues at Northwestern University in Evanston, Illinois, and NanoInk Inc. reported at the meeting that they've created an array of 55,000 microscopic tips that can spot down different chemicals simultaneously, covering a full square centimeter at a time.

    An Enterprising Approach to Brain Science

    1. Greg Miller

    Mobile computing pioneer Jeff Hawkins has had a lifelong fascination with brains. Now he's trying to model the human cerebral cortex—and he's created a software company based on his ideas

    Bold ideas come naturally to Jeff Hawkins. In California's Silicon Valley, Hawkins is well-known as the inventor of the PalmPilot, the first commercially successful handheld computer, and the Treo smartphone. These devices are rarely out of arm's reach for millions of businesspeople, who rely on them to keep track of power lunches and peek at e-mail during meetings. Hawkins, at age 49, could easily retire. Instead, he's on a mission to figure out the brain.

    Hawkins has spent a remarkable amount of time thinking about brains, at least for someone who launched a billion-dollar business and values time with his family. Over the past 20 years, he's spent countless hours poring over research papers, sitting in on neuroscience conferences, and hashing out ideas with academic scientists. “Even at Palm, I had an agreement that I could work part-time on brain research,” he says. “It was in my contract.”

    Hawkins's foray into neuroscience is characterized by the same determination and just-do-it attitude that made him a successful entrepreneur. In the past 4 years, he has founded a small neuroscience institute, published a book outlining his theory on the nature of human intelligence, and founded a start-up company to develop computers that work on the same principles. His aggressive approach disconcerts some scientists, who are used to measuring progress one peer-reviewed paper at a time. Yet several prominent neuroscientists say his ideas on the brain are worth taking seriously, and even skeptics say his enthusiasm and entrepreneurial attitude have enlivened the field.

    “Jeff is a very interesting and dynamic person,” says Michael Merzenich, a neuroscientist at the University of California (UC), San Francisco, who has talked brains with Hawkins over the past 10 years. “He's one of the dozen or so smartest people I've met in my life.” Hawkins brings incredible focus and an entrepreneur's sense of urgency to his endeavors, adds Anthony Bell, a theoretical neuroscientist at UC Berkeley who has worked with Hawkins. “He wants a computer program that works like the cortex, and he wants it now,” Bell says. “He wants the brain in silicon.”

    Academic frustration, corporate success

    Hawkins may have acquired the impulse for innovation from his father, a consummate inventor whose creations included a 16-sided, 50-ton boat that floated on a cushion of air generated by a vacuum cleaner motor. Growing up on Long Island, Hawkins and his brothers helped build the craft, which they nicknamed the Bubble Monster. Like his dad, Hawkins grew up to be an engineer.

    Mobile man.

    After designing hand-held computers and smartphones for 15 years, Jeff Hawkins is on a quest to understand the brain.


    In 1979, fresh out of Cornell University with an undergraduate degree in electrical engineering, Hawkins read a magazine article that he says changed his life. In an essay in Scientific American, Francis Crick, whose interests had recently turned from molecular biology to the mysteries of the mind, lamented the lack of a grand unified theory in neuroscience. Scientists have amassed a wealth of details about brain anatomy and physiology, Crick wrote, but still have no working hypothesis of how the whole thing actually works. To Hawkins, this was a call to action.

    In 1980, he tried to persuade his employer, computer-chip maker Intel, to let him start a brain research group, but the company declined. The following year, the Massachusetts Institute of Technology (MIT) rejected his application to pursue a Ph.D. in its artificial-intelligence laboratory. Although disappointed, Hawkins resolved to figure out a way to one day pursue his interest in the brain. In 1986, he tried the academic route once more. This time, he was accepted to a graduate program at UC Berkeley.

    Once there, Hawkins sketched out a theory of how the cerebral cortex—the thin sheet of tissue on the surface of the brain—gives rise to intelligence, and he submitted this as his thesis proposal. “They said, ‘This is interesting, but you can't do it,’” he recalls. To get a Ph.D., you need a thesis adviser, and no one at Berkeley was doing theoretical neuroscience back then, he says. Hawkins grew frustrated and impatient. “I was technically a student for 2 years, but by the second year, I was just using the library.”

    Hawkins returned to the high-tech industry, where he'd gained expertise and a reputation for creativity in designing portable computers. Thanks in part to work he did at Berkeley on neural mechanisms of pattern recognition, he owned a patent on a handwriting-recognition program that allowed computer users to enter data by writing on a screen with an inkless pen. Determined to incorporate this software into handheld computers, Hawkins founded Palm Computing in 1992.

    In some ways, the timing couldn't have been worse. In 1993, Apple released a handheld computer, the Newton, and it was a colossal flop. Suddenly, no one wanted to invest in mobile computing, Hawkins says. But he stuck with it. One night in his garage, he carved a mockup of the device he envisioned from a block of wood. As his team worked on the interface, Hawkins tested various configurations of buttons and display windows by sticking printouts onto the model. Silicon Valley lore now has it that he would pull the model out of his pocket during meetings and poke at the “screen” with a sawed-off chopstick, pretending to enter appointments. “I just knew it was going to happen,” he says.

    Time ultimately proved Hawkins right: Palm has sold more than 34 million devices. In 2002, Hawkins felt the time had come to get back to brains. Scaling down his hours at Palm, he founded (and funded) the Redwood Neuroscience Institute (RNI) in offices above a popular café in Menlo Park, California. The idea, he says, was to bring together scientists interested in creating computational models of the cerebral cortex. The group ultimately consisted of five principal investigators in addition to Hawkins, plus a handful of postdocs. “He collected a great group of very creative people,” says Anthony Zador, a computational neuroscientist at Cold Spring Harbor Laboratory in New York, who visited the institute several times. “He wasn't just a manager; he was there every day talking to them about their ideas and about his ideas.”

    The institute defied categorization. “The atmosphere was a little bit between a start-up, and a think tank, and a research institute,” says Fritz Sommer, a computational neuroscientist who left a faculty position at the University of Ulm in Germany to join RNI. “I was kind of an old-school guy, raised in the old German academic tradition, so for me this was something much more inspiring.” In the absence of teaching obligations and grant proposals, freeform interactions flourished among the scientists who worked at RNI, says Sommer—and extended to their guests. A visiting speaker was liable to endure a barrage of queries throughout the presentation, making it less like a formal lecture and more like a lively conversation—one that often continued for hours at the café downstairs.

    At one meeting, co-sponsored with the American Institute of Mathematics, RNI brought together anatomists, physiologists, and theoreticians who were all studying the cortex. Hawkins asked each group to go off on its own and come up with a list of things the other groups could do that would aid them in their own work. “The scientists were entirely puzzled because no one had ever asked them to do that,” says Sommer. “That blew my mind,” Hawkins says. “In industry, you're always trying to get help from wherever you can.”

    Making predictions

    Hawkins spent much of his time at RNI fleshing out the ideas on the cerebral cortex he'd conceived at UC Berkeley decades earlier. In 2004, he published them in a book co-written with New York Times science writer Sandra Blakeslee. In On Intelligence, Hawkins argues that the nature of intelligence—and the primary function of the cortex—is predicting the future by remembering what has happened in the past.

    You call that a dog?

    A Numenta simulation program (bottom) recognizes objects even when they're poorly rendered (top).


    A key feature of Hawkins's argument is the idea of a common cortical algorithm, proposed in the late 1970s by Vernon Mountcastle, a neurobiologist at Johns Hopkins University in Baltimore, Maryland. Because anatomists had found that the type and arrangement of cells in any tiny patch of cortex is very nearly the same, Mountcastle proposed that every patch of cortex performs the same basic operation. What makes one swath of cortex a “visual” region or a “language” region is the kind of information it receives, not what it does with that information. “In my mind, this is one of the most fundamental breakthroughs in neuroscience,” Hawkins says.

    He is convinced that the common cortical algorithm performs predictions. In the book, he argues that the anatomy of cortex is well-suited for prediction and describes how circuits of cortical neurons arranged in a hierarchy—in which higher levels constantly feed information back to lower levels—can compare an incoming sequence of patterns (such as a string of spoken words) with previously experienced sequences (“Fourscore and seven years”) to predict what's next (“ago”). This memory-prediction framework has evolved to take advantage of the spatial and temporal structure in our surroundings, Hawkins says, which helps explain why brains easily do certain tasks that give computers fits. “There's no machine in the world that you can show a picture of something and have it tell you whether it's a dog or a cat or gorilla,” he says, but a person can do this in a fraction of a second.
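
    Stripped to its core, the memory-prediction loop Hawkins describes can be caricatured in a few lines of code: store the sequences you have seen, then, given the current context, recall what followed it before. The toy predictor below is only an illustrative sketch of that idea; it is not Numenta's software, and it omits what makes the real proposal interesting, such as the cortical hierarchy and the feedback between its levels.

    # Toy sketch of memory-based sequence prediction: remember what followed
    # each context, then predict the continuation of a new input. This is a
    # caricature of the memory-prediction idea, not Numenta's algorithm.
    from collections import Counter, defaultdict

    class SequenceMemory:
        def __init__(self, context_len=4):
            self.context_len = context_len
            self.memory = defaultdict(Counter)   # context -> next-token counts

        def learn(self, tokens):
            # Store every context of length 1..context_len and what followed it.
            for i in range(len(tokens) - 1):
                for n in range(1, self.context_len + 1):
                    if i - n + 1 < 0:
                        break
                    ctx = tuple(tokens[i - n + 1: i + 1])
                    self.memory[ctx][tokens[i + 1]] += 1

        def predict(self, tokens):
            # Fall back to shorter contexts until a remembered one matches.
            for n in range(min(self.context_len, len(tokens)), 0, -1):
                ctx = tuple(tokens[-n:])
                if ctx in self.memory:
                    return self.memory[ctx].most_common(1)[0][0]
            return None

    brain = SequenceMemory()
    brain.learn("fourscore and seven years ago our fathers brought forth".split())
    print(brain.predict("fourscore and seven years".split()))   # -> 'ago'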

    “His ideas … provide a plausible conceptual framework for a lot of different kinds of data,” says Mriganka Sur, a neurobiologist at MIT who studies the cortex. Yet some theoretical neuroscientists, none of whom would agree to be named, grumble that Hawkins's book merely rehashes other people's ideas and that his model isn't concrete enough to suggest experiments to test it. Although there's some truth to that, says Sommer, Hawkins has tied together several existing concepts in an interesting way: “He makes connections and sees the bigger picture that people who are doing research on a particular system of the brain often lose.”

    Hawkins now believes that the best way to spur interest in his cortical theory is to use it to develop technology. “People work harder and get things done faster if they can see a profit motive,” he says. Last year, he founded Numenta, a for-profit company, and handed off RNI to UC Berkeley, along with an endowment to cover much of its operating expenses. Now called the Redwood Center for Theoretical Neuroscience, it's part of the Helen Wills Neuroscience Institute.

    At Numenta, Hawkins has worked with Dileep George, a former Stanford University electrical engineering grad student, to develop software based on the memory-prediction theory. Hawkins expects the software to be ready for public release next year. In the more distant future, he envisions intelligent computers that will tackle all sorts of problems. In On Intelligence, he imagines feeding real-time data from a global network of sensors into a “weather brain” that tracks weather systems in the same way the brain identifies objects and predicts how they will move across the field of vision. By applying humanlike intelligence to vast amounts of data, such a system could identify previously unknown weather phenomena (along the lines of the El Niño cycle) and make more accurate forecasts, Hawkins speculates. Intelligent systems using Numenta's software might also monitor power grids to help guard against blackouts, or monitor sensors on an automobile and alert the driver to dangerous situations.

    It's far too early to know whether Hawkins's vision will pan out. But regardless of whether he succeeds, Hawkins has helped galvanize the theoretical neuroscience field, Zador says: “The fact that he's setting these wildly ambitious goals and has set about achieving them is actually quite refreshing.”

    Vision's Grand Theorist

    1. Ingrid Wickelgren

    Eero Simoncelli has an eye for mathematical truths that explain human vision—and he's adept at translating that knowledge into practical tools such as image-compression techniques

    A great divide traditionally separates theory from experiment in neuroscience. Theorists typically deal in idealized mathematical abstractions far removed from nitty-gritty physiological data. Experimental neuroscientists often view such musings with disdain, considering them irrelevant or too mathematically dense to be of any use.

    Eero Simoncelli, a Howard Hughes Medical Institute vision researcher at New York University (NYU), is one of a small but growing cadre of computational neuroscientists bridging this divide. Forty years after researchers revealed the cellular fundamentals of vision, how the electrical signals delivered by the eye's rods and cones assemble into full-scale visual perceptions remains largely an enigma. To sharpen the picture, Simoncelli is working to make neuroscience more like physics, a field in which theory and experiment more easily blend. Just as physicists replaced loose, qualitative descriptions of the physical world with mathematically precise language, Simoncelli aims to devise fundamental equations of vision. “I'm working to encapsulate the conceptual principles used by the brain in precise mathematical terms,” he says.

    Simoncelli's analyses have already solved several longstanding mysteries in visual science: for example, how the brain assembles a moving picture of the world and why humans drive too quickly in the fog. He's also helped explain how evolution may have sculpted the brain to respond ideally to the visual environment on Earth. On a more practical side, Simoncelli has developed novel methods for image compression and for cleaning up visual noise, such as TV snow. “Eero can hang out with the people who make JPEGs look better or compress info onto DVD,” says NYU neuroscientist Anthony Movshon, who collaborates with Simoncelli. “But to make this fit to the biology is a unique skill.”

    Simoncelli even hopes that his work will lead to insights into consciousness. His peers say that's not arrogance but quiet confidence. “Eero's work is … both powerful and simple,” says Matteo Carandini, a neuroscientist at the Smith-Kettlewell Eye Research Institute in San Francisco, California. “His group is the best thing around.” Bruno Olshausen, a computational neuroscientist at the University of California (UC), Berkeley, adds that Simoncelli's work “has been very inspirational to lots of people, including me.”

    Brain as machine

    Simoncelli has wanted to study the brain since childhood. But he could not relate to—or remember—the piles of facts he was asked to learn in his introductory biology course at Harvard University. So he decided to major in physics instead of biology and later got a Ph.D. in electrical engineering while working in Edward “Ted” Adelson's visual science laboratory at the Massachusetts Institute of Technology. In his Ph.D. thesis, Simoncelli mathematically described a network of neurons that processes visual motion. His simulated brain cells performed computations mimicking the responses that neurophysiologists had recorded from cells in their laboratories. “He has brilliant intuitions about images and vision,” Adelson says. “Combining engineering principles and biological insights, he's developed models of visual processing that are among the best in the world.”

    Perception problem.

    By understanding mathematically how the brain perceives texture, Eero Simoncelli has developed software that can synthesize the textures in an image. It works best when the object has a regular pattern.


    Simoncelli's Ph.D. analysis of visual motion captured a vexing oddity that other researchers had glossed over: the nonlinearity of vision-processing neurons. Engineers favor linear systems because they behave according to a simple law: If two stimuli are combined, the system's response to the combination is equal to the sum of its responses to each separate stimulus. By contrast, nonlinear systems generate more complex responses. “One of the reasons we have so much trouble trying to understand the brain is that it doesn't behave according to the rules of our standard engineering toolbox,” Simoncelli says.
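
    The linearity test Simoncelli refers to is easy to state concretely: a system is linear exactly when its response to two stimuli presented together equals the sum of its responses to each one alone. The snippet below runs that superposition check on two toy models (a plain linear filter, and the same filter followed by a squaring step of the kind often used in textbook descriptions of visual neurons); the models are generic illustrations, not Simoncelli's.

    # Superposition check on two toy "neurons": a linear filter, and the same
    # filter followed by a squaring nonlinearity. Generic textbook models,
    # used here only to illustrate the linearity test.
    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(size=16)        # toy receptive-field weights
    stim_a = rng.normal(size=16)         # two arbitrary stimuli
    stim_b = rng.normal(size=16)

    def linear(stim):
        return weights @ stim            # obeys superposition

    def nonlinear(stim):
        return (weights @ stim) ** 2     # linear stage plus squaring

    for name, model in (("linear", linear), ("nonlinear", nonlinear)):
        together = model(stim_a + stim_b)
        separate = model(stim_a) + model(stim_b)
        print(f"{name:9s} response(A+B) = {together:7.3f}   "
              f"response(A) + response(B) = {separate:7.3f}")
    # The linear model passes the test; the squared one fails it, which is why
    # the standard linear-systems toolbox falls short for such neurons.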

    In the mid-1990s, as a computer science professor at the University of Pennsylvania, Simoncelli again embraced nonlinearity, producing a novel solution to a classic image-analysis problem: He identified a new set of mathematical regularities in the relations between the pixels that make up photographic scenes. His pixel analysis led to a state-of-the-art technique for compressing images and a method for eradicating visual noise that remains the best in the world as judged by experimental tests. Such a noise-removal technique might eventually be used to make crisper, filmlike image sensors in digital cameras or clear up pictures received from TV satellite dishes.

    Bridging the gap

    Next, Simoncelli wanted to link his image analysis to the human visual system. He hypothesized that evolution may have forced the brain to encode the visual world in the most efficient, mathematically optimal way. Using that concept, Simoncelli and his colleagues reported in 2001 that the nonlinear responses of neurons, such as those in the primary visual cortex at the back of the brain, are well-matched to the statistical properties of the visual environment on Earth, that is, the mathematical patterns of lightness and darkness that recur in visual scenes. The result may help explain how evolution nudged certain visual neurons to be acutely sensitive to object edges and contours, for example.

    Last year, Simoncelli and his colleagues reported building an image-compression tool based on his nonlinear model of cortical neurons. Simoncelli reasoned that if the brain's visual cortex is optimally efficient at processing images, it should also do a superior job of compressing them. What's more, any distortions introduced by his compression process should be tolerable. “If the cortical representation is like what's in our brains, we won't notice the difference,” he says. Indeed, the new compression technique's performance far outstripped that of the JPEG standard.

    Visual insights.

    Simoncelli has explained why drivers speed in the fog and how the brain makes sense of moving objects.


    Working with postdoc Javier Portilla, Simoncelli has similarly devised a novel mathematical description of how the brain achieves visual texture perception. That's led to a better way of synthesizing pictures—say, an image of a patch of a certain type of grass or cloth—that maintain a material's distinctive appearance. “The model provides a good description of what a person sees when looking at texture,” Simoncelli says, adding that he and Portilla have tested it on an extensive number of texture images.

    “It does something almost artistic,” says UC Berkeley's Olshausen of Simoncelli's texture model. The model, Olshausen adds, not only points vision scientists to the essential properties of texture, but it also could be useful to filmmakers who would like to paint textures onto computer-generated images.

    Despite the practical relevance of his work, Simoncelli has largely stayed within the ivory tower. Although he has filed for patents in the past, earning three, Simoncelli hasn't applied for any on his new texture work, or for his most recent noise-reduction and image-compression techniques. One reason, he says, is that patenting delays publication of his ideas. Moreover, applying for a patent on software, versus an actual device, “feels like playing the lottery because the chances are low that it's going to hold up. I don't care enough about money to make it a priority.”

    In motion

    Recently, Simoncelli has helped solve several riddles of motion perception. In the April issue of Nature Neuroscience, Simoncelli and his postdoc Alan Stocker explained the Thompson effect, in which motion seems to slow down when the visual landscape lacks contrast. This illusion, first described 25 years ago by psychologist Peter Thompson, helps account for why people drive too quickly in the fog. Simoncelli and Stocker asked five people to judge which of two computer-generated gratings looked like it was moving faster. The researchers varied the gratings' speed and contrast, and each volunteer was asked to make about 6000 separate judgments. Stocker and Simoncelli then analyzed the data using Bayesian statistics, a branch of mathematics that combines expectations with new information, and deduced each person's expectations from his or her speed perceptions. It turns out that people expect slow movement over fast, and that those expectations trump actual perceptions when the perceptual data are sketchy, as occurs in low-contrast situations (Science NOW, 21 March).
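
    The logic behind the fog illusion can be written down directly: combine a prior that favors slow speeds with a likelihood whose uncertainty grows as contrast falls, and the resulting estimate is biased toward slowness precisely when the image is faint. The Gaussian sketch below uses invented numbers purely to illustrate that reasoning; Stocker and Simoncelli's analysis works the other way around, inferring the prior and the likelihood widths from their subjects' judgments rather than assuming them.

    # Illustration of Bayesian speed estimation with a slow-speed prior.
    # All numbers are invented for illustration; they are not the values
    # fitted in Stocker and Simoncelli's experiments.
    def perceived_speed(true_speed, contrast,
                        prior_mean=0.0, prior_sd=4.0, base_noise=1.0):
        """Posterior mean of a Gaussian prior times a Gaussian likelihood.

        Lower contrast means a noisier measurement (broader likelihood),
        so the slow-speed prior pulls the estimate down more strongly."""
        likelihood_sd = base_noise / contrast       # fog widens the likelihood
        w_prior = 1.0 / prior_sd ** 2               # precision of the prior
        w_like = 1.0 / likelihood_sd ** 2           # precision of the data
        return (w_prior * prior_mean + w_like * true_speed) / (w_prior + w_like)

    for contrast in (1.0, 0.5, 0.1):                # clear day -> thick fog
        print(f"contrast {contrast:3.1f}: perceived speed "
              f"{perceived_speed(10.0, contrast):5.2f}")
    # Prints roughly 9.4, 8.0, and 1.4: the same physical speed feels slower
    # as contrast drops, so drivers in fog compensate by speeding up.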

    Another 25-year-old motion mystery is about to succumb to Simoncelli. Scientists have long known that cells in the primary visual cortex process pieces of a visual scene and that those pieces are then assembled into a greater whole by cells in other brain areas. But it has never been clear how the brain puts the pieces together when an object is moving. Ever since his Ph.D. thesis, Simoncelli has worked on the calculations a computer should perform to mimic a system that can combine pieces of a moving image and spit out a coherent response. Again, he used Bayesian mathematics to try to make sense of people's perceptions of motion and the physiological data from visual neurons. He then mapped all of his computations onto a simulation of neuronal responses that starts in the retina and ends in the visual motion-processing region known as area MT.

    In a paper to appear in Nature Neuroscience this fall, Simoncelli and Movshon along with postdocs Nicole Rust and Valerio Mante offer the first precise mathematical description of how cells in MT translate pieces of a moving scene into the movement of the whole. They vetted their model against new recordings from individual MT neurons in monkeys exposed to a specific set of stimuli: wiggling lines that look like the ripples on the surface of water. From the model, the researchers could extract biological information about MT cells, including which visual cortex cells feed into them. MT neurons are “profoundly nonlinear,” Simoncelli says. “The model explains how that profound nonlinearity can arise from a cascade of very simple nonlinear steps.”

    Movshon, who did the experimental work buttressing the new model, describes Simoncelli's solution as “simple and elegant,” and says the work also gives the field more sophisticated techniques for analyzing and extracting information from recordings of neuronal responses. Moreover, Simoncelli and his colleagues are putting the finishing touches on a set of algorithms that should help neuroscientists better interpret the flood of information that comes from recording large groups of neurons simultaneously in the retina, instead of one at a time as is traditionally done.

    Ultimately, Simoncelli aims to put many of his individual findings, and those of his collaborators, into nothing less than a grand unified theory of visual motion perception. “In 10 years, I think we will have a clean computational model of motion,” he predicts.

    And if that wasn't ambitious enough, Simoncelli is digging for deeper truths. “As we build better descriptions of the brain and test them experimentally, we hope to arrive at fundamental principles that can explain all brain activity, from sensation to consciousness,” he says. “That's going to help us understand who we are.” Now that's a grand vision.
