News this Week

Science  23 Oct 1998:
Vol. 282, Issue 5389, p. 598
  1. 1999 BUDGET FINALE

    NIH Wins Big as Congress Lumps Together Eight Bills

    1. David Malakoff,
    2. Eliot Marshall

    It wasn't elegant, but the grand budgetary finale of the 105th Congress last week was generous to many constituents—particularly biomedical scientists. In addition to bestowing billion-dollar subsidies on electric utilities and scores of highway projects on favored regions, White House and congressional leaders negotiated a last-minute deal that gives the National Institutes of Health (NIH) a record $2 billion raise in 1999. They also boosted energy science programs and extended tax breaks for corporate research through the current fiscal year, which began on 1 October.

    The plums are part of a massive, 4000-page omnibus appropriations bill that, as Science went to press, was still being finalized by harried congressional staffers. It tacks together eight of the 13 annual spending bills that had become mired in election-year politics. Legislators had already approved, and the president had signed, budgets for the rest of the government (Science, 9 October, p. 209). The deadlock was broken only by a marathon horse-trading session held behind closed doors. As a result, few details of the package were publicly available before the vote, and many agencies remained uncertain about which of their programs might expand or shrink. Indeed, one veteran lobbyist estimates that this “embarrassing” tangle of legislation could take weeks to sort out.

    Still, biomedical research leaders are delighted with what they've seen so far. Draft legislative documents obtained by Science indicate that Congress has put together a dream budget for NIH that should please even the agency's most ardent supporters. The omnibus bill—expected to be approved on 21 October—boosts total NIH funding by 15%, putting it on track for doubling within 5 years. This matches the optimistic target set by biomedical advocates early this year—and is big enough to prevent destructive competition among disease interest groups.

    The big NIH raise is also a triumph for Senator Arlen Specter (R-PA), chair of the Senate appropriations subcommittee that oversees the agency's budget. By insisting on a 15% increase for NIH, Specter outbid both the Clinton Administration, which had proposed an 8.7% raise, and the House appropriations subcommittee chaired by Representative John Porter (R-IL), which had suggested 9.1%. Clinton had suggested paying for the increase with a new tax on tobacco, while Porter's subcommittee found the money by cutting subsidies for low-income energy assistance and summer jobs. Congress rejected both ideas. Instead, it contrived an “emergency” that allowed it to break existing budget rules and use this year's $70 billion federal surplus to pay for a host of old and new projects. The $21 billion in emergency, “off-budget” spending on military readiness, farm aid, and other programs also freed up funds for increases in programs constrained by budget caps.

    [Table: Big step up. NIH's budget increase over 1998 dwarfs the raises given to several other research organizations. Source: agencies and legislation.]

    The largest chunk of the NIH appropriation, about one-fifth of the total, goes to the National Cancer Institute, whose budget increases by 15% to $2.9 billion. The smaller, but highly visible, National Human Genome Research Institute gets a 22% raise, to $264.9 million. Funding for the office of the NIH director grows by 27%, to $306.6 million, in part to accommodate a new Center for Complementary and Alternative Medicine. A draft conference report sets aside $50 million for this venture, a darling of Senator Tom Harkin (D-IA), and specifies that “not less than $20 million” be spent on “peer reviewed complementary and alternative medicine research grants and contracts … as proposed by the Senate.” The draft report also gives its blessing to a cancer control program for minorities, a Cooley's anemia clinical network, an Alzheimer's disease prevention initiative, and others, but doesn't earmark funds for them.

    Biomedical types were already celebrating. “I've been on a high all day,” said William Brinkley, president of the Federation of American Societies for Experimental Biology (FASEB), noting that 15% exceeded his expectations. “We worked hard for this,” he added. “I think we're seeing the fruition of a lot of active participation by individual scientists going to Washington.” Brinkley says FASEB leaders have already held strategy meetings to discuss how best to spend the money, including striking the right balance between individual investigator grants and centers and other large awards.

    Climate change scientists and renewable energy advocates were also celebrating an unexpected $210 million boost in research spending. Earlier this month, in approving budgets for the Department of Energy (DOE) and the Environmental Protection Agency, Congress had severely pruned the Administration's request for funds to fight global warming. At DOE, for instance, lawmakers approved just $275 million of a $357 million request for the solar and renewable energy program. In the closed-door negotiations, however, “the Administration twisted some arms until the global warming money appeared,” said an exuberant but exhausted White House staffer.

    In contrast, officials at the Commerce Department's National Institute of Standards and Technology (NIST) were happy that the news wasn't worse. The agency's core research programs received $280 million, $8 million more than last year, while the Advanced Technology Program—a once-incendiary effort that funds public-private research partnerships—got $204 million, an $11 million increase but $56 million below the request. Although the levels are only about 90% of what had been sought, a NIST source labeled them “excellent.”

    Another Commerce Department agency, the National Oceanic and Atmospheric Administration (NOAA), got a 6% increase, to $2.2 billion. NOAA's Sea Grant program, which funds university researchers, will grow by less than 3%, to $57.5 million, although lawmakers earmarked $4 million of the funds for research on zebra mussels and oyster diseases. The good news is that the earmarks, which aren't popular with many marine researchers, take up a smaller portion of the program's budget than they did last year, says Kerry Bolognese of the National Association of State Universities and Land-Grant Colleges in Washington, D.C. The bad news, he adds, is that “Sea Grant is not keeping pace with inflation.”

    In other moves watched by the research community, negotiators postponed a bitter partisan fight over the use of sampling in the 2000 Census by agreeing to fund the Census Bureau only through 15 June. This spring the Supreme Court is expected to rule on two cases involving the technique, which is aimed at making up for a serious shortfall in the 1990 headcount. Republicans oppose sampling because they say the Constitution requires an actual headcount, while Democrats and statisticians have generally supported the concept (Science, 6 February, p. 798). Use of the technique is expected to increase population estimates in neighborhoods seen as Democratic strongholds.

    Although the ink is barely dry on the new budget deal, science lobbyists are already positioning themselves for the 2000 budget battles, which will formally begin when President Clinton presents his request to Congress next January. “Every scientific society,” says one congressional aide, “is asking itself how it can replicate what the biomedical community did with NIH.”

  2. PALEOBIOLOGY

    Insect Wings Point to Early Sophistication

    1. Gretchen Vogel

    Catching mosquitoes is no easy way to make a living. In pursuit of their darting prey, modern dragonflies hover, fly backward, and zoom around in tight high-speed turns. To execute these aerobatic maneuvers, the insects come equipped with highly engineered wings that automatically change their shape in response to airflow, putting the designers of the latest jet fighters to shame.

    But in evolution, such engineering tricks are apparently old news. In a report on page 749 on a 320-million-year-old dragonfly from Argentina, entomologist Robin Wootton of the University of Exeter in the U.K. and his colleagues describe evidence for a complex airfoil, a structure that forces air to move faster over the top of a wing than underneath it, creating a pressure difference that gives a wing its lift. Not only did evolution come up with such sophisticated flying adaptations very early, but it also produced them more than once. Although the ancient fossil structures have the same effect as the airfoils of modern dragonflies, they are different enough that scientists think the two systems evolved independently. “It's a startling example of convergent evolution,” says evolutionary aerodynamicist Adrian Thomas of Oxford University in the U.K.
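
    For readers who want the underlying relation: in the simplest steady-flow picture (a textbook idealization rather than an analysis of insect flight, which is unsteady and far more complex), Bernoulli's principle links the speed difference over the wing to the pressure difference that produces lift:

    $$ p + \tfrac{1}{2}\rho v^{2} = \text{constant along a streamline} \;\;\Longrightarrow\;\; v_{\text{top}} > v_{\text{bottom}} \;\Rightarrow\; p_{\text{top}} < p_{\text{bottom}}, $$

    so the net upward force scales roughly with $(p_{\text{bottom}} - p_{\text{top}})$ times the wing area.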

    To achieve the airborne agility needed to chase prey such as mosquitoes and houseflies, a dragonfly must be able both to twist its wings and to change their shape to alter the airflow around them. Unlike the wings of birds and bats, which are shaped by muscles, an insect wing is simply a membrane stretched over a series of veins. But in an example of what Wootton calls “smart engineering,” modern dragonflies have a complex system of veins that stabilize and shape the wings without any muscle power. One region, called the basal complex, forms a series of pleats arranged so that when the insect flaps downward, the air pressure on the underside of the wing forces the trailing edge to stiffen and curve down in a classic airfoil shape. Roughly similar to the flaps that open on planes during takeoff and landing, the mechanism allows dragonflies to stay aloft at lower speeds.

    Wootton and paleoentomologist Jarmila Kukalová-Peck of Carleton University in Ottawa noticed a region similar to the modern basal complex as they examined a well-preserved, 8-centimeter dragonfly from La Rioja, Argentina. When Wootton, an expert on the mechanics of insect wings, made a three-dimensional paper copy of the wing region, it responded to a force on the underside of the wing—similar to the force of air as a dragonfly flaps its wings downward—in the same way as the modern dragonfly's. The authors propose that the structure played a similar role in the ancient insects, allowing them to get more efficient lift from a downstroke. Thomas agrees. “I made the cardboard models” from their diagrams, he says, “and they work in exactly the same way” as the modern basal complex.

    Despite the similarity in function, it seems that the two designs arose independently. They use different sets of veins, and the modern basal complex forms a triangle while the fossil one is a parallelogram. In addition, the wings are attached to the body of the fossil insect differently than those of modern dragonflies, and the researchers believe it was a cousin to, not a direct ancestor of, insects alive today.

    Dating from only 10 million years after the oldest known flying insect, the specimen shows how quickly insects evolved sophisticated aerodynamic engineering, says insect flight physiologist Robert Dudley of the University of Texas, Austin. Still, there has been some improvement over the eons. The Argentine fossil is missing another aerodynamic feature present in modern dragonflies—a stabilizing structure called the node, which helps the wings withstand stresses from the twisting required for hovering in place. That may have evolved, Wootton says, as dragonfly prey itself became more aerodynamically adept—adaptations that anyone who has chased a mosquito can appreciate.

  3. PALEONTOLOGY

    Fossils Challenge Age of Billion-Year-Old Animals

    1. Richard A. Kerr*
    1. With reporting from India by Pallava Bagla.

    Three weeks ago in the pages of Science, paleontologists pushed back the origins of multicellular life by 400 million years to a startling 1.1 billion years ago, based on ancient fossilized tracks found in central India. But a paper published about the same time in the Journal of the Geological Society of India may now yank those dates forward again to a more mundane figure of perhaps 600 million years old. The Indian paper, by paleontologist R. J. Azmi of the Wadia Institute of Himalayan Geology in Dehra Dun, presents tiny shelled fossils—unarguably about 540 million years old—from rocks that Azmi claims were laid down shortly after those holding the animal tracks. If so, the spectacularly old tracks would be transformed into simply another example of early animals.

    Although other paleontologists confirm that Azmi's fossils are indeed relatively young, the authors of the original paper on the tracks, Adolf Seilacher of Yale University and his colleagues, are standing by their discovery. “I have strong doubts this one paper will blow us out of the water,” says co-author Friedrich Pflüger of Yale, noting that work by many other researchers supports the billion-year-plus age. To reduce the age, “you have probably 50 papers against you; you need to convince a lot of people.”

    The rocks in question, in the Vindhyan basin of central India, were repeatedly dated in recent decades using radiometric techniques, which rely on the slow decay of radioactive elements such as potassium, uranium, or rubidium. Done properly, this method is considered the gold standard for dating rocks. In dozens of studies, the sandstone holding the trace fossils yielded the age of about 1.1 billion years that is cited in the Science paper (2 October, p. 80). “Nobody expected the ages to be questioned,” says Pflüger. “All the ages seemed to be consistent.”
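
    In outline, the calculation rests on a textbook relation (stated here in general form, not tied to any particular Vindhyan study): for a parent isotope with half-life $t_{1/2}$ and decay constant $\lambda$, the age $t$ of a mineral grain that has stayed a closed system follows from the measured ratio of daughter atoms $D$ to surviving parent atoms $P$:

    $$ t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D}{P}\right), \qquad \lambda = \frac{\ln 2}{t_{1/2}}. $$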

    But Azmi argues that in this case, the ages estimated from distinctive fossil species known to have lived at certain times in the geologic past are more accurate. In layers of limestone and shale just above the sandstone, Azmi found millimeter-scale “small, shelly fossils,” the remains of unique shelled animals whose appearance marks the explosion of new animal forms in the early Cambrian period 540 million years ago. He says the fossils hadn't turned up in the Vindhyan before because “people have not looked from this point of view.” The rocks were believed to be very old, so no one macerated them in the way that allows small shelly fossils to be extracted, he says.

    Other paleontologists accept the identity of the fossils. “Everybody would say yes, these are small shellies,” agrees paleontologist Douglas Erwin of the National Museum of Natural History in Washington, D.C., who has seen Azmi's paper. The question is what they mean for the age of the tracks.

    In the early 1980s, Azmi found similar fossils in another Indian basin, boosting its accepted age by 400 million years into the Cambrian. He thinks the new fossils hold a similar message about the sandstone layer they overlie. As he argues in a letter on page 627, there's not much rock separating the 540-million-year-old fossils from Seilacher's trace fossils—implying that the tracks must be about 600 million, not 1.1 billion, years old. That would make them no older than other known traces of early animals.

    Azmi and others add that the radiometric dates aren't as impressive as they might seem. As geochronologist Samuel Bowring of the Massachusetts Institute of Technology notes, the dates might accurately reflect the age of individual mineral grains, but those grains may have formed long before they eroded from parent rock and washed into the sea to become part of the Vindhyan sedimentary rocks. Indeed, the radiometric dates of grains from the formation containing the Cambrian fossils are also about 1.1 billion years old, suggesting that the dates may not reflect the age of the rock layer itself.

    Seilacher, Pflüger, and their colleague Pradip Bose of Jadavpur University in Calcutta are just now seeing the details of Azmi's paper, but they already have some reservations. Pflüger speculates that perhaps Azmi's Cambrian fossils are not close in time to the trace fossils after all. Thick layers of sediment may be laid down in one place but not in another, and rocks can be eroded away before the next layer is laid down, making it look as if little time has passed when in fact hundreds of millions of years have gone by. Pflüger also notes that Azmi's fossils come from a part of the basin different from the one that contained the tracks, increasing the chances that fracturing and jumbling of rock layers could confuse interpretations.

    And Indian researchers, including paleontologists Anshu Sinha of the Birbal Sahni Institute of Paleobotany in Lucknow and B. S. Venkatachala of the Wadia Institute, say that they are reluctant to adopt a young age for Vindhyan rocks, given the radiometric dates. They also report signs of Precambrian single-celled algae and other fossils in the rocks. To prove the age of the Vindhyan, geologists may have to find and date rocks such as volcanic ash layers, which offer secure dates because they are deposited as soon as they're formed. Until then, the age of the first animals remains in question.

  4. PHYSICS

    Particle Decays Reveal Arrow of Time

    1. David Kestenbaum

    In the everyday world, time is a one-way street. Unlike characters in Martin Amis's novel Time's Arrow, we never exit a taxi and salute while it retreats down the street or awake in the evening and see our clothes come flying from the corners of the room. The microscopic level where particles collide and decay, however, has seemed indifferent to the direction of time. But two groups of researchers, at Fermi National Accelerator Laboratory (Fermilab) in Illinois and CERN in Switzerland, have now directly detected the forward march of time in the decays of subatomic particles.

    Physicists once thought that the equations of the subatomic world would look the same if time were reversed. A movie of an atom decaying into bits, when run in reverse, would show a process that—although unlikely—still obeys the laws of physics: the bits converging to form a full atom. But they also knew that this time-reversal symmetry was part of a larger, more powerful package known as CPT (for charge, parity, and time reversal) symmetry, which sits at the heart of modern physics: Swap antimatter for matter, view the universe (essentially) in a mirror, and reverse the direction of time, and all the experiments should come out the same way they do in the real world. The CPT theorem (which has now been tested to an impressive 18 decimal places) meant that time-reversal symmetry could hold only if charge-parity (CP) symmetry holds as well.
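
    The logic can be put schematically (a restatement of the argument above, not a derivation): writing the combined operation as the product of its parts,

    $$ \mathcal{CPT} \;=\; (\mathcal{CP})\cdot\mathcal{T}, $$

    so if experiment shows that the combined $\mathcal{CPT}$ operation is an exact symmetry while $\mathcal{CP}$ is violated, then $\mathcal{T}$ must be violated by a compensating amount.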

    In 1964, physicists found that it doesn't. They noted that neutral particles called kaons occasionally decayed in a way that blatantly violated CP symmetry. The CPT theorem could be saved only if making time flow backward changed things in a way that canceled out the CP asymmetry. No one could gather enough data to isolate the rare decays that would show this directly, however.

    Now two groups have finally managed this feat, by measuring the rate of a particular decay and showing that it differs from the rate of the same process done in reverse. “I think it's truly spectacular work,” says Alan Kostelecky of Indiana University, Bloomington. “This is the most important experimental advance since” 1964 for testing time symmetry.

    One of the groups, the CPLEAR collaboration at CERN, collided antiprotons and hydrogen atoms to make kaons and their antimatter counterparts, antikaons. As they travel, antikaons can transform into kaons and vice versa. In results to appear in an upcoming issue of Physics Letters B, the team used a large tracking chamber to count the kaons and antikaons as they decayed—each to an electron, a pion, and a neutrino. The charge of the electron revealed which type of kaon had decayed. The team found that the rate for antikaons transforming into kaons was a fraction of a percent higher than for what would be the time-reversed process—kaons becoming antikaons. “This shows that you can't turn the clock backward” and always get the same results, says CPLEAR spokesperson Panagiotis Pavlopoulos.
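
    A standard way to express such a comparison (the definition is textbook; the specific CPLEAR number is not quoted here) is a time-reversal asymmetry built from the two rates, writing $K^0$ for the kaon and $\bar K^0$ for the antikaon:

    $$ A_T \;=\; \frac{R(\bar K^0 \to K^0) - R(K^0 \to \bar K^0)}{R(\bar K^0 \to K^0) + R(K^0 \to \bar K^0)}, $$

    which would vanish if the process and its time-reverse proceeded at equal rates; the “fraction of a percent” excess corresponds to a small positive $A_T$.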

    The other group, the KTeV collaboration at Fermilab, also studied kaons, but watched for much rarer events—the 1-in-10-million decay of a single kaon into pairs of electrons and pions. The team, which presented its results at a Fermilab workshop earlier this month, mapped out the directions of the electrons and pions. Here, time asymmetry revealed itself in a subtler way. Because reversing time also reverses a particle's momentum, the team looked for time asymmetries by comparing the rates of some decays to others where the direction of the emerging particles looked as they would if time had been reversed. The rates differed by about 13%. “It's a huge effect,” says Fermilab physicist and KTeV collaborator Vivian O'Dell.

    Both experiments observe time asymmetry at about the level that would compensate for the CP asymmetry first observed over 3 decades ago. “I don't think anyone is surprised, but everybody is very happy,” says University of Chicago theorist Jonathan Rosner. Why the decays should look any different forward and backward is still a fundamental mystery. It's possible, he says, that the reigning theory of the microworld, called the Standard Model, can explain this if some of its parameters are just right. Other possibilities include a new “superweak” force that would break time-reversal and CP symmetry. Eagerly awaited studies of other kaon decays at KTeV or another experiment called NA48 at CERN may reveal which is right, Rosner says. CP asymmetry may also explain why the universe is not filled with equal parts of matter and antimatter.

    Could a microscopic arrow of time also explain why humans perceive a past, present, and future? Maybe, Kostelecky says, but “that's pretty ambitious.” Such questions may be too deep for physics to answer, he says.

  5. PALEONTOLOGY

    Young Dinos Grew Up Fast

    1. Erik Stokstad

    Snowbird, Utah—The giant dinosaurs known as sauropods were the most massive creatures ever to tread on land. Now a detailed look at one species' bones, described here earlier this month at the annual meeting of the Society of Vertebrate Paleontology, suggests that these hulking beasts could grow to full size—tens of tons and longer than a tractor-trailer—in just a decade. By clocking the sauropod childhood, the work “provides a whole new dimension to sauropod studies,” says Philip Currie of the Royal Tyrrell Museum of Palaeontology in Drumheller, Alberta.

    Paleontologists had estimated that it would take more than a century for a modern reptile to reach the size of an adult sauropod. But under the microscope, dinosaur bone seems to tell a different story: It looks more like the fast-growing bones of mammals and birds than those of reptiles. To sharpen the age estimate, Kristina Curry, a graduate student at the State University of New York, Stony Brook, examined forelimbs and shoulder blades from specimens of Apatosaurus (once known as Brontosaurus), a sauropod that roamed North America some 150 million years ago.

    When Curry drilled samples from shoulder blades, she found regular changes in the density of microscopic canals that presumably once held blood vessels. The layers resemble the concentric rings laid down each year in manatee and sea turtle bones, so Curry assumed that they were annual and used them to age the sauropod shoulder blades. Bones from half-sized individuals were 4 to 5 years old, while the largest sauropods had apparently reached full growth in just 8 to 11 years.

    That growth rate may sound extraordinary. But it implies that sauropods deposited about 10.1 micrometers of bone tissue per day—about the same rate as living ducks, which deposit an average of 10.0 micrometers of bone per day. Ducks, however, grow to full size in about 22 weeks, while Apatosaurus apparently kept up its growth spurt for years.
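
    A quick back-of-the-envelope check of those figures (plain arithmetic on the numbers quoted above; the script is purely illustrative):

```python
# Compare total bone laid down at nearly identical daily rates but very
# different growth durations, using the figures quoted in the text.

apatosaurus_rate_um_per_day = 10.1   # Curry's estimate for Apatosaurus
duck_rate_um_per_day = 10.0          # average reported for living ducks

duck_days = 22 * 7                   # ducks reach full size in about 22 weeks
apatosaurus_days = 10 * 365          # roughly the middle of the 8-to-11-year range

duck_total_mm = duck_rate_um_per_day * duck_days / 1000
apatosaurus_total_mm = apatosaurus_rate_um_per_day * apatosaurus_days / 1000

print(f"duck:        ~{duck_total_mm:.1f} mm of bone tissue")   # about 1.5 mm
print(f"Apatosaurus: ~{apatosaurus_total_mm:.0f} mm of bone")   # about 37 mm
```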

    As a check, Curry used the rate derived from the Apatosaurus scapula to estimate the age of the forelimb bones, which have no rings, and came up with similar numbers. The bone growth rate also fits reasonably well with the lone previous estimate, by Armand de Ricqlès of the Université Paris VII, who used faint layers in sauropod humerus bones to clock their growth at roughly 7 micrometers per day. “Even though Apatosaurus may have lived for centuries, they certainly didn't take that long to reach their full size,” Curry concludes.

    The finding makes sense, says Currie of the Royal Tyrrell Museum, as hatchlings wouldn't survive long if they grew slowly. Besides the threat of predators, just living with a 30-ton mother would be dangerous. “You'd probably get stepped on,” he notes. Moreover, if dinosaurs took more than 30 years to mature, their populations could sink to dangerously low levels, according to 1989 calculations by Arthur Dunham of the University of Pennsylvania, Philadelphia, and colleagues. Such an extended childhood would give predators and disease ample time to pick off animals before they reproduced, says Gregory Paul, an independent dinosaur artist and paleontologist in Baltimore, Maryland. But if Curry's rapid growth rate is right, young sauropods probably weren't picked on for long.

  6. BIOMEDICINE

    From Fat-Free Mice, the Skinny on Diabetes

    1. Trisha Gura*
    1. Trisha Gura is a science writer in Cleveland, Ohio.

    When it comes to body fat, extremes can have extreme consequences. Obesity can lead to health problems such as diabetes. And now comes a dramatic illustration of the ills of having no fat at all. Two independent groups have shown that mice genetically engineered to lack fat cells also get diabetes, with symptoms even more severe than those of their obese counterparts. The animals suffer all the signatures of adult-onset diabetes in humans: high blood sugar, high insulin levels, and a ferocious thirst and appetite.

    Reported in last week's issue of Genes and Development by Nobel laureates Michael Brown and Joseph Goldstein and their colleagues at the University of Texas Southwestern Medical Center in Dallas and by Marc Reitman and Charles Vinson's group at the National Institutes of Health in Bethesda, Maryland, the findings may yield clues to the enigma of adult-onset diabetes, also called type 2 or non-insulin-dependent diabetes. Body fat plays a role in the illness, which afflicts at least 18 million Americans, but endocrinologists don't know exactly how. The mice also provide the first model ever for a rare human condition known as lipodystrophy, in which patients are born with an extreme scarcity of fat and the symptoms of adult-onset diabetes. “The insights that these models yield may provide more beneficial treatments for both diseases,” says Reed Graves, a chemist at the University of Chicago who helped to pioneer the work in the current papers.

    Graves worked with Susan Ross in Bruce Spiegelman's laboratory at Harvard Medical School, where the trio was trying to figure out how fat influences diabetes and related disorders. In 1993, they successfully depleted fat in mice by engineering in a toxin gene and turning it on in fat cells. The mice developed some diabetic symptoms, but the researchers could not pin down the link between a lack of fat and diabetes because these mice did not lose fat cells until they reached middle age.

    The more recent experiments do a better job of eliminating fat from the animals. Both groups of researchers genetically blocked the growth of fat cells by altering transcription factors, proteins that turn genes on or off and are crucial to cell growth and maturation. Reitman and Vinson's team inactivated the genes for two families of transcription factors that normally help fat cells grow and develop, while Brown and Goldstein altered the gene for another transcription factor so that cells would get an overdose of it. Both mutations were designed to affect only fat cells.

    The changes had the same effect: The mice were born with little or no white fat. The transgenic mice also failed to develop mature brown fat, which normally serves a warming function in hibernating animals. “We had to use heating pads because the fat needed for [heat production] was gone,” recalls Vinson, who notes that tending to the rodents “was certainly not trivial.”

    Many of the animals died before reaching adulthood, but those that survived developed diabetes: Their cells no longer responded properly to insulin, which stimulates cells to metabolize glucose. As a result, insulin levels in the bloodstream skyrocketed—up to 442 times normal levels for some of Reitman and Vinson's mice—and glucose levels at least tripled. Like human diabetics, the animals also had high levels of triglycerides and other fat building blocks in their circulation, and their livers became engorged with triglycerides. “These animals are really sick,” Reitman says. “But they clearly don't get diabetes in the same way as normal type 2 diabetics,” where excess body fat plays a role.

    Brown and Goldstein speculated that the altered transcription factor in their mice might be the key to the diabetes, and they spent a great deal of effort trying to tease apart the molecular pathways triggered by the mutation. But they could find no clear answers. Reitman and Vinson have a different hypothesis, which could explain why both obesity and a complete lack of fat can lead to diabetic symptoms. They propose that excess fatty acids and triglycerides in the circulation and liver might somehow trigger the disease. The compounds might end up in the circulation either because they spill from stuffed fat cells, in the case of obesity, or because there are no fat cells to store them, as in lipodystrophy and the fatless mice.

    If the conjecture can be proven, it could open the way to new therapies for both adult-onset diabetes and lipodystrophy. Indeed, Graves's group at the University of Chicago found that a drug called troglitazone—which helps trigger fat cell maturation—could lower blood glucose levels and reduce other diabetic symptoms in its transgenic mice. That drug is now available as a treatment for human diabetes, and Graves hopes the researchers will explore the effects of similar drugs in the new transgenics.

    For his part, Vinson says there is a message for diabetics and nondiabetics alike: “We learned that too much fat is bad and so is not enough fat. The punch line here is that a little fat is good. As Aristotle said, ‘Everything in moderation.’”

  7. SPACE SCIENCE

    NASA Craft to Take the Controls in Flight

    1. Dennis Normile

    Tokyo—Both science fiction fans and scientists are eagerly anticipating tomorrow's scheduled launch of NASA's Deep Space 1 mission. But it's not the destination—a close encounter with an obscure asteroid—that excites them. What's special about the mission, which begins a series of flights testing new technologies, is the onboard software that will, for the first time, assume complete control of the spacecraft. Computer scientists say it's a step toward a real-life HAL 9000, the fictional cyber-character in Arthur C. Clarke's 2001: A Space Odyssey. Its success, they add, would be a boon to future deep-space probes and to the field of artificial intelligence (AI).

    “This experimental system is a kind of ‘HAL 1000,’” quips Nils Nilsson, a computer scientist at Stanford University. “NASA's willingness to test this technology in Deep Space 1 represents a step forward for AI. If it works, it will most likely be used in future NASA missions and will attract the attention of other potential users.”

    Deep Space 1 is the first mission of NASA's New Millennium Program, which tests risky technologies aimed at making future missions smaller, faster, and cheaper—a mantra of NASA Administrator Daniel Goldin. “We need these technologies for the kind of science missions NASA would like to conduct,” says Marc Rayman, chief mission engineer at the Jet Propulsion Laboratory in Pasadena, California, which is managing Deep Space 1 for NASA. Budgeted at $152 million, Deep Space 1 will fly by a little known asteroid called 1992 KD in July 1999, sending back information about its shape, size, surface composition, and mineralogy. Coming within 5 to 10 kilometers of the asteroid, the flyby will be the closest ever attempted of a solar system body. The mission may be extended to skirt Wilson-Harrington, an object in transition from a comet to an asteroid, as well as the comet Borrelly.

    The autonomous control concept getting its first big test on Deep Space 1 could eventually relieve pressure on ground-based controllers, who will have to cope with a rising number of small missions in an era of restricted budgets. Greater autonomy could also lead to bigger scientific payoffs, for example, by allowing a spacecraft to modify observational plans after it spots a surprising and particularly interesting feature.

    The most straightforward chore Deep Space 1 will handle is navigation. The spacecraft's autonomous navigator will determine the spacecraft's position by comparing observed patterns of asteroids and background stars with patterns stored in its memory. It will plot trajectory adjustments and fire the engines accordingly. A more ambitious experiment involves turning over responsibility for the entire spacecraft to a HAL-like autonomous agent. Given a goal, the remote agent works out the details and executes the operations needed to achieve it.

    The remote agent for Deep Space 1 was jointly developed by NASA's Ames Research Center in Mountain View, California, the Jet Propulsion Lab, and Carnegie Mellon University in Pittsburgh, Pennsylvania. Its software package consists of four modules, dubbed mission manager, planner/scheduler, smart executive, and mode identification and reconfiguration. The work is carried out through operational plans that cover either a length of time or a set of tasks to be accomplished.

    The mission manager begins its planning by reviewing what tasks have been completed and what lies ahead, the craft's location, and the condition of its subsystems. It then compiles a list of goals for the next plan segment—making a course correction, photographing a certain object, or transmitting data to Earth, for example—and sends them to the planner/scheduler. The planner/scheduler works out a sequence for achieving these goals and sends the information to the executive. The executive then expands the sequence of steps into detailed commands for the software controlling each of the spacecraft's subsystems. The mode identification and reconfiguration module monitors systems, identifies problems, and offers alternatives.
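
    As a rough sketch of how those four modules hand work to one another (a toy illustration based only on the description above; the class names, goals, and data structures are invented for this example and are not NASA's flight software):

```python
# Toy sketch of the remote-agent pipeline described in the text.
# Every class, goal, and command name here is illustrative, not NASA flight software.

from dataclasses import dataclass, field

@dataclass
class SpacecraftState:
    tasks_done: list = field(default_factory=list)
    commands_sent: list = field(default_factory=list)
    healthy: bool = True

class MissionManager:
    """Reviews progress and compiles goals for the next plan segment."""
    def goals_for_next_segment(self, state):
        return ["correct_course", "photograph_asteroid", "transmit_data"]

class PlannerScheduler:
    """Works out a sequence for achieving the goals."""
    def plan(self, goals):
        return list(goals)  # trivially keep the given order in this sketch

class ModeIdentification:
    """Monitors subsystems, identifies problems, offers alternatives."""
    def check(self, state):
        if not state.healthy:
            raise RuntimeError("fault detected; reconfiguration needed")

class SmartExecutive:
    """Expands planned steps into detailed commands for each subsystem."""
    def execute(self, plan, state, monitor):
        for step in plan:
            for command in (f"{step}/point", f"{step}/execute"):  # placeholder expansion
                monitor.check(state)             # keep watching for faults while commanding
                state.commands_sent.append(command)
            state.tasks_done.append(step)

state = SpacecraftState()
goals = MissionManager().goals_for_next_segment(state)
SmartExecutive().execute(PlannerScheduler().plan(goals), state, ModeIdentification())
print(state.tasks_done)      # ['correct_course', 'photograph_asteroid', 'transmit_data']
print(len(state.commands_sent), "low-level commands issued")
```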

    Martha Pollack, a computer scientist at the University of Pittsburgh, says most of these elements are established AI techniques. But Deep Space 1 “breaks new ground in showing that [such techniques] can indeed make the transition from the laboratory to the very complex application of space travel.” Although the agent will control Deep Space 1 for only a week, the experiment “is a great leap for remote agents,” says Barney Pell, a member of the agent design team at Ames.

    Although the remote agent has captured the attention of the AI community, the flashiest of the dozen technologies onboard Deep Space 1 is probably the experimental ion thruster, which for the first time will be used as a spacecraft's primary engine. Ionized xenon atoms are accelerated out into space by charged metal grids at the rear of the engine chamber. The resulting thrust is equivalent to the pressure exerted by a sheet of paper resting in an open hand, according to NASA. But over time the ion engine can deliver almost 5 times the thrust per kilogram of traditional liquid or solid rocket fuels, making it ideal for extended space flights.

    Deep Space 1 will also conduct experiments involving new low-power electronics systems and solar arrays fitted with lenses that concentrate sunlight. Although such technologies may not be the stuff of science fiction, they do add up to increased capabilities for future real space operations. “These will be the tools in the toolboxes of future mission designers,” says Rayman.

  8. COMMUNICATIONS

    Quantum Encryption Takes First Step to Orbit

    1. Andrew Watson*
    1. Andrew Watson is a writer in Norwich, U.K.

    For sending a secret code, nothing beats quantum mechanics—at least in the laboratory. To be useful, however, quantum messages, such as the numerical “keys” required to decode secure messages sent by other means, will have to travel long distances, for example, from the ground to a military or telecommunications satellite. Now a team of nine physicists from Los Alamos National Laboratory in New Mexico has taken a first step in that direction by transmitting a key over a distance of 1 kilometer in the chill night air of the New Mexico desert.

    “What we've done is demonstrate a protocol and the physics of a system that will do that,” says team member William Buttler. The system transmits the key with a laser beam that is broad yet so faint that each bit of data is represented by the polarization of a single photon. “It is really something,” says Nicolas Gisin of the University of Geneva. John Rarity of Britain's Defence Evaluation and Research Agency in Malvern calls the feat, described in the 12 October issue of Physical Review Letters, “a key step on the way to uploading keys to satellites.”

    Quantum mechanics has caught the eye of encryption experts because it offers a way to guarantee the security of the key—the random string of ones and zeros added to a message to make it unintelligible. Before the receiver of the encrypted message can read it—by simply subtracting the key string—the key somehow has to be sent to the receiver. If it is simply sent along the same route as the message it protects, it might be intercepted. In top security situations, says Rarity, the solution “is basically a man on a motorbike”—not a very practical solution if you want to transmit messages to a satellite.
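
    In digital form, “adding” and “subtracting” the key typically amounts to a bitwise exclusive-or, which undoes itself when applied twice. A minimal sketch (the message and key bytes here are made up for illustration):

```python
# One-time-pad style use of a key: XOR with the key scrambles the message;
# XOR with the same key recovers it. The key must be random, secret, and
# at least as long as the message.

message = b"MEET AT DAWN"
key = bytes([0x3A, 0x91, 0x4C, 0x07, 0xE2, 0x55, 0x6B, 0xD0, 0x18, 0xAF, 0x23, 0x7C])

ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))

assert recovered == message
print(ciphertext.hex(), "->", recovered.decode())
```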

    In the Los Alamos scheme, the sender of the secret information, traditionally dubbed “Alice,” sends a key by dispatching a stream of single photons, whose wave orientation, or polarization, is assigned one of two values. The receiver, known as “Bob,” sets out to detect the photons through a filter system that randomly switches between two other, related, polarizations. Because of the choice of polarizations used, and the vagaries of quantum mechanics, even with a perfect experimental setup Bob would only be able to detect 25% of the photons that Alice sends.

    Alice and Bob can then compare notes, via a public communication channel, on which photons Bob was able to measure. It does not matter if someone eavesdrops on this conversation, because the two do not reveal the polarization results, just the occasions when Alice sent a photon and Bob received one. Hence, Alice and Bob now know the sequence of polarization results that Bob detected, and this serves as the key. It won't do a potential eavesdropper any good to tune in to the quantum channel either. Quantum rules mean that anybody who attempts to listen in to the string of photons will reveal themselves, because Bob will notice a rise in his photon error rate.
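
    A toy simulation of that sifting logic (the polarization angles follow the standard two-state scheme implied above; the model is idealized, with no losses, noise, or eavesdropper, and is not the Los Alamos group's actual analysis):

```python
# Idealized simulation of a two-state ("B92"-style) exchange: Alice encodes each
# bit in one of two non-orthogonal polarizations; Bob tests each photon against
# one of two randomly chosen filters and keeps only the unambiguous detections,
# which become the shared key.

import math
import random

random.seed(1)

def send_and_measure(alice_bit):
    alice_angle = 0.0 if alice_bit == 0 else 45.0      # Alice's two polarizations
    bob_filter = random.randint(0, 1)                  # Bob picks a filter at random
    test_angle = 90.0 if bob_filter == 0 else 135.0    # each filter blocks one of Alice's states
    p_pass = math.cos(math.radians(alice_angle - test_angle)) ** 2   # single-photon Malus' law
    if random.random() < p_pass:
        return 1 if bob_filter == 0 else 0   # a click on filter 0 rules out bit 0, so infer 1
    return None                              # no click: this pulse is discarded later

alice_bits = [random.randint(0, 1) for _ in range(100_000)]
outcomes = [(bit, send_and_measure(bit)) for bit in alice_bits]
sifted = [(a, b) for a, b in outcomes if b is not None]   # kept after the public comparison

print(f"detection rate: {len(sifted) / len(outcomes):.1%}")              # about 25%
print(f"disagreements in sifted key: {sum(a != b for a, b in sifted)}")  # 0 in this ideal model
```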

    Although quantum keys have previously been transmitted across labs and, 2 years ago, over 75 meters in the open, until now nobody has confronted the type of real, swirling atmosphere that would be encountered on the way up to a satellite. In the Los Alamos work, Alice is a source of laser pulses, each just a nanosecond long and so dim that the average pulse contains less than one photon. Once polarized at random in one of two ways, each pulse makes the half-kilometer journey along a disused particle accelerator cutting to a mirror and then back to Bob, a receiving telescope plus optical analyzer stationed alongside Alice.

    The big problem is turbulence, says Buttler, which “causes the beam to wander” and so miss the telescope. To beat turbulence, the team spread the beam to a diameter of 5 centimeters, greatly improving the chance that the telescope will pick up at least some of the beam. The team successfully transmitted a key at a rate of around one key bit for every 400 laser pulses.

    Transmission over a single kilometer may seem a modest achievement, given that satellites are at least 200 kilometers above Earth's surface. But in fact “the real difficulty is the first kilometers,” says Gisin, because above about 3 kilometers the air becomes purer and turbulence is less of a problem. The Los Alamos team is now trying to repeat the feat over longer distances and even in daylight, when ambient light can overwhelm the receiver. So far, “our results are encouraging,” says Buttler. The main challenge, says Rarity, will be actually hitting the satellite with enough beam to do the job. “Given that you only want a few kilobits of key, it can be done,” Rarity believes.

  9. FRANCE

    Researchers Rail Against CNRS Reforms

    1. Michael Balter

    Paris—Geochemist Claude Allègre, France's minister of national education, research, and technology, had his hands full last week. While high school students across the country were marching in the streets for improved conditions in schools, some French scientists were threatening to stage their own protests over proposed reforms of the Centre National de la Recherche Scientifique (CNRS). The CNRS, a huge public agency that employs 11,600 full-time researchers, is the bedrock of French science, and—as Allègre quickly learned—government ministers meddle with it at their peril.

    The controversy began on 10 October, when physicist Edouard Brézin, president of the CNRS's executive board, presented an early draft of the reforms to a board meeting. The confidential document, which Brézin had prepared in collaboration with Allègre and other ministry officials, outlined a number of proposed changes in the agency's statutes, most of which were designed to create closer ties between the CNRS and university labs. The document was quickly leaked to researchers' unions, who raised the alarm about what they saw as a threat to the independence of CNRS labs, despite the fact that most of the agency's research units are already located on university campuses. In an interview with the daily newspaper Le Monde, Jacques Fossey, secretary of the National Union of Scientific Researchers, accused Allègre of trying “to turn the CNRS into a granting agency for university research,” which many CNRS researchers believe is inferior.

    Another noteworthy feature of the reforms was a provision that would strengthen the role of the executive board and place it squarely in charge of CNRS's overall scientific direction, a responsibility it now shares with the organization's director-general. It is no secret that Brézin has not seen eye to eye with CNRS Director-General Catherine Bréchignac, who has been much more lukewarm about moves to reform the organization. Brézin says that this power struggle within the agency is of “little interest to people who don't work at the Rue Michel-Ange [CNRS's Paris headquarters] and will have no effect on the labs.” Bréchignac was unavailable for comment.

    Vincent Courtillot, Allègre's chief adviser, says that the unions have misunderstood the intentions behind the reforms. “We want to bring [CNRS labs] closer to university [labs], not to dissolve one or the other,” he says. Courtillot also argues that 90% of CNRS labs already have the status of “associated units,” meaning that they include scientists from universities, other public agencies, and industry. The remaining 10% are made up solely of CNRS researchers. However, Brézin told Science that a key feature of the reforms would ultimately be to give all CNRS labs associated status.

    But chemist Henri-Edouard Audier of the École Polytechnique near Paris, a member of the CNRS executive board, says that protests arose because the government was trying to push the reforms through without sufficient debate. With 90% already associated, he asks, “why are they making such a big deal out of the other 10%? … We are all for reinforcing links between the CNRS and the universities, but CNRS labs must retain their own mission.”

    In response to scientists' criticisms, Brézin has drawn up a second draft of the proposals, which would allow unassociated CNRS labs to be created and continue to exist for 4 years if there is no obvious partner lab for them. And as Science went to press, Courtillot was due to meet with researchers' unions this week to calm the waters. “We are not against reforms,” says Audier. “But we must first have a discussion of the principles behind them. We need to understand where we are going.”

  10. SCIENCE AND BUSINESS

    Chemical Industry Rushes Toward Greener Pastures

    1. Robert F. Service

    Big chemical firms are trading in their reactors and refineries for research labs and test plots. But will the bet on life sciences pay off?

    When Howard Schneiderman joined the chemical giant Monsanto in 1979 as head of research, he took over a program focused on plastics and other petrochemicals. But he quickly began planting the seeds of the company's revolution. Trained as a geneticist, Schneiderman rapidly thrust Monsanto researchers into the burgeoning realm of genetic engineering. Within a few years company researchers had produced the world's first genetically modified plant—a petunia. It was just the first step on the road to engineering crops by adding genes that make them resistant to weed-killing herbicides or insect pests. The journey took more years, and sweat, than anyone had thought it would: Expressing herbicide-resistance genes in crops, showing that they worked, and then convincing regulators, farmers, and consumers that engineered plants were safe all became major hurdles.

    [Figure: Blossoming fortunes. Monsanto's share price has boomed as it refocused on life sciences. Photo: Osborn and Barr Communications; source: Excite Inc.]

    Although Schneiderman died in 1990, his efforts are certainly bearing fruit now. In 1996, Monsanto began selling soybean seeds resistant to the company's leading herbicide, Roundup; farmers can apply the weed killer without fear of wiping out their budding crop. This year, U.S. farmers planted an estimated 25 million acres (10.1 million hectares) with the herbicide-resistant seeds, nearly one-third of the nation's soybean farmland. Herbicide-resistant corn and cotton and insect-resistant cotton and potatoes have followed. This year, of the nearly 70 million acres (28 million hectares) planted with genetically modified crops worldwide, Monsanto varieties account for over 70%. “We could have sold a lot more if we had the seed,” says Gary Barton, a longtime Monsanto biotechnologist.

    These new ventures are not just a sideline for Monsanto. In the mid-1980s Monsanto execs bet the farm: They began steering their entire $9 billion company toward engineered products for agriculture, as well as animal and human health and nutrition. They unceremoniously jettisoned slumping chemical manufacturing enterprises and plowed the cash into life sciences projects. Wall Street smiled on the move. The ratio of the company's stock price to its earnings per share shot up to around 100:1, although it dropped to around 65:1 last week with the announcement that merger talks with American Home Products had broken down. More traditional chemical companies continue to lag around 16:1.

    With the scent of that kind of growth in the air, other lumbering chemical giants have begun retooling their business plans as well, creating a rush to the life sciences that DuPont research chief Joseph Miller calls “pervasive.” Although the chemical industry has been edging this way for some time, “it's really starting to sprout now,” says William Young, a chemical industry analyst with the Wall Street firm of Donaldson, Lufkin & Jenrette.

    The past year alone has seen a flurry of activity (Science, 14 August, p. 925). Last week, Germany's Hoechst sold off its worldwide polyester business. In July, DuPont completed its buyout of Merck's portion of their joint pharmaceutical venture; in May, DuPont announced plans to begin selling off Conoco, its oil company, a move that's expected to give the company $20 billion to reinvest in the life sciences. Novartis has committed some $850 million to build a pair of new life sciences research facilities in La Jolla, California. Even the stately old Dow Chemical Co. recently spent nearly $1 billion to buy out Eli Lilly's 40% share of a joint venture to modify crops and formed a wide-ranging research alliance with France's Rhône-Poulenc Agro. “Virtually everybody is staking a claim,” says Ed Wasserman, science adviser at DuPont and the president-elect of the American Chemical Society.

    But the chemical industry's high-stakes bets could be risky. Monsanto's successful products are based on a simple change that adds a single gene; future crops may involve more complex genetic manipulation whose success is not yet demonstrated. Using plants as biological “factories” for other chemicals must get around problems of low yield and difficult extraction processes. Then there is the industry's image problem: Public opposition to genetically modified crops is strong in some parts of the world, particularly Europe, and governments are beginning to listen to popular concerns. Ultimately, Barton believes, genetically modified organisms will be universally accepted. But he adds, “In the short run there are bumps.”

    Manipulating margins

    These industry-wide changes, say Young and others, are being driven by a powerful combination of economic and scientific forces. Increasing global competition is threatening to hit traditional chemical companies hard. The industrial chemicals business is a $340 billion a year venture in the United States and $1.4 trillion worldwide. With continued development around the globe, its growth is projected to be brisk. But it is also a mature industry with razor-thin profit margins, and it is notoriously prone to cycles of boom and bust—the economic crisis in Asia prompting the latest slump. Those trends are sending investors looking for fatter returns and executives looking for other ways to boost their company's bottom line, says Young.

    Meanwhile, the life sciences have all the hallmarks of a boom industry. Take agriculture, for example. This once sleepy, albeit profitable, sector has long relied on plant breeding programs to improve crop traits such as yield and pest resistance. “But it takes a long time to see improvements using this strategy,” says Nordine Cheikh, a biotech researcher at Monsanto. “Biotech … allows you to achieve those things much faster” by selecting just the genes you want. “It's a much more laserlike approach to improving traits.”

    That approach led to the success of pioneers such as Dekalb Genetics—a seed producer that has pushed its way into agricultural genomics—in producing crops such as herbicide-resistant corn that carry a single added gene conferring protection. Although these first-generation transgenics are successful, they are only the beginning. “These are Model T's,” says Jerry Caulder, chief executive of Xyris, a San Diego-based agricultural genomics start-up. Now that researchers are gearing up to sequence entire genomes and decipher the function of entire families of genes, they hope to gain the ability to rewrite entire metabolic pathways, to improve numerous traits in tandem. Adds Michael Edgerton, director of genomics research at Dekalb: “Instead of gene by gene, you can look at the whole system. This means your chances of success are much greater.”

    To companies, that potential could lead to a whole new array of products. Whether it is a new drug that prevents disease in people, a new animal feed that improves the health of livestock, or a drought-resistant corn crop, in each case companies foresee being able to charge more for their engineered products than standard commodities and possibly increase their market share.

    With all the big chemical companies wanting a piece of this cake, some of the most active dealmaking has involved agricultural genomics. Last year DuPont paid $1.7 billion for a 20% stake in Pioneer Hi-Bred International, the world's largest seed company, which already has at least some sequence data on 80% of the genes in corn. Monsanto agreed last year to pay genomics pioneer Millennium Pharmaceuticals $118 million to help it set up Cereon Genomics, an agricultural genomics subsidiary, and the two partners intend to spend up to $100 million over 5 years to fund research at the new company. For its part, Dow embarked last month on a 3-year alliance with Biosource Technologies of Vacaville, California, to use functional genomics to improve crop traits.

    Engineering plants to be better food crops is not the only avenue open: Plants and microorganisms can also be modified to produce chemical feedstocks. This long-term goal, says Edgerton, raises the possibility of “breaking out of commodities and animal feeds and looking at new markets to replace petroleum products.” Organisms, adds Caulder, “are the best chemists in the world. Why not use them to produce the chemical feedstocks you want rather than using petroleum?”

    DuPont executives find this logic particularly enticing. “In the 20th century, chemical companies made most of their products with nonliving systems,” said DuPont board chair Jack Krol in a speech last year. “In the next century, we will make many of them with living systems.” The company already has efforts under way to use microbes and plants to manufacture everything from plastics to chiral compounds often used in drug synthesis. Working with researchers at Genencor International, for example, DuPont scientists have engineered a yeast strain to convert sugar into trimethylene glycol, a building block they plan to use to make a polymer for a wide variety of products, such as upholstery fabric and carpets in automobiles.

    “The bigger dream is to do this in green plants,” says Philip Meredith, DuPont's head of biochemical sciences and engineering. In addition to using less energy, and therefore cutting costs, growing chemicals in plants also hits on another theme of the push toward life sciences: sustainability. DuPont's Miller and others argue that by genetically manipulating crops to do some of the chemical synthesis, biotechnology may reduce the environmental “footprint” of the chemical industry. “Intrinsically, biological processes are sustainable at heart,” says Michael Montague, Monsanto's current research chief.

    An example is Dekalb's effort to commercialize low-phytate corn. Phytate is a naturally occurring compound that helps plants store phosphorus. But when corn is fed to animals such as pigs and chickens, phytate inhibits absorption of phosphorus by the animals, requiring farmers to add expensive phosphorus supplements. This helps the animals grow, but it also increases the amount of phosphorus in farm runoff, which collects in streams, bays, and estuaries, promoting fish-killing algae blooms. Low-phytate corn, developed as a hybrid by researchers at the U.S. Department of Agriculture, would both reduce the need for farmers to spend money on phosphorus supplements and reduce unwanted runoff as well, says Edgerton.

    Risks and return

    The effort required to develop such new products is dramatically boosting the chemical industry's R&D budgets. Chemical companies traditionally spend only about 3.5% of their revenues on R&D, while that number can be as high as 20% for pharmaceutical firms, says Stephen Dahms, who directs the molecular biology institute at San Diego State University and keeps close tabs on movements in biotechnology. Where will life sciences companies fall on this line? “I would say it's going to be on the pharmaceutical end of the spectrum,” says Montague. The shift in the industry already seems to be changing the dynamics of the job market, adds Edgerton. “You're seeing a large increase in people with plant and molecular biology [backgrounds] being hired.”

    However, this transformation still faces major challenges, both scientific and societal. One of the biggest is that traits in agricultural products, such as yield and drought resistance, are complex and controlled by the action of many genes and their proteins. That can be both good and bad, notes Edgerton. Although there are many potential genes to target for improvement, understanding the interactions among all of those genes can be difficult, he says.

    Other parts of the strategy face potential difficulties as well. Even if plants can be engineered to produce useful chemicals, such as polymer precursors, there's no guarantee that this can be done economically. Plants normally produce only a small amount of any given product, points out Angelo Montagna, manager of external technologies at Exxon Chemicals in Baytown, Texas. As a result, “a lot of those monomers from plants will be expensive,” he says. Making it profitable to grow chemicals requires boosting the yield of those compounds. But increase it too much and you kill the plant, says Montagna. Even if you overcome this problem and produce large amounts of a chemical, you still have to separate it from the rest of the plant, a task that itself may require capital-intensive extraction equipment. “Biotech is a very sexy area,” says DuPont's Miller. “[But] there's a huge amount of science to be done.”

    By far the trickiest problem facing the new life sciences giants involves widespread public fears of genetically engineered products. Whereas such fears are only moderate in the United States, they resonate elsewhere, particularly in Europe, where much of the public remains skeptical of the safety of genetically modified organisms, particularly agricultural crops (Science, 7 August, p. 768). A Europe-wide poll published last year found that 53% of those surveyed said that current regulations are insufficient to protect people from the risks of biotechnology.

    This public opposition is registering with politicians. Last week, the European Parliament's environment committee called on the European Commission to impose a moratorium on approvals to market genetically modified organisms. In July, the French government announced a temporary ban on commercial growing of genetically modified crops, and there has also been talk of a moratorium in the United Kingdom.

    The atmosphere has grown so tense that last month a U.K. printer pulped the entire run of the September/October issue of the campaigning Ecologist magazine—a special issue focusing on Monsanto—reportedly fearing a libel suit by the company. (The issue has since been reprinted elsewhere.) Environmental organizations also continue to raise concerns that modified crops could cause unforeseen turmoil, such as invading new territory, passing on key genes to weeds, and contributing to the degradation of valuable ecosystems such as salt marshes by allowing farmers to grow salt-resistant crops and therefore plow up the land.

    Gary Jacob of Monsanto says that “the fate of this technology has to be made by society in general.” But such concerns raise questions about the wisdom of the chemical companies' bet on the life sciences. “They increase the risk,” says Montague. Faced with these risks, DuPont, unlike Hoechst, Monsanto, and others, has decided to retain some of its chemicals business. “It's a matter of hedging our bets,” says Miller. “We need a strong and healthy chemicals and materials business. But at the same time we're going to develop our capability in the biological sciences.” But Montague argues that at this critical time, some boldness is necessary: “Unless you begin on the road, you'll never get anywhere.”

  11. NOBEL PRIZES

    NO News Is Good News--But Only for Three Americans

    1. Nigel Williams

    The work honored by this year's crop of Nobel Prizes was done years ago but shows no sign of dating. The physiology prize went for the identification of a signaling molecule whose roles are still being explored; the chemistry prize for work enabling chemists to exploit quantum mechanics; the physics prize for a still-mysterious quantum “fluid”; and the economics prize for studies of poverty that remain all too relevant.

    The surprising discovery that the simple gas nitric oxide (NO) is a powerful messenger molecule in the body—a find that Science honored 6 years ago as “Molecule of the Year” and that helped spawn the impotence drug Viagra—has earned three U.S. researchers this year's physiology or medicine prize. The $975,000 prize was divided equally among pharmacologists Robert Furchgott at the State University of New York, New York City, Louis Ignarro at the University of California, Los Angeles, and Ferid Murad at the University of Texas Medical School in Houston for identifying the first known gaseous signaling molecule and triggering a surge of further work on NO's diverse roles in the body. But the Nobel committee's omission of a fourth researcher, pharmacologist Salvador Moncada of University College London, drew fire from several senior scientists, including Furchgott himself.

    “I'm delighted for the nitric oxide field, which Furchgott created, but I'm very disappointed Moncada has not been included,” says pharmacologist John Vane of the William Harvey Research Centre at the University of London, a 1982 Nobelist for work on prostaglandins. Strict rules allow the Nobel committee to divide a prize among no more than three researchers. But Vane (who once worked with Moncada) and others say that this was the year for an exception, because Moncada carried out some of the key work showing that NO is released by cells.

    Many researchers agree that Furchgott founded the nitric oxide field in the 1980s by recognizing that a mysterious signaling factor was at work in blood vessels. He wondered why drugs acting on blood vessels often gave contradictory and confusing results, sometimes causing a contraction and sometimes a dilation. He went on to show that the endothelial cells lining the inside of the vessels must be intact in order to receive a signal from compounds such as acetylcholine, which causes vessels to dilate. He concluded that the endothelial cells produce some unknown factor that relaxes smooth muscle and causes dilation. He called this factor endothelium-derived relaxing factor (EDRF).

    Then, in 1986 Furchgott and Ignarro independently reported at a conference that EDRF is NO. The finding startled scientists because it showed that a simple gas—one best known at the time as a component of smog—can carry important information in the body, and it triggered a flurry of research worldwide. Over the last decade, researchers have confirmed that NO signals blood vessels to relax, which lowers blood pressure. Murad, working independently, discovered that nitroglycerin, a long-standing treatment for heart disease, works by releasing NO.

    Other researchers showed that the gas triggers erection of the penis by relaxing smooth muscle cells and allowing blood to engorge the organ—a signaling effect enhanced by the drug Viagra. The gas also turns out to play both beneficial and harmful roles in the immune system: It may defend against tumors and infection by killing bacteria and parasites and inducing programmed cell death, but it can also trigger inflammatory diseases when overproduced in lungs and intestines.

    Some researchers say Moncada deserves as much credit for this explosion of research as the Nobelists. Moncada's paper in Nature on the function of NO appeared in 1987, 6 months before Ignarro's own paper in the Proceedings of the National Academy of Sciences. The 1996 Albert Lasker Basic Medical Research Award—often seen as a harbinger of a Nobel Prize—stirred protests when it left Moncada out while honoring Furchgott and Murad for their NO work. The Nobel committee, which declined to comment on its decision, has now inflamed the controversy.

    “Many of the fundamental discoveries have been made by Moncada, and he thoroughly deserved the award. The committee have made a mistake,” says Max Perutz, winner of the 1962 Nobel Prize for structural studies of globin proteins. He and Vane had already complained to the Lasker committee about its failure to include Moncada in its 1996 award. Several other prominent scientists contacted by Science echoed these feelings. NO researcher Rudi Busse of the University of Frankfurt in Germany said he was “exceedingly surprised” that Moncada was not included.

    Because Moncada is originally from Honduras, his exclusion has particularly angered Spanish researchers. The Spanish Cardiac Society, at its meeting this week, plans to consider a protest to the Nobel committee. Pharmacologist Pedro Sanchez Garcia at the Autonomous University of Madrid welcomed Furchgott's recognition but says he is “very sad” about the decision not to include Moncada.

    Moncada himself told Science that he was surprised by the Nobel committee's decision. “One wonders what criteria they use,” he says. Furchgott, although delighted to win, said that he wished the Nobel committee had been able for one year to change the rules and include four people. “I think very highly of the work of the other winners, but I'm unhappy Salvador is not included.”

  12. NOBEL PRIZES

    Quantum Chemistry for the Masses

    1. Ivan Amato*
    1. Ivan Amato is a correspondent for National Public Radio and the author of Stuff, a book about advances in materials science.

    To chemists, quantum mechanics was once the scientific equivalent of the Hope diamond—beautiful and priceless, but essentially out of reach. This mathematical framework for understanding the behavior of atoms and their electrons—and hence of the chemical bonds they form—in principle opens the way to a complete understanding of chemical reactions. But as Paul Dirac, one of the theory's founders, put it in 1929, “the difficulty lies only in the fact that application of these laws leads to equations that are too complex to be solved.”

    The Nobel Prize in chemistry this year goes to physicist Walter Kohn at the University of California, Santa Barbara, and to mathematician and chemist John A. Pople of Northwestern University in Evanston, Illinois, for helping bring the diamond within reach. “Doing computational quantum chemistry is no longer a pipe dream,” says Henry (Fritz) Schaefer, a longtime quantum chemist at the University of Georgia, Athens. Over the past few decades, Kohn and Pople independently developed theoretical shortcuts and computational methods that have created a foundation for the burgeoning field of quantum chemistry, enabling chemists to tap the vast theoretical power of quantum mechanics to understand, and even predict, the behavior of atoms, molecules, and the materials made of them.

    In the mid-1960s, Kohn, a Vienna-born physicist who escaped as a teenager to England when the Nazis were overtaking Austria, began unlocking quantum mechanics for chemists by providing an alternative to the theory's central equation—Schrödinger's equation. The wave function that solves this famous equation describes the behavior of electrons around atomic nuclei, and the equation is relatively easy to solve for very simple bits of matter like a hydrogen atom, with its one proton and one electron. Even for small molecules with only 10 or 20 atoms, though, solving Schrödinger's equation can become computationally impractical. And biological molecules are in another realm altogether. “Imagine many … biological molecules or organic molecules with maybe hundreds of thousands of atoms with millions of electrons,” Kohn said at a press conference last Tuesday. “In the Schrödinger picture,” he continued, “we have a function that depends on millions of electrons.”

    Kohn's Nobel-caliber discovery was that a computationally much simpler accounting and mapping of the spatial distribution, or density, of the system's electrons can take the place of the mathematically intractable wave function. That insight became the foundation of what is now called density functional theory. From the density map, chemists can go on to infer the stability, shape, and reactivity of the system. “It is astonishing that such a simple quantity like density can take the place of a wave function, which might be a function of a million variables,” says Kohn.
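
    In schematic terms, the simplification Kohn exploited can be written as follows (a textbook sketch in standard notation; the article itself gives no equations):

        \Psi(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N)\ \text{(a function of $3N$ coordinates)} \;\longrightarrow\; n(\mathbf{r}) = N \int |\Psi(\mathbf{r}, \mathbf{r}_2, \ldots, \mathbf{r}_N)|^2 \, d\mathbf{r}_2 \cdots d\mathbf{r}_N\ \text{(a function of 3)},

        E_0 = \min_{n} E[n(\mathbf{r})] \quad \text{(the Hohenberg--Kohn result underlying the approach)},

    so the ground-state energy, and from it the stability and geometry of a molecule, can in principle be found by varying a single three-dimensional density rather than a wave function of 3N variables.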

    Observers say that at least half of all research papers incorporating computational chemistry now use density functional theory—and many apply it using computational tools designed by Kohn's fellow Nobelist, Pople. “Pople has been the master builder, who has made it possible for chemists to use quantum chemical methods as day-to-day laboratory tools along with their experimental equipment,” said the Royal Swedish Academy of Sciences in the announcement of the prize.

    Pople, who heard about the award while breakfasting with his wife at a Houston hotel, developed a range of mathematical tools and computational methods that opened up quantum chemistry for the broad mass of scientists. For example, he developed and validated an ever-improving and expanding library of “basis functions,” mathematical building blocks that describe the shapes of electrons' orbitals in specific molecular settings, from which the different contributions to a molecule's energy (such as kinetic energy, nuclear attraction energy, and electron-electron repulsion energy) can be computed.

    He incorporated these and other quantum-chemical tools into his GAUSSIAN computer program in 1970. Its successors have become as standard a tool for chemists as hammers are for carpenters. Users can, for example, stipulate a particular set of atoms and have the programs calculate the most stable molecular structure and geometry they can assume. The programs can also trace out detailed reaction mechanisms that would be extremely hard or impossible to discern experimentally.

    With powerful and easy-to-use tools like these, quantum chemistry has infiltrated every nook and cranny of the chemistry community and beyond: “Atmospheric scientists, astrophysicists, geologists, and even neurologists are using it,” says Mark Ratner, a colleague of Pople's at Northwestern. Chemists in search of new fuels for aerospace, for example, rely on quantum-chemical methods to screen candidate molecular structures for their energy content and stability, while astrophysicists trying to make sense of the radio emissions from interstellar matter use the methods to simulate the radio spectrum of candidate molecules. From being an unattainable diamond, says Ratner, “quantum chemistry has become a black-box procedure that anyone can use.”

  13. NOBEL PRIZES

    A Prize for Quantum Trompe l'Oeil

    1. James Glanz

    “An electron is an electron is an electron,” Horst Störmer says cheerfully, a day after he and two colleagues won the 1998 Nobel Prize in physics for work that seemed to show just the opposite. Störmer—at Columbia University in New York and Lucent Technologies' Bell Laboratories in New Jersey—shared the prize with Daniel Tsui of Princeton University and Robert Laughlin of Stanford University for discovering a seeming exception to the textbook rule that every electron has the same electric charge. Sandwiched in a semiconductor at close to absolute zero and saturated in a powerful magnetic field, they found, electrons can perform an elaborate dance in which they act as if they had just fractions of that indivisible charge.

    Called the fractional quantum Hall effect, this dance of fractional charges reflects the properties not of single electrons but of a “quantum fluid” melded from the electrons and the lines of magnetic force. “It's so totally unlike states of matter that we had previously encountered,” says Princeton's Philip Anderson, who won a physics Nobel in 1977. The finding is so suggestive that its implications could reach outside semiconductors to the swirling, evanescent particles of “empty” space itself.

    The fractional quantum Hall effect is the high-tech grandchild of an effect observed in 1879 by Edwin Hall. He applied a magnetic field at right angles to a current-carrying gold plate and found that a voltage drop developed across the plate. Rather than flowing straight down the plate, the electrons tried to orbit the magnetic field lines, causing electrons to pile up on one side and produce the voltage drop.

    The voltage drop across the plate, a phenomenon now simply called the Hall effect, is directly proportional to the strength of the magnetic field. But in 1980 a German physicist, Klaus von Klitzing, tried the same experiment at low temperatures and high magnetic fields in a high-quality silicon device in which electrons can be induced to move only along a two-dimensional (2D) surface. Under those conditions, the electrons could execute their magnetic orbits almost undisturbed by collisions with imperfections or thermal vibrations in the material. This freedom allowed a new effect to emerge: The voltage drop changed in steps, rather than smoothly, as the field was cranked up.

    The steps, von Klitzing realized, showed that the electrons in the 2D layer had become quantized. Like electrons in an atom, they could only swirl around in orbits at particular energies, determined by the equations of quantum mechanics. The lowest of these energies, called the ground state, is the one a cold electron naturally falls into. But if the ground states are all occupied, the electron goes into the first available excited state, corresponding to a larger orbit.

    The stronger the magnetic field von Klitzing applied, the tighter all of those orbits became and the more of them could fit across the face of the silicon sample. At relatively weak fields, ground state orbits were scarce, and the electrons piled up in the excited states. At higher fields, however, more electrons could settle into the ground state, lowering the “filling factor”—in essence, the number of these quantum levels the electrons occupy. Each time the filling factor dropped by one, meaning that another excited state had emptied, the Hall voltage suddenly shifted. “As you hop from one quantum level to the next one, you see the steps,” says Störmer. This quantum Hall effect won von Klitzing a Nobel Prize in 1985—and it had the strange offspring being celebrated this year.
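
    In textbook form (the article itself gives no formulas), the filling factor compares the number of electrons to the number of ground-state orbits the field makes available, and the Hall resistance locks onto quantized plateaus:

        \nu = \frac{n_e h}{e B}, \qquad R_{xy} = \frac{V_H}{I} = \frac{h}{\nu e^2} \approx \frac{25.8\ \mathrm{k\Omega}}{\nu},

    where n_e is the sheet density of electrons and B the magnetic field. Von Klitzing's plateaus appear at integer values of ν; the gallium arsenide samples described below added plateaus at fractions such as ν = 1/3.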

    In 1982, Störmer and Tsui were studying electronic effects in a new gallium arsenide-based semiconductor made by Arthur Gossard, now at the University of California, Santa Barbara. The new device allowed the electrons even more freedom than they had in von Klitzing's silicon-based one. When the researchers boosted the field strength to levels above those that von Klitzing had explored, they saw further steps. The first new step popped up at a filling factor of one-third—just what would happen if particles with a third of the electron's charge were lurking in the sample.

    “We've discovered quarks,” Tsui quipped at the time, referring to elementary particles with a one-third charge that can be detected only in particle accelerators. Laughlin, a theorist, later worked out what was really going on. The electrons' high mobility in the new semiconductor had freed them to interact in new ways, producing “quasi-particles”: collective effects that mimicked the pip-squeak particles.

    The quasi-particles are quirky cousins of an effect that occurs when a charged particle zips through an ordinary semiconductor or an ionized gas: Particles of the opposite charge (or “holes,” where same-charge particles are absent) cloud around it and shield it from the outside world. In the cold, magnetized quantum sea of electrons in gallium arsenide, the shielding happens most easily when three vortices, each with a positive one-third charge, cluster around an electron, although the vortices and electrons can also congregate in other ratios. Sometimes those vortices float freely, and they are the quasi-particles.

    The phenomenon shows, says Laughlin, “what the laws of quantum mechanics can do that you would never in your wildest dreams have thought of.” And Tsui's quark quip might turn out to have more substance than it seems, says Laughlin: As real as ordinary quarks appear to be, they could conceivably be the manifestation of a related quantum-mechanical trompe l'oeil, conjured up by the evanescent particles in the continuum of free space. But most physicists are still mulling over the strangeness of the Nobel-winning fluid. Says Steven Kivelson of the University of California, Los Angeles: “It's really deep, and it's really quantum mechanical.”

  14. NOBEL PRIZES

    Famine Survivor Wins Economics Prize

    1. David Malakoff

    In 1943, despite a robust economy and an ample harvest, a famine engulfed the Indian state of Bengal and killed up to 3 million people. Last week, a child survivor of that disaster who went on to develop an economic explanation of how starvation could occur amid plenty won the Nobel Prize in economic sciences. In awarding the prize, the Royal Swedish Academy of Sciences cited 64-year-old Amartya Sen for his contributions to the field of welfare economics and for restoring “an ethical dimension to the discussion of vital economic problems.”

    Sen, who earned a doctorate from Cambridge University in 1959 and recently returned to its Trinity College to teach, has spent the past 3 decades on problems ranging from how government spending choices influence individuals to how researchers should best calculate poverty statistics. But he is perhaps best known for work that offered a fresh look at the economics of famine. Studies of disasters in India, Bangladesh, Ethiopia, and the Sahel region of Africa led Sen in the 1970s to challenge the conventional view that famines are caused solely by food shortages. Instead, he showed that other factors—such as declining wages and rising food prices caused by bad weather or flawed government policies—influence the distribution of food and aggravate famine conditions for the poorest people. “Many past famines have been associated with high inflation, making the groups that fall behind in the inflationary race selected victims of starvation,” he wrote in a 1996 paper for the journal Development.

    “[Sen] achieved something very rare in economics,” says a former student, economist Prasanta Pattanaik of the University of California, Riverside, who praised Sen's ability both to develop highly abstract theory and to apply it to real-world problems. In studying poverty, for example, Sen helped devise measures that do more than just show how many people fall below a nation's poverty line. The new indices, now in wide use, show how far below the line people fall and also identify social factors—such as poor health and limited education—that reduce economic mobility. The approach, described in dozens of books and papers, has provided policy-makers with useful information for devising solutions, including the knowledge that policies designed to help individuals just a few dollars below the line may do little for those deeper in poverty.
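
    The article does not name a particular index, but the best-known measure of how far people fall below the line is Sen's own 1976 poverty index, which (in its common asymptotic form) combines the headcount ratio H, the average income shortfall I of the poor, and the Gini coefficient G_p of inequality among the poor:

        P_{\mathrm{Sen}} = H \left[ I + (1 - I) \, G_p \right].

    Unlike the headcount alone, this quantity rises both when the poor slip further below the line and when deprivation is concentrated among the poorest of the poor.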

    Sen has “pushed people to think about poverty in much broader dimensions” and influenced international development policies, says economist Lyn Squire of the World Bank in Washington, D.C. In particular, he says, “the development community has moved away from looking at poverty with a narrow focus on income and [toward] ways to empower the poor to make choices.”

    The ethical implications of economic policy have long concerned Sen, who until recently held chairs in both economics and philosophy at Harvard University. “Economic analysis,” he argued in a 1990 speech to Italy's Agnelli Foundation, “has something to contribute to substantive ethics in the world in which we live.”

  15. NEUROSCIENCE

    Researchers Go Natural in Visual Studies

    1. Marcia Barinaga

    Artificial stimuli are usually used to probe visual processing, but recent work with natural images is providing some surprising new insights

    When you look out your living room window, chances are you will see a scene that's a lot more complex than bars of light or fields of moving dots. But those are the kinds of visual stimuli neuroscientists have used for decades to understand how the brain interprets the visual world. “The idea is that what you learn from these simple stimuli is going to generalize and tell you how [the visual system] would respond to a real scene,” says neuroscientist Bruno Olshausen of the University of California (UC), Davis. “But that is just an assumption … that has never been tested”—until now, that is.

    Within the past decade, a small cadre of neuroscientists has begun to determine how the visual system responds to “natural scenes,” defined as images from the real world, whether they depict jungle foliage or a city street. The work is providing new insights into why visual neurons have evolved the properties they have, what controls the responses of individual neurons, and how our brains process the images we actually see. It turns out, for example, that when the visual system responds to complex natural scenes, interactions between neurons are much more important than had been previously known. Researchers also hope that natural scenes will help them understand the functions of some visual neurons whose activities until now have been a complete mystery.

    Among the first neuroscientists to experiment with natural scenes were computer modelers, in part because it is easier to analyze a model's response to a complex scene than that of a real brain. In one striking example, Robert Barlow and his colleagues at the State University of New York Health Sciences Center in Syracuse used a computer model to ask how a horseshoe crab's eye responds to a natural scene. Neuroscientists have characterized the response properties of the crab's visual neurons in such fine detail that they can simulate every neuron in a computer model, and the model will process an image just as the animal's eye would.

    Barlow and his colleagues gained a “crab's eye” view of the world by mounting little movie cameras on the shells of horseshoe crabs and recording what the animals saw while exploring their natural underwater environment at the Marine Biological Laboratory in Woods Hole, Massachusetts. They then fed the movie into their model to see how it would respond. One clear finding: Objects resembling potential mates sparked the most robust responses from the neurons. “The eye of this animal is tuned to best detect objects that are matelike in terms of size, contrast, and motion,” Barlow says. “We would never have learned this had we not [recorded] what the animal sees in its natural habitat.”

    Other researchers have taken a different modeling approach to explore how the more advanced visual systems of mammals operate. This work began with an analysis of what sets natural images apart from artificial stimuli. Natural scenes tend to have smooth transitions in space and time. “If one point [in a natural image] is white,” notes UC Berkeley neurophysiologist Yang Dan, “the next point will probably be white also. If there is a wall, the next moment the wall is still likely to be there.” That regularity, a statistical property shared by natural scenes, can be described mathematically. In contrast, artificial stimuli have more abrupt changes. In some cases, these are even random—the visual equivalent of white noise.
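
    A minimal numpy sketch of the regularity Dan describes (an illustration only, using a blurred random image as a crude stand-in for a natural scene rather than the researchers' own data or code):

        import numpy as np

        rng = np.random.default_rng(0)

        def neighbor_correlation(img):
            # Correlation between each pixel and the pixel immediately to its right.
            left = img[:, :-1].ravel()
            right = img[:, 1:].ravel()
            return np.corrcoef(left, right)[0, 1]

        # White noise: every pixel independent of its neighbors.
        noise = rng.normal(size=(256, 256))

        # Crude stand-in for a natural scene: the same noise blurred so that nearby
        # pixels share values, as neighboring points in real images tend to do.
        kernel = np.ones(9) / 9.0
        smooth = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, noise)
        smooth = np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"), 0, smooth)

        print("neighbor correlation, white noise:    %.3f" % neighbor_correlation(noise))
        print("neighbor correlation, smoothed image: %.3f" % neighbor_correlation(smooth))

    The correlation is near zero for the noise and close to one for the smoothed image, which is the redundancy the models discussed below try to remove.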

    The regularity of natural scenes means that, as far as the brain is concerned, some of the information coming from them is redundant. Theorists, including Olshausen, David Field of Cornell University in Ithaca, New York, and Joseph Atick of Rockefeller University in New York City, have used computer models to predict the properties of visual neurons that would minimize this redundancy and encode information from a natural scene most efficiently. Their models have come up with neural properties like those of actual neurons in the visual system.

    Visual tracks.

    The yellow dots trace the eye movements of a monkey viewing a natural scene. The green circles show the patches of the image that fall in the receptive field of a neuron as the eye moves; the brightness of the circles reflects that neuron's activity.

    CREDIT: JACK GALLANT/UC BERKELEY

    For example, in the late 1980s, Atick, then at the Institute for Advanced Study in Princeton, New Jersey, used information theory—a type of mathematical analysis used by communications engineers—to ask how early levels of the visual system might process natural scenes efficiently. The answer: The neurons could remove redundancy by converting the signal into one in which each element varies independently of the elements that precede or follow. In engineering parlance, such a signal is known as a “white” signal.

    Atick then used a computer model to ask how neurons would whiten a signal. It predicted very closely the response properties of neurons in the retina and the lateral geniculate nucleus (LGN), the brain area that first receives visual signals before they enter the visual cortex. These properties cause the neurons to respond most strongly to contrast changes in either space or time and diminish their responses when there is no change in contrast.
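
    The whitening idea itself can be sketched in a few lines (again an illustration in numpy, not Atick's model): rescaling each Fourier component of a correlated signal by the inverse of its own amplitude spectrum flattens the power spectrum, leaving successive samples nearly uncorrelated.

        import numpy as np

        rng = np.random.default_rng(1)

        # A temporally correlated signal: low-pass filtered noise, standing in for
        # the smoothly varying input that a natural scene delivers over time.
        raw = rng.normal(size=4096)
        signal = np.convolve(raw, np.ones(25) / 25.0, mode="same")

        def lag1_correlation(x):
            x = x - x.mean()
            return np.corrcoef(x[:-1], x[1:])[0, 1]

        # Whitening: scale each frequency component by the inverse of the signal's
        # own amplitude spectrum (a small constant avoids division by zero).
        spectrum = np.fft.rfft(signal)
        white = np.fft.irfft(spectrum / (np.abs(spectrum) + 1e-12), n=signal.size)

        print("lag-1 correlation before whitening: %.3f" % lag1_correlation(signal))
        print("lag-1 correlation after whitening:  %.3f" % lag1_correlation(white))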

    In 1995 Dan, then a postdoc with Clay Reid at Rockefeller University, took the next step. She recorded directly the response of LGN neurons to natural scenes to see whether they do indeed whiten incoming visual signals as predicted by the model. Dan did her experiment on anesthetized cats whose eyes were trained on movies rented from a video store and found the expected randomness in the neurons' responses. “From the signal at any moment, you can't predict what is going to happen at the next moment,” she says.

    But did those responses depend on the characteristics of the natural scene, she wondered, or did the LGN neurons “just want to fire white?” To answer that question, she showed the animals visual white noise: checkerboard patterns that changed randomly from one moment to the next. The signal that came out was “definitely not white,” she says. Rather than being unpredictable, the neuronal firing patterns were filled with negative correlations. “If at one moment there is a high firing rate,” says Dan, “it is more likely that in the next moment you will have a low firing rate.” That confirmed that LGN neurons don't simply whiten all the signals but transform them in a way that appears specialized to handle natural scenes.

    Olshausen and Field did a computer-modeling analysis similar to Atick's for the primary visual cortex, the visual area that receives information from the LGN, and their model evolved response properties just like those of primary visual cortex neurons. Field and Olshausen haven't yet verified the results with actual recordings, but their results, along with Dan's and Atick's, indicate how neurons have evolved to process visual information efficiently. The work, says Olshausen, illustrates “the importance of considering what the system was designed to do” when trying to understand it. Or as neuroscientist Bill Geisler of the University of Texas, Austin, puts it: “If you want to understand the functioning of the visual system, it makes sense to look at its functioning in the environment in which it evolved.”

    Neuronal mysteries

    Researchers hope to put that principle to work to learn more about some of the more complex properties of visual neurons that they've glimpsed but don't yet fully understand. One such property concerns how a neuron's responses might be altered by the activity of other neurons responding to different parts of the same visual scene. These neuronal interactions may be masked in traditional experiments, in which the animals' eyes are fixed at a point on the screen while stimuli are flashed within the small part of the visual scene surveyed by a particular neuron, known as its receptive field. In these experiments, areas that fall within most other neurons' receptive fields generally contain little information. Natural scenes, on the other hand, usually fill the eye's entire field of vision, so areas outside a neuron's receptive field are as crowded with visual information as those within. In addition, subjects freely viewing a complex natural scene flick their eyes from spot to spot, exposing each neuron to a procession of complex image patches.

    Researchers knew that these differences would make neurons respond differently to natural scenes than to simple artificial stimuli. But it was not clear just how significant those differences would be, says UC Berkeley neurophysiologist Jack Gallant. Gallant's team has been exploring this question by showing a monkey a natural scene while recording from individual neurons in the animal's visual cortex.

    Gallant's team creates “review” movies, which show the sequence of the scene's patches that fell in and around a neuron's receptive field as the animal looked around a natural scene. The team can play these movies back repeatedly to test the consistency of the neuron's responses. Their experiments already point to some intriguing effects that natural scenes have on the firing of individual neurons.

    Gallant's team first characterized the responses of a neuron with gratings—a common form of artificial stimulus that looks somewhat like a small patch of corrugated tin roof—and then allowed the monkey to view a natural scene freely. Most of the neurons' responses to the natural scenes were muted compared to their responses to gratings. That was no big surprise, because previous work had shown that features that lie just outside a neuron's receptive field—and natural images are loaded with such information—tend to damp its response. The effect is thought to be caused by the activity of neurons that respond to those features. But additional work indicated that this neuronal interaction is more complicated than simple damping, says Gallant. Parts of the image outside a neuron's receptive field “are sculpting the responses of the neuron. They make the cell respond to fewer things, but the things it responds to, it responds to better.” These effects are bigger than researchers had suspected from experiments with artificial stimuli, says Field. “Now that they seem to play a major role in the cell's activity,” he adds, “we need a rigorous approach to actually finding out what they are doing.”

    Researchers hope that experiments with natural scenes will also help them solve the mystery of dormant neurons. “Periodically [you] record from cells that are just silent,” says Field. Most often encountered in visual processing areas beyond the primary visual cortex, these inert neurons probably detect features or combinations of features that no one has thought of testing. Field and others hope that recording from the neurons while the animal views natural scenes will reveal features that make the neurons fire, providing clues to their normal roles. “If you use natural images, at least you are in a domain that the animal evolved to deal with,” Gallant says.

    Among those taking this approach are Dario Ringach and Robert Shapley at New York University. After finding neurons in the primary visual cortex of monkeys that didn't respond to any of the standard stimuli, Ringach tried movies. Where gratings and bars had failed, Sleeper and Goldfinger brought the neurons to life. “The most striking thing so far,” says Shapley, is “that you can actually get responses from cells [that are silent] with the usual battery of tests.” The finding means, he says, that you can check back to see what images were on the screen before, during, and after the moment when the neuron fired and look for patterns that may reveal what the neuron responds to.

    The technique requires some intuition. For example, if a neuron seems to fire whenever there is red on the screen, Ringach reanalyzes the movie for when red is or isn't in view and checks whether there is any correlation with the neuron's firing.

    But the associations may not be that simple: The neurons may be responding to combinations of features, and those combinations may be spread out in space or time. Such associations are unlikely to jump out at a human observer. “The real problem,” says Ringach, is to use computer analysis to “tell what the cell is responding to without guessing.” To scan all possible sets of events that might have triggered a neuron “isn't in principle impossible,” says Shapley; “it just takes a lot of computing time.”
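
    One standard way to automate the “check back” step Shapley describes, though the article does not name it, is a spike-triggered average: collect the stimulus frames shown a fixed lag before each spike and average them, letting whatever feature drives the cell emerge from the noise. A toy sketch with synthetic data:

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy data: a "movie" of 5000 random 8x8 frames and a synthetic spike train
        # that fires two frames after the upper-left patch of the image is bright.
        frames = rng.normal(size=(5000, 8, 8))
        drive = frames[:, :3, :3].mean(axis=(1, 2))
        spikes = np.roll(drive, 2) > 0.5          # spike at time t follows the feature at t-2

        def spike_triggered_average(frames, spikes, lag):
            # Average the frame shown `lag` steps before each spike.
            spike_times = np.nonzero(spikes)[0]
            spike_times = spike_times[spike_times >= lag]
            return frames[spike_times - lag].mean(axis=0)

        sta = spike_triggered_average(frames, spikes, lag=2)
        print(np.round(sta, 2))                   # the upper-left 3x3 corner stands out

    Real analyses must cope with combinations of features spread across space and time, which is exactly the computing burden the researchers describe.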

    Some neuroscientists, such as David Hubel of Harvard University, question whether the effort is worth it. Hubel, who received a Nobel Prize for his work using artificial stimuli to characterize visual neurons, argues that natural scenes are too complicated and too loosely defined to provide useful information about the mechanisms of vision. “It is all very well to say that there is something magic about a natural scene,” he says, but it makes more sense to test the visual system instead with “more elaborate artificial scenes” that can be carefully designed and controlled.

    “Nothing is magical about natural scenes,” responds Gallant. “They are just another tool” to be used in addition to artificial stimuli, because they are useful in revealing adaptations our visual system has made to interpret the natural world. Eventually, Olshausen predicts, more neuroscientists are bound to find uses for this tool: “This is just the beginning. Five years from now, there will be entire sessions of the annual neuroscience meeting talking about natural images.”

  16. ECOLOGY

    The Great DOE Land Rush?

    1. Kathryn S. Brown*
    1. Kathryn S. Brown is a writer in Columbia, Missouri.

    The department is considering selling off land around its national labs that has been undisturbed for decades; these zones have become protected havens for wildlife and valuable locations for ecological research

    Since its creation in 1948, Oak Ridge National Laboratory (ORNL) in Tennessee has buffered itself from civilization with a natural security system: a tract of wilderness covered with oak, hickory, and pine forests. Ecologists have had a field day in Oak Ridge's 14,000-hectare protected zone—home to peregrine falcons, cerulean warblers, and 18 other rare animal species—and in similar swaths of wilderness surrounding six other Department of Energy (DOE) national laboratories. These zones have yielded data on everything from biological invaders to soil carbon levels that inform the climate change debate. “This land is incredibly valuable,” says ecologist James Ehleringer of the University of Utah, Salt Lake City.

    But these ecological havens are fraying at the edges. As part of an effort to cope with post-Cold War budget cuts, DOE has been quietly divesting itself of wilderness parcels no longer deemed essential to safeguarding the nation's weapons labs. So far, three labs have given up more than 1200 hectares (ha) of buffer zone to local governments and the U.S. Bureau of Land Management, which in turn have sold much of this land to developers for house lots, landfills, and commercial construction. Another 5200 ha may soon be put on the auction block. Scientists acknowledge that they have been slow to take up the cause. “These are federal reservations, and many researchers consider them inviolate,” says James MacMahon, an ecologist at Utah State University in Logan. “I don't think it's on anyone's radar.”

    But the deals are setting off alarms among environmental groups. The Nature Conservancy (TNC), for instance, plans to complete a biodiversity database this month that documents roughly 1500 species, including rare or endangered animals and plants, in ORNL's buffer zone. This and other grassroots efforts are gelling into a campaign to persuade DOE to set aside as much of the buffer zones as possible for wildlife and research. “DOE should take stock of what it has before making any decisions” on land use, says Curt Soper of TNC in Seattle.

    The disputes involve lands originally valued for their emptiness. Early ecological studies done in the buffer zones tracked the fate of radioactive waste dumped or leaked by the secretive labs, says Steve Hildebrand, ORNL's environmental sciences director. Then in the 1950s, outside researchers began discovering the lands as valuable spots to study broad ecosystem questions. “Scientists came to collaborate, and the momentum grew,” Hildebrand says. In 1972, DOE's predecessor, the Atomic Energy Commission, moved to protect such areas as wildlife and research refuges by designating the first National Environmental Research Park (NERP), at the Savannah River site in South Carolina. Over the next 20 years, six more parks were established (see map).

    Diversity.

    DOE's research parks are set in a variety of ecosystems.

    CREDITS: (CLOCKWISE FROM LEFT) THE NATURE CONSERVANCY; FERMILAB VISUAL MEDIA SERVICES; OAK RIDGE NATIONAL LAB; SOURCE: SREL

    The parks quickly became a favorite haunt of ecologists. At the Idaho National Engineering and Environmental Laboratory in Idaho Falls, for example, researchers have compiled a stark picture of a non-native weed called cheatgrass, which has choked out sagebrush and fueled disastrous fires across much of the Great Basin. In the lab's buffer zone, cheatgrass is being held at bay by a hardy stand of native plants, says Jay Anderson of Idaho State University in Pocatello. “Our research shows just how much cheatgrass relies on overgrazed land, where you don't have these viable populations of native species,” he says. And at Fermi National Accelerator Laboratory (Fermilab) outside Chicago, ecologists have spent 25 years restoring 500 ha of tallgrass prairie. The young grassland hosts research on how plants cope with grazing, for instance.

    At ORNL, lab lands became a “field of dreams,” says atmospheric scientist Dennis Baldocchi of the National Oceanic and Atmospheric Administration's office in Oak Ridge, attracting scientists who study climate change and nutrient cycling. One resource is a 30-year record of soil carbon levels at ORNL. One of only a few carbon histories taken in a southern temperate forest, the project offers insights into how carbon cycles between forests, soils, waters, and the air. “A lot of work on the carbon budget depends on long-term soil histories that you only get at sites like Oak Ridge,” says Susan Trumbore, a biogeochemist at the University of California, Irvine. The National Science Foundation's Long Term Ecological Research sites provide similar data, she says, but many NERP studies have been running years or decades longer. What's more, Trumbore says, ORNL forests are protected from tourists, who sometimes disturb long-term experiments in national parks. Most projects in Oak Ridge's buffer zone are threatened by pending land deals, adds Pat Parr, ORNL's land area manager.

    But some DOE officials argue that ecosystem studies fall outside the agency's purview. In a January 1997 audit, the agency's Inspector General (IG) asserted that DOE should stick to its central post-Cold War missions—stockpile stewardship, environmental cleanup, technology development, and research. The report recommended selling roughly a quarter of NERP lands—125,000 ha valued at $126 million—at ORNL, Idaho, and the Pacific Northwest National Laboratory in Richland, Washington. DOE managers agreed in principle, although they say that a higher, undetermined percentage of NERP territory should be kept. “We need a lot of that land,” says Andrew Duran of DOE's field management office, partly because scattered patches are contaminated with radionuclides or toxic chemicals and are serving as test-beds for cleanup technologies. In response to the IG report, his office has asked each lab to assess how much land it needs.

    Observers say the IG report essentially gives a green light to land deals. In Oak Ridge, city administrators have already received about 2400 ha of ORNL land for businesses, homes, schools, and other facilities. They hope to buy 3200 ha more. “We have to have room to grow,” says Robert McDaniel, Oak Ridge city manager. The IG report deems expendable roughly 70% of the ORNL buffer zone used for research. Other labs have similar stories: At Los Alamos National Laboratory in New Mexico, 1800 ha—about 15% of the lab's land—could be transferred as early as 2001 to the county and to the Pueblo of San Ildefonso for commercial and residential use. In Richland, a 36,500-ha area known as the North Slope hangs in the balance, as congressional representatives squabble over the county's agricultural needs. And at Fermilab, county officials have aired the possibility of running a four-lane highway through wetlands on the 60-ha NERP.

    The stakes are higher now that conservation groups have begun documenting the ecological richness in the buffer zones. Three years ago, the Audubon Society, TNC, and other organizations mounted a campaign to persuade DOE to refrain from selling the 30,000-ha Fitzner-Eberhardt Arid Lands Ecology Reserve at the Pacific Northwest lab, one of the few pristine shrub-steppe ecosystems left in North America and a hotbed for research on everything from Canada geese and salmon spawning to air and groundwater quality. “To think that DOE would suddenly decide it was no longer important to protect those lands just boggled my mind,” says Idaho State's Anderson. But the nature groups' strategy—writing editorials for local newspapers and organizing town meetings—paid off. Last year, DOE signed a contract with the U.S. Fish and Wildlife Service, which will manage the reserve as a wildlife refuge.

    At Oak Ridge, TNC hopes its new database—which tracks roughly 400 animal and 1100 plant species, including rare migratory birds, salamanders, fish, and bats, on ORNL's 14,000-ha reservation—will spotlight the most vibrant areas in the buffer zone. Parr says the lab strongly advocates protecting the reserve for research. But others add that the labs face an uphill battle unless ecologists begin to raise their voices in protest. Says Janet Anderson, a DOE adviser, “It falls on [them] to support the integrity of these lands.”

  17. PLANT GENOMICS

    A Bonanza for Plant Genomics

    1. Elizabeth Pennisi

    A new initiative could provide the biggest ever pot of government money for plant genomics

    When the National Science Foundation (NSF) announced last year that it had some $40 million in its 1998 budget to launch a plant genome initiative, Virginia Walbot sent an e-mail to nine colleagues asking if they would be interested in applying jointly for a grant to analyze the corn genome. Ninety seconds later, she had her first affirmative response; within 2 hours, all had agreed to team up with Walbot, a plant molecular biologist at Stanford University. Similar partnerships were forming throughout the plant biology community to take advantage of this windfall. Last month, the networking paid off when NSF announced that Walbot and her colleagues, along with 22 other groups,* would receive the first grants from the new initiative: an assortment of projects from mapping the corn genome to determining the genes responsible for fruit development in tomato plants.

    These investments are just the initial installments of what promises to be a new bounty of plant research funds. Already, Congress has approved another $50 million for NSF's plant genome program in 1999, and there's talk of increasing funding for plant genomics—and adding an animal component—to $85 million in the year 2000. Plant scientists are thrilled by this new largess. Until now, U.S. government support for crop plant genomics has been sparse; the U.S. Department of Agriculture (USDA), for example, has provided only a few million dollars a year for such work. “We've been rather impoverished,” says plant geneticist Chris Somerville of the Carnegie Institution of Washington at Stanford University. Now, says USDA's Ed Kaleikau, NSF's initiative will provide “the biggest infusion of money for plant biology, perhaps ever.”

    The initiative is the outgrowth of a proposal, originally made in 1995 by the National Corn Growers Association, that the U.S. government put $143 million into sequencing the genome of corn, the number one crop plant in the United States. Researchers welcomed the idea, but they worried that such an effort would sap funds from other genomics projects. In response, an Office of Science and Technology Policy committee—with representatives from NSF, USDA, the Department of Energy, the National Institutes of Health, and the White House Office of Management and Budget—held meetings with scientists from academia and industry and other experts to map out a structure for the effort (Science, 27 June 1997, p. 1960).

    As a result, when Congress approved the $40 million add-on to NSF's fiscal year 1998 budget, it simply required that at least three-quarters of the funds be spent on genetic studies of economically important crops. That's what NSF has now done. In the first round of grants, $30 million will be devoted to work on key crops—primarily corn but also sorghum, tomato, cotton, and soybeans—while the remaining $10 million will speed up the sequencing of the genome of Arabidopsis, the favorite model plant for lab scientists. “If the [federal agencies] maintain this momentum and interest, which is long overdue, I think the benefits to researchers are going to be phenomenal,” says Thea Wilkins, a molecular geneticist at the University of California (UC), Davis.

    Already, successes with genetically engineered strains of pest-resistant corn and herbicide-resistant cotton have convinced many of the value of using such technology to improve crop plants. Indeed, many companies are positioning themselves to cash in on the fruits of plant genomics (see p. 608). But further advances will depend on identifying genes for useful traits, such as the ability to survive drought and other stresses or to produce higher crop yields. The mapping and sequencing efforts should accelerate the discovery of such genes. “We think that [with genomics] higher yields will be possible with lower production costs,” says Ryland Utlaut, president of the National Corn Growers Association, which calculates that each 3% increase in U.S. corn production leads to an increase of $1 billion in the U.S. economy.

    Focus on corn

    Although the new initiative is broader than originally envisioned, the corn growers association has much to be pleased about. NSF has committed $37 million in the new initiative to corn genomics over the next 5 years. Some of that money will go toward mapping and cloning efforts that set the stage for sequencing the genome; other work will help identify corn's genes or evaluate which genes are responsible for desirable traits, such as oil content.

    The job of developing the infrastructure to analyze key parts of the corn genome will go to Edward Coe, a USDA plant geneticist at the University of Missouri, Columbia. Coe's team, including colleagues at Clemson University in South Carolina and at the University of Georgia, Athens, will make libraries of corn DNA fragments that can be used to build a physical map and as the raw material for sequencing. W. Zacheus Cande, a cell biologist at the University of California, Berkeley, has developed a labeling technique that makes specific genes visible on the long corn chromosomes, pinpointing their position. Meanwhile, Ron Phillips at the University of Minnesota, Minneapolis, will work on another type of physical map, called a radiation hybrid map.

    This work, a prelude to large-scale sequencing, will take more than 5 years to complete. Many researchers think, however, that rather than sequence the full corn genome—which has a whopping 3 billion base pairs and is full of repetitive DNA that is hard to sequence—they will focus on regions likely to contain the most valuable information.

    Meanwhile, Walbot's team—this year's biggest NSF winner with a grant of $12.6 million—will take a more direct approach to identifying corn genes. The researchers have genetically engineered corn plants with a piece of mobile DNA, called a Mu transposon, that jumps about the genome, disrupting the genes on which it lands. The team will then look for mutations such as stunted ears, superlong tassels, or purple-colored kernels in the offspring of the engineered plants.

    Walbot's team uses a neat trick to identify the genes responsible for these changes: They tag the transposon itself with an antibiotic-resistance gene. The Stanford researchers extract DNA from a mutated plant, chop it into small pieces, add the pieces to bacteria, and grow the bacteria in a culture medium containing an antibiotic. The only bacteria that survive are those that take up a piece of DNA including the transposon and its antibiotic-resistance gene, all embedded in the gene they are looking for. Walbot eventually expects to have mutations in all the estimated 50,000 corn genes, together with seeds from the affected plants and bacteria containing the gene responsible. This approach “allows simultaneous study of gene sequence and function in a living corn plant,” says Walbot. “It's a way to get more quickly into functional genomics.”

    The NSF initiative will support several variations on the transposon technique. Hugo Dooner, a geneticist at Rutgers University in Piscataway, New Jersey, plans to pull out about 100 genes using a transposon called Activator. At Cold Spring Harbor Laboratory in New York, Rob Martienssen will use a transposon system to create some 40,000 lines of corn mutants, each associated with a piece of altered DNA. Although his approach is similar to Walbot's, “having several groups use complementary approaches increases the likelihood that every important gene will be identified,” says NSF's David Meinke.

    A view across the field

    Corn is the only crop plant in which transposons can be easily used to pull out genes. But researchers think they will be able to combine what they learn about the corn genome with data coming from the Arabidopsis sequencing project and also a rice genome project expected to be under way in Japan and other countries within the year (see sidebar). There appear to be enough similarities between the gene arrangements in different species that locating a particular gene in one will point to counterparts in the others. But first, says Cornell University plant molecular geneticist Steven Tanksley, “we need to find ways to connect [Arabidopsis and rice] genome information to other species.”

    To find those connections, Andrew Paterson, a plant molecular geneticist at Texas A&M University in College Station who is moving in January to the University of Georgia, Athens, will look for similar DNA landmarks in sorghum, rice, and corn. And Tanksley's team will be looking at genes involved in fruit development in wild and domestic tomato plants and comparing them with Arabidopsis, with an eye to evaluating how evolution has reshaped genomes. “All of these factors will merge into a picture of the interrelatedness that will tie one crop to another,” Coe says.

    While these groups are exploring the fundamental structure of plant genomes, others will jump into functional genomics—determining how patterns of gene expression vary under different conditions or in different mutants. Among other things, this should help identify genes that affect plant yields or responses to stresses such as high salt concentrations or infection by pathogens.

    For example, plant geneticist Bertrand Lemieux of the University of Delaware, Newark, wants to find the genes that enable some corn varieties to produce more oil than others, and UC Davis's Wilkins will try to track down all the genes important to cotton fiber formation—information that could ultimately lead to improved cotton varieties. A team coordinated by Hans Bohnert, a biochemist at the University of Arizona, Tucson, will focus on identifying genes involved in salt tolerance, while Nina Fedoroff of Pennsylvania State University in University Park and her colleagues will look for genes that turn on or increase their activity when plants are subjected to high concentrations of ozone and damage by pathogens. “Rather than providing just [DNA] sequence, we're attacking a biological problem,” Bohnert says.

    Once identified, such genes might be used to genetically engineer plants with improved yields or resistance to the various stresses. Fedoroff hopes eventually to create inexpensive monitors that will let farmers detect when their crops are at risk. It may take years to achieve these goals, Fedoroff and Bohnert note. But in the meantime these projects will invigorate basic research. Genes involved in fiber formation, for example, will help plant physiologists understand cell growth in general, and there should soon be a wealth of new genes of all kinds to study in corn. Says Tanksley, “plant biology, like all biology, has embarked on a golden age.” Or, as Gerald Tumbleson, a Minnesota corn farmer, said at a press conference announcing the NSF awards, “With this season of biology, we're going to be able to do things that we only dreamed of before. I just wish I was 20 years old, because I think this is fantastic.”

  18. PLANT GENOMICS

    Slow Start for U.S. Rice Genome Project

    The United States has long been a leader in efforts to sequence the human, Escherichia coli, and Arabidopsis genomes. But it appears to be taking a back seat in an international effort to determine the genetic makeup of one of the world's most important crops, rice. U.S. researchers last year urged the government to pay half the cost of sequencing this plant's genome, but federal agencies now seem barely able to come up with the 10% (about $20 million) expected from the U.S. Department of Agriculture (USDA). This shortfall could undermine a cooperative effort to make rice genome data freely available to researchers, and it could cause an incipient international consortium to “fall apart as originally conceived,” says Susan McCouch, a rice geneticist at Cornell University in Ithaca, New York.

    The latest blow came last week, when the U.S. Congress killed a $120 million initiative that would have supported rice sequencing projects (Science, 16 October, p. 392). USDA, the National Science Foundation (NSF), and the Department of Energy are now cobbling together about $4 million for a small U.S. rice sequencing effort later in 1999, but “additional funding” will be needed to reach the $20 million level, says Ed Kaleikau, who runs USDA's plant genome program.

    Plant biologists are not pleased. The loss of the USDA funds came 2 weeks after it became clear that none of the funds from NSF's new $40 million plant genomics initiative would support sequencing the rice genome (see main text). “I and my colleagues in this international effort truly believed that NSF [would] give top priority to the proposals for rice genome sequencing,” says Takuji Sasaki of the National Institute of Agrobiological Resources in Tsukuba, Japan. “So many researchers in crop genomics were disappointed by the decision.”

    As with microorganisms and mammals, the prospect of having one plant genome in hand—Arabidopsis by the year 2000—has made researchers eager for more. Although a U.S. corn-lobbying group has been trying to get corn sequenced next, among researchers worldwide, “there's unified agreement that if [we're] going to sequence a second plant, it should be rice,” says Chris Somerville, a plant geneticist at the Carnegie Institution of Washington lab at Stanford University. A key food for much of the world, rice has a relatively small genome—just 430 million bases, compared to corn's 3 billion or so. Like corn and other cereals, it is a monocot, and deciphering its genome could make gene-hunting easier in other crop plants.

    Eager to get started, rice researchers had formed an international consortium last September* to sequence a Japanese rice cultivar called Nipponbare (GA3), the focus of 7 years' work by Sasaki. This group had finished an extensive physical map and, with $10 million per year from the Japanese government for the next decade, was poised to start sequencing. In the United States, Rod Wing and his colleagues at Clemson University in South Carolina had begun building a DNA library of bacterial artificial chromosomes (BACs) and sequencing their ends to determine which would be the best to use for genome sequencing. Novartis supported some of this work, but agreed to allow the BAC sequences to be freely available. Sasaki's team planned to do the same. Representatives from the United Kingdom, Korea, Japan, the United States, France, and China tentatively agreed to coordinate their efforts and to put sequence data in public databases.
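
    End-sequencing a BAC library is, in effect, a way of anchoring each clone on the genome so that a minimally overlapping subset can be chosen for full sequencing. The sketch below shows that "minimal tiling path" idea in generic form; it is not the Clemson group's actual pipeline, and the clone names and coordinates are invented for illustration.

        # Generic sketch of picking a minimal tiling path: once BAC-end sequences
        # place each clone on the genome, choose a small, minimally overlapping
        # subset of clones that still covers the region (greedy interval covering).
        def minimal_tiling_path(clones, region_length):
            """clones: list of (name, start, end) tuples; cover [0, region_length]."""
            clones = sorted(clones, key=lambda c: c[1])   # sort by start position
            chosen, covered, i = [], 0, 0
            while covered < region_length:
                best = None
                # among clones starting inside the covered region, take the one reaching farthest
                while i < len(clones) and clones[i][1] <= covered:
                    if best is None or clones[i][2] > best[2]:
                        best = clones[i]
                    i += 1
                if best is None or best[2] <= covered:
                    raise ValueError("gap in clone coverage at position %d" % covered)
                chosen.append(best[0])
                covered = best[2]
            return chosen

        if __name__ == "__main__":
            bacs = [("B01", 0, 150), ("B02", 100, 260), ("B03", 120, 240), ("B04", 250, 430)]
            print(minimal_tiling_path(bacs, 430))   # -> ['B01', 'B02', 'B04']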

    The United States seemed ready to join the team. In January 1998, a U.S. interagency committee recommended that the United States contribute $40 million and sequence 20% of the rice genome over the next 5 years, formally acknowledging the government's interest. And when NSF got $40 million in fiscal year 1998 for plant genomics, representatives of the consortium, who had been in close contact with NSF officials, assumed rice was a top priority and began building blue-ribbon scientific teams and applying for grants. But although the grants came through for corn, rice lost out. NSF program officer Machi Dilworth says no special consideration was given to any proposals based on plant species. Ben Burr, a plant geneticist at Brookhaven National Laboratory in New York, feels that the U.S. effort has been left with “egg on our face.”

    These setbacks make some researchers worry about the fate of the nascent international rice genome consortium. Thus far, only Japan has put substantial money behind this effort. Although France seems eager to support some work, the European Union will not consider substantial support for sequencing the rice genome for a year. Even then, “we've not got a firm commitment,” says Michael Gale, a plant geneticist at the John Innes Centre in Norwich, U.K. Korea would like to be involved, but national economic problems may limit its participation. And China has decided to sequence a different rice cultivar, raising a question about its willingness to share data.

    If the United States doesn't set up a rice-sequencing program in the next month or so, “we will have lost a lot of ground,” warns McCouch. Japan began its sequencing program in April and could lose interest in an open-data policy if no one else is contributing. Even if Japan's commitment remains unchanged, the funding setbacks will have a chilling effect, giving the decade-long project a slow start. Given the importance of rice as a crop and as a model for studying other plant genomes, says Ronald Phillips, a plant geneticist at the University of Minnesota, Minneapolis, “it's really too bad that [rice] is not going to be the front and center of our plant genomics program.” -E. P.

  19. CULTURAL ANTHROPOLOGY

    DNA Studies Challenge the Meaning of Race

    1. Eliot Marshall

    Genetic diversity appears to be a continuum, with no clear breaks delineating racial groups

    Last year, the U.S. Office of Management and Budget (OMB) completed a contentious 4-year review of the racial and ethnic categories that will be used to define the U.S. population in federal reports, including the 2000 census. It finally settled on seven groupings: American Indian or Alaska Native; Asian; Black or African American; Native Hawaiian (added after OMB received 7000 postcards from Hawaiians) or Other Pacific Islander; White; Hispanic or Latino; and Not Hispanic or Latino. The categories could have enormous implications—from the distribution of government resources to political districting to demographic research. But as far as geneticists are concerned, they're meaningless.

    “Ridiculous” is the word cultural anthropologist John Moore of the University of Florida, Gainesville, uses to describe such racial typing. This view is based on a growing body of data that indicates, as Moore says, that “there aren't any boundaries between races.” Geneticist Kenneth Kidd of Yale University says the DNA samples he's examined show that there is “a virtual continuum of genetic variation” around the world. “There's no place where you can draw a line and say there's a major difference on one side of the line from what's on the other side.” If one is talking about a distinct, discrete, identifiable population, Kidd adds, “there's no such thing as race in [modern] Homo sapiens.” Indeed, the American Anthropological Association urged the government last year to do away with racial categories and, in political matters, let people define their own ethnicity.

    You might think that this emerging view of genetic variation would help lower the temperature of discussions about race and ethnicity. But, ironically, researchers who want to extend their studies of genetic diversity are being stymied by the intense sensitivity surrounding the topic. A major international project to survey genetic diversity around the globe is on hold, having been opposed by activists. Moreover, a planned database of genetic polymorphisms is being constructed in a way that will prevent comparisons between different population groups, making it useless for exploring the gene frequency variations that do exist, according to researchers.

    Anthropologists have long objected to the stereotypes that are used to classify human populations into racial groups. But the most potent challenge to such groupings has come from genetic studies of human origins. The field was “transformed” in the late 1980s, says anthropologist Kenneth Weiss of Pennsylvania State University in University Park, by an analysis of variations in mitochondrial DNA (mtDNA) begun by Rebecca Cann of the University of Hawaii, Manoa, Mark Stoneking of Penn State, and the late Allan Wilson of the University of California, Berkeley. These researchers reported that diversity in mtDNA genes was two to three times greater in Africa than in Europe or the rest of the world. Assuming that the rate of change in mtDNA was fairly constant, they concluded that Africans' mtDNA was older than that of non-Africans, and that modern humans originated from a small population that emerged from Africa and migrated around the globe.

    Since the 1980s, other researchers have extended these studies by looking at diversity in nuclear DNA. Two years ago, for example, Kidd and his Yale colleague Sarah Tishkoff reported patterns of variation in the CD4 gene locus on chromosome 12 among 1600 individuals chosen from 42 populations from around the world (Science, 8 March 1996, p. 1380). They have since looked at 45 short tandem repeats across the entire nuclear genome in multiple populations. What they found, says Kidd, is “a lot of genetic variation in Africa, decreasing genetic variation as you go from west to east across Eurasia, and decreasing more into the Pacific, and separately decreasing into North America and South America.” The best explanation for this pattern, Kidd argues, is the same one Wilson and his colleagues put forward: A small group of people moved out of northern Africa to colonize the rest of the world.
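
    One standard way to put a number on the "amount of genetic variation" Kidd describes is expected heterozygosity, one minus the sum of squared allele frequencies, averaged over loci. The snippet below just shows that calculation; the populations and allele frequencies are invented placeholders, not the CD4 or short-tandem-repeat data.

        # Minimal illustration of quantifying genetic variation as expected
        # heterozygosity, 1 - sum(p_i^2), averaged across loci. The populations
        # and allele frequencies below are hypothetical, for illustration only.
        def expected_heterozygosity(allele_freqs):
            """allele_freqs: list of per-locus allele-frequency lists (each summing to 1)."""
            per_locus = [1.0 - sum(p * p for p in locus) for locus in allele_freqs]
            return sum(per_locus) / len(per_locus)

        if __name__ == "__main__":
            toy_data = {
                "population A": [[0.5, 0.3, 0.2], [0.4, 0.4, 0.2]],   # more even frequencies
                "population B": [[0.9, 0.1], [0.85, 0.1, 0.05]],      # one allele dominates
            }
            for name, freqs in toy_data.items():
                print(name, round(expected_heterozygosity(freqs), 3))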

    Not all the studies of nuclear DNA have been consistent. For example, research on the Y chromosome by Michael Hammer of the University of Arizona, Tucson, and others found greater genetic diversity for this chromosome in Asia than in Africa. And some studies—such as those by Henry Harpending and colleagues at the University of Utah, Salt Lake City—have found that the difference between Africa and the rest of the world in the amount of variation in nuclear DNA is much smaller than reported for mtDNA. But the new data generally point in the same direction: Human genetic diversity is greatest in Africa, and the genetic heritage of modern humans is largely African.

    Although researchers are far from unanimous, even some who have been cautious about interpreting regional patterns of mtDNA variation seem ready to accept the out-of-Africa thesis today. Lynn Jorde, a colleague of Harpending at the University of Utah, says, “All of us have been a little suspicious of the mitochondrial DNA data because it is such a small part of our genome.” But the totality of evidence—particularly studies showing that common variants found outside Africa are mainly “subsets” of those in Africa—persuades Jorde that the out-of-Africa theory is right. Indeed, Jorde says, this hypothesis was accepted by most of the 100 or so geneticists, physical anthropologists, and linguists attending a conference on human origins at Cambridge University in Cambridge, U.K., last month. “There's enough agreement to tell us we're on the right track, but enough disagreement to keep things interesting,” Jorde says.

    In 1991, population geneticist L. Luca Cavalli-Sforza of Stanford University and his colleagues proposed an ambitious plan to probe this genetic continuum more deeply by collecting and analyzing DNA samples from thousands of populations around the globe. The effort, called the Human Genome Diversity Project (HGDP), however, ran into heavy fire on ethical grounds and over fears that it might violate indigenous peoples' rights, and it remains stalled for lack of funds (Science, 24 October 1997, p. 568).

    With the HGDP mired in controversy, many population geneticists were hoping that some of its goals might be achieved through a database proposed by the National Human Genome Research Institute (NHGRI) of the National Institutes of Health. In January, NHGRI director Francis Collins launched a scheme to create a collection of samples of human DNA containing point mutations called single nucleotide polymorphisms (SNPs), which can be used as markers in the study of inherited diseases. This $30 million project aims to gather a set of 100,000 SNPs representing human diversity (Science, 19 December 1997, p. 2046).

    The plan ran into difficulties when officials had to decide which populations should be included in the 450 DNA samples to be analyzed for SNPs. Federal guidelines then recognized four races (black, white, Asian or Pacific islander, and Native American or Alaskan native), as well as an ethnic type (Hispanic). But because the categories aren't based on genetics, NHGRI sought advice on how to structure its groupings from Weiss and anthropologist Jonathan Friedlaender at Temple University in Philadelphia, among others.

    Weiss and Friedlaender say the federal race and ethnic categories are useless for a scientific sampling program. “Take a term like ‘Hispanic,’” says Weiss: “It's biologically a very bad term,” because it lumps together people from Cuba, Puerto Rico, and Mexico who have fundamentally different histories. “So you're labeling people the wrong way” if you try to use these labels for a balanced DNA collection, says Weiss.

    In the end, NHGRI opted for a quick solution. It is using samples from U.S. residents grouped under broad geographical ancestry headings: African, Asian, European, North and South American. For convenience, nearly all the samples are coming from an existing set of DNAs, supplemented with some extra samples from Native Americans. “This is a first step,” says program director Lisa Brooks: “It covers a huge amount of human variation without claiming to cover everything.”

    Those categories might provide a basis for population comparisons, but NHGRI made a second broad decision that, according to some scientists, will preclude such studies. “We're not identifying who these individuals are [in the SNP database] by ethnicity, or sex, or anything else,” says Brooks. She adds, “We've gone to great pains to ensure that people who use these resources will not identify ethnicity” of the DNA they study. Research on alcoholism or schizophrenia, for example, could cause offense if linked to a specific group, and NHGRI wants to avoid any “group stigmatization.”

    “As far as I'm concerned,” the removal of population source data from a DNA sample “means the sample is useless,” says Kidd. “I won't use it.” Kidd insists that genetic markers such as SNPs are valuable only if they can be understood within the context of the population from which they're drawn, and for this, one must know the source. Florida's Moore agrees. Making the SNPs completely anonymous “drives a wedge” between anthropology and the new genetic database, he says. Kidd says he will have to rely on his own 15-year-old collection, which includes the DNA source information. Cavalli-Sforza's group and others are also making do with independent data collections to continue their own, small-scale versions of the HGDP, tracing broad patterns of human genetic variation.

    At a meeting this year, Kidd predicted, “One of the benefits that's going to come from [studies of genome diversity] is an even greater understanding of how similar we all are in our marvelous variation.” For now, however, that's still a dream.
