News this Week

Science  30 May 1997:
Vol. 276, Issue 5317, pp. 1324
  1. Health Policy

    How Much Pain for Cardiac Gain?

    1. Marcia Barinaga

    GOVERNMENT EXERCISE GUIDELINES SAY THAT MODERATE ACTIVITY SPREAD THROUGHOUT THE DAY IS ENOUGH. BUT SOME RESEARCHERS SAY THE SCIENCE DOESN'T SUPPORT THAT CONCLUSION

    For a nation of couch potatoes, the news seemed too good to be true. For years, the prescription for maintaining healthy hearts had been vigorous exercise—running, swimming, aerobic dancing—whatever it took to get the heart rate up and keep it there for 20 to 30 minutes at least three times a week. But in July 1993, that message changed.

    A panel of exercise researchers convened by the Centers for Disease Control and Prevention (CDC) and the American College of Sports Medicine (ACSM) reported that people needn't exercise vigorously to improve their health. The panel concluded that moderate levels of moderate activity—walking, housework, gardening, or playing with children—broken up over the course of the day, provide the bulk of exercise-related health benefits. “Still Don't Exercise? No Sweat,” cooed one reassuring headline that followed; “A Little at a Time Now Called Enough.”

    Since then, the message conveyed by those first headlines has become official U.S. policy. The National Institutes of Health and the Surgeon General's office have weighed in with similar recommendations, as has the American Heart Association. But despite this apparent consensus, there is considerable disagreement in the exercise research community about whether the recommendations are amply supported by scientific data.

    Policy-makers caught in the middle of this disagreement are in a difficult position. How they interpret the conflicting data could affect the lives of millions of people: Recommend too rigorous a regimen and people may be scared off; recommend easier goals and many may be deterred from getting the full benefits of harder exercise. It's a classic dilemma confronting health experts in areas ranging from mammography to diet, where the scientific data are not clear-cut.

    The current guidelines “overemphasize the benefits of moderate exercise to make it more palatable to the public,” says statistician and exercise researcher Paul Williams of the Lawrence Berkeley National Laboratory in Berkeley, California, the most vocal critic of the recommendations. He maintains that neither his own research nor a reanalysis of the data on which the CDC/ACSM panel and other groups based their conclusions supports the idea that moderate amounts of activity confer the bulk of health benefits, nor do they support the argument that those moderate amounts can be equally effective when split into small blocks of time during the day.

    Williams is not alone in his concerns. “I'm not convinced that the ‘exercise-lite’ routine really makes a difference,” says cardiologist Paul Thompson of Hartford Hospital in Hartford, Connecticut. “There are very few solid data,” he adds, to support the reports' recommendations for exercise amounts and intensities. Even some of the members of the CDC/ACSM panel have reservations about the final recommendations. “I suppose I should have produced a minority report,” says epidemiologist Jeremy Morris, of the London School of Hygiene and Tropical Medicine in the United Kingdom. Morris signed the report, although his own studies have concluded that sustained, vigorous activity is necessary to ward off heart disease.

    No diminishing returns.

    In the analysis by Paul Williams, heart-death risk (meta-analysis result, dashed red line; simple average, solid red line) dropped steadily as exercise increased. The black lines depict the results of the individual studies on which the analysis was based.

    P. T. WILLIAMS, J. AM. MED. ASSOC. 274, 533 (1995)

    Defenders of the guidelines say the data are sound enough to back their recommendations. “I stand by the conclusion that we made,” says University of South Carolina exercise physiologist Russell Pate, former president of the ACSM and lead author on the CDC/ACSM report. He cites what he calls “considerable consistency” in epidemiological studies suggesting that moderate amounts of activity provide benefits. Another defender is University of Minnesota epidemiologist Arthur Leon, an author on the CDC and other consensus reports. He also directed the Multiple Risk Factors Intervention Trial (MRFIT), which looked at activity levels in its study of risk factors for heart disease in more than 12,000 men over a period of 7 years. Leon says that study suggests health benefits from moderate levels of activity, and “the [recommendation report] was based on the majority of studies saying the same thing.”

    This is no mere academic dispute. According to the CDC, inactivity contributes to more than a third of the nearly 500,000 annual heart-disease-related deaths in the United States. And policy-makers are desperate to find a way to get people to exercise. “The old message of ‘burn, baby, burn,’ to do 1 hour of vigorous activity three times a week, turned people off,” says Leon. He and his co-authors hope the 1993 guidelines are less formidable and therefore have more effect.

    Williams counters, however, that no psychological testing was done to see whether people would respond as expected to the recommendations. Critics also worry that even if the recommendations have the desired effect, they can't deliver the benefits they promise. “People who are thinking about exercising might get the impression that if they do just a little bit, they'll get all these wonderful benefits,” says exercise researcher Peter Wood of Stanford Medical School. “The evidence does not support that.” What's more, he adds, “people who are already doing substantially more [exercise] … might get the impression that they are wasting their time,” and quit beneficial exercise routines.

    The moderation message

    The notion that moderate levels of moderate activity could provide protection against heart disease came from large epidemiological studies. In one of these, epidemiologist Ralph Paffenbarger of Stanford Medical School and his colleagues surveyed nearly 17,000 male Harvard alumni, aged 35 to 74. The subjects filled out a questionnaire about their regular physical activities; the researchers then tracked them for 12 to 16 years, logging heart attacks and deaths. They found that men whose reported activities burned more than 2000 kilocalories (kcal) per week—which can be done with a brisk daily walk of 45 minutes or so—had a 28% lower death rate than those who burned fewer calories.
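As a rough sanity check on that 2000-kcal figure, energy expenditure can be estimated from the standard MET formula (kcal/min ≈ METs × 3.5 × body weight in kg ÷ 200). The MET value and body weight below are illustrative assumptions, not figures from Paffenbarger's study:

```python
# Rough check: does ~45 min of brisk walking per day approach 2000 kcal/week?
# Assumptions (not from the study): brisk walking ~ 4.3 METs, an 80-kg walker.

def kcal_per_min(mets: float, weight_kg: float) -> float:
    """Standard MET-based estimate of energy expenditure per minute."""
    return mets * 3.5 * weight_kg / 200

def weekly_kcal(minutes_per_day: float, mets: float, weight_kg: float) -> float:
    """Weekly energy cost of a daily activity at the given intensity."""
    return kcal_per_min(mets, weight_kg) * minutes_per_day * 7

burn = weekly_kcal(45, mets=4.3, weight_kg=80)
print(round(burn))  # roughly 1900 kcal/week, near the 2000-kcal threshold
```

Under these assumed values, a 45-minute daily brisk walk does land close to the 2000-kcal/week mark the study identified.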

    A different type of study, conducted by Steven Blair at the Cooper Institute for Aerobics Research in Dallas, came to a similar conclusion. Based on their performances on a treadmill test, Blair assigned more than 10,000 men and 3000 women to fitness categories, which his work had shown to reflect, roughly, a person's level of physical activity. Blair found that the least fit 20% of his subjects were most likely to die over the 8-year course of his study, and that the greatest reduction in the risk of death was between the least fit and the next-highest category. Other studies have shown that regular exercise raises a person's fitness level, and Blair concluded that even moderate activity such as a brisk daily walk would be enough to lift the least fit out of their high-risk status.

    Blair's methods have been somewhat controversial because fitness is influenced by genetics as well as by activity level. Moreover, Williams urges caution in using a study that measures fitness but not activity to support recommendations for the activity levels necessary for health. “If you are making recommendations on physical activity, it seems to me you should emphasize studies that measure physical activity,” says Williams.

    Nevertheless, when Blair's results were considered along with Paffenbarger's and those from other large studies based on activity questionnaires, the collective impression was, says Blair, that “you get considerable protection from moderate amounts and intensities of exercise.” While the studies suggested that more activity produced additional benefit, he says, the returns appeared to diminish at higher levels of exercise. These conclusions became the centerpiece of the CDC/ACSM consensus report and subsequent reports as well.

    But Williams, who was not part of any of the panels that produced those reports, takes issue with the claim that moderate amounts of activity—amounts that would burn just a couple of hundred kcal a day—provide the bulk of the protection against heart disease. Even the studies cited in the consensus report don't support that claim, he says. Williams subjected the data in those studies, plus some from a similarly designed study published after the report, to a statistical technique known as meta-analysis, which allows the averaging of data from multiple studies. He did not find large initial benefits from exercise followed by diminishing returns, as the report claims, but instead a steady, linear decline in the risk of dying from heart disease. “This argues that there isn't the dose-response relationship the government is putting forward,” he says.
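The meta-analysis technique at issue is, at its core, precision-weighted averaging: each study's effect estimate is weighted by the inverse of its squared standard error, so precise studies count for more than noisy ones. A minimal sketch with invented numbers (these are not Williams's actual data):

```python
# Inverse-variance (fixed-effect) meta-analysis: pool per-study effect
# estimates (e.g., change in heart-death risk per unit of exercise) by
# weighting each with 1/SE^2. All numbers below are invented illustrations.

effects = [-0.10, -0.15, -0.08, -0.12]   # per-study slopes (risk vs. exercise)
ses     = [0.03, 0.06, 0.02, 0.05]       # standard errors of those slopes

weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
simple_avg = sum(effects) / len(effects)

# The pooled estimate leans toward the precise studies; the simple
# average treats all studies equally (Williams's dashed vs. solid lines).
print(round(pooled, 3), round(simple_avg, 3))
```

Whether the pooled slope flattens at higher doses (diminishing returns) or stays linear is exactly what Williams and the panel dispute.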

    What's more, Williams's own studies, reported in the past year, suggest that the benefits of exercise keep accruing linearly up to quite high levels of exercise. He related heart-risk indicators, such as HDL cholesterol levels, to miles run per week in 10,000 male and female runners whom he recruited at races and through ads in Runner's World magazine. After controlling for some differences in lifestyle, such as diet and smoking, he found that the risk factors improve linearly with the runners' weekly mileage up to 40 to 50 miles per week, a result suggesting that their risk of heart disease declined as their exercise level increased. Those runners were expending up to 5500 kcal a week, compared to the mere 1500 kcal a week recommended by the guidelines. The fact that they still saw linear increases in benefits at the upper end of that range makes it impossible, Williams says, for the report to be correct in saying that the preponderance of benefits comes from expending 1500 kcal/week.
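Those kcal figures line up with a common rule of thumb—roughly 100 kcal burned per mile run for a mid-weight adult. That rate is an assumption for illustration, not a figure from Williams's paper:

```python
# Back-of-the-envelope: weekly energy cost of running, assuming ~100 kcal
# per mile (a common rule of thumb for a ~70-80 kg runner; not from the study).

KCAL_PER_MILE = 100  # assumed

def weekly_running_kcal(miles_per_week: float) -> float:
    return miles_per_week * KCAL_PER_MILE

print(weekly_running_kcal(50))  # a 50-mile week: ~5000 kcal
print(weekly_running_kcal(15))  # ~1500 kcal, the guideline-level expenditure
```

At that rate, Williams's 40-to-50-mile-per-week runners were expending three to four times the guideline amount, which is why continued linear gains at that level cut against the "bulk of the benefit" claim.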

    Other researchers question whether Williams's findings with runners can be extended to the general public. “Paul found that runners who run a marathon were better off than runners who run half a marathon,” scoffs Minnesota's Leon. “I don't think that pertains to too many Americans.” What's more, says Leon, the MRFIT study suggested that vigorous exercise may actually increase the risk of a heart attack in sedentary men who have a high risk of heart disease.

    Blair suggests that Williams is comparing apples and oranges when he contrasts his findings on risk factors such as blood lipid concentrations to the consensus report's conclusions, which relied on heart attacks and deaths. “I don't think it is impossible to have a [linear] dose-response relation between running and the risk factors that he looked at,” says Blair. But, he adds, that doesn't mean such a relation will translate into proportionally reduced death rates.

    In addition to Williams's arguments over the quantity of calories that must be spent for the most benefit, there is another debate simmering over the manner in which those calories are spent—in vigorous or just moderate exercise. The guidelines say the exercise need be only of moderate intensity—which generally means not intense enough to make you feel winded or break into a sweat from the exertion. Indeed, the recommended activities include mowing the lawn with a power mower or painting the house.

    But even some of the members of the original CDC panel don't agree with that part of the recommendation. For example, London's Morris studied roughly 28,000 male British civil servants who filled out questionnaires about their leisure-time activities. He found that only those who regularly performed sustained vigorous activities, such as jogging, swimming, cycling, or vigorous sports, such as refereeing soccer matches, showed reduced risks of heart attacks. “We found very little benefit with what we called recreational work: gardening, working on the car, working around the house,” says Morris.

    Nor, added Morris, was there any benefit from “ordinary walking”; only those who reported walking at the vigorous rate of more than 4 miles per hour saw significant benefits. Similarly, Harvard exercise researcher I-Min Lee, a collaborator with Paffenbarger, did a recent analysis of the Harvard alumni data, from which she concluded that exercise had to be vigorous to protect against heart attacks. But Lee points out that she grouped moderate with light exercise as “nonvigorous” activity, raising the possibility that including the less active subjects may have masked any protective effects conferred by moderate exercise. She plans to address that issue in future work.

    Minnesota's Leon says the flaw in Morris and Lee's studies was their classification of swimming and brisk walking as “vigorous.” Leon contends that they are “moderate”; if Morris and Lee had classified them as such, he says, they might have found moderate exercise to be beneficial after all.

    Researchers on both sides of the argument acknowledge that exercise intensity is a continuum, with brisk walking and swimming close to the border between moderate and vigorous activity. But Williams says brisk walking is at the vigorous end of what the government guidelines suggest. “If you look at the list [of recommended activities], you are talking about home care, playing with your children … there is a lot more at issue here than just walking briskly.” Pate points out that the recommendations suggest that activities other than walking should be of an intensity similar to a brisk walk at 3 to 4 miles an hour.

    Breaking it up

    Beyond the issue of intensity is the perhaps more crucial question of whether exercise must be done for a sustained time to be effective, or whether it can be broken up into short bouts with equal benefit. Many of the activities suggested by the guidelines, such as cleaning the house or taking the stairs rather than the elevator, are not done in the sustained way that intentional exercise is. Blair says that's irrelevant: “If you spend 200 [kilo]calories a day, you get 200 [kilo]calories worth of benefits. It doesn't matter very much if you get that all at one time, whether you get it from moderate or high intensity, or if it is discontinuous.” The evidence for that view, he says, comes in part from epidemiological studies such as those by Paffenbarger, Morris, and MRFIT. Pate agrees: “If you look at how the activity is measured and reported in those studies, I think you would be hard pressed to conclude that what people are reporting is for the most part prolonged and continuous activity.”

    But the authors of some of the studies disagree. “The word ‘sustained’ is important,” says Paffenbarger, noting what his own work showed: Among men who expended the same amount of calories weekly, those who did some form of sustained exercise had significantly lower death and heart attack rates than those who didn't. Likewise, Morris says the activities that provided protection from heart attacks to the men in his study were those typically done for a sustained time, such as swimming, bicycling, or playing a sport. “Our men just didn't [break up their exercise],” Morris points out. “By the time a middle-aged man swims on the way to his office or on his way home, for the sake of his health, he swims for a reasonable time.”

    But there is also nonepidemiological evidence that exercise can be broken up, says Pate. The consensus reports cite two small studies in which subjects were assigned to exercise for 30 minutes a day—either in one stretch, or in two or three bouts of 15 or 10 minutes—for a period of 8 to 10 weeks. Both studies showed improvements in fitness, as determined by treadmill or equivalent tests. But they did not demonstrate that the improved fitness paid off in improved cardiac risk factors. And, as Williams notes, neither was a true test of moderate exercise: One specified running; the other included jogging. “You seem to get the same effects [on fitness] with smaller bouts as with a single bout,” says Williams, “but that doesn't imply the same will be true for intermittent bouts of moderate activity affecting risk of heart disease.” Paffenbarger, who was an author of the CDC/ACSM guidelines, agrees with Williams. “There are no data to indicate that three short bouts of activity are equivalent to one large bout in terms of reducing disease risk, disease incidence, or mortality,” he says. “That is a guess that is built into the CDC guidelines.”

    Supporters of the guidelines say that acting on a few such guesses is justified, given the public health stakes. They note that the Surgeon General's report is more carefully stacked with caveats than the earlier CDC/ACSM report was. It points out repeatedly, for example, that additional benefits can be gained by more activity, and it soft-pedals the issue of breaking up exercise with the statement that “strictly speaking, the health benefits of such intermittent activity have not yet been demonstrated.”

    To remove some of the guesswork from future recommendations, Thompson and others advocate balancing the epidemiological studies with more trials in which subjects are placed on specific exercise regimens, to answer questions about intensity, duration, and amounts of exercise necessary to produce specific results. While we wait for these results, Pate pleads that we “not obscure the big conclusion here, which is that we are paying an enormous public health cost for our sedentary lifestyle in this country. We have an awful lot of very inactive people. I don't hear anybody saying [that we should] just leave them where they are while we settle this.”

  2. Physiology

    How Exercise Works Its Magic

    1. Marcia Barinaga

    Although researchers are arguing about how much exercise you need to reduce your risk of heart disease (see main text), there is no doubt that at some point, physical benefits do kick in. “Exercise has lots of different effects” that can help protect your heart, says cardiologist Paul Thompson of Hartford Hospital in Hartford, Connecticut. Among them: It lowers blood pressure, boosts blood volume, and consumes “bad” fats in the blood (such as triglycerides), while also raising levels of the so-called good cholesterol carried in the blood's high-density lipoprotein (HDL) particles.

    One of the best documented effects of exercise is on blood pressure. Even a single bout of moderate exercise can help. In a 1991 study, for example, Linda Pescatello, at the University of Hartford in Connecticut, found that blood pressure reductions of 6 to 10 millimeters of mercury could be detected immediately after hypertensive men bicycled at a moderate level (less than half the total intensity they were capable of) for 30 minutes. What's more, the reductions lasted for up to 13 hours. Exercise decreases blood pressure at least in part by turning down the activity of the sympathetic nervous system, which in turn relaxes the tension in artery walls, says physiologist Ethan Nadel of the Yale University School of Medicine.

    Exercise not only lowers the pressure in arterial vessels, it also increases the volume of blood coursing through the entire vascular system. Thompson injected tracer substances into the bloodstreams of highly fit male distance runners; using the resulting concentration of the tracer to determine blood volume, he found that runners have nearly a liter more blood than average men.

    As in the case of blood pressure, some of this effect can be seen immediately. Nadel found that one exercise session will raise blood volume, although the subjects had to exercise for 30 minutes at 80% of their maximal aerobic power, an exercise intensity “at which you feel winded,” says Nadel. Exercise causes the surge in blood volume by turning down the sensitivity of the volume-control system: a sensor in the right chamber of the heart that monitors blood volume and tells the kidneys to remove fluid when that volume gets high.

    An expanded blood volume has multiple benefits, says Nadel. It boosts the volume of blood that fills the heart between contractions and thus the amount of blood the heart pumps with each stroke. “That makes the heart pump more efficiently,” he says, and contributes to the characteristic low resting heart rates of athletes. The extra blood volume also dilutes bad actors in the blood, such as lipids that can produce fatty deposits on blood-vessel walls. “If [your blood is] dilute, it is less likely that cholesterol will bump up against your artery wall,” says Thompson.

    But a workout does more to the fats in your blood than simply dilute them: It directly changes them as well. High blood concentrations of triglyceride fats have been linked to an increased risk of heart attack, and exercise reduces blood triglyceride levels. One way it does so, Thompson says, is by making muscles “hungry for fat.” To satisfy their hunger, the muscles crank up an enzyme called lipoprotein lipase (LPL), which chews up triglycerides for the muscles to use as fuel. The weight loss associated with regular exercise also raises LPL activity in fat cells.

    This increased LPL activity leads to another benefit: It helps to reduce blood cholesterol. As triglycerides are consumed, there is shrinkage of the very low density lipoprotein particles (VLDL), which store the fat in the blood. This causes some of the cholesterol stored on their surface to be jettisoned and picked up by the HDLs, which modify the cholesterol and deliver some of it to the liver for elimination from the body. Which of these many different effects is most significant for cardiac health is not clear, Thompson says, but together “they add up to a lot.”

  3. Asthma Genetics

    A Scientific Result Without the Science

    1. Gretchen Vogel

    In the old days—say, 2 or 3 years ago—breakthroughs in basic research were almost always announced at scientific meetings or published in peer-reviewed journals. No longer. Last week, Sequana Therapeutics Inc., in San Diego, issued a press release declaring that the company had “discovered a gene responsible for asthma.” The three-page release contained little data of use to other researchers—such as where the gene is located, what it might do, or how many sufferers might carry it. Nor is anyone likely to find the answers in journals or at meetings anytime soon. Sequana and its collaborators are in “the very early stages” of preparing a manuscript describing the finding, says geneticist Mary K. McCormick, head of Sequana's asthma division. She says it might be published “within a year.”

    The reason Sequana preempted the traditional scientific publication process has little to do with science. The announcement alerted investors that the discovery will earn the company a $2 million “milestone payment” from Sequana's collaborator, pharmaceutical giant Boehringer Ingelheim. Indeed, Sequana's stock rose from $13 1/8 to $14 the day after the announcement. And if the company had kept the news to itself, its employees and collaborators would risk insider-trading charges if they bought or sold Sequana stock, says company CEO Kevin Kinsella. At the same time, Sequana does not want to disclose details until it has filed for a patent and given Boehringer Ingelheim “some lead time” to develop treatments based on the gene, says Sequana's chief scientist, Tim Harris.

    Sequana isn't the only biotechnology company to announce a major basic research finding by press release. Last November, Cambridge, Massachusetts-based Millennium Pharmaceuticals claimed that it had found a diabetes gene, and in January, Salt Lake City-based Myriad Genetics announced that its researchers had bagged a gene linked to a type of brain cancer—both without scientific specifics (Science, 28 March, p. 1876). And these surely won't be the last such announcements: “I'm not a fan of genetics by press release,” says Harris, “but it's an inevitable part of life at a biotech company that finds genes for a living.”

    It's becoming a part of life in academic genetics, as well. Untangling the complicated genetics of diseases such as diabetes or asthma “is very expensive research,” says geneticist William Cookson of Oxford University. “It is difficult to imagine all the loci being identified without some commercial funding.” One of Sequana's academic collaborators, pulmonologist Arthur Slutsky of the University of Toronto, agrees. Boehringer Ingelheim and Sequana have spent more than $10 million to find this gene—more than the Canadian government has spent on the entire human genome project in the last 2 years, he says. But the price for that support is the secrecy imposed by for-profit funding sources, says Cookson.

    Sequana's results have been eagerly anticipated. The Toronto group, including Slutsky and Noe Zamel, published a paper last June in the American Journal of Respiratory and Critical Care Medicine describing their work with the residents of the South Atlantic island of Tristan da Cunha. Most of the nearly 300 residents are descendants of 15 settlers from the early 1800s, and 57% have at least partial evidence of asthma. The researchers later said that they had found two chromosomal linkages, and that one was narrowed to a few hundred thousand base pairs. A few weeks ago, the team became confident enough of its data to say it had a gene, says Slutsky.

    The press release quotes pediatrician Richard O'Connor of the University of California, San Diego, as saying the discovery is “this century's most important finding in the etiology of asthma.” But other researchers are less exuberant. “It is unlikely that this is the major genetic effect in asthma,” says Cookson, who, with others, has found several chromosomal linkages to allergy and asthma. “It's definitely an impressive piece of science,” he says, but until a more traditional scientific announcement is made “its overall value is impossible to judge.” That judgment is months away. Don't hold your breath.

  4. National Academies

    NRC Lets a Little Sun Shine In

    1. Andrew Lawler

    Change is hard for any organization, but officials at the National Research Council (NRC) have decided that, if it is inevitable, they'd rather be calling the shots. Faced with the prospect that the courts eventually could force it to abide by strict government rules on openness, the council recently approved new guidelines intended to open its inner workings “to the greatest extent possible.” But the new guidelines fall well short of the government's requirements, and they appear unlikely to quiet critics.

    The new policy has been in the works for more than a year at the NRC—the operating arm of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine that produces authoritative reports for those who make public policy (Science, 9 May, p. 900). But progress toward openness had been slowed by internal dissent. Early this year, however, environmental and animal-rights groups scored victories in two court cases that challenge the traditional secrecy with which the council does business (Science, 17 January, p. 297).

    The groups want the NRC to abide by the Federal Advisory Committee Act (FACA), which specifies policies that government agencies using outside counsel must follow to ensure public input. In one case, a federal court refused to allow the U.S. Department of Energy to use an NRC report it requested; in another, the court has agreed that the council should have abided by FACA in conducting an animal care study for the National Institutes of Health. NRC officials intend to appeal the latter case to the Supreme Court, says Executive Officer William Colglazier. The officials worry that the cases might end with a ruling forcing them to adhere to FACA.

    Given these external threats, “this time there was very little opposition” to the openness guidelines, says Colglazier. The council's governing board adopted the measures on 14 May.

    Until now, meetings to discuss or prepare NRC reports typically were closed to all but committee members and staff. The rationale was that publicity could damage the institution's reputation for independence and fairness. The new policy, however, says that the council's work “can benefit from increased public access and increased opportunities for public input” at those meetings in which panel members are gathering information. That openness must be balanced by assurances that “committees and panels are shielded from undue pressures.”

    “The institution retains the right to close meetings as appropriate,” the policy states, “to conduct work free from external influences.” But Colglazier says there must be compelling reasons for a committee to operate in private. “We will make it extremely rare that information-gathering meetings are closed,” he says. Panel members also will be expected to discuss their potential biases during an open session at the start of their work.

    The policy went into effect immediately. Last week, the NRC set up a World Wide Web site to provide up to 2 months' notice of open meetings.

    While the new rules reflect a major change from past practices, they fall far short of the FACA requirements. Under that law, all sessions of advisory panels must be open, unless they involve classified or proprietary material or personnel matters. Agency chiefs cannot overrule the law, although federal advisory committees often skirt the rules by holding closed-door executive sessions.

    Colglazier says the new rules are not designed to placate the courts or critics, but he hopes they “will buy us some goodwill” among opponents. However, that might be wishful thinking. “The effect [of the new policy] is minimal,” says Valerie Stanley, legal counsel for the Animal Legal Defense Fund, which is suing the National Institutes of Health over its sponsorship of an NRC study on animal protection that followed the usual council procedures. “The meetings in which they set policy won't be open, and that's at the heart of what they do.”

  5. 1998 Budget

    Five-Year Plan Squeezes R&D

    1. Andrew Lawler

    The dust surrounding the historic budget agreement between the Administration and Congress is starting to settle, and the emerging picture is not a pretty one for science and technology spending. A long-term budget plan based on that agreement was approved last week by the House and Senate, and it leaves no room for an R&D funding increase in the next 5 years. While the projections are far from immutable, they are raising concerns among R&D supporters in Congress.

    The budget resolution, which sets broad spending guidelines for the next 5 years, is the result of a bipartisan attempt by President Bill Clinton and Republican leaders to cut taxes and eliminate the federal deficit by 2002. That political consensus makes the resolution a more significant document than previous versions, which were based on one party's view of the future. And its message to scientists is that civilian R&D does not fare well. “They protected a lot of things, but R&D was not one of them,” says Al Teich, science policy director at the American Association for the Advancement of Science (AAAS, which publishes Science). Of course, such projections are notoriously changeable, and the appropriators who actually allot program funding have substantial freedom each year to fund what they see fit.

    SOURCE: HOUSE BUDGET COMMITTEE

    If the numbers in the resolution come to pass, warns House Science Committee Chair James Sensenbrenner (R-WI), “we'll be spending less in 2002 on scientific research … than we did in 1991” after taking inflation into account. That reduction is the result of a decision to erase the deficit largely by reducing domestic discretionary spending, the account which includes all civilian science and technology. The budget resolution calls for a freeze or slight decrease in most R&D-related accounts as part of that effort. The only R&D-related area that the Administration and Congress singled out to protect is the Commerce Department's National Institute of Standards and Technology, which oversees the controversial Advanced Technology Program. ATP has been the object of a tug-of-war between some Republicans, who see it as corporate welfare, and the president, who regards it as a vital link between government and industry.
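Sensenbrenner's inflation point is simple deflation arithmetic: a frozen nominal budget shrinks in real terms every year. A minimal sketch, assuming a constant 3% annual inflation rate (an illustrative figure, not the committee's deflator):

```python
# Deflate a nominal dollar amount to base-year (1991) dollars, assuming a
# constant 3% annual inflation rate (illustrative, not an official deflator).

def real_dollars(nominal: float, year: int, base_year: int = 1991,
                 inflation: float = 0.03) -> float:
    """Value of `nominal` dollars in `year`, expressed in base-year dollars."""
    return nominal / (1 + inflation) ** (year - base_year)

# A frozen $16.2 billion in 2002 buys far less in 1991 terms:
print(round(real_dollars(16.2, 2002), 1))  # about 11.7 (billion 1991 dollars)
```

Under this assumed rate, eleven years of 3% inflation erodes roughly 28% of a flat budget's purchasing power, which is the effect behind the "spending less in 2002 than in 1991" warning.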

    Funding for the natural sciences, including research at NASA, the National Science Foundation, and physics programs within the Department of Energy (DOE), would take a “pretty significant hit” under the plan, says Sensenbrenner, who told a recent meeting of science writers that he was “dismayed” by the numbers and will put up a fight. Funding for the bulk of science and technology efforts at those agencies would fall by $400 million in 1998, to $16.2 billion, and continue dropping until it reaches $15.6 billion in 2002.

    The account that includes the National Institutes of Health would also decline, from $24.9 billion to $24.4 billion. But biomedical research has numerous and powerful supporters in Congress who will seek to turn those numbers around. Last week, the Senate unanimously approved a nonbinding resolution, drawn up by Senator Connie Mack (R-FL), declaring that the “federal commitment to biomedical research should be doubled over the next 5 years.” It also calls for an immediate down payment of an additional $2 billion for 1998. However, 2 days later, the same body voted 63-37 to kill an amendment to the budget bill that would have increased NIH funding by $1.1 billion in 1998 by taxing the administrative budgets of other agencies. That sets the stage for an intense battle over health funding later this year. “We are disappointed” by the budget bill, says John Suttie, president of the Federation of American Societies for Experimental Biology, which hopes that legislators will deliver on earlier promises of a bigger increase.

    Civilian DOE spending, including nonphysics work sponsored by DOE at labs and in academia, also suffers a decrease in the plan, falling from $3.1 billion in 1998 to $2.8 billion in 2002. Funding for natural resources and environmental research would rise from $22.2 billion in 1998 to a peak of nearly $24 billion before returning to $22.2 billion by 2002.

    R&D advocates generally put on a brave face last week, saying they will fight to prevent the cuts outlined in the resolution from becoming a reality. “Science will not become the type O [universal] blood donor,” says Sensenbrenner, who recently took his case to House Speaker Newt Gingrich (R-GA). On the Senate side, Senator Phil Gramm (R-TX) will “forge ahead with” his plan to double civilian government research spending over 10 years, from $32.5 billion to $65 billion in 2007, says his press secretary, Larry Neal. But the resolution “will make our job more difficult,” he admitted.

    For all its sobering news, the budget resolution hasn't created panic in the R&D community because it is unlikely to be followed to the letter. “There's a fair amount of flexibility” in how Congress ultimately allocates taxpayer dollars, says Teich. And the vagueness of the plan makes it hard to tease out its possible effects on individual programs. But one thing is clear: R&D will face an increasingly hard struggle to hold onto its share of the federal spending pie over the next 5 years.

    With additional reporting by Eliot Marshall.

  6. Conservation Biology

    Can Cloning Help Save Beleaguered Species?

    1. Jon Cohen

    When Kurt Benirschke launched a program at the San Diego Zoo in 1975 to freeze cells from endangered species, he assumed that his colleagues would use the collection to unravel complex issues such as the genetic similarities among animals. Never did he imagine that scientists might one day pluck cells from the “frozen zoo” to grow new animals from scratch. But since February, when researchers in Scotland reported they had cloned a lamb named Dolly from the cells of an adult sheep, the notion of cloning a Przewalski's horse, Sumatran rhinoceros, or one of the other rare species whose cells are banked at the San Diego Zoo's Center for Reproduction of Endangered Species (CRES) has suddenly left the realm of science fiction.

    “The possibilities for zoos are enormous,” says Benirschke, a reproductive biologist who now is vice president of the zoo. Like other zoologists, he recognizes that many scientific hurdles stand between a fibroblast (a tissue-repairing cell that makes up the bulk of the frozen zoo's collection) and, say, a healthy infant rhino (see sidebar). But he thinks the field has seen so many remarkable advances in recent years that the obstacles, for some species at least, are likely to fall. Says CRES geneticist Oliver Ryder, “I think [cloning] is going to produce a paradigm shift. It offers the potential for a better safety net than we thought we had.” Adds Benirschke, who began working with colleagues in China after Dolly's creation to save cells from the endangered Yangtze River dolphin, “I would love to excite the international community to save as many cells as they can from as many animals as possible.”

    But even if the technical hurdles do fall, many conservation biologists argue that efforts to clone endangered species would be so expensive that they could derail other conservation efforts. “In the end, the very finite resources that conservation has are better directed elsewhere,” contends Michael Bruford, a molecular geneticist at the Zoological Society of London's Institute of Zoology. Adds David Wildt, head of reproductive physiology at the U.S. National Zoo's Conservation and Research Center in Front Royal, Virginia, cloning should be viewed only as a “last, desperate attempt to try to preserve a given species.”

    Ryder argues, however, that cloning may offer benefits that are not immediately obvious. When people think of cloning, they often imagine legions of genetically identical individuals. But Ryder contends that the technology actually could be used to increase the genetic diversity of a dwindling species, a proposition that has taken some of his colleagues by surprise. Population geneticist Robert Lacy of the Brookfield Zoo in Illinois, for instance, says he was skeptical that cloning could enhance genetic variability, which, he notes, is “the primary thing we're trying to do with endangered species.” But he was persuaded, he says, after reading Ryder's ideas on a private Internet chat group for population biologists.

    Ryder reasons that for species that are down to just a few surviving individuals, clones grown from frozen fibroblasts could provide an invaluable source of “lost” genes. Suppose scientists could clone Asian wild horses, South China tigers, or Spanish ibex from cells in the CRES collection that were gathered from long-deceased animals, says Ryder. The clones theoretically would then be able to breed, reintroducing the lost genes back into the population. “It might allow you to go back and recover the genetic diversity,” he says.

    Ryder also argues that cloning could be an especially useful tool for biologists trying to save species that don't breed well in captivity, such as giant pandas. The more offspring an animal has, says Ryder, the more of its genome it will pass on. If a giant panda in a zoo has only one offspring, one half of the panda's genes are lost. But if biologists could clone the panda 10 times and each clone produced an offspring, in effect, the original panda would have produced 10 offspring, and fully 95% of its genetic information would have been “captured.” (The equation is 1 − 1/(2n), where n is the number of offspring.)
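
    Ryder's arithmetic is easy to check. The short Python sketch below simply evaluates the 1 − 1/(2n) formula quoted above; the function name is ours, and the scenario (1, 2, or 10 breeding offspring) is illustrative.

```python
def fraction_captured(n):
    """Share of an animal's genetic information 'captured' when it
    leaves n breeding offspring, per the 1 - 1/(2n) formula quoted
    in the text (n must be at least 1)."""
    return 1 - 1 / (2 * n)

# One offspring: half the genes are lost. Ten offspring (e.g., ten
# clones each breeding once): 95% of the genome is captured.
for n in (1, 2, 10):
    print(f"{n} offspring -> {fraction_captured(n):.0%} captured")
```

    Running this prints 50%, 75%, and 95%, matching the figures in the paragraph above.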

    Cloning might even serve a useful purpose with species that have never bred in captivity, such as the giant armadillo, by allowing biologists to asexually reproduce the creatures. This scenario, which would require implanting a cloned embryo of a giant armadillo in a more common relative, adds to the already formidable list of scientific obstacles. Still, says Ryder, “it could possibly guarantee genetic immortality.”

    In a commentary in press at Zoo Biology, Benirschke and Ryder contend that if cloning endangered species does become a reality, zoos may one day be able to breed fewer animals and retain smaller herds without losing genetic diversity. This is an important advantage, they argue, because most zoos already are short on space.

    Intrigued as he is by these ideas, Brookfield's Lacy says cloning is so expensive and technically challenging that it should be used only with “a fairly narrow window” of species, those with “five, 10, or 15 animals.” In most cases, he says, “with a little foresight, we'd be able to set up a breeding program that didn't cost millions.”

    The National Zoo's Wildt concurs, adding that lower tech, “assisted breeding” methods such as artificial insemination can often achieve the same goals as cloning. A few years ago, for example, the black-footed ferret was down to as few as six individuals. But last year, Wildt and his colleagues successfully used artificial insemination to produce 16 kits. He stresses, though, that even something as well understood as artificial insemination can be a big challenge in a new species. “We do a lot of work with assisted breeding,” says Wildt. “What we've learned from working in this field for 20 years is it's really difficult.”

    Michael Soule, an emeritus population geneticist at the University of California, Santa Cruz, worries that cloning endangered species could distract people from saving habitats. “I don't want people to think that [cloning is] a solution to a major problem,” says Soule. He heads the Arizona-based Wildlands Project, which aims to improve habitats in North America. “We've only got a few years before most of the biodiversity on the planet goes down the sink.”

    As they explain in their Zoo Biology commentary, Ryder and Benirschke do not want cloning “to minimize or supplant” current conservation efforts. “This discussion is not being advocated in lieu of saving species the only way they can be saved–in their habitats,” says Ryder.

    Conservation biologists will, for the first time, have a chance to discuss the promise and pitfalls of cloning endangered species at a Berlin meeting in August sponsored by the Minnesota-based Conservation Breeding Specialist Group. Ryder and Benirschke urge their colleagues to think seriously about cloning's potential. “The future for clonable species would clearly be better than that for animals that cannot be cloned,” they conclude in their Zoo Biology commentary. Surely, that's a definition of “fit” that Charles Darwin never imagined.

  7. Conservation Biology

    Would-Be Cloners Face Daunting Hurdles

    1. Jon Cohen

    It took Ian Wilmut and his colleagues at Scotland's Roslin Institute 277 attempts to clone one lamb, the now-notorious Dolly, from adult mammary cells. For conservation biologists who ponder the possibility of applying this advance to endangered species (see main text), those 276 failures in sheep—a species whose reproductive biology is well understood—only underscore the technical hurdles they face. Even Wilmut, who first published his results in the 27 February issue of Nature, points out, “The success rate is so low that you would do better to breed naturally. You would get far more offspring!”

    The first challenge, says Oliver Ryder, a geneticist at the San Diego Zoo's Center for Reproduction of Endangered Species (CRES), is to see whether fibroblasts—cells made during wound healing—could be used instead of mammary cells. This question is critical because CRES's collection of cells—the world's largest—is made up of fibroblasts, stored in liquid nitrogen.

    Assuming fibroblasts from adult animals could work, researchers face another challenge: harvesting eggs in a “ripened” state during ovulation. The Scottish group made Dolly by planting mammary cells from one sheep into another animal's egg that they had modified by scooping out its gene-carrying nucleus. Harvesting ripened eggs from sheep is routine because the animal's reproductive cycle is well understood. But plucking eggs from, say, a Sumatran rhino is quite another matter, says David Wildt, head of reproductive physiology at the U.S. National Zoo's Conservation and Research Center in Front Royal, Virginia. “We know basically nothing about their reproductive physiology,” says Wildt. “You'd have to have a rhino docile enough to allow ultrasound [to know when it is ovulating].” And once eggs are harvested, Wildt notes, different species usually require different nutritive media in laboratory cultures—media that scientists have yet to define for most endangered species.

    Now, assume the transfer of fibroblasts into enucleated eggs worked and embryos developed. The next challenge would be implanting an embryo into a female that could carry it to term. Reproductive biologists say they would prefer to use females of a related, unendangered species as surrogate mothers so that females from the highly endangered population would be available for natural breeding. But it is not at all clear that the placenta carrying genes from the fibroblasts of a Rwandan mountain gorilla, for instance, would take in the uterus of a captive gorilla of a different subspecies. “I think it likely that there are sufficiently species-specific factors to limit mixing,” says Wilmut.

    Kurt Benirschke, a vice president of the San Diego Zoo who started the CRES collection, notes that some such transfers have worked. For instance, Douglas Antczak, a veterinarian at Cornell University in Ithaca, New York, and W. R. Allen at the Thoroughbred Breeders Association in Suffolk, England, have successfully grown a zebra embryo in a horse. Antczak suggests that others might build on these results by implanting into a horse an embryo from the endangered Przewalski's horse. “It would be a good example species,” says Antczak.

    Betsy Dresser, a reproductive physiologist at the Audubon Center for Research of Endangered Species in New Orleans, suggests that many of the next steps may be taken by researchers who work with domestic animals. “The domestic-animal field has tons of animals to work with, and money,” she says. And Dresser says if they make headway, conservation biologists will surely take advantage of cloning. Says Dresser, “If we can use it as a tool to save an endangered species, you'd better believe we will.”

  8. Paleoanthropology

    A New Face for Human Ancestors

    1. Ann Gibbons

    AN 800,000-YEAR-OLD SPECIES FROM SPAIN TAKES ITS PLACE ON THE HUMAN FAMILY TREE, AND THESE FIRST EUROPEANS MAY BE ANCESTRAL TO BOTH MODERN HUMANS AND NEANDERTALS

    More than 780,000 years ago, a boy with a remarkably modern face lived near a warren of caves in the red limestone Atapuerca hills of northern Spain. He died young, possibly the victim of cannibalism, and today only part of his face remains. But that part is stunning, because despite its antiquity, it “is exactly like ours,” says paleoanthropologist Antonio Rosas of the National Museum of Natural Sciences in Madrid, Spain. On page 1392, Rosas and his colleagues, who found the remains of the boy and five other early humans in a railway cut, suggest that these people—the oldest known Europeans—were members of a new species of early humans directly ancestral to us.

    A new relative. Homo antecessor claims a key spot on the human family tree. (Image: J. Bermúdez de Castro and J. L. Arsuaga)

    The Spanish team has named this first new member of the human family in over a decade Homo antecessor, from the Latin word meaning explorer or one who goes first. They say that the species's unusual mix of traits—in particular, the boy's modern face set between a primitive jaw and brow—shows that it gave rise to both modern humans and Neandertals, the heavyset species that lived in Ice Age Europe until about 28,000 years ago.

    Other paleoanthropologists are impressed by the finds—more than 80 fossils, including skulls, jaws, teeth, and other parts of the skeleton—that offer new insight into a mysterious time and place in human evolution. “We now have a better window on the first peopling of the European continent,” says paleoanthropologist Philip Rightmire of the State University of New York (SUNY), Binghamton. But identifying these people as a new species, not to mention claiming them as a key human ancestor, is highly controversial. “I think many of my colleagues will be uncomfortable with creating a new species, because it is mainly based on the facial features of one juvenile,” says paleoanthropologist Jean-Jacques Hublin of the National Center for Scientific Research (CNRS) in Paris. What's more, if H. antecessor is indeed the last common ancestor of Neandertals and modern humans, it could bump two other favored contenders—H. erectus and H. heidelbergensis—off the main line of descent leading to modern humans, making them side limbs on an increasingly bushy human family tree.

    That's too drastic a revision for many researchers to swallow. Some, however, think the new family tree with all its offshoots helps explain an increasingly diverse fossil record. “It's further evidence for the complexity that we're finding all the way down the story of the evolution of Homo,” says Chris Stringer, a paleoanthropologist at the Natural History Museum, London.

    Seeing a familiar face

    Only a decade ago, the textbook view of the evolution of Homo was that of a gradual, straightforward progression, with one species unfolding into another—a pattern quite different from the diversity seen in other animals. First came H. habilis, the toolmaker, arising more than 2 million years ago from apelike australopithecines. Then came H. erectus, the upright walker who trekked across Africa and Asia about 1.8 million years ago. It gave rise in the past 500,000 years to a relatively robust ancestor called archaic H. sapiens, which led to both our species and Neandertals.

    Bone cache. Researchers have found teeth and skull and limb bones of Homo antecessor. (Photo: J. Trueba)

    But in the past decade, new fossils and reanalysis of old ones have prompted researchers to rewrite this script and add more characters, including several different types of early Homo in Africa (Science, 22 November 1996, p. 1298). And they also have changed the cast in the final acts. Notably, half-million-year-old African and European fossils once described as archaic H. sapiens now are attributed to the species H. heidelbergensis, which many think was ancestral to Neandertals and modern humans.

    This view has been hard to test, however, because of huge gaps at critical times in the fossil record of Europe. From the time the first humans left Africa about 1.8 million years ago until some 500,000 years ago, not a single bone had been found in Europe. Then, in 1994, new excavations in Spain uncovered fragments of hominid bones and teeth at a site called Gran Dolina, where railroad workers blasting through Atapuerca Hill in the 19th century had exposed cross sections of bone-filled limestone caverns. The layers containing human fossils were dated using periodic shifts in Earth's magnetic field to more than 780,000 years old (Science, 11 August 1995, pp. 754, 826, and 830). That makes these fossils “exciting because they are the earliest well-dated fossils from Europe,” says University of Michigan paleoanthropologist Milford Wolpoff.

    Subsequent field seasons yielded simple stone tools and more fossils from at least six individuals. And as soon as the Spanish scientists cleaned up the fossils—particularly the face of the boy—they knew they had found something special. The face had familiar modern features, such as sunken cheekbones with a horizontal rather than vertical ridge where the upper teeth attach, and a projecting nose and midface. “We realized right away it was modern looking,” says paleoanthropologist Juan Luis Arsuaga of the Universidad Complutense in Madrid, co-leader of the team with paleoanthropologist José Bermúdez de Castro of the National Museum of Natural Sciences, Madrid, and archaeologist Eudald Carbonell of the University of Tarragona.

    But the fossils also had primitive features, such as a prominent brow ridge and multiple roots for premolars. It all added up to an unusual mosaic of modern and primitive features that just didn't fit any known species. “We tried to put them in H. heidelbergensis, but they were so different that we could not,” says Arsuaga. And so they set the fossils apart as a new species.

    Next, the team tried to solve the problem of where H. antecessor sits in the human family tree. And here the researchers go out on a limb, relying on a few dental and cranial features to suggest that H. antecessor is close kin to 1.6-million-year-old fossils from East Africa, which some researchers identify as H. ergaster. This species resembles H. erectus—indeed, some consider it part of H. erectus—but others have proposed that only the African H. ergaster is ancestral to modern humans, while the Asian H. erectus went down a different evolutionary path. The Spanish team supports this view by noting traits that link H. antecessor to H. ergaster and other traits that separate it from H. erectus. That would bump H. erectus—or at least the Asian form—off the direct line to modern humans, making it a separate lineage that went extinct without descendants.

    At the same time, H. antecessor shows enough similarities to fossils identified as H. heidelbergensis to be an ancestor of that species, which most paleoanthropologists agree led to Neandertals. Yet H. antecessor also shares more traits with modern humans than do European H. heidelbergensis fossils. The Spanish team therefore argues that H. antecessor is a key central player that ultimately gave rise to modern humans and to Neandertals—thus deposing H. heidelbergensis from its position as the last common ancestor of both (see family tree). The new species's midface traits are “exactly the morphology we would imagine in the common ancestor of modern humans and Neandertals, if we were to close our eyes,” says Rosas.

    To make sense of these clues, the Spanish team proposes that H. ergaster gave rise to H. antecessor, probably in Africa, although the new species has only been found at Atapuerca. They speculate that members of H. antecessor began to spread out about 1 million years ago and eventually headed north to Europe, leaving the 800,000-year-old fossils found at Atapuerca. As time passed, some members of the species evolved into H. heidelbergensis (and may have left the 300,000-year-old fossils at Atapuerca; see sidebar). These humans headed farther north into Europe, where they in turn led to Neandertals—but not to modern humans.

    Meanwhile, the southern members of H. antecessor, probably still in Africa, gave rise to modern humans by way of another, as yet unidentified transitional species, according to the Spanish team's view. This middle player may include fossils already discovered in Africa that look ancestral to modern humans and are now attributed to H. heidelbergensis. These include a massive skull from Bodo, Ethiopia, dated to 600,000 years, and a more recent cranium from Kabwe, Zambia.

    Face-off

    That scenario is speculative, however, and not everyone welcomes the entrance of the new species—and its retinue of still-unknown relatives. “Given the evidence presented here, I'm reluctant to endorse a new species,” says SUNY's Rightmire. Otherwise, “you end up littering the taxonomic landscape with all sorts of names that may turn out to be less useful later on.” Most troublesome to Rightmire, CNRS's Hublin, London's Stringer, and Michigan's Wolpoff is that the designation of the new species rests primarily on the modern features found in the boy's face. They worry that some of those features are juvenile traits that weren't present in adults, and perhaps were also found in the young of other species. More comparison of the boy's face with Atapuercan adults and juveniles of other species is needed, they say.

    Rosas responds that fragmentary facial bones from Atapuercan adults do show some of the modern-looking features found in the boy's face, such as hollowed cheekbones. And the Nariokotome boy who lived 1.6 million years ago in Kenya and is often identified as H. ergaster does not share these modern traits. Nor do the faces of 300,000-year-old H. heidelbergensis youths from the younger beds at Atapuerca. “We think we have enough information to define it in the proper sense of a new species,” says Rosas. “But people are probably going to need some time to accommodate this proposal.”

    It may take more than just time, however, to convince other paleoanthropologists that H. erectus and H. heidelbergensis are not on the line to modern humans. For one, researchers such as Rightmire think fossils designated as H. ergaster in Africa are really H. erectus—and so are ancestral to modern humans in almost any scenario. But even if H. ergaster is considered a distinct species, says Rightmire, its link to H. antecessor rests on thin evidence—“the morphology of the root system of premolars, and that's just one trait,” he says. Nor does Rightmire think H. heidelbergensis should be removed from our ancestral lineage, because he believes it includes the Bodo skull and other recent African fossils that have ties to modern humans. “I'm going to stick to my guns and support H. heidelbergensis [not the new species] as the antecedent to Neandertals and recent humans.”

    On the other hand, others find the new order a pleasing solution to the fact that H. heidelbergensis is something of a “wastebasket taxa” that includes widely varying African and European fossils, as Leslie Aiello, a paleoanthropologist at University College, London, describes it. Reserving the name H. heidelbergensis for the European fossils and considering African fossils to be the as yet unnamed descendants of H. antecessor “make things nice and neat,” she says.

    Regardless of where the new fossils fit in the family tree, Wolpoff and others hope the site will eventually reveal what kind of technology or behavior allowed these early humans to persist in the hostile European climate before 500,000 years ago. So far, it's hard to tell, because the tool kit found with them included only simple cutting flakes, not the more sophisticated tools found elsewhere at this time. One additional, bizarre clue is that the bones were covered with cut marks, indicating that their bodies were defleshed and processed like those of animals killed for meat (Science, 19 January 1996, p. 277). Bermúdez de Castro and his colleagues have suggested that this could be cannibalism, but researchers such as Peter Andrews of the Natural History Museum, London, warn that cut marks alone don't prove cannibalism.

    So although the fossils give paleoanthropologists a new view of an obscure time in history, they also raise a whole crop of new questions. “That's the main contribution of the Atapuerca fossils,” says Hublin. “They give us an idea of the amazing variation in Homo.” And that diversity, notes Arsuaga, shows “that human evolution is like that of other groups. We're not so different.”

  9. Paleoanthropology

    Into the Pit of Human History

    1. Ann Gibbons

    In 1976, Spanish paleontologist Trinidad Torres was searching for bear fossils in well-known beds at Atapuerca, near the city of Burgos in northern Spain, when he found a human bone instead. His find opened what has turned out to be the world's largest known repository of fossil humans from the Middle Pleistocene, the period from 780,000 to 127,000 years ago. The locality's importance for human prehistory became clear in the early 1990s, after additional excavation at one particular site, a 14-meter shaft inside a cave known as Sima de los Huesos (Cave of Bones).

    Inside this pit, researchers have found at least 32 individuals who lived 300,000 years ago. In a special 300-page issue of the Journal of Human Evolution, to be published in August, the Spanish team suggests that these skeletons are from a species called Homo heidelbergensis—a group that many paleoanthropologists regard as ancestral to both Neandertals and modern humans. Much older fossils from another part of Atapuerca, however, may have bumped H. heidelbergensis off the line to modern humans, says team co-leader Juan Luis Arsuaga of the Universidad Complutense in Madrid (see main text).

    The fossils in the pit present a mystery. They come mainly from male and female teen-agers and young adults, who were generally in good health when they died, although one remarkably complete skull is scarred with osteitis, a bone inflammatory disease, and another is from a person who apparently was deaf. What—or who—killed them? Many bones show evidence of chewing by carnivores, but animals would not selectively kill young adults and no other prey are in the pit. And the animal bones show that it's not a burial site, although Arsuaga speculates that other humans might have dumped the remains there. Researchers are now doing detailed analyses of the ages of the individuals at death, to see if they can tell whether all died in a single catastrophe.

  10. Planetary Science

    Spots Confirmed, Tiny Comets Spurned

    1. Richard A. Kerr

    Lou Frank isn't the only one seeing spots anymore. More than 10 years ago, the University of Iowa space physicist proposed that house-sized comets are pummeling Earth 20 times a minute. Frank estimated that since the planet formed, these tiny comets have dumped enough water into the upper atmosphere to fill the world's oceans.

    It was a provocative hypothesis from a highly regarded researcher, but the whole idea drew scorn from the rest of the earth and planetary science community. Researchers couldn't imagine where all that water could be hiding in the inner solar system, which in all other measurements seems pervasively dry. And only Frank could see the traces of these tiny objects: The dark spots formed, he said, as gassy clouds of water dispersed in Earth's high atmosphere (Science, 10 June 1988, p. 1403). Other researchers looking at the same satellite images, however, saw only meaningless instrument noise.

    Now, in a stunning turnabout, Frank has used a satellite camera with sharper resolution to produce more detailed images that confirm the existence of these dark spots to the satisfaction of other scientists. The new data even seem to show an influx of water. “Now, you're faced with overwhelming evidence,” says Frank. “We've verified [the spots] from five different viewpoints.”

    Even Frank's more vocal critics agree. “He's clearly seeing something, but I don't know what,” says space physicist Robert Meier of the Naval Research Laboratory (NRL) in Washington, D.C. “We're all going back to the drawing boards to figure out what's going on here.”

    Although Frank's observations are being vindicated, he has a long way to go toward persuading the community that these black dots are actually the remains of midget comets. “There are two quite separate questions,” says atmospheric physicist Donald Hunten of the University of Arizona, another early critic. “One is, are the spots real? Okay, they're real. The next question is whether Lou's explanation is valid. No, it certainly isn't valid. It is very easy to put forward five objections to the small-comet explanation, any one of which rules it out.”

    Frank's new data, reported this week at the spring meeting of the American Geophysical Union, come from the Polar satellite, launched in February 1996 to study magnetic fields and charged particles over the poles. This spacecraft carries ultraviolet cameras far better than the one aboard the Dynamics Explorer satellite, which took Frank's first images in the 1980s. Images from the older ultraviolet camera showed dark spots—Frank calls them “atmospheric holes”—no larger than a single picture element or pixel. Everyone except Frank and his University of Iowa colleagues John Sigwarth and John Craven, who is now at the University of Alaska, thought the single-pixel spots were instrumental noise, like snow on an ultraviolet television. Frank and his colleagues, though, interpreted them as places where 80 tons of water had absorbed enough ultraviolet to darken the UV glow of the upper atmosphere.

    Other researchers are now accepting the reality of the spots, if not Frank's explanation, because the ultraviolet images taken by the Polar satellite have much smaller pixels, and in these views the 50-kilometer-wide spots are 10 to 20 pixels across. The odds that so many randomly darkened pixels could come together to form a spot, all researchers agree, are nil. What's more, the spots show up under different imaging conditions, bolstering the case for their existence. In some cases, Frank and Sigwarth found, the Polar ultraviolet camera caught the same spot in consecutive exposures as the spot moved across the field of view. In other images, spots appeared doubled—as they should have—because Polar wobbled enough that the same object was recorded twice in one exposure. A random dark pixel would appear only once. And one particular spot, says Frank, was caught by both his ultraviolet camera and another on Polar of a different design.
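    The arithmetic behind that "nil" is easy to sketch. Assuming, purely for illustration, that each pixel darkens independently with some small noise probability, the chance of a whole cluster darkening at once shrinks geometrically:

```python
# Back-of-envelope version of that argument: if each pixel darkens
# independently with some small noise probability p, the chance that a
# given cluster of k pixels darkens all at once is p**k. The 1% rate
# below is an invented figure, not a measured one.
def cluster_noise_probability(p: float, k: int) -> float:
    return p ** k

print(cluster_noise_probability(0.01, 1))   # a lone dark pixel: plausible noise
print(cluster_noise_probability(0.01, 15))  # a 15-pixel spot: ~1e-30, effectively nil
```

Whatever the true per-pixel noise rate, raising it to the 15th or 20th power leaves odds that no one would bet on.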

    Frank also presented observations of a new phenomenon high above the atmosphere that is presumably linked to atmospheric holes: bright trails of water debris. “I just happened to be looking through the images,” says Frank, “and all of a sudden saw these bright oxygen trails. We were shocked.” About 10 times a day, Frank concludes, an incoming small comet at altitudes between 5000 and 50,000 kilometers leaves enough water in its wake that sunlight dislodges a trail of oxygen atoms from the water. Frank's final line of evidence is visible-light images showing hydroxyl, another fragment of water. These trails appear at altitudes of 2000 to 3000 kilometers, just above where small comets are supposed to disrupt to form atmospheric holes, and the trails seem to be about as abundant as atmospheric holes, says Frank. “That's totally independent verification of the ultraviolet measurements,” he says.

    “It's very impressive observational work,” acknowledges atmospheric physicist Thomas Donahue of the University of Michigan, “that I don't think leaves us much room for doubt. There are little somethings releasing a lot of oxygen, and they show the signature of hydroxyl in emission. It's hard to imagine what other than water” they would be. But Donahue has by no means come around to the idea that these clouds of water were left by Frank's small comets. “I still have all the problems I ever had with the amount of water involved,” because no one has seen it elsewhere. He ticks off the problem areas: Venus is dry, Mars is dry, Earth's upper atmosphere is dry, and the space between the inner planets is “dry” in that it has no excess of the hydrogen that small comets would leave.

    Indeed, if these midget comets exist, they are surprisingly well cloaked. If a grain of sand enters the uppermost atmosphere, for example, it burns for all to see as a shooting star. “The idea that a 10-meter meteoroid could enter Earth's thermosphere at night without causing a big flash is very difficult to accept,” says Arizona's Hunten. Plunging in at 65,000 kilometers per hour, “it would light up the whole sky.” Similarly, seismometers left on the moon by the Apollo astronauts have detected no trace of the 1500 small comets that Frank predicted should hit its surface every day.
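    A rough energy budget shows why critics expect a flash. The article quotes only the size and speed; the density below is an assumed figure for a fluffy, snowlike body (solid ice or rock would be roughly 10 to 30 times denser):

```python
import math

# Kinetic energy of a 10-meter object entering the atmosphere at the
# quoted 65,000 km/h, under an assumed low density for a fluffy body.
DENSITY = 100.0        # kg/m^3 -- assumption, not from the article
DIAMETER = 10.0        # m, the size quoted for a small comet
SPEED = 65_000 / 3.6   # km/h converted to m/s (~18 km/s)

volume = (4.0 / 3.0) * math.pi * (DIAMETER / 2.0) ** 3  # sphere, m^3
mass = DENSITY * volume                                  # kg
kinetic_energy = 0.5 * mass * SPEED ** 2                 # joules

TNT_TON = 4.184e9  # joules per ton of TNT
print(f"mass ~ {mass:,.0f} kg, energy ~ {kinetic_energy / TNT_TON:,.0f} tons of TNT")
```

Even under the low-density assumption, the energy release is on the order of a couple of kilotons of TNT per object, which is why a dark, flashless entry is so hard for skeptics to accept.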

    Space physicist Alexander Dessler of the University of Arizona sees these and other problems as overwhelming. “The small-comet hypothesis fails to agree with physical reality by factors that range from a thousand to a billion,” he says. Dessler was the Geophysical Research Letters editor who, against reviewers' advice, boldly published Frank's first papers, only to become one of Frank's most persistent critics later (Science, 31 July 1992, p. 622).

    In response to such criticisms, Frank has suggested over the course of the debate that small comets have various properties that would minimize some of these anomalies. For example, comets with extraordinarily pure interiors would create less flash on atmospheric entry; a fluffy, snowdrift structure would enhance their disruption at high altitudes and help create the atmospheric holes. “I think there'll be lots of objections,” says Frank, “but they're all based on a knowledge of rock [rather than low-density] impacts or the desire to not have our planet be exposed to a continual cosmic rain.” Frank's colleagues are still not persuaded. “Lou has of course proposed rebuttals to all these criticisms,” notes Hunten, “but I don't believe they're valid.”

    If the atmospheric holes aren't the debris of small comets, then what are they? The one-time critics don't know and aren't even ready to speculate. But Frank is now forming collaborations with Donahue, NRL's Meier, and others to, as Donahue puts it, “understand these things in a way that meets all the constraints,” such as a dry inner solar system. Researchers also will be looking at other means of detecting and quantifying the high-altitude water and its source, including high-tech telescopes that ought to be able to pick up even dark bodies 10 meters in size. Most encouraging, says Donahue, is that Frank and the rest of the community are no longer at odds. “Last time, Lou was taking on the world,” he says. “This time, he seems to be asking the world for help.”

  11. Astronomy

    Gap in Starbirth Picture Filled

    1. Gretchen Vogel

    Like historians trying to piece together events from fragmented records, astronomers attempting to reconstruct the story of the stars and galaxies in the universe must rely on observations that only reveal bits and pieces at a time. Take their efforts to trace the history of star formation. Because of a quirk in the way astronomers measure galaxies' distances to learn where each one fits in cosmic history, their picture of the starbirth rate over time has had a crucial gap: the middle section, when the universe was turning gas and dust into stars at top speed.

    Stellar birthrate.

    The peak—at a redshift of 1.25—falls two-thirds of the way back to the big bang. (Higher redshifts correspond to earlier times in cosmic history.)

    SOURCE: CONNOLLY ET AL.

    Now, a team of astrophysicists has made a first stab at directly charting the stellar baby boom. New observations, combined with a new trick for estimating a galaxy's distance from its colors, have allowed Andrew Connolly and Alexander Szalay of Johns Hopkins University, Mark Dickinson of the Space Telescope Science Institute (STScI) in Baltimore, and their colleagues to calculate that starbirth peaked at about 12 times the current rate when the universe was about a third of its current age. Because this peak is both higher and later than many astronomers had suspected, it's sending the theoreticians back to revise their models of galaxy formation.
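    The link between the quoted redshift and "a third of its current age" can be checked under a simple Einstein-de Sitter cosmology (an assumed model; the article does not specify one), in which the age of the universe scales as (1 + z) to the -3/2 power:

```python
# Consistency check under an assumed Einstein-de Sitter cosmology.
def fractional_age(z: float) -> float:
    """Age of the universe at redshift z as a fraction of its present age."""
    return (1.0 + z) ** -1.5

# The peak redshift quoted in the figure caption:
print(f"{fractional_age(1.25):.2f}")  # 0.30
```

A redshift of 1.25 maps to about 30% of the present age, i.e. roughly two-thirds of the way back to the big bang, matching both the caption and the text.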

    Still, astronomers are pleased to see this filling in of history. The result, which the team presented at a symposium last month at STScI, “connects the other two sets of data in a nice, smooth way,” says Simon White of the Max Planck Institute for Astrophysics in Garching, Germany. “It's nice to actually see the peak now.”

    Previous work had traced the two slopes of the peak. By measuring ultraviolet (UV) light—the hallmark of newborn stars—from galaxies in a census they compiled, astrophysicist Simon Lilly of the University of Toronto and his colleagues had estimated starbirth from the present back more than halfway to the big bang. The data suggested that the universe is winding down—that star formation has been decreasing for at least the latter half of the universe's lifetime. But farther back, galaxies become very dim, making it difficult to detect the spectral signature—the so-called redshift—that astronomers commonly use to measure distance.

    Another stratagem allowed Piero Madau of STScI and his colleagues to identify some galaxies at much greater distances. The light from those galaxies must travel through so much hydrogen gas on its way to Earth that the ultraviolet end of its spectrum is essentially erased. The expansion of the universe shifts this UV decrease—called the Lyman break—into the blue part of the spectrum. That made the break easy to identify in the galaxies of the Hubble Deep Field, an image from the Hubble Space Telescope that includes some of the farthest reaches of the universe. This analysis added two more points to the graph, showing a steady increase in star formation from the time when the universe was only 10% of its current age until it was almost a quarter of the way through its history.

    Combined with Lilly's data, that increase suggested a peak somewhere in the middle, when the universe was one-quarter to half its present age. But galaxies in that middle range are too close for the break to be displaced into visible wavelengths.
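    A quick calculation shows where the blind spot comes from. The Lyman break sits near 91.2 nanometers in a galaxy's rest frame and is stretched by a factor of (1 + z) on its way to Earth; the redshift values below are illustrative:

```python
# Observed wavelength of the redshifted Lyman break.
LYMAN_LIMIT_NM = 91.2  # rest-frame wavelength of the break, in nanometers

def observed_break_nm(z: float) -> float:
    return LYMAN_LIMIT_NM * (1.0 + z)

# Around z ~ 3, typical of the faraway Hubble Deep Field galaxies,
# the break reaches the blue end of the ground-based optical window:
print(observed_break_nm(3.0))   # 364.8 nm
# Near the newly charted peak (z ~ 1.25) it is still deep in the
# ultraviolet, out of reach of optical telescopes:
print(observed_break_nm(1.25))  # ~205 nm
```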

    To fill in the gap, Dickinson and his colleagues took another look at the Deep Field. Examining galaxies in the Deep Field with the 4-meter telescope at Kitt Peak National Observatory in Arizona, they captured the infrared light that the Hubble's cameras had missed. The extra data helped Connolly and Szalay derive a mathematical formula for how a galaxy's colors should shift as it gets farther away—a formula they used to identify the galaxies in the middle range. They then determined the rate of starbirth in those galaxies, picking up the expected peak. A good indication that the new data are correct, Szalay says, is that their lowest point is a “spot-on” match with Lilly's highest.
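    The color-based distance trick can be illustrated with a toy calculation (the calibration numbers are invented for this sketch, and simple interpolation stands in for the team's actual formula, which is not reproduced here): galaxies with known redshifts pin down how color varies with distance, and a faint galaxy's redshift is then read off from its color alone.

```python
# Hypothetical (color, redshift) pairs for galaxies whose redshifts
# are already known from spectroscopy.
calibration = [
    (0.3, 0.2), (0.7, 0.5), (1.2, 0.8), (1.8, 1.1), (2.5, 1.4),
]

def photometric_redshift(color: float) -> float:
    """Estimate redshift by linear interpolation in the calibration set."""
    for (c0, z0), (c1, z1) in zip(calibration, calibration[1:]):
        if c0 <= color <= c1:
            return z0 + (z1 - z0) * (color - c0) / (c1 - c0)
    raise ValueError("color outside calibrated range")

# A new galaxy too faint for spectroscopy, but with a measured color:
print(round(photometric_redshift(1.5), 2))  # 0.95
```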

    In fact, the newly charted peak also matches some earlier predictions, based on theoretical models and on observations of gas and heavy elements in galaxies. But birthrates give only limited information; astronomers would now like to know where and how the stars were born. The rate of starbirth is “a step along the road toward understanding star formation and galaxy formation,” says astrophysicist Michael Fall of STScI. But it cannot tell astronomers what kind of galaxies spawned these stars. “This is a valuable average over all the details,” Dickinson says. “The challenge now is breaking it down again.”

  12. Ecology

    New Model Charts Swings in Crab Populations

    1. Kathryn S. Brown
    1. Kathryn S. Brown is a science writer in Columbia, Missouri.

    It's not easy being a crab. Hatched in the dead of winter, crab larvae mature at sea for 3 months before swimming to shore to settle. And once there, they not only have to compete for food, they run the risk of becoming food for other creatures—fellow crabs included.

    But if you think living a crab's life sounds hard, ecologists joke, try modeling it. Crab populations are notoriously erratic, skyrocketing one year and tumbling the next. Catch figures in Eureka and Crescent City, California, for the Dungeness crab, for instance, have been known to plunge from 7 million to 130,000 kilograms in just 4 years. For decades, ecologists have wondered what forces drive these swings: Is it biology—behaviors such as competition and cannibalism, which depend on the density of crabs in a given locale? Or is it environment—random changes in water temperature, ocean currents, and other aspects of the crabs' habitat?

    Now, on page 1431 of this issue, Kevin Higgins of the University of Helsinki, Alan Hastings of the University of California, Davis, and their colleagues report that the answer may well be both. With a new computer model that incorporates the effects of both biological and environmental forces on the Dungeness crab, the researchers find that biological feedbacks can amplify even small changes in the crabs' physical environment, resulting in huge swings in the population.

    Scientists say that this ecological model is among the first in the field to evince the chaotic behavior often seen in complex physical systems, such as global climate. In these systems, the impact of small, random events—called noise—can be greatly amplified, leading to huge, systemwide events. “This [paper] is the best example in a natural system that this is what's going on,” says Stephen Ellner, a mathematician at North Carolina State University in Raleigh. He and other researchers add that the model will help scientists understand the population dynamics of a host of animals, from pest insects such as gypsy moths to endangered butterfly species. “This work is like the tip of the iceberg; you can expect a lot to follow it,” says Ellner.

    Biologists have long marveled at the volatility of catch figures for the Dungeness crab, the most heavily harvested crustacean on the West Coast of the United States. Says David Hankin, a biologist at Humboldt State University in Arcata, California, “When I first came here in 1976, I thought I'd arrived in paradise. Fishermen were literally giving away crabs. Then just a half a dozen years later, they couldn't catch a thing.”

    Intrigued, he and others began studying feedback mechanisms that might affect crab populations. For example, using mathematical models, they explored the link between the number of females and population size. They found that as the number of females rises, so does the number of crab eggs, which initially causes the juvenile crab population to jump. But fights over limited food and living space—as well as cannibalism—can quickly kill off crabs, setting back the population. Because such self-regulating mechanisms are nonlinear—that is, their response to perturbations in the system isn't proportional to the size of the perturbations—these ecologists reasoned that the mechanisms alone might account for the wild fluctuations in crab numbers.

    While Hankin and others were exploring biological feedback mechanisms, other ecologists were trying to understand the effects of environmental changes, such as an increase in ocean temperature, on populations. These scientists reasoned that physical perturbations—although often tiny—were more likely to account for the observed population swings because they affected a big proportion of the population at once.

    In the last few years, the theoretical divide between the two camps has narrowed, with most ecologists conceding that both biology and environment are probably at work. And now the new model is providing some supporting evidence. To build their model, the researchers first collected basic biological data on the crabs, including larval, juvenile, and adult survival rates; the age of female crabs when they first lay eggs; and how often juvenile and adult crabs eat smaller crabs. These interactions became the “deterministic skeleton” of the model.

    To get at the impacts of environmental changes, the researchers worked indirectly. They entered crab population numbers for a single year into their skeletal model, let it run for several “years,” and compared the model's output to actual crab-catch numbers. The pattern of population changes in the model looked nothing like the real-world fluctuations. In fact, in the absence of environmental perturbations, crab populations in the model strongly self-regulated. Environmental variance (noise) that randomly boosted or depressed the ranks of crabs had to be playing a role.

    When they fed various amounts of noise into the model, the team members found that the biological feedbacks no longer had a stabilizing effect. Instead, they appeared to act like “deterministic noise amplifiers,” inflating small environmental changes into huge population swings. One such change, for instance, might be a delay in a nearshore current's annual shift from its winter to summer pattern, the researchers say. While that wouldn't affect adults already settled near shore, it could wipe out a whole class of youngsters trying to get to shore, decimating the population. “It appears that even very small [environmental] shocks can excite huge population fluctuations,” Higgins says.
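    The amplification effect can be illustrated with a toy model rather than the authors' actual one: below, a simple density-dependent Ricker map plays the role of the deterministic skeleton, and small Gaussian shocks stand in for environmental noise. All parameter values are invented for the sketch.

```python
import math
import random

def simulate(noise_sd: float, years: int = 600, seed: int = 7) -> list:
    """Ricker skeleton n -> n * exp(r * (1 - n)) plus random shocks."""
    rng = random.Random(seed)
    r = 1.9  # feedback strength, chosen near the edge of stability
    n = 0.5  # population, in units of carrying capacity
    series = []
    for _ in range(years):
        # Density dependence (crowding, cannibalism) plus an environmental shock.
        n = n * math.exp(r * (1.0 - n) + rng.gauss(0.0, noise_sd))
        series.append(n)
    return series[200:]  # discard the initial transient

def spread(series: list) -> float:
    """Standard deviation, a crude measure of population swings."""
    mean = sum(series) / len(series)
    return (sum((x - mean) ** 2 for x in series) / len(series)) ** 0.5

quiet = spread(simulate(noise_sd=0.0))   # feedbacks alone: settles down
noisy = spread(simulate(noise_sd=0.05))  # tiny shocks: sustained swings
print(f"no noise: {quiet:.2e}  with 5% noise: {noisy:.2f}")
```

With no noise, the self-regulating feedbacks damp the population to a steady value; with shocks of only a few percent, the same near-unstable feedbacks sustain and inflate them into far larger swings.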

    Researchers caution that although the model offers intriguing clues about the interactions between biological feedback mechanisms and the environment, many questions remain. For example, the model doesn't incorporate the effects of specific environmental variables, such as temperature shifts. Says Peter Turchin, an ecologist at the University of Connecticut, Storrs, “They've developed a sophisticated algorithm for estimating populations, and it's a valuable step forward. Now we need to go beyond this very limited model to more detailed models of [environmental] noise.”

    Indeed, ecologists are working on models of voles and flour beetles (Science, 17 January, p. 389), among other species. And crab model co-author Alan Hastings says his team may eventually try to tease apart the precise effects on crab populations of different environmental forces. “Our ultimate goal is to really understand what regulates these populations,” Hastings says. “We can't do that yet.”

  13. Astronomy

    Worlds Around Other Stars Shake Planet Birth Theory

    1. James Glanz

    The life history of stars, from their birth in collapsing clouds of gas to their old age and death as supernovae or slowly cooling white dwarfs, is the topic of nine Articles in this special issue of Science (see p. 1350). This News story looks at a subplot in stellar histories: the formation of planets.

    What if some Charles Darwin tried to build a theory of evolution and the only creatures he had ever seen were bears? “You'd naturally figure there was a good reason why everything had to be furry and have big teeth,” says Scott Tremaine of the Canadian Institute for Theoretical Astrophysics (CITA), at the University of Toronto. Theorists trying to understand the birth of planets in the maelstrom of dust, gas, rock, and ice spinning around young stars have been in a position a bit like that of the fictional Darwin. Their imaginations suffered, says Tremaine, because they had just one planetary system to study: our own. Then, starting just 20 months ago, observers began opening a window onto the planetary fauna around other sunlike stars. And theory, suddenly confronting other types of worlds unknown in our solar system, has been in turmoil. It's as if that hypothetical Darwin had suddenly learned of birds, tortoises, and insects, and his old world view became untenable.

    A wide swathe.

    In computer simulation, the gravity of a giant planet (white) tears a gap (blue) in the disk of material around a star.

    LIN ET AL.

    The first detection of a planet around a sunlike star was already enough to shake up theorists: Michel Mayor and Didier Queloz of the Geneva Observatory in Switzerland had found a Jupiter-sized object in an orbit less than one-sixth the radius of Mercury's (Science, 20 October 1995, p. 375). That planet, around the star 51 Pegasi, turned out to be the first of a series of “hot Jupiters”—giant planets far closer to their parent star than standard theory predicted they should be. The nine new planets found so far also include some objects so massive, in orbits so eccentric, that theorists are hard pressed to picture how they could form at all. To muddy the waters even further, still other discoveries “almost smell like the planets in our own solar system,” in the words of Geoff Marcy, a prolific planet searcher at San Francisco State University.

    “The tremendous advantage of the new observations is that they're giving us some insight into the variety of planetary systems that are possible,” says Tremaine. That insight is prompting what Frederic Rasio of the Massachusetts Institute of Technology (MIT) calls “quick-response theory.” Some theorists are coming up with ways for giant planets to form at a more seemly distance from the star, then migrate inward; others are exploring how interactions between several giant planets—or between a giant planet and the two stars in a binary system—could stretch planetary orbits into eccentric paths. Still others are proposing formation mechanisms that would flout all the standard assumptions about planet size, proximity to the parent star, and orbital eccentricity. “It's been a revolution,” says Stephen Lubow of the Space Telescope Science Institute in Baltimore.

    Like naturalists catching their first glimpse of a new species, astronomers can't be sure all these objects really are what they seem. The techniques for detecting planets around other stars are indirect, and some astronomers contend that at least one of the planets may not exist at all (see sidebar). Other putative planets—especially the most massive objects in eccentric orbits—could turn out to be the dim “failed stars” called brown dwarfs. Observers also worry that they are getting a skewed sample of planets, because their detection techniques are biased toward massive objects orbiting close to their parent stars. But most people in the field have concluded that too many apparent planets have been detected by too many different groups for them all to vanish. “It seems highly unlikely that the whole class would turn out to be not a planet,” says Fred Adams of the University of Michigan.

    SOURCE: JOHN WHATMOUGH

    Just about any one of these planets would be enough to challenge the standard scenario of planet formation. In that picture, a vast molecular cloud, or nebula, collapses under its own gravity to form a disk of gas and dust that whirls around a forming star at its center. After most of the cloud falls onto the star, what is left gradually collides and coagulates into so-called planetesimals ranging up to 10 kilometers in size. The planetesimals attract one another through gravity to set off a hierarchy of mergers that eventually produces the inner, rocky planets and the ice-and-rock cores of what will become the gas giants.

    View this table:

    J. SCHNEIDER/MARCY AND BUTLER

    Because giant planets require such a large supply of material, they should form only in a region several times farther from their parent star than the Earth-sun distance—called an astronomical unit, or AU. Simple geometry implies that the outer expanses of the disk contain more of the raw materials needed for planet making than the inner regions do. And only there is the disk cool enough for water ice to form out of hydrogen and oxygen in the disk, roughly tripling the amount of solid material available for planet making.

    Even so, many researchers believed there was a limit to the growth of giant planets: When a rock-and-ice core reaches about 10 Earth masses, it begins drawing in huge amounts of gaseous hydrogen and helium and expands to a maximum of roughly one Jupiter mass. At that point, the gravity of the massive planet might tear a gap in the disk that is its food supply, putting a stop to its own growth.

    All was not paradise in this picture. “Even its proponents recognize it has problems,” says Alan Boss of the Carnegie Institution of Washington. For one thing, it was touch-and-go whether the giant planets' cores could grow fast enough to accrete gas before the disk dissipated. For another, some modelers had suggested that the planets might migrate inward or outward after their formation, confusing this tidy tableau. “But since there was no evidence for this process having been important in our solar system,” says Boss, “there was no motivation to get wild eyed and say it might have happened elsewhere.”

    Roving giants

    When the hot Jupiters came rolling in, astronomers got wild eyed. “Nobody in his right mind would have suggested that you would find a Jupiter-mass companion” so close to a star, says Robert Noyes of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Massachusetts. His team came up with the latest detection, in April—a Jupiter-mass object orbiting the star ρ Coronae Borealis. At 0.23 AU, this object is farther from its parent star than 51 Peg and its epigones, but still much closer than permitted in the classical picture. Even the massive object orbiting at a temperate 2.1 AU from the star 47 Ursae Majoris—a discovery made by Marcy and his San Francisco colleague Paul Butler (Science, 26 January 1996, p. 449)—seems uncomfortably close for a giant planet.

    So theorists took a deep breath and began asking whether many of the new planets could have formed according to the standard scenario, then migrated many AUs inward. The underlying ideas were developed in the 1970s by the California Institute of Technology's Peter Goldreich and Tremaine. They wanted to understand how, say, the moons of Saturn could tug on its disklike rings to carve out their prominent gaps and sharp edges. Goldreich and Tremaine realized that in the course of this interplay, the rings would exert a drag on the satellites that would move their orbits in or out. This same process could operate on a much larger scale, they proposed—in protoplanetary disks. “We said you could expect planets to have moved a long way through these gravitational torques,” says Goldreich.

    Researchers such as William Ward of NASA's Jet Propulsion Laboratory (JPL), in Pasadena, California, later showed that these torques would usually act to brake a planet and send it drifting inward toward its parent star. And as early as 1993, Douglas Lin of the University of California, Santa Cruz (UCSC), and colleagues suggested that our own solar system could have experienced this kind of realignment. The planetary disk could have given birth to many more planets than the nine that remain, Lin said, but most of them migrated, lemminglike, into the sun.

    “When 51 Peg came along,” recalls CfA's A. G. W. Cameron, “I said, ‘Okay, Doug Lin was right.’” What remained was to find some means of stopping the migration of a giant planet on the brink of oblivion, leaving it trapped in a close orbit around the parent star. Last year, Lin, UCSC's Peter Bodenheimer, and Derek Richardson of the University of Washington, Seattle, came up with two different mechanisms for putting on the brakes. One relies on a kind of gravitational dance between a massive planet and a young, rapidly spinning star. Once the planet came very close to the star, it would raise tides on the stellar surface. Racing slightly ahead of the planet because of the star's spin, like the rabbit in a greyhound race, those tides would exert a gravitational pull on the planet, keeping the drag of the disk from slowing it any further.

    Feeding frenzy.

    Even after a newborn giant planet tears a gap in a protoplanetary disk, material might stream in and feed continued growth.

    W. KLEY AND P. ARTYMOWICZ

    Another possibility, Lin and his colleagues proposed, is that the star's own magnetic fields might sweep the region near the star clear of material. Once the migrating planet broke into the clear, it would no longer feel the drag of the disk and would stabilize. “Do you remember the old LPs?” asks Lin. “When the needle gets [close to] the center it can't go any farther,” because there are no more grooves.

    Boss calls migration and stoppage “by far the leading idea” for explaining the 51 Peg planets. Others aren't so sure, pointing out that Lin's migration would accelerate as the planet approached the star, making it hard to stop. “If [Lin] had a good mechanism, he wouldn't have had two in his paper,” quips Jack Lissauer of the NASA Ames Research Center in Moffett Field, California.

    Disk drag, though, may not be the only way to shift planets around. Renu Malhotra, a dynamicist at the Lunar and Planetary Institute in Houston, found another possibility when she considered gravitational interplay within the early solar system. She focused on a time when that system was already millions of years old, after the planets had formed and most of the disk's gas and dust had dissipated. Swarms of leftover planetesimals are thought to have remained, however. It's as if “you sweep the floor and leave a lot of dirt behind,” says Malhotra.

    The planetesimals that fell toward the sun after they interacted with the outer planet Neptune would have encountered Jupiter's potent gravity and been slung out of the solar system. Once these planetesimals with low angular momentum had been removed, Neptune would have been more likely to have later interactions with planetesimals carrying high angular momentum, some of which would have been transferred to the planet. Over time, the process would have shifted Neptune roughly 5 AU outward.

    Jupiter, meanwhile, would gradually have given up angular momentum to the planetesimals and drifted inward. The drift would have been only a fraction of an AU in our solar system, but Malhotra is just beginning to consider situations in which a giant's drift might be larger—in a planetary system richer in planetesimals, for example.

    Planetary perturbers

    Neither migration mechanism, however, can explain the orbital peculiarities of three other new objects—those around the stars 70 Virginis, 16 Cygni B, and HD 114762. Their paths are highly eccentric: The object around 70 Vir, for instance, ranges from 0.6 AU to 2.7 AU in the course of its orbit. Yet standard planet-formation theory holds that a planet should be born in a nearly circular orbit, because the eccentricities of the planetesimals that piled together to form it should average out. And migration, by itself, should not change the shape of a planet's orbit—just shrink or expand it. So some theorists have looked for ways to perturb a planet's orbit later in its existence.
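    For comparison with the nearly circular orbits of the solar system, the eccentricity implied by the quoted extremes of the 70 Vir object's orbit works out directly:

```python
# Eccentricity from an orbit's closest and farthest points:
# e = (farthest - closest) / (farthest + closest).
def eccentricity(closest_au: float, farthest_au: float) -> float:
    return (farthest_au - closest_au) / (farthest_au + closest_au)

print(round(eccentricity(0.6, 2.7), 2))  # 0.64 -- versus ~0.05 for Jupiter
```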

    Last year, for example, Rasio and Eric Ford, also of MIT, found that if two giant planets were circling the same star at sufficiently similar distances, the system could become unstable (Science, 8 November 1996, p. 954). One planet could be hurled outward onto a highly eccentric orbit, or even escape the system. As a bonus, this mechanism could in rare instances fling the other planet in toward the star to produce a hot Jupiter. The second planet's orbit would be eccentric at first, but tidal effects similar to those invoked by Lin for stopping migration might “recircularize” it, says Rasio. Stuart Weidenschilling of the Planetary Science Institute in Tucson, Arizona, adds that three planets can interact with even fewer dynamical inhibitions. “Putting in three planets gives you a lot more possible outcomes,” he says.

    In the case of the planet around 16 Cyg B, another perturber may be at work: the star's binary companion. This spring, three groups published calculations tracing how the steady gravitational pull from the companion, a star called 16 Cyg A, would affect the planet's orbit. The researchers, including Tremaine at CITA and many others, assumed a sharp tilt between the orbital planes of the planet and the binary system, and the absence of any other giant planet to disturb the balletic, three-way interaction. Under those conditions, they found, the planet's eccentricity slowly oscillates, spending roughly a third of its lifetime at high values—“a very plausible explanation” for the observations, says Pawel Artymowicz, a theorist at Stockholm Observatory in Sweden.

    The shape of their orbit isn't the only puzzle the other two eccentric planets present. They also have masses more than six times that of Jupiter, well beyond the mass limit set by standard planet-formation theory. One possibility, say astronomers, is that these eccentric heavyweights might not be planets at all. Instead, they might be brown dwarfs—balls of gas that formed when shards of the original nebula collapsed, rather than objects built up piece by piece, like true planets. In principle, brown dwarfs could form with eccentricities and masses much greater than any planet's, which would neatly solve the puzzle of the heaviest, most eccentric companions. Notes CfA's David Latham, “The simplest picture would be that planets have circular orbits and brown dwarfs have eccentric orbits.”

    A few skeptics go further and raise the possibility that none of the “planets” found so far really deserves the name. “I think there's a bandwagon effect to interpret these as planets,” says David Black, director of the Lunar and Planetary Institute in Houston. With perhaps one exception—the giant Jupiter circling 47 Ursae Majoris in a Mars-like orbit—“they may not be planets at all,” says Black. Although calculations suggest that a gas cloud of less than about 10 Jupiter masses would be hard pressed to collapse under its own gravity to form a brown dwarf, Black says the complicated dynamics of a binary system could well drive the number down, allowing many, if not all, of the new worlds to be failed stars.

    Limits to growth

    George Wetherill, of the Carnegie Institution of Washington, has a humorous response to Black's skepticism. He recalls a lunchtime debate during a recent conference, in which some astronomers mentioned that standard models have difficulty making a planet of even Jupiter's size before the planet-forming disk dissipates. And if Jupiter did not form by agglomeration in a disk, said the astronomers, then strictly speaking it should not be called a planet. Says Wetherill, “I can just see the headline: ‘Scientists Find That Jupiter Is Not a Planet.’”

    He thinks theorists will find ways to create the full range of otherworldly planets, no matter how massive or eccentric. Some of the latest developments seem to support this view. Computer models by Stockholm's Artymowicz and Lubow, of the Space Telescope Science Institute, have shown that the growth-limiting gap that opens in the disk may have “weak points,” allowing streams of gas to leak through and continue feeding a protoplanet. “It would allow a mechanism by which planets can grow larger” than theorists had thought possible, says Michigan's Adams. “To me, the idea has a lot of plain appeal; it makes sense.”

    The dynamics of the planetary disk could also allow some planets to be born in eccentric orbits, Artymowicz and Lubow have found. The team points out that a growing planet excites spiral waves in the disk that serves as its nursery—analogs to waves studied by Goldreich and Tremaine in Saturn's rings. Interactions with those waves can drive a planet's eccentricity either up or down, the team found. The waves affect planets differently depending on their mass, with planets smaller than 10 Jupiters losing eccentricity and heavier ones gaining it, roughly the pattern seen in extrasolar planets.

    Even making giant planets close to their parent stars—rather than forming them elsewhere and transporting them inward—may not be unthinkable. “It may be possible. That's all I can say,” notes Lissauer, who has done preliminary work on the possibility with Olenka Hubickyj, also at Ames, and UCSC's Bodenheimer. Going slightly further, Bodenheimer notes that JPL's Ward has proposed that material draining inward from the disk might supply enough mass to build a giant planet in a region that had been reserved for mere Mercurys.

    Just as biologists have realized that bears—or human beings, for that matter—are by no means a necessary end point of evolution, astronomers are realizing that our own solar system is not the inevitable result of planet formation. As their surprise fades, observers are left searching the tangled bank of the heavens for more clues to how it all came to be that way.


  14. Astronomy

    51 Peg and the Perils of Planet Searches

    1. James Glanz

    It isn't easy being the oldest. Like the first among human siblings, the first planetlike object found around a sunlike star, detected some 18 months ago at the star 51 Pegasi, has faced more than its share of scrutiny. No more than a slight wobble of 51 Peg had suggested the presence of the companion. It's the same kind of clue that has led observers to eight more putative planets, but it leaves plenty of room for doubt. Just 3 months ago, for example, one astronomer claimed in Nature that the planet searchers might have been fooled by a large-scale sloshing on the star's surface—an issue that is still unresolved.

    Now, Science has learned, astronomers at the California Institute of Technology (Caltech) and NASA's Jet Propulsion Laboratory (JPL), in Pasadena, have sown more doubt. Using a powerful telescopic array called an infrared interferometer, they may have “resolved” the 51 Peg system: observed a spatial structure inconsistent with a simple point of light. The star itself is almost certainly too small to appear as anything other than a point, and a planet should not be visible. So the preliminary results—which have been described only at conferences and on the World Wide Web—could suggest that the object orbiting 51 Peg is a dim companion star, not a planet.

    Still, 51 Peg does not show other hallmarks of a close binary, so astronomers are reacting cautiously. “It would be such a blockbuster result,” says David Latham of the Harvard-Smithsonian Center for Astrophysics. “It's not impossible, but it's not what I expected.”

    But even if 51 Peg's planet survives this challenge, it illustrates the uncertainties that beset the search for planets around sunlike stars. Observers must sift through hundreds of dark features called absorption lines in the stars' spectra. If the gravitational pull of an orbiting companion is making the star wobble, like a slightly unbalanced washing machine, the Doppler shift will cause the wavelengths of the lines to creep back and forth.
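    For readers who want to see the scale of that creep, here is a minimal sketch of the non-relativistic Doppler shift. The 50-meter-per-second reflex velocity is an illustrative value of roughly the magnitude such surveys measure, not a figure from the article:

```python
C = 2.998e8  # speed of light, m/s

def doppler_shift_nm(rest_wavelength_nm, radial_velocity_ms):
    """Non-relativistic Doppler shift of a spectral line, in nanometers."""
    return rest_wavelength_nm * radial_velocity_ms / C

# A 500-nm line on a star wobbling at ~50 m/s shifts by only ~1e-4 nm,
# far less than the width of the line itself.
print(f"line shift: {doppler_shift_nm(500.0, 50.0):.2e} nm")
```

    A shift that small cannot be read off a single line; surveys recover it statistically by averaging over hundreds of lines at once.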

    The wobble gives only a minimum mass for the companion—0.47 Jupiter masses in the case of 51 Peg, enough to produce the observed wobble if we are viewing the companion's orbit edge-on. But if we happen to be seeing the orbit nearly face-on, the object's mass would have to be much larger—perhaps as large as a star's—to produce the same wobble. That's the possibility that led Xiaopei Pan of Caltech and several collaborators to observe 51 Peg with their Palomar Testbed Interferometer (PTI).
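    The sin(i) ambiguity described above is easy to quantify: the wobble yields only the product m·sin(i), so the true mass is the minimum mass divided by the sine of the (unknown) orbital inclination. A minimal sketch, using the 0.47-Jupiter-mass minimum quoted above (the inclination values are illustrative):

```python
import math

M_MIN_JUP = 0.47  # Doppler minimum mass of 51 Peg's companion, Jupiter masses

def true_mass(m_min_jup, inclination_deg):
    """True companion mass given the minimum mass m*sin(i), Jupiter masses."""
    return m_min_jup / math.sin(math.radians(inclination_deg))

# Edge-on (i = 90 deg) gives the minimum; nearly face-on orbits imply
# far heavier companions, up into the brown-dwarf or stellar range.
for i in (90, 30, 5, 1):
    print(f"i = {i:2d} deg -> {true_mass(M_MIN_JUP, i):6.2f} Jupiter masses")
```

    This is why a face-on geometry, though statistically unlikely, cannot be ruled out from the wobble alone.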

    The device links telescopes separated by as much as 110 meters to resolve details much finer than any single telescope could see. The team first looked at a known binary system called ι Peg, and found that PTI could resolve the stars, whose spatial separation is only about twice that of 51 Peg and its companion, says Pan. They then shifted their focus to 51 Peg. According to the team's report on the Web, “Preliminary results from PTI indicate that 51 Peg has been resolved,” which might suggest that it too is a binary star.
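    The resolving power of such an array follows from the ratio of observing wavelength to baseline. A rough sketch, assuming a near-infrared wavelength of about 2.2 micrometers and the common λ/(2B) fringe criterion (both are assumptions for illustration, not figures from the article):

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # arcseconds per radian

def fringe_resolution_mas(wavelength_m, baseline_m):
    """Rough interferometric resolution lambda/(2B), in milliarcseconds."""
    return (wavelength_m / (2 * baseline_m)) * ARCSEC_PER_RAD * 1e3

# A 110-m baseline at ~2.2 micrometers resolves structure of a few
# milliarcseconds, far finer than any single telescope of the era.
print(f"{fringe_resolution_mas(2.2e-6, 110.0):.1f} mas")
```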

    Other astronomers note that stars closely orbited by a companion usually become “tidally locked” to it and begin rotating rapidly, in synchrony with the orbit. This high-speed pas de deux usually stimulates characteristic emissions such as high x-ray output, which aren't seen in 51 Peg. “The preponderance of evidence is that it's a planet,” says Steven Pravdo of JPL, who published the x-ray results with several colleagues last year. “There is probably a 10% or less chance that it's not a planet.” Other members of the PTI group also express caution.

    Only one set of planets seems to be free of such uncertainties, and they are worlds apart from 51 Peg and its ilk. Beginning in 1992, Aleksander Wolszczan of Pennsylvania State University and collaborators found three Earth-size objects tooling around a radio-emitting stellar cinder called a pulsar. By causing the pulsar to wobble, the planets create periodic changes in the otherwise clocklike regularity of the pulsar's radio bursts.

    The precision of the method is so exquisite, says Stuart Anderson, a radio astronomer at Caltech, that it can discern the gravitational “kick” the planets give each other as they pass in their orbits. “That's the real clincher,” he says. Observers looking for planets around more familiar stars are still waiting for a clincher of their own.
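    The scale of the pulsar-timing signal can be estimated from the light-travel (Roemer) delay of the pulsar's reflex motion about the system's center of mass. A back-of-the-envelope sketch with illustrative values for an Earth-mass planet in a roughly 0.36-AU orbit around a 1.4-solar-mass pulsar (the specific numbers are assumptions for illustration):

```python
AU_LIGHT_S = 499.0    # light travel time across 1 AU, in seconds
M_EARTH_SUN = 3.0e-6  # Earth's mass in solar masses

def timing_amplitude_s(planet_mass_msun, orbit_au, pulsar_mass_msun=1.4):
    """Rough amplitude of the pulse arrival-time wobble, in seconds."""
    return (planet_mass_msun / pulsar_mass_msun) * orbit_au * AU_LIGHT_S

# An Earth-mass planet produces a sub-millisecond wobble in pulse arrival
# times -- tiny, but well within reach of millisecond-pulsar timing.
print(f"{timing_amplitude_s(M_EARTH_SUN, 0.36) * 1e3:.2f} ms")
```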

  15. Optoelectronics

    Storing Light by Surfing on Silicon

    1. Alexander Hellemans
    1. Alexander Hellemans is a science writer based in Paris.

    Light is a great way to transmit information, but its speedy photons are difficult to slow down when signals must be delayed—for example, to be stored for brief times in optoelectronic circuits. Now a report in the 26 May Physical Review Letters describes a clever solution: translating photons into pairs of electric charges that slowly “surf” on a sound wave across a semiconductor chip until they recombine in a pulse of light.

    The traditional way to delay an optical signal is to send it racing through loops of optical fiber several kilometers long—a bulky and expensive solution. Physicists at the University of Munich and the Technical University of Munich suspected they could do better. The team members began with a 10-nanometer-thick slice of an indium-gallium-arsenide semiconductor, a material that can translate light into electric charge and vice versa. The team sent an optical signal into one end of the chip by pulsing a laser onto its surface. The laser's photons created excitons: wandering pairs of electrons and the positively charged “holes” from which the electrons have been dislodged. Normally these excitons would recombine, giving off light again, within a nanosecond (a billionth of a second). But a second property of the material allowed the team to delay their reunion.

    Indium-gallium-arsenide is piezoelectric, meaning that mechanical stress generates an electric field within the material. The reverse is also true: The material extends or shrinks when an electric field is applied. By connecting a radiofrequency generator to a series of strips along one edge of the semiconductor, the group set up compressional waves—sound waves—that swept across the chip. Along the way, the sound altered the electric field in the semiconductor, creating electric-field waves that separated the electrons and holes, preventing them from recombining. “We can extend the lifetime of the excitons several orders of magnitude,” says Achim Wixforth of the University of Munich.

    The excitons survive until the migrating electric fields drag them all the way across the chip. At the far end, the field wave breaks down when it reaches a nickel-chromium strip, and the excitons merge, emitting a flash of light. In the experiment, the team detected a light pulse from recombining excitons 650 nanoseconds after they were created by a laser pulse. That may not sound like much, but an equivalent fiber-optic delay would require about 3 kilometers of cable, says Wixforth.

    Wixforth notes that, instead of using a metallic strip to break down the electric-field waves, the team can control the release of the “stored” photons by sending a second sound wave into the chip in the opposite direction from the first one. The electrons and holes recombine at the point where the two waves meet. “We can thus switch off the storage of light signals at any time,” says Wixforth.

    The device isn't ready for commercial use just yet. At present, the chip must be cooled to within a few degrees of absolute zero, but team member Carsten Rocke of the University of Munich says less chilly chips are on their way. After that hurdle is cleared, the team predicts that the chips could become an integral part of optical systems. Other physicists agree: “Whatever you can do with a delay line, you can do with this, too,” says David Snoke of the University of Pittsburgh.
