# News this Week

Science  30 Jul 2004:
Vol. 305, Issue 5684, pp. 411
1. THEORETICAL PHYSICS

# Hawking Slays His Own Paradox, But Colleagues Are Wary

1. Charles Seife

DUBLIN, IRELAND—In a public appearance that drew worldwide media coverage, Stephen Hawking claimed last week that he had solved one of the most important problems in physics: whether black holes destroy the information they swallow. Speaking at a conference here* in a lecture hall packed with physicists and reporters, the University of Cambridge professor reversed his long-standing position and argued that information survives. As a result, Hawking conceded the most famous wager in physics and handed over an encyclopedia to the winner of the bet.

“It is great to solve a problem that has been troubling me for nearly 30 years,” Hawking said during his presentation. Other physicists, however, doubt that Hawking has solved the long-lived puzzle. “It doesn't seem to me to be convincing,” says John Friedman, a physicist at the University of Wisconsin, Milwaukee.

The question of what happens to information when it falls into a black hole goes to the heart of a central idea in modern physics. Just as scientists in the 19th century figured out that energy can be neither created nor destroyed, many 20th century physicists concluded that information is also conserved. If true, information conservation would be one of the most important principles in science—perhaps more profound even than conservation of mass and energy. Unfortunately, there was a big obstacle: black holes.

When an object falls into a black hole, its mass and energy leave an observable imprint by making the black hole more massive. According to general relativity, however, any information the object carries is irretrievably lost: An outside observer couldn't tell whether the black hole had swallowed a ton of lead, a ton of feathers, or a ton of Ford Pintos. If black holes can destroy information in this way, information conservation cannot be a universal law.

For decades, the debate raged over whether black holes were an incurable exception to the permanence of information. In the 1970s, Hawking and some of his colleagues, including Kip Thorne of the California Institute of Technology (Caltech) in Pasadena, argued that black holes trump information. Others, such as Caltech's John Preskill, argued that some undiscovered loophole would keep information safe until the black hole somehow disgorged it. In 1997, Hawking and Thorne made a wager with Preskill; the winner was to receive an encyclopedia of his choice, from which information can always be retrieved.

At the Dublin conference, Hawking conceded the bet. Using a mathematical technique known as the Euclidean path integral method, Hawking proved to his own satisfaction that information is not, in fact, destroyed when it falls into a black hole. “If you jump into a black hole, your mass-energy will be returned to our universe … in a mangled form which contains the information about what you were like, but in a state where it cannot be easily recognized,” said Hawking. That implies that black holes are not portals to other universes, a possibility Hawking himself had suggested. “I'm sorry to disappoint science-fiction fans,” he said.

In conceding the bet, Hawking presented Preskill with Total Baseball: The Ultimate Baseball Encyclopedia. Thorne, however, refused to admit defeat. “I have chosen not to concede because I want to see more detail,” he said, but added, “I think that Stephen is very likely right.”

Others are less certain. Friedman, for one, has doubts about Hawking's mathematical method. Quantum field theorists are happy to use the Euclidean path integral technique for problems involving particles and fields, but most gravitational theorists avoid it because it produces equations riddled with hard-to-reconcile infinities. They prefer a more straightforward “Lorentzian” approach to gravity. Nobody has proven that the two methods always give the same results. “I'm skeptical whether the Euclidean path integral method generally represents the evolution of spacetime that is really Lorentzian,” says Friedman. If not, then Hawking's conclusion may be an artifact of the mathematical method rather than a general result. Another reason for skepticism, Friedman says, is that Hawking's calculation takes a sum over all possible idealized black hole locations and all observers in the universe, but the results don't seem to apply to a specific black hole and a specific observer.

In part because of the Euclidean method, Hawking's work doesn't seem to yield any insight into how black holes preserve or release information—whether all the pent-up information bursts forth at once, or whether it trickles out as subtle correlations in radiation coming from the black hole. Even Preskill says he wishes that Hawking's argument made more physical sense and could be expressed in more conventional mathematical terms. “If one could extract from the calculation an understanding that could be reproduced in a purely Lorentzian calculation, that would help a lot,” he says.

Despite his doubts, Preskill has no qualms about accepting Total Baseball. “The terms were that the winner would receive the encyclopedia when the other party concedes,” he says. “I don't have to agree.”

• * 17th annual International Conference on General Relativity and Gravitation, 18–24 July.

2. U.S. SCIENCE BUDGET

# Caught in a Squeeze Between Tax Cuts and Military Spending

1. Jeffrey Mervis*
1. With reporting by David Malakoff and Andrew Lawler.

Banking on the benevolence of a lame-duck Congress is risky business. But for U.S. scientists, a possible postelection session may be the best bet to salvage research programs that are facing budget cuts.

Last week, Congress began a 6-week break having finished work on only one of the 13 spending bills for the 2005 fiscal year that begins on 1 October. Although the lone completed bill provides modest increases for defense research, the House has taken initial steps on bills covering a dozen other agencies that suggest most science programs are in for a very hard time this year. The National Science Foundation (NSF) and NASA are facing real cuts, and the National Institutes of Health may have to settle for a small increase that may not keep pace with inflation (Science, 16 July, p. 321). The dark budget picture may hold silver linings for the Department of Energy's (DOE's) science programs and the National Institute of Standards and Technology (NIST). But even those gains could be at risk once Congress returns in September for a last-gasp attempt to finish its fiscal business before the November elections.

The squeeze is a result of Republican-led efforts to reduce taxes and hold down domestic spending while fighting wars in Iraq and Afghanistan and defending against terrorism at home. That has left the 13 spending panels that divvy up the government's $2 trillion budget with less money than agencies requested. The latest bad news came on 22 July when a House panel voted to shrink the budgets of NSF and NASA by 2.1% and 1.5%, respectively, below this year's levels. The decline for NSF, which would be the first in nearly 2 decades, contrasts with a 3% increase requested by President George W. Bush. It also makes a mockery of a 15% annual rise called for by a 2001 law that, unfortunately for scientists, appropriators don't have to follow. “We're still hopeful that the numbers will improve after the Senate has acted,” says acting NSF Director Arden Bement. “We are dealing with some frustrated appropriators.” The frustration stems from the fact that NSF and NASA are part of a larger spending bill that also funds the Veterans Administration. Historically, veterans' needs take precedence, especially in an election year. This year, the House panel approved an extra $2.5 billion for veterans' health care, leaving little new money for other agencies. “We can't compete with the veterans,” says Sam Rankin, chair of the Coalition for National Science Funding and a lobbyist for the American Mathematical Society.

The House bill would trim NSF's bread-and-butter research programs by $109 million, to $4.1 billion. It would cut the $935 million education directorate by 10%, including no new funding for a program that links universities with local schools. It would also delay the start of the National Ecological Observatory Network while allowing design work on two prototype sites. For NASA, the $228 million cut reverses a Bush Administration proposal for a $1.1 billion increase for moon and Mars exploration. Several new space science missions took it on the chin while the committee piled on millions of dollars in earmarks. The panel rejected the entire $70 million requested to begin a robotic lunar exploration effort and nixed $12.4 million to start the scientific work on a Jupiter Icy Moons Orbiter and nearly all of the $17.6 million proposed for an Orbiting Carbon Observatory. At the same time, the panel added goodies such as $150,000 for the Coca-Cola Space Science Center in Columbus, Georgia, and $3 million for the National Center of Excellence in Bioinformatics in Buffalo, New York.

The committee also refused to fund a new Crew Exploration Vehicle that ultimately could send humans to the moon and Mars. But it fully funded the $4.3 billion request for the space shuttle. The panel said it backed the idea of Bush's exploration vision but noted that the committee “does not have sufficient resources.” The White House says it may veto the bill if the NASA numbers don't improve.

One agency that took a big hit last year may get a chance to climb partway out of its budget hole. On 8 July, the House approved an 11% increase for the intramural programs at NIST and told the agency to spend whatever it takes from its research account to outfit its new Advanced Materials Laboratory. Eighty-two NIST employees have accepted buyouts, and a better 2005 budget, says Bement, who also heads NIST, means that “we won't have to lay off any scientists.”

At DOE, science advocates are praising a 25 June House vote giving the agency's science office a 3% boost to $3.6 billion, rejecting a White House call for a modest cut. Supercomputing research was a big winner, getting a 16% jump to $234 million. DOE's heavily earmarked biological research account, however, would slump 11% to $572 million. Among the victims is a new $5 million molecular tag production facility. Lawmakers said they didn't like the department's plan to allow only DOE labs to bid for the project.

Defense researchers are pleased with an 8% increase, to $1.5 billion, for basic research included in a Department of Defense (DOD) spending bill to be signed soon by Bush. That reverses a proposed 5% cut. Applied research would jump 12% to nearly $5 billion.

Now, science advocates are waiting to see how the Senate deals with other science budgets. But many observers predict that the final numbers won't be settled until late this year, in a lame-duck session after the elections.

3. BIOMEDICINE

# An End to the Prion Debate? Don't Count on It

1. Jennifer Couzin

A bold set of prion experiments in mice may have proven that the misshapen proteins are, by themselves, infectious. If the work holds up, it will be a watershed in prion biology, validating the belief that these proteins alone are the culprits in “mad cow disease” and similar illnesses. As is typical for the controversy-laden field, however, many scientists express reservations about the study on page 673. It was led by Stanley Prusiner of the University of California, San Francisco, who won the Nobel Prize in 1997 for discovering prions.

“It's really a striking result that seems to fill in one more piece of the infectivity puzzle,” says Byron Caughey, a biochemist at the National Institutes of Health's Rocky Mountain Laboratories in Hamilton, Montana. “But,” he adds, “it's worth pointing out some significant caveats.”

For years, biologists have tried to prove that a protein called PrP can misfold and become an infectious prion by purifying protein clumps from diseased brains and injecting them into healthy animals. But it hasn't been clear that PrP alone was what was being injected; using synthetic misfolded PrP, meanwhile, hasn't reliably triggered disease.

In their tests, Prusiner and colleagues used transgenic mice making 16 times the normal amount of PrP. These mice express a truncated PrP that may more readily form prion clumps. This, the group reasoned, might sensitize the animals to introduced PrP. To obtain PrP free of brain tissue, Prusiner's team genetically altered Escherichia coli bacteria to produce PrP fragments, which the researchers then misfolded into amyloid fibrils, structures that have been implicated in various brain diseases. Prusiner's team injected those prion fibrils into the brains of the mice.

Almost a year later, with no animals sick, the researchers were ready to declare the study a failure. But then, 380 days after being inoculated, one of the mice showed symptoms of a prionlike disease.
Eventually, all seven inoculated mice showed neurological disease, the last one 660 days after injection. Prusiner's team also inoculated a batch of normal animals with brain tissue from one of the sick ones. These rodents took about 150 days to sicken. “It is a spectacular breakthrough,” says Neil Cashman, a neuroscientist at the University of Toronto. “This is the beginning of the end of all the objections about the prion hypothesis.”

Not so fast, say some experts. Do Prusiner's mice with excess PrP get sick on their own? wonders John Collinge, director of the Medical Research Council Prion Unit at University College London. His team had relied on rodents with 10 times the normal level of PrP but abandoned them after finding prion disease-like pathology in animals that hadn't been inoculated with anything. Prusiner's mice, says Collinge, may be “poised” to develop disease even without the inoculation; giving them a shot of synthetic, misfolded PrP may push them over the edge, but so might other stresses.

The long latency time between inoculation and disease also worries prion experts. Some wonder if the experiments were contaminated by other prion strains in the lab. Yale University neuropathologist Laura Manuelidis, who has long criticized the prion hypothesis, says the brain samples from some of Prusiner's mice resemble RML scrapie, a common strain. Prusiner counters that with contamination, the control animals inoculated with saline should have gotten sick as well.

Another explanation for the long latency is that infecting animals with synthetic PrP is inefficient. The first inoculations may have contained few prions, says Prusiner. This might also explain why no one has yet accomplished the gold-standard experiment: infecting normal mice, not transgenic ones, with pure prion proteins. Until then, one of biomedicine's longest-running controversies is likely to continue.

4. ARCHAEOLOGY

# Wisconsin Dig Seeks to Confirm Pre-Clovis Americans

1. Terrence Falk*
1. Terrence Falk writes on science, education, and public policy from Milwaukee.

MILWAUKEE, WISCONSIN—This week, archaeologists will begin to dig 48 kilometers south of here, at a site that even skeptics say may be the most convincing yet in demonstrating the early presence of humans in the Americas. Scientists will search a mucky lakeside just west of the city of Kenosha for additional remains of a woolly mammoth. Bones found previously bear marks from human butchering and have been dated to 13,500 radiocarbon years before present—a full 2000 years before big-game hunters known as the Clovis people were thought to have arrived on the continent.

Sites near Kenosha “may be the best pre-Clovis sites in North America,” says team leader Michael Waters of Texas A&M University in College Station. Even pre-Clovis skeptic Stuart Fiedel, an archaeologist with the Louis Berger Group in Washington, D.C., agrees that “the Kenosha sites are high up on my radar screen. On the face of it, they seem to be one of the best cases [of pre-Clovis evidence].”

Archaeologists long thought that America was first settled by the Clovis hunters, who crossed the Bering Strait and moved south through an ice-free corridor around 11,500 radiocarbon years ago. Then in recent years dozens of sites in both North and South America pointed to an even older human occupation. But each pre-Clovis site has been bitterly contested (Science, 2 March 2001, p. 1730), and a handful of influential archaeologists believes that definitive pre-Clovis evidence is lacking. “One of my problems with the [pre-Clovis] position is that the sites that it is founded on are still dubious,” says Fiedel.

Hence the excitement over the sites near Kenosha. In 1990, an amateur archaeologist found butcher marks on mammoth bones stored at a local historical museum; archaeologists later excavated at two sites, those of the Schaefer and Hebior mammoths.
These mammoth bones are so well preserved that collagen could be extracted from inside the bone for radiocarbon dating, yielding dates of about 12,500 radiocarbon years ago, 1000 years before the Clovis people. And a handful of crude stone tools—unlike the elegant spear points of the Clovis people—were recovered under the bone piles. All in all, the sites are unique, with “unequivocal stone tools [and] excellent dates,” says Waters.

Now his team is in pursuit of an even older Kenosha mammoth at Mud Lake, where a few bones with cut marks were unearthed during a ditch-digging project in the 1930s and later dated. Waters believes that the rest of the mammoth is there and plans to try to relocate it this summer while scouting for new sites for future excavations. The preliminary dig starts this week, but because heavy rains have slowed the work, full-scale excavation of Mud Lake isn't expected until next year.

Despite the potential of the Kenosha sites, they have attracted little attention. “I really don't understand why there has not been more investigation devoted to [them] to date,” says Fiedel. Starting this summer, Waters's crew hopes to change that.

5. ENVIRONMENT

# States Sue Over Global Warming

1. Erik Stokstad

In a legal gambit aimed against global warming, the attorneys general of eight states last week sued the five largest emitters of carbon dioxide in the United States for creating a public nuisance. The states are asking that the electric utility companies cut emissions by 3% each year for a decade. Legal experts predict the states' case will be an uphill battle.

Carbon dioxide litigation is heating up. In 2002, environmental groups sued the Overseas Private Investment Corp. and the Export-Import Bank of the United States for not conducting environmental reviews on the power plants they financed. And last year, Maine, Massachusetts, and Connecticut sued the Environmental Protection Agency for not regulating CO2 as a pollutant under the Clean Air Act.

Now, the states have taken the first legal action directly against CO2 emitters. The plaintiffs—California, Connecticut, Iowa, New Jersey, New York, Rhode Island, Vermont, and Wisconsin, along with the City of New York—claim that the CO2 that utility companies release contributes to global warming, which will harm state residents. The alleged ills include increased numbers of deaths from heat waves, more asthma from smog, beach erosion, contamination of groundwater from rising sea level, and more droughts and floods. “The harm to our states is increasing daily,” Eliot Spitzer, the attorney general of New York state, said at a press conference.

The defendants together spew about 650 million tons of CO2 a year. Their 174 fossil fuel-burning plants contribute roughly 10% of the anthropogenic CO2 in the United States. The suit maintains that annual cuts of 3% are feasible through making plants more efficient, promoting conservation, and using wind and solar power—without substantially raising electric bills. “All that is now lacking is action,” Spitzer said.

That claim irks American Electric Power of Columbus, Ohio, a defendant. Spokesperson Melissa McHenry says that the company had already committed to reducing its emissions by 10% by 2006. “Filing lawsuits is not constructive,” she says. “It's a global issue that can't be addressed by a small group of companies.”

It will also be a tough suit to win, says Richard Brooks of Vermont Law School in South Royalton, who studies the legal issues of air pollution. The fact that global warming is a planetwide phenomenon will make it difficult to establish how much these companies are contributing to the claimed harm. And under public-nuisance law, the plaintiffs must show that their citizens are suffering significantly more than the nation as a whole. “I would be totally amazed if the court gave this a serious response,” Brooks says. “This makes me imagine that this is more of a symbolic suit.”

6. KENNEWICK MAN

# Court Battle Ends, Bones Still Off-Limits

1. Constance Holden

When Native American tribes decided last week not to fight an appeals court ruling, it looked as though the way was clear for scientists to study the 9300-year-old skeleton called Kennewick Man, which has been tied up in legal battles for the past 8 years. But scientists say that although the ruling sets a favorable precedent for studying other ancient skeletons, they are not optimistic about getting to study Kennewick Man himself anytime soon. The government continues to find fault with outside scientists' research plans and to deny access to the remains. Negotiations are in progress, but the lawyer for the eight scientist-plaintiffs in the suit, Alan Schneider of Portland, Oregon, says, “we are still far apart.” Going back to court, he adds, “is definitely a possibility.”

The Kennewick case finally appeared to have come to an end on 19 July when the defendants, four tribal groups, decided not to appeal to the U.S. Supreme Court a decision by the 9th U.S. Circuit Court of Appeals. That court ruled that because there is no evidence linking the Kennewick skeleton to any existing tribe, the Native American Graves Protection and Repatriation Act (NAGPRA) does not apply to it (Science, 13 February, p. 943).

The court's interpretation of NAGPRA is a significant advance that will have “major implications” in other cases in which Native American groups are claiming remains, says Robson Bonnichsen of The Center for the Study of the First Americans at Texas A&M University in College Station. In a U.S. Army Corps of Engineers project in Texas, for example, he says, Native Americans at first claimed remains from a 4000-year-old burial ground, but a compromise has been reached so that scientists will have access to them.

Meanwhile, scientists are eager to study Kennewick Man, one of the oldest skeletons in North America. Schneider says that in 2002, the scientists submitted a 40-page study plan to the Department of the Interior and the Corps of Engineers, which has custody of the remains at the Burke Museum in Seattle. It is “a state-of-the-art proposal to do the most detailed look at a first American that has ever been put together,” says Bonnichsen. “We wanted to do a class act.”

But officials at Interior and the Corps of Engineers have responded with a throng of objections. According to Bonnichsen, the Corps of Engineers says the skeleton is “fragile,” and officials seek to limit the number of scientists who have access to it. “The corps is concerned about the condition and wants to limit handling to what is needed to produce new knowledge,” says Frank McManamon, chief archaeologist at the National Park Service. McManamon, who has been advising on the government response to the study plan, says the plan doesn't “build on the substantial amount of scientific investigation that has already been done” by government-appointed scientists. For example, he says that Bonnichsen and colleagues want to take bone samples for DNA testing, even though sampling has already been done and three separate labs couldn't extract any DNA.

Lawyer Schneider counters that the government-sponsored radiocarbon and DNA tests “used or damaged up to 60 grams of the skeleton,” whereas the scientists have proposed “microsampling,” which would destroy no more than 1.5 grams of bone. He adds that many other areas need study. For example, although government-appointed scientists did computed tomography (CT) scans to examine the projectile point lodged in the skeleton's pelvis, Schneider says that “there is still a major controversy over which direction [it] entered,” and that more sophisticated CT technology is now available to study it. “What Frank [McManamon] seems to be saying is ‘We've looked at them, so you don't need to’”—hardly a scientific stance, says Schneider.
While the haggling continues, Native Americans have indicated that they will now embark on a nationwide campaign to pressure Congress to rewrite NAGPRA.

7. U.S. SCIENCE POLICY

# Congressmen Clash on Politics and Scientific Advisory Committees

1. Jeffrey Mervis

For the past several months, the Bush Administration has responded with strong denials to charges that it has chosen members of scientific advisory committees in part for their political views. The charges are either wrong or distorted, or they reflect aberrations in the selection process, Administration officials have asserted (Science, 16 July, p. 323). But last week a prominent House member took a different tack: There's nothing wrong with mixing science and politics in determining the makeup of scientific advisory committees, says Representative Vern Ehlers (R-MI).

Ehlers, a former physics professor and staunch conservative, offered this view in an impromptu debate with Representative Henry Waxman (D-CA), a dyed-in-the-wool liberal, at a meeting of the National Academies' Committee on Science, Engineering, and Public Policy. The committee is taking its third stab at recommending how the government can improve its pool of scientific and technology talent. Its previous reports focused on ways to make full-time jobs in Washington, D.C., more welcoming to scientists, but this year's effort is also examining the hundreds of outside advisory committees that help federal agencies do their work. The panel, which includes veterans from previous administrations spanning both parties, hopes to deliver its report soon after the November election.

The problems flagged in earlier reports still exist: an intrusive and time-consuming vetting process, a likely cut in salary, and the uncertainty of winning Senate confirmation. Panel chair John Edward Porter, a former representative from Illinois and patron of the National Institutes of Health, says the issues remain “intractable.” But Porter's first question to his former colleagues signaled that, this time around, the burning questions are more political than logistical. “Do you think that it's appropriate for the government to ask someone being considered for an advisory position, ‘Who did you vote for?’” Porter wanted to know.

“I think that it's an appropriate question to ask,” replied Ehlers, who also defended the practice of asking where potential advisers stand on various hot-button issues. “Abortion is not a scientific issue, and yet there are technical committees that give advice on issues relating to abortion, like the use of embryonic stem cells in research,” he said. “The dividing line [between politics and science] is not clear. My first principle is to make sure that all views are represented at the table, to get the best people, and then let them shout at each other. That's the ideal scientific advisory committee.”

Waxman rejected that argument. “There is a line you need to draw,” he insisted. “For political appointees, the president should expect that his nominee supports his policies. But for advisory committees, they ought not to ask one's views on abortion, or how they voted [in the 2000 presidential election].” Waxman later insisted that the Bush Administration has imposed its own judgments on the advisory process, “settling on a policy first and then finding scientists to support that view.”

Earlier in the day, presidential science adviser John Marburger told the panel that the candidate “pool is alarmingly small” when it comes to hiring good federal science managers. But he dodged a question from one of his predecessors, Frank Press, about interference from the White House in staffing his Office of Science and Technology Policy. Resisting such intrusions, Marburger said, “is easier than you might think.”

8. NATIONAL LABS

# Los Alamos Suspends 19 Employees

1. David Malakoff

The Department of Energy's (DOE's) security and safety problems continue to escalate. George “Pete” Nanos, head of Los Alamos National Laboratory in New Mexico, last week suspended 19 employees—including some senior scientists—pending an investigation of possible rules violations. He had already shut down virtually all work at the lab until the investigation is completed (Science, 23 July, p. 462). Then, starting this week, DOE Secretary Spencer Abraham suspended classified work involving portable computer disks at all DOE facilities, including Lawrence Livermore National Laboratory in California. The massive “stand down” is needed, Abraham says, to make sure that security lapses at Los Alamos weren't repeated elsewhere and to “make certain that specific individuals can be held responsible and accountable for future problems.”

Both moves are rooted in a 7 July inventory at Los Alamos that concluded that two classified disks were improperly removed from a safe. Then, on 14 July, an intern's eye was injured by a research laser that had not been turned off. Furious about the incidents, Nanos suspended research at the laboratory and warned that he would fire “cowboys” who flouted the rules. On 22 July, citing “almost suicidal denial” of security and safety practices, Nanos suspended 15 employees involved in the loss of the disks, along with four involved in the laser accident. All will continue to receive pay but are barred from entering the laboratory without an escort. The FBI is investigating the lost disks, and Nanos said some employees could face criminal charges.

The DOE-wide shutdown is affecting about a dozen laboratories that do classified work. None of the labs will be able to resume activity until they have performed a series of steps, including a complete inventory of portable disks, the creation of secure repositories for disks and other removable devices containing classified information, and a visit from an independent review team. In the meantime, some researchers are becoming frustrated. In Los Alamos, for instance, residents report a growing number of cars sporting ironic bumper stickers that say “Striving for a Work-Free Safe Zone.”

9. VIROLOGY

# Tiptoeing Around Pandora's Box

1. Martin Enserink

Researchers say crossing avian and human flu viruses is crucial to understanding the threat of a new influenza pandemic, but they admit that they might create a monster

Once again, the world is crossing its fingers. The avian influenza outbreak in Asia, already one of the worst animal-health disasters in history, has flared up in four countries; tens of thousands of birds are being killed in desperate attempts to halt the virus's spread. And again, the unnerving question arises: Could the outbreak of the H5N1 strain spiral into a human flu pandemic, a global cataclysm that could kill millions in a matter of months and shake societies to their core?

There is a way to find out, flu scientists say—but it's controversial. Left to its own devices, nature could ignite a pandemic if avian and human influenza strains recombine—say, in the lungs of an Asian farmer infected with both—producing a brand-new hybrid no human is immune to. By mixing H5N1 and human flu viruses in the lab, scientists can find out how likely this is, and how dangerous a hybrid it would be. Such experiments can give the world a better handle on the risks, but they could also create dangerous new viruses that would have to be destroyed or locked up forever in a scientific high-security prison.
An accidental release—not so far-fetched a scenario given that the severe acute respiratory syndrome (SARS) virus managed to escape from three Asian labs in the past year—could lead to global disaster. Citing the experiments' scientific merit, the World Health Organization (WHO) is enthusiastically promoting them. But worried critics point out that there is no global mechanism to ensure that they are done safely.

Despite the concerns, such studies have already begun. In 2000, the U.S. Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia, started experiments to create crossovers between the H5N1 strain isolated during a 1997 outbreak in Hong Kong and a human flu virus adapted for the lab. The study was suspended when CDC's flu researchers became overwhelmed by SARS and the new H5N1 outbreak, both in 2003, says CDC flu expert Nancy Cox, who led the work. But the agency plans to resume the work shortly with the H5N1 strain now raging in Asia.

Others are exploring the options as well. Virologist Albert Osterhaus of Erasmus University in Rotterdam, the Netherlands, is eager to try not just H5N1 but also other bird flu strains, such as H7N7. The Netherlands won't have the required high-level biosafety lab until late 2005, so Osterhaus is talking to researchers in France who do. In the United Kingdom, researchers at the Health Protection Agency, the National Institute for Biological Standards and Control, and universities are also discussing the idea. There are no concrete plans yet—in part because of a lack of funds—but there's a consensus that the studies are important and that Britain is well suited to do them, says influenza researcher Maria Zambon of the Health Protection Agency.

The aim of reassortment studies, as they're called, would not be to develop new countermeasures, says WHO's principal flu scientist, Klaus Stöhr, because researchers believe current drugs and an H5N1 vaccine in development would work against a pandemic strain as well.
But the experiments would provide a badly needed way to assess the risk of a pandemic. If they indicate that a pandemic virus is just around the corner, health officials would further intensify their fight in Asia and go full-throttle in stockpiling vaccines and drugs; if not, they could breathe a little easier. “It's an extremely important question, and we have a responsibility to answer it,” insists Stöhr.

The safety worries are legitimate, Stöhr concedes, and the work should be done only by labs with ample flu expertise and excellent safety systems—not the ones that let SARS out. “We don't want people just fiddling around,” he says. He also downplays concerns that the results, when published, might help those who would unleash a pandemic on purpose. Anyone with the scientific smarts to do so can already find plenty of ideas in the literature, Stöhr asserts. Moreover, the studies are unlikely to produce anything that could not arise naturally, says Osterhaus: “You could create a monster. But it's a monster that nature could produce as well.”

But critics beg to differ. “We've been debating whether to destroy the smallpox virus for years—and now we're planning to create something that's almost as dangerous?” asks Mark Wheelis, an arms-control researcher at the University of California, Davis. Wheelis also points out that there's no way to keep countries with poor safety records from getting in on the game. At the very least, there should be some global consensus on how to proceed, adds Elisa Harris, a researcher at the Center for International and Security Studies at the University of Maryland, College Park—although no formal mechanism for reaching it exists.

## Mix and match

The H5N1 strain has been vicious to its human victims, killing 23 of 34 patients in Vietnam and Thailand this year. So far, however, every known patient has been in contact with infected birds; there's no evidence that the virus can jump from one person to the next—for now.
But the virus could evolve inside one of its human hosts, acquiring mutations that make it possible to infect humans directly, Stöhr says. Another scenario—one researchers believe sparked several previous influenza pandemics—is reassortment with a human flu virus in a person infected with both.

Influenza has a peculiar genome that's divided into eight loose segments, most of them containing precisely one gene. Each segment is copied separately in the host cell's nucleus; at the end of the reproduction cycle, all eight meet up with one another—and with envelope and membrane proteins—to form a new virus particle that buds from the host cell membrane to wreak havoc elsewhere. When a cell happens to be infected with two different strains, homologous segments can mix and match into new, chimeric viruses.

To create a worldwide outbreak, a newcomer must cause disease in humans and be transmissible between them, and its coat must look so new that no human immune system recognizes it. This is determined primarily by the two glycoproteins on the viral surface, hemagglutinin and neuraminidase—the “H” and “N” in names like H5N1. (Hemagglutinin comes in at least 16 different types, neuraminidase in nine.) The current fear is that the Asian flu will keep its H5—which humans have never seen before—but swap enough of the remaining seven gene segments with those of a human strain to become more adept at replication in its new host.

During H5N1's first major outbreak in Hong Kong poultry in 1997, 18 people got sick and six died. But the outbreak was stamped out efficiently, and little was heard of H5N1 for 6 years—until it came roaring back last year. Given the magnitude of the current outbreak, the riddle is why reassortment has not yet taken place, says Stöhr. Reassortment studies could help explain whether the world has simply been lucky, or whether there's some barrier to reassortment of H5N1.

The experiments are straightforward.
Researchers take a cell line such as MDCK or Vero cells, often used for virus isolation, and add both H5N1 and a currently circulating human strain, such as H3N2 or H1N1. Or they can use a slightly less natural technique called reverse genetics, with which virtually any combination of genes can be put into a flu virus. Any viable hybrid strains would be inoculated into mice; those that cause disease would move on to ferrets, a species very similar to humans in its susceptibility to influenza. Any strain that is pathogenic in ferrets and also jumps, say, from a sick animal to a healthy one in an adjacent cage could be humankind's next nightmare. During its first round of experiments with the H5N1 strain, CDC managed to create several reassortants, Cox says, but it didn't get around to characterizing them; they're still sitting in a locked freezer in Atlanta.

## Global risks, global review?

Most agree that such experiments are in a league of their own. Controversial flu studies were conducted in the past; for instance, researchers sequenced parts of the genome of the “Spanish flu” strain from 1918 (Science, 21 March 1997, p. 1793) and inserted its genes into other strains to find out why it was so deadly. But that didn't amount to a wholesale fishing expedition for pandemic strains. And because the 1918 strain was an H1 virus, just like one of the currently active ones, you'd expect at least some immunity to it in the human population, says Yoshihiro Kawaoka of the University of Tokyo and the University of Wisconsin, Madison, who studies the 1918 strain. With an H5 virus, in contrast, everyone would be vulnerable.

Yet although most countries have systems to review the safety and ethical aspects of run-of-the-mill scientific studies, none have formal panels to weigh studies that could, say, put the entire world at risk or be of potential help to bioterrorists. [The U.S.
government has announced plans for a national biosecurity panel and a review system to fill that gap (Science, 12 March, p. 1595), but they have yet to be implemented.] So although CDC's first round of studies cleared all the usual review hurdles at the agency, Cox says, nothing beyond that was considered necessary.

Since then, “the times have changed,” Cox says. The H5N1 strain now plaguing Asia, with which CDC wants to work this time, appears to be more virulent than the 1997 version, and the specter of nefarious use of pathogens looms much larger. Moreover, the mishaps with SARS have made people jittery about labs' abilities to keep bugs on the inside. That's why Cox says she has consulted more extensively with colleagues inside and outside CDC, including experts such as Nobel laureate Joshua Lederberg and WHO. She also plans to seek approval from colleagues at the U.S. National Institutes of Health and the U.S. Food and Drug Administration.

But flu researcher Karl Nicholson of the University of Leicester, U.K., says there should be a more formal, global consensus on the necessity of the studies, who should conduct them, and how. For any country to undertake them on its own, he says, “is like a decision to start testing nuclear weapons unilaterally.” WHO would be the best organization to start such a process, says Harris: The destruction of the smallpox virus has been debated at WHO, and an international panel there is overseeing experiments with it at CDC and in Russia.

But Stöhr believes existing safeguards suffice. The studies have been discussed widely with scientists in WHO's global flu lab network and at a recent flu meeting in Lisbon, he says, and have met with nothing but “overwhelming agreement.” “If there are other voices, we will take them seriously,” Stöhr adds—but for now, it's up to the labs to have their plans rigorously vetted by national authorities and get started. Eventually, any strain with pandemic potential should be destroyed, he says.
But there's no way to enforce this, and skeptics point out that the smallpox virus was slated for destruction, too—until the threat of bioterrorism created a movement to keep it alive, perhaps indefinitely, for defensive studies.

In a way this discussion is moot, says Richard Webby of St. Jude Children's Research Hospital in Memphis, Tennessee. With flu strains readily available, anyone with a good knowledge of molecular biology could recreate a pandemic virus once it's discovered, he says. “You can destroy this virus,” Webby says, “but it will never really be gone.”

10. NEUROSCIENCE

# Crime, Culpability, and the Adolescent Brain

1. Mary Beckman*

1. Mary Beckman is a writer in southeastern Idaho.

This fall, the U.S. Supreme Court will consider whether teenagers under 18 who commit capital crimes should face the death sentence; the case for leniency is based in part on brain studies

When he was 17 years old, Christopher Simmons persuaded a younger friend to help him rob a woman, tie her up with electrical cable and duct tape, and throw her over a bridge. He was convicted of murder and sentenced to death by a Missouri court in 1994. In a whipsaw of legal proceedings, the Missouri Supreme Court set the sentence aside last year. Now 27, Simmons could again face execution: The state of Missouri has appealed to have the death penalty reinstated. The U.S. Supreme Court will hear the case in October, and its decision could well rest on neurobiology.

At issue is whether 16- and 17-year-olds who commit capital offenses can be executed or whether this would be cruel and unusual punishment, banned by the Constitution's Eighth Amendment. In a joint brief filed on 19 July, eight medical and mental health organizations, including the American Medical Association, cite a sheaf of developmental biology and behavioral literature to support their argument that adolescent brains have not reached their full adult potential.
“Capacities relevant to criminal responsibility are still developing when you're 16 or 17 years old,” says psychologist Laurence Steinberg of the American Psychological Association, which joined the brief supporting Simmons. Adds physician David Fassler, spokesperson for the American Psychiatric Association (APA) and the American Academy of Child and Adolescent Psychiatry, the argument “does not excuse violent criminal behavior, but it's an important factor for courts to consider” when wielding a punishment “as extreme and irreversible as death.”

The Supreme Court has addressed some of these issues before. In 1988, it held that it was unconstitutional to execute convicts under 16, but it ruled in 1989 that states were within their rights to put 16- and 17-year-old criminals to death. Thirteen years later, it decided that mentally retarded people shouldn't be executed because they have a reduced capacity for “reasoning, judgment, and control of their impulses,” even though they generally know right from wrong (see sidebar on p. 599). That is the standard Simmons's lawyers now want the court to extend to everyone under 18.

## Cruel and unusual?

Simmons's lawyers argue that adolescents are not as morally culpable as adults and therefore should not be subject to the death penalty. They claim that this view reflects worldwide “changing standards of decency,” a trend that has been recognized in many U.S. courts. Today, 31 states and the federal government have banned the juvenile death penalty. The latest to do so, Wyoming and South Dakota, considered brain development research in their decisions. Putting a 17-year-old to death for capital crimes is cruel and unusual punishment, according to this reasoning. “What was cruel and unusual when the Constitution was written is different from today.
We don't put people in stockades now,” says Stephen Harper, a lawyer with the Juvenile Justice Center of the American Bar Association (ABA), which also signed an amicus curiae brief. “These standards mark the progress of a civilized society.”

The defense is focusing on the “culpability of juveniles and whether their brains are as capable of impulse control, decision-making, and reasoning as adult brains are,” says law professor Steven Drizin of Northwestern University in Chicago. And some brain researchers answer with a resounding “no.” The brain's frontal lobe, which exercises restraint over impulsive behavior, “doesn't begin to mature until 17 years of age,” says neuroscientist Ruben Gur of the University of Pennsylvania in Philadelphia. “The very part of the brain that is judged by the legal system process comes on board late.”

But other researchers hesitate to apply scientists' opinions to settle moral and legal questions. Although brain research should probably play a part in the policy debate, it's damaging to use science to support essentially moral stances, says neuroscientist Paul Thompson of the University of California, Los Angeles (UCLA).

## Shades of gray

Structurally, the brain is still growing and maturing during adolescence, beginning its final push around 16 or 17, many brain-imaging researchers agree. Some say that growth maxes out at age 20. Others, such as Jay Giedd of the National Institute of Mental Health (NIMH) in Bethesda, Maryland, consider 25 the age at which brain maturation peaks. Various types of brain scans and anatomic dissections show that as teens age, disordered-looking neuron cell bodies known as gray matter recede, and neuron projections covered in a protective fatty sheath, called white matter, take over. In 1999, Giedd and colleagues showed that just before puberty, children have a growth spurt of gray matter.
This is followed by massive “pruning,” in which about 1% of gray matter is pared down each year during the teen years, while the total volume of white matter ramps up. This process is thought to shape the brain's neural connections for adulthood, based on experience.

In arguing for leniency, Simmons's supporters cite some of the latest research that points to the immaturity of youthful brains, such as a May study of children and teens led by NIMH's Nitin Gogtay. The team followed 13 individuals between the ages of 4 and 21, performing magnetic resonance imaging (MRI) every 2 years to track changes in the physical structure of brain tissue. As previous research had suggested, the frontal lobes matured last. Starting from the back of the head, “we see a wave of brain change moving forward into the front of the brain like a forest fire,” says UCLA's Thompson, a co-author. The brain changes continued up to age 21, the age of the oldest person they examined. “It's quite possible that the brain maturation peaks after age 21,” he adds.

The images showed a rapid conversion from gray to white matter. Thompson says that researchers debate whether teens are actually losing tissue when the gray matter disappears, trimming connections, or just coating gray matter with insulation. Imaging doesn't provide high enough resolution to distinguish among the possibilities, he notes: “Right now we can image chunks of millions of neurons, but we can't look at individual cells.” A type of spectroscopy that picks out N-acetylaspartate, a chemical found only in neurons, shows promise in helping to settle the issue.

In addition to this growth in volume, brain studies document an increase in the organization of white matter during adolescence.
The joint brief cites a 1999 study by Tomás Paus of McGill University in Montreal and colleagues that used structural MRI to show that neuronal tracts connecting different regions of the brain thickened as they were coated with a protective sheath of myelin during adolescence (Science, 19 March 1999, p. 1908). In 2002, another study revealed that these tracts gained in directionality as well. Relying on diffusion tensor MRI, which follows the direction that water travels, Vincent Schmithorst of the Children's Hospital Medical Center in Cincinnati, Ohio, and colleagues watched the brain organize itself in 33 children and teens from age 5 to 18. During adolescence, one tract funneled up from the spinal cord, through the brainstem, and into motor regions. Another linked the two major language areas. “The brain is getting more organized and dense with age,” Schmithorst says.

## Don't look at the light

Adults behave differently not just because they have different brain structures, according to Gur and others, but because they use the structures in a different way. A fully developed frontal lobe curbs impulses coming from other parts of the brain, Gur explains: “If you've been insulted, your emotional brain says, ‘Kill,’ but your frontal lobe says you're in the middle of a cocktail party, ‘so let's respond with a cutting remark.’” As it matures, the adolescent brain slowly reorganizes how it integrates information coming from the nether regions.

Using functional MRI—which lights up sites in the brain that are active—combined with simple tests, neuroscientist Beatriz Luna of the University of Pittsburgh has found that the brain switches from relying heavily on local regions in childhood to more distributed and collaborative interactions among distant regions in adulthood.
One of the methods Luna uses to probe brain activity is the “antisaccade” test: a simplified model of real-life responses designed to determine how well the prefrontal cortex governs the more primitive parts of the brain. Subjects focus on a cross on a screen and are told that the cross will disappear and a light will show up. They are told not to look at the light, which is difficult because “the whole brainstem is wired to look at lights,” says Luna. Adolescents can prevent themselves from peeking at the light, but in doing so they rely on brain regions different from those adults use. In 2001, Luna and colleagues showed that adolescents' prefrontal cortices were considerably more active than adults' in this test. Adults also used areas in the cerebellum important for timing and learning and brain regions that prepare for the task at hand. These results support other evidence showing that teens' impulse control is not on a par with adults'.

In work in press in Child Development, Luna found that volunteers aged 14 years and older perform just as well on the task as adults, but they rely mainly on the frontal lobe's prefrontal cortex, whereas adults exhibit a more complex response. “The adolescent is using slightly different brain mechanisms to achieve the goal,” says Luna. Although the work is not cited in the brief, Luna says it clearly shows that “adolescents cannot be viewed at the same level as adults.”

## Processing fear

Other studies—based on the amygdala, a brain region that processes emotions, and research on risk awareness—indicate that teenagers are more prone to erratic behavior than adults. In a 1999 study, Abigail Baird and Deborah Yurgelun-Todd of Harvard Medical School in Boston and others asked teens to identify the emotion they perceived in pictures of faces. As expected, functional MRI showed that in both adolescents and adults, the amygdala burst with activity when presented with a face showing fear.
But the prefrontal cortex didn't blaze in teens as it did in adults, suggesting that teens' emotional responses were subject to little inhibition. In addition, the teens kept mistaking fearful expressions for anger or other emotions. Baird, now at Dartmouth College in Hanover, New Hampshire, says that subsequent experiments showed that in teenagers the prefrontal cortex buzzes when they view expressions of people they know. Also, the children identified the correct emotion more than 95% of the time, an improvement of 20% over the previous work. The key difference between the results, says Baird, is that adolescents pay attention to things that matter to them but have difficulty interpreting images that are unfamiliar or seem remote in time. Teens shown a disco-era picture in previous studies would say, “Oh, he's freaked out because he's stuck in the '70s,” she says.

Teens are painfully aware of emotions, she notes. But they are bad at the kind of thinking that requires looking into the future to see the results of actions, a characteristic that feeds increased risk-taking. Baird suggests asking someone, “How would you like to get roller skates and skate down some really big steps?” Adults know what might happen at the bottom and would be wary. But teens don't see things the same way, because “they have trouble generating hypotheses of what might happen,” says Baird, partly because they don't have access to the many experiences that adults do. The ability to do so emerges between 15 and 18 years of age, she theorizes in an upcoming issue of the Proceedings of the Royal Society of London.

Luna points out that the tumultuous nature of adolescent brains is normal: “This transition in adolescence is not a disease or an impairment.
It's an extremely adaptive way to make an adult.” She speculates that risk-taking and lowered inhibitions provide “experiences to prune their brains.” With all the pruning, myelination, and reorganization, an adolescent's brain is unstable, but performing well on tests can make teens look more mature than they are. “Yes, adolescents can look like adults. But put stressors into a system that's already fragile, and it can easily revert to a less mature state,” Luna says.

The amicus curiae brief endorsed by the APA and others also describes the fragility of adolescence—how teens are sensitive to peer pressure and can be compromised by a less-than-pristine childhood environment. Abuse can affect how normally brains develop. “Not surprisingly, every [juvenile offender on death row] has been abused or neglected as a kid,” says ABA attorney Harper.

## Biology and behavior

Although many researchers agree that the brain, especially the frontal lobe, continues to develop well into the teen years and beyond, many scientists hesitate to weigh in on the legal debate. Some, like Giedd, say the data “just aren't there” for them to confidently testify to the moral or legal culpability of adolescents in court. Neuroscientist Elizabeth Sowell of UCLA says that too little data exist to connect behavior to brain structure, and imaging is far from being diagnostic. “We couldn't do a scan on a kid and decide if they should be tried as an adult,” she says.

Harper says the reason for bringing in “the scientific and medical world is not to persuade the court but to inform the court.” Fassler, who staunchly opposes the juvenile death penalty, doesn't want to predict how the case will turn out. “It will be close.
I'm hopeful that the court will carefully review the scientific data and will agree with the conclusion that adolescents function in fundamentally different ways than adults.” And perhaps, advocates hope, toppling the death penalty with a scientific understanding of teenagers will lead to better ways of rehabilitating such youths.

11. NEUROSCIENCE

# Adolescence: Akin to Mental Retardation?

1. Mary Beckman

The human brain took center stage in 2002, when the U.S. Supreme Court ruled against the death penalty for mentally retarded persons. In that case (Atkins v. Virginia), six of the nine justices agreed that executing a convict with limited intellectual capacity, Daryl Atkins, would amount to cruel and unusual punishment. Instructing the state of Virginia to forgo the death penalty in such cases, Justice John Paul Stevens wrote: “Because of their disabilities in areas of reasoning, judgment, and control of their impulses, [mentally retarded persons] do not act with the level of moral culpability that characterizes the most serious adult criminal conduct.”

When the case of Christopher Simmons, who committed murder at age 17, comes before the same justices in October, says law professor Steven Drizin of Northwestern University in Chicago, defense attorneys hope to equate juvenile culpability to that of mentally retarded persons. “Juveniles function very much like the mentally retarded. The biggest similarity is their cognitive deficit. [Teens] may be highly functioning, but that doesn't make them capable of making good decisions,” he says.

Brain and behavior research supports that contention, argues Drizin, who represents the Children and Family Justice Center at Northwestern on the amicus curiae brief for Simmons. The “standard of decency” today is that teens do not deserve the same extreme punishment as adults. The Atkins decision provides advocates with a “template” for what factors should be laid out to determine “evolving standards of decency,” says Drizin.
These factors include the movement of state legislatures to raise the age limit for the death penalty to 18, jury verdicts of juvenile offenders, the international consensus on the issue, and public opinion polls. In 2002, the court also considered the opinions of professional organizations with pertinent knowledge, which is how the brain research comes into play. Last, the justices considered evidence that the mentally retarded may be more likely to falsely confess and be wrongly convicted—a problem that adolescents have as well.

12. SOLAR PHYSICS

# Solar Physicists Expose the Roots of the Sun's Unrest

1. Robert Irion

Adaptive optics and new observing tools are illuminating the fine details of our star's gassy outbursts

Halloween 2003 played tricks on space-weather forecasters but gave solar physicists some unexpected treats. During two extraordinary weeks in late October and early November, the sun unleashed some of the most powerful flares ever measured. The resulting storms made headlines as they swept past Earth, sparking far-flung auroras and damaging satellites. Telescopes on the ground and in space swung into action, yielding their clearest views yet of the magnetic churnings that trigger the sun's violence. For instance, a new solar satellite that monitors x-rays and gamma rays spotted where the sun propels its most energetic particles. And solar telescopes on the ground, using “adaptive optics” systems to counteract the blurring of Earth's atmosphere, exposed striking fine-scale details of the sun's visible surface as an eruption evolved. These observations, described at a recent meeting,* may help solar physicists forecast when the sun is about to let loose a powerful outburst.

## Sudden transformation

Magnetic activity on the sun waxes and wanes in an 11-year cycle, which last peaked in 2001.
Solar physicists believe this cycle—and the overall configuration of the sun's magnetic fields—arises from a dynamo of electric currents far below the sun's visible surface. As the sun rotates, bundles of twisted magnetic field lines poke through the surface. The tangles of magnetism block some heat from escaping the sun's interior, creating relatively cool and dark sunspots within “active regions” many times the size of Earth.

For reasons that aren't yet clear, some active regions store gigantic amounts of energy within fantastically contorted magnetic fields. Like springs coiled to their maximum possible strains, these fields can snap into more stable patterns when they touch one another, often far above the surface. That violent process whips charged particles back down toward the sun or out into space as a bright flare.

These events were on full display during the Halloween outbursts, which shattered what had been a quiet solar autumn. In mid-October, three giant maelstroms of sunspots rotated into view one after another. They marred the sun's face so dramatically that some Californians saw the blotches with their unshielded eyes through the smoke of widespread wildfires. “The sun turned from an almost spotless orb into an ominously scarred source of mighty fireworks in just a few days,” says Paal Brekke of the European Space Agency, deputy project scientist for the spaceborne Solar and Heliospheric Observatory.

The fireworks began in earnest on 19 October, when the first of a dozen major flares burst from the sun's surface. By 5 November, one group of sunspots—a tortured cluster called “active region 10486”—had spewed three of the 10 most energetic flares seen since a satellite first began gauging them in 1976, including the biggest one known. “These explosions release several billion 1-megaton bombs' worth of energy in a few minutes,” says physicist Robert Lin of the University of California (UC), Berkeley.
“They are the biggest explosions in the solar system, by far.” The flares expel protons and other dangerous high-energy particles, posing the most serious hazard for solar system travelers. But the sun's broadest wallops come from its outer atmosphere, the blazing corona. Called coronal mass ejections (CMEs), these events are literally the blasting of chunks of the corona into space in the form of billowing clouds of plasma. Strong magnetic fields lace through the plasma as it races outward at speeds up to 2800 kilometers per second. When the blob encounters a planet with its own magnetic field, such as Earth, chaos can ensue if the fields clash.

The geomagnetic storms in October and November were severe, although a CME from the biggest flare—on 4 November—struck only a glancing blow. Still, an analysis by the National Oceanic and Atmospheric Administration found that 59% of scientific satellites near Earth and in deep space were affected, with symptoms ranging from electronic glitches to instrument failures. Ironically, one doomed instrument, aboard the Mars Odyssey spacecraft, was measuring the radiation environment at Mars before a CME knocked it out, probably beyond repair. The storms apparently killed a $640 million Japanese satellite that studied climate change, and they forced astronauts into the most shielded part of the international space station.

One satellite that withstood the CME onslaught was NASA's Reuven Ramaty High-Energy Solar Spectroscopic Imager (RHESSI). Launched in February 2002, RHESSI studies x-rays and gamma rays from solar flares. Last fall's bright flares were tailor-made for RHESSI analysis, says Lin, the team leader.

Lin and his colleagues, including Gordon Hurford of UC Berkeley, examined signals from three of the powerful flares. The writhing magnetic fields slammed electrons and ions—mostly protons—down toward the solar surface, triggering distinct signals that RHESSI could separate. The particles cascaded along arching magnetic field lines with energies up to billions of electron-volts.

Surprisingly, the electrons and ions appeared to stream along different magnetic paths. The electrons struck the denser surface gas at a distinct pair of “footpoints”—like the two bases of a McDonald's arch—about 15,000 kilometers away from where the ions plowed back into the sun (Science, 14 May, p. 950). “We assumed the electrons and ions were accelerated in the same way, but they're not,” Hurford says. If theorists can explain why, they will gain insight into how flares boost the energies of particles so efficiently.

RHESSI's spectral analysis of the gamma rays, the first ever conducted for solar flares, also suggests that the composition of gases at the base of a flare changes as the flare progresses. Early in a flare, the team sees elements that are easily stripped of electrons, such as iron. But later on, the signal reveals elements to which electrons are more tightly bound, such as neon. “It's clear that something is changing in the lower atmosphere during flares, probably the temperature of the gas,” says Lin. If the researchers can verify this result, he notes, they will have devised a direct way to gauge how temperatures evolve at the hearts of major flares.

## The finer points

On the ground, solar physicists used observations of the Halloween events to make strides toward a cherished goal of space-weather forecasting: predicting which active regions will pop off with flares or CMEs, and when. The advances boil down to measuring how spring-loaded the magnetic fields have become. “People are trying to quantify the magnetic complexity in active regions,” says solar physicist Nat Gopalswamy of NASA's Goddard Space Flight Center in Greenbelt, Maryland. To do that, they need high-resolution images of the magnetic fields, at a scale of 100 kilometers or smaller. Adaptive optics have made such images possible.

Nighttime astronomers use bendable mirrors and computer analysis of atmospheric motions to transform the image of a star from a jittery smear to a stable point. For the sun, the challenge is different: to lock onto a subtle honeycombed pattern on the solar surface called “granulation,” created by rising and falling cells of gas. “You get a whole spectrum of image motion, defocusing, astigmatism, and higher-order aberrations that become harder and harder to correct,” says physicist Thomas Rimmele of the National Solar Observatory (NSO) in Sunspot, New Mexico. Eliminating those aberrations for such an extended target requires an optical network that tracks the clarity of dozens of little solar images as quickly as 2500 times every second—a computationally demanding task.

Several facilities have met the challenge, notably the 1-meter Swedish Solar Telescope on La Palma, Canary Islands, and NSO's 0.76-meter Dunn Solar Telescope. Rimmele and fellow NSO scientist Kasiviswanathan Sankarasubramanian used the Dunn telescope to catch an explosion from active region 10486 on 23 October—the first highly detailed “movie” of an erupting flare. The images, taken during engineering tests of a brand-new optics system, show individual kinked loops of magnetic structures brightening and fading as the flare rifles through.

Six days later, another team—displaced from its home at California's Big Bear Solar Observatory by wildfires—used the NSO adaptive-optics system to take exquisitely detailed images of structures within a sunspot in the same active region. The spot unleashed one of its fiercest flares just 90 minutes afterward. The team, led by postdoctoral researcher Guo Yang of the New Jersey Institute of Technology (NJIT) in Newark, assembled the best images from a rapid-fire series of exposures to further cut down on distortion.

The result clearly shows a sunspot about to blow, says solar physicist Carsten Denker of NJIT. The team traced the “neutral line,” a region of zero magnetic flux winding through the sunspot. Gas channeled by magnetic fields flowed rapidly in opposite directions on either side of the neutral line. “Those are the key ingredients to explain flares,” Denker says. “You need shear flows and a sheared magnetic field. This is the first time we could see those flows with a resolution of 100 kilometers.”

Independently, a team in Alabama watched the same spot, including its powerful outburst. Solar physicists Debi Prasad Choudhary and Ron Moore of NASA's Marshall Space Flight Center in Huntsville, Alabama, used polarization measurements of the sunspot to create the only “vector magnetogram” of the 29 October flare—a detailed map revealing all directional components of the magnetic field. “Several spots of different polarities were pushed together in a complicated way,” Moore says. “It was one of the most strongly sheared and rapidly evolving fields we'd seen.” Solar physicist David Falconer, Moore, and others at Marshall are working toward a way to predict when a major flare is imminent, based on the length of a sunspot's neutral line and the amount of shear along it. The method will require regular and detailed monitoring of active regions, Moore says.

Another recent study underscores how much mystery remains, even about the sun's run-of-the-mill ebbs and flows. Physicist Thomas Berger of the Lockheed Martin Solar and Astrophysics Laboratory in Sunnyvale, California, led a team that used the Swedish Solar Telescope to study the decaying remnants of an active region in May 2003. Berger expected the adaptive-optics system to expose individual blobs of magnetic field in small, discrete units. Instead, the images show a startling array of wavy structures that Berger dubs “ribbons” and “flowers.” Their presence poses a challenge to existing theories of how the sun generates magnetism close to the surface, he says.

“We've found new ways in which the smallest elements of the sun's magnetic field arrange themselves in the turbulent flowfields of the sun's surface,” Berger says. “We've never seen anything as detailed as these strange agglomerations.” The sun, it seems, has many more shocks in store.

• * 204th meeting of the American Astronomical Society, Denver, Colorado, 30 May to 3 June.

13. # A Race to the Starting Line

1. Gretchen Vogel

Scientists are scrambling to devise new methods for snaring athletes who cheat with steroids, hormones, and, someday, even extra genes

KREISCHA, GERMANY—Tucked on the wooded edge of this village in the Saxon hills south of Dresden is a drab, single-story office building with a sinister past. Until 1989 officials of the German Democratic Republic tested their athletes here to certify them as drug-free before international competitions. But it was all a charade. Many of the East German athletes, both men and women, were systematically doped up with testosterone and other anabolic steroids, often without their knowledge. It was the Kreischa lab's responsibility to ensure that the regimen was suspended long enough before a competition to flush out any traces of drugs, explains Klaus Müller, director of the Institute for Doping Analysis and Sport Biochemistry that today occupies the building. Sometimes the drug docs cut it too close. “You would hear that a certain famous athlete couldn't travel to a competition because of a ‘sudden illness,’” says Müller, whose institute is part of a worldwide antidoping network. “We all knew what that meant.”

That crooked chapter in German sport is over, but the practice of doping appears to be more widespread than ever. Last month world champion sprinter Kelli White received a 2-year ban from competition after admitting to having taken banned steroids and the hormone erythropoietin (EPO), which boosts red blood cell counts. Other clients of the Bay Area Laboratory Co-operative (BALCO) nutrition center in Burlingame, California, were also implicated in the scandal; as Science went to press, U.S. officials were investigating evidence that Olympic gold medalists Marion Jones and Chryste Gaines and world-record sprinter Tim Montgomery had been treated with banned steroids and hormones by the same lab.

In the privileged world of elite sports, avarice and the pursuit of glory continue to lead coaches and chemists astray and tempt athletes to risk health and medals. “Sport can be so magnificent and so powerful precisely because humans play the key role,” says Andrew Pipe, a physician at the University of Ottawa Heart Institute and former chief medical officer to Canada's Olympic team. “It can be so depressing and sordid for exactly the same reason.”

Dopers are getting better at covering their tracks, forcing researchers to invent new techniques to detect ever more subtle uses of synthetic chemicals or proteins that boost the body's ability to build muscle, shed fat, or carry oxygen. What was once the exclusive domain of analytical chemists—who searched for steroids in urine samples—now involves endocrinologists and geneticists as authorities attempt to clamp down on what could become the next illicit frontier: doping with genes for muscle building. “Testing gets better and better, but the opposition gets better and better too,” says Don Catlin, director of the University of California, Los Angeles (UCLA), Olympic Analytical Laboratory.

## Back-alley chemistry

Athletes have been seeking an artificial edge since at least the late 1800s, when runners and long-distance bicyclists used nitroglycerin and even cocaine to boost stamina and block pain. But although authorities began testing for banned substances in the 1970s, their efforts had little impact, says Peter Sonksen of St. Thomas' Hospital in London, a former member of the International Olympic Committee's (IOC's) medical commission. “For a long time there was a feeling that many sporting bodies were protecting their players,” he says.

A lack of vigilance created an environment for blatant cheating. For example, a series of astounding world-record performances in the 1980s, especially in power sports such as the shot put or hammer throw, were almost certainly fueled by testosterone and other prohibited anabolic steroids, Müller says.

There is little doubt that steroids help athletes beef up. By targeting the same receptor as testosterone does, they boost the body's capacity for building muscle and erode its capacity for breaking it down. But they have manifold side effects. Although women produce some testosterone naturally, ratcheting up levels even slightly leads to increased body hair and acne and can wreak havoc with the reproductive system. In men, taking steroids suppresses natural production of testosterone, which can lead to bigger breasts, shrunken testicles, and infertility. In both sexes, high doses of the drugs damage the liver and the cardiovascular system.

As testing for steroids began to be enforced more strictly in the 1990s, use of the drugs plummeted—and the pace of record-breaking tapered off. The antidoping forces seemed to have the upper hand until 2002, when the sport world was rocked by revelations that a pair of so-called designer steroids—drugs with no legitimate medical use—had been synthesized, apparently to elude doping testers.

In one case Catlin's team detected unusually low levels of natural steroids such as testosterone, epitestosterone, and androsterone in the urine of a female cyclist, a sign that something was amiss. Probing further, his group found traces of norbolethone, an androgen developed by Wyeth in the 1960s. In animal tests, Catlin says, norbolethone appeared to be a very effective muscle builder while having relatively few masculinizing side effects. It was tested in short children and underweight patients, but Wyeth shelved the compound, apparently because of toxic side effects. Evidently, someone was cooking up a new supply.

A whistleblower made the second discovery possible. In June 2003 Catlin received the residue from a used but empty syringe from the U.S. Anti-Doping Agency. A track coach had sent it to the authorities, suggesting that they take a careful look. Within a few weeks, Catlin and his colleagues had identified tetrahydrogestrinone (THG). The new chemical, which had never before been described, resembles two steroids banned for use by professional athletes: gestrinone, prescribed occasionally for the treatment of endometriosis, and trenbolone, which has some uses in veterinary medicine. Both steroids have powerful anabolic effects, and the UCLA team quickly suspected that the derivative had been designed to activate the same receptors while foiling standard screens for known steroids. When authorities tested urine stored from previous competitions, they found at least a dozen THG-tainted samples, many from athletes who had connections to BALCO.

Because routine screening would never have caught THG, doping testers were confronted with the prospect of having to develop ways to detect an incalculable array of steroids and other chemicals that might play a similar performance-enhancing role. “The THG story tells us very convincingly that there are people out there who are scheming to develop new entities to give to athletes,” says Catlin. “We've studied the chemistry, and there's essentially no end to the possibilities. Are there others out there? There certainly are.” They just haven't been identified yet, he adds.

Some labs are hoping to defeat dopers at their own game. Wilhelm Schänzer and his colleagues at the Institute of Biochemistry of the German Sport University Cologne have begun churning out more than a dozen potential designer agents by tinkering with existing steroids. “We're trying to think in the same way as those who are trying to make new compounds,” Schänzer says. His group uses mass spectrometry to profile the concoctions and identify signals that might betray illicit compounds in bodily fluids.

THG presented a legal challenge as well. Lawyers for athletes who tested positive argued that the authorities couldn't demonstrate that the substance is an anabolic steroid, and therefore it could not be classified as a banned substance. Indeed, the chemical's effects in animals—much less humans—had never been characterized in a legitimate lab; standard animal tests take many months. Under court-imposed time constraints, scientists resorted to a quicker solution, a test originally designed to ferret out environmental pollutants that mimic hormones. The test uses yeast cells altered to make the human version of the testosterone receptor as well as a luminescent protein that glows when the receptor is activated. Using the test, David Handelsman of the ANZAC Research Institute in Concord, Australia, found that THG lights up the cells more brightly than standard anabolic steroids such as trenbolone and even testosterone.

The confirmation came just in time to support the case against European champion sprinter Dwain Chambers, who had tested positive for THG in August 2003. (Chambers has said that he ingested the compound unknowingly in a supplement provided by BALCO.) In February, U.K. Athletics banned him from running in competitions for 2 years. Chambers had been considered a favorite for the gold medal this summer in Athens, but according to British Olympic Association rules, he is banned from the Athens games.

The bioassays may soon join a growing arsenal that scientists are assembling to thwart the use of new designer steroids, says Handelsman. He and his colleagues, for example, are working on a simple test to compare the amount of testosterone normally present in the urine of men and women with the total steroid load, as measured by the bioassays. “If there's a gap, then that suggests there's an unidentified substance there,” Handelsman says.

The workhorse of steroid detection, the mass spectrometer, could also be put to innovative use. Even if an analysis fails to flag unexpected side chains or telltale peaks, it can reveal subtle differences in the ratio of carbon isotopes that can help identify the origin of organic molecules. An unusual ratio of carbon-12 to carbon-13 in certain molecules can raise a red flag in a doping test. If a steroid molecule has a ratio typical of a plant rather than an animal, it is a sign that it comes from an outside source, says Schänzer.
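The isotope check Schänzer describes can be sketched numerically. The sketch below uses the standard δ13C (per mil) notation; the decision rule and its 3‰ threshold are illustrative assumptions, not the laboratories' actual protocol, which compares each athlete's target steroid against their own endogenous reference compounds:

```python
# Illustrative sketch of the carbon-isotope screen for exogenous steroids.
# Thresholds here are assumptions for demonstration only.

VPDB_RATIO = 0.0111802  # 13C/12C ratio of the Vienna Pee Dee Belemnite standard

def delta13c(sample_ratio: float) -> float:
    """Convert a measured 13C/12C ratio to delta-13C in per mil (per 1000)."""
    return (sample_ratio / VPDB_RATIO - 1.0) * 1000.0

def looks_exogenous(delta_steroid: float, delta_reference: float,
                    threshold: float = 3.0) -> bool:
    """Flag a steroid markedly more depleted in 13C than the athlete's
    endogenous reference compound: plant-derived synthetic precursors
    tend toward more negative delta-13C values."""
    return (delta_reference - delta_steroid) > threshold

# Example: a testosterone metabolite at -29 per mil vs. a reference at -23
print(looks_exogenous(-29.0, -23.0))  # True: the gap suggests an outside source
```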

## In pursuit of oxygen

Unknown steroids are hard enough to pin down; injections of naturally occurring hormones are even more elusive. Hormone levels fluctuate from hour to hour and from person to person, so measuring absolute amounts can't nail a doper. To do that, scientists must find secondary signals indicating that the body's normal chemistry has been tampered with.

For years, some athletes took advantage of the dearth of detection methods to pump themselves up with EPO. The hormone, produced mainly in the kidneys, stimulates the body's production of red blood cells so that the blood carries more oxygen. People living at high altitudes produce more EPO naturally to compensate for the lower oxygen concentration in the air. Athletes often take advantage of that trick, training at high altitudes for competitions held nearer sea level. But when recombinant EPO, used to treat anemia, became available in the late 1980s, it spawned a doping epidemic.

The practice is dangerous. If blood has too many red blood cells, it can become too viscous for the heart to pump effectively. EPO is thought to have played a role in the deaths of more than a dozen Dutch and Belgian cyclists who died of sudden heart attacks in the 1980s, just after EPO became available in Europe. Despite the risks, EPO's use was apparently widespread in the 1990s as scientists raced to figure out how to detect its use.

The first EPO tests, introduced a decade ago, set a limit for hematocrit, the percentage of red blood cells in the blood. But that test is flawed, as it cannot tell whether an athlete has used EPO to boost his or her hematocrit to a level just below the allowed limit.

In 2000, in time for the Olympic Games in Sydney, Australia, the IOC introduced a combined blood and urine test for EPO. The blood test measures, among other things, the concentration of hemoglobin and the level of reticulocytes—immature red blood cells—in the blood. Testers look for unusually high levels or sudden changes from previous tests to tip them off to possible dopers. The test has one major advantage: It can detect signs of EPO use weeks after an athlete takes it. But because it does not measure illegal EPO directly, it cannot prove a doping allegation.
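The logic of the indirect blood screen can be sketched as a simple rule set. The cutoffs below are invented for illustration; real programs derive their limits statistically from population data and each athlete's own testing history:

```python
# A minimal sketch of the indirect EPO blood screen described above:
# flag unusually high values, or sudden jumps from an athlete's baseline.
# All numeric cutoffs are illustrative assumptions.
from typing import Optional

def flag_blood_sample(hemoglobin_g_dl: float, reticulocyte_pct: float,
                      baseline_hemoglobin_g_dl: Optional[float] = None) -> bool:
    """Return True if the sample warrants follow-up testing."""
    if hemoglobin_g_dl > 17.0:       # unusually high oxygen-carrying capacity
        return True
    if reticulocyte_pct > 2.5:       # surge of immature red cells
        return True
    if baseline_hemoglobin_g_dl is not None:
        # sudden change relative to the athlete's own previous tests
        if hemoglobin_g_dl - baseline_hemoglobin_g_dl > 1.5:
            return True
    return False

print(flag_blood_sample(15.1, 1.2, baseline_hemoglobin_g_dl=14.9))  # False
print(flag_blood_sample(16.4, 3.1))                                 # True
```

As the article notes, a screen like this can only raise suspicion, not prove doping, because it never measures the recombinant hormone itself.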

A second method allows testers to spot traces of recombinant EPO directly in urine. Because the recombinant version is produced in animal cells, it carries slightly different sugars in its side chains than the natural version. These differences show up in electrophoresis, which measures the distance proteins chug through a gel under the influence of electricity. The concentration of EPO in urine is fairly low, however, so the test could be foiled if an athlete takes diuretics or other urine-increasing drugs.

The bottom line is that the current tests simply don't cut it. “Athletes are getting around the EPO tests all the time,” Catlin says. Officials of the World Anti-Doping Agency (WADA) agree. “We need cheaper and more sensitive tests for EPO,” says Olivier Rabin, WADA's scientific director.

WADA is also funding projects to tackle an old-fashioned doping technique that the organization claims is back in vogue since the introduction of EPO tests. Called blood doping, it involves either transfusing red-cell-enriched blood from a donor or withdrawing an athlete's own blood, spinning it to concentrate the red blood cells, and reinfusing it right before competition. Although the techniques don't involve foreign chemicals, they are banned by sports organizations on safety grounds.

## A growing threat

One of the compounds that BALCO clients are accused of abusing is something that doesn't show up in any standard doping tests: human growth hormone (hGH). The protein is part of a biochemical cascade that spurs muscle buildup and the shedding of fat. It's used legitimately to treat children who lack the protein and are unusually short. But like EPO and legitimate steroids, it too has been hijacked for use in athletes. Although its effects in healthy athletes are unclear, doping experts suspect that its use is widespread—especially because authorities have not yet introduced an official test for the compound.

That's a high priority, however, and scientists say they have several tests ready for the Athens Games. WADA officials are circumspect about whether they will use any of the tests for hGH in August. “Athletes know it is on the banned substances list,” says Rabin, and should expect to be tested.

Detecting hGH is even harder than detecting EPO, because it doesn't have telltale sugars to betray artificial versions. But in a lucky break for doping sleuths, the pituitary gland's production of growth hormone is rather messy. The gland makes a mixture of variations of the protein as well as protein fragments. The manufactured version, on the other hand, is much cleaner, consisting chiefly of one of the heavier versions, so when someone shoots up with the recombinant protein, the ratio of the different forms is skewed. Endocrinologist Christian Strasburger of the Charité University Clinics in Berlin and his colleagues at the Medizinische Klinik Innenstadt at the University of Munich have developed an immunoassay that measures the ratio of the two forms. The test seems extremely reliable, Strasburger says.
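The isoform-ratio idea behind Strasburger's immunoassay can be sketched as follows. Recombinant hGH consists almost entirely of a single isoform, so an injection skews its share of the total upward; the variable names and the decision limit below are illustrative assumptions, not the assay's published parameters:

```python
# Sketch of the hGH isoform-ratio test: recombinant hGH is nearly pure
# single-isoform, so injecting it skews the measured ratio.
# The decision limit is an assumption for illustration only.

def isoform_ratio(conc_recombinant_type: float,
                  conc_other_isoforms: float) -> float:
    """Ratio of the recombinant-type isoform to the pituitary mixture
    of other isoforms and fragments."""
    return conc_recombinant_type / conc_other_isoforms

def suspicious(ratio: float, limit: float = 1.8) -> bool:
    """A natural pituitary mixture keeps the ratio low; a value well
    above the limit suggests injected recombinant hormone."""
    return ratio > limit

print(suspicious(isoform_ratio(4.2, 1.5)))  # ratio ~2.8, above the limit
```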

Another group led by Sonksen of St. Thomas' Hospital has developed a method to measure the effects of growth hormone on the production of other proteins, including insulin-like growth factor-1 (IGF-1) and collagen. The test is not as clear-cut as that developed by Strasburger and his colleagues, but it can detect the effects of hGH weeks after someone has injected it. The Strasburger method works best 24 to 36 hours after injection.

16. # Peering Under the Hood of Africa's Runners

1. Constance Holden

Kenyans dominate endurance running, and West Africans excel as sprinters. With a physiological explanation in hand, researchers are now probing the genetics of this geographic mastery

In 1968, a Kenyan runner named Kip Keino emerged as a shining star of the Mexico City summer Olympics, setting a world record in the 1500-meter race. Year after year Keino's success has been followed by equally dazzling feats by his compatriots: Kenyan men now hold world records in the 3000-meter track race, the 15-, 20-, and 25-kilometer road races, the half-marathon, and the marathon. Kenyan men have won 13 of the last 14 Boston marathons. Kenyan women are also rising fast: They hold half of the top 10 marathon times and world records in 20-, 25-, and 30-km track races. What is even more remarkable is that most of these athletes come from a small area in Kenya's Rift Valley, from a group of tribes called the Kalenjin who number little more than 3 million people.

Theories abound about what Kenya-born writer and runner John Manners calls “the greatest geographical concentration of achievement in the annals of sport.” Is it the high altitude that fosters big lungs and efficient oxygen use? Is it their maize-based diet? Or the fact that many children run to school? A grueling training regimen, perhaps?

Such questions have inspired a handful of researchers to try to define the Kenyan magic. Meanwhile, scientists are unraveling why athletes whose ancestors come from the other side of the continent—West Africa—have emerged as the world's fastest sprinters.

## Fuel economy

Leading the charge in penetrating the Kenyan mystique has been Bengt Saltin, a Swedish physiologist who heads the Copenhagen Muscle Research Centre in Denmark. In the 1990s, Saltin's group began comparing Kenyan and Scandinavian runners by scrutinizing their physiological makeups and assessing the “trainability” of novice runners in both countries.

A decade later, the scientists have ruled out most of the popular explanations for Kenyans' domination of running. Altitude is not the key to the riddle, they have found, because there's no difference between Kenyans and Scandinavians in their capacity to consume oxygen. And the Kenyan diet is on the low side for essential amino acids and some vitamins as well as fat, says Dirk Christensen of the Copenhagen center: “In spite of the diet, they perform at high level.” The running-to-school hypothesis was demolished as well: Kenyan children aren't any more physically active than their Danish peers. Do Kenyans try harder? The researchers found that the Danes actually pushed themselves harder on a treadmill test, reaching higher maximum heart rates.

An important clue is the ability of Kenyans to resist fatigue longer. Lactate, generated by tired, oxygen-deprived muscles, accumulates more slowly in their blood. Comparisons of lactate levels have suggested to Saltin's group that Kenyan runners squeeze about 10% more mileage from the same oxygen intake than Europeans can.

Just as more aerodynamic cars get better gas mileage, the Kenyan build helps explain their fuel efficiency. A recent British TV documentary described the Kalenjin as possessing “birdlike legs, very long levers that are very, very thin [on which they] bounce and skip” along.

Saltin's group has quantified this observation. Kenyans' thinner calves carry, on average, 400 grams less flesh in each lower leg than Danes'. The farther a weight is from the center of gravity, the more energy it takes to move it. Fifty grams added to the ankle will increase oxygen consumption by 1%, Saltin's team calculates. For the Kenyans, that translates into an 8% energy savings to run a kilometer. “We have solved the main problem,” declares Henrik Larsen of the Copenhagen center. “Kenyans are more efficient because it takes less energy to swing their limbs.” Other scientists say the jury is still out on the Kenyan question. But “I think Saltin is probably the most correct that anyone is at the moment,” says physiologist Kathryn Myburgh of the University of Stellenbosch in South Africa, who is exploring the role of Kenyans' training.
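The 8% figure follows directly from the numbers the team reports, assuming the 1%-per-50-grams relationship scales linearly across the full mass difference:

```python
# Rough arithmetic behind the quoted 8% energy savings, assuming the
# 1%-per-50-g cost of ankle mass applies linearly to the whole difference.

grams_less_per_lower_leg = 400   # reported mass difference in each lower leg
cost_pct_per_50g = 1.0           # extra oxygen use per 50 g added at the ankle

savings_pct = (grams_less_per_lower_leg / 50) * cost_pct_per_50g
print(savings_pct)  # 8.0
```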

However, slim lower legs are not the whole story. Kenyan runners also have a higher concentration of an enzyme in skeletal muscle that spurs high lactate turnover and low lactate production. Saltin says that this results in an “extraordinarily high” capacity for fatty acid oxidation, which helps wring more energy out of the muscles' biochemical reactions. Because intense training alters the body's biochemistry, Saltin says that he can't say for sure whether the enzyme levels are due to genes or training. But he adds, “I think it's genetic.”

Research in South Africa jibes with the Copenhagen group's findings. A team led by exercise physiologist Adele Weston of the University of Sydney, Australia, compared black South Africans, whose running strengths are similar to those of Kenyans, with white runners. The two groups had similar VO2 max values—that is, when putting out maximum effort, they used up the same amount of oxygen per kilogram of body weight per minute. But the black runners were more efficient in their oxygen consumption, lasting on a treadmill at maximum speed for twice as long as the whites. As with the Kenyans, the black South African runners accumulated less lactate and had higher levels of key muscle enzymes.

## A little more twitchy

Whereas East Africans dominate long-distance running, West Africans have surged to the fore in short-distance events. Little research has been done on West Africans, but there's powerful circumstantial evidence for some physical advantages, as presented by Jon Entine in his book Taboo: Why Black Athletes Dominate Sports and Why We're Afraid to Talk About It. Athletes of primarily West African descent—which includes the majority of U.S. blacks—hold all but six of the 500 best times in the 100-meter race, “the purest measure of running speed,” says Entine, whose book set off a broad debate on the subject.

Various studies have shown that West African athletes have denser bones, less body fat, narrower hips, thicker thighs, longer legs, and lighter calves than whites. But the differences between East and West Africans are even more striking. The fabled Kenyan runners are small, thin, and tend to weigh between 50 and 60 kilograms, whereas West African athletes are taller and a good 30 kilograms heavier, says Timothy Noakes, a prominent exercise physiologist and researcher at the University of Cape Town.

The differences don't stop with body shape; there is also evidence of a difference in the types of muscle fibers that predominate. Scientists have divided skeletal muscles into two basic groups depending on their contractile speed: type I, or slow-twitch muscles, and type II, fast-twitch muscles. There are two kinds of the latter: type IIa, intermediate between fast and slow; and type IIb, which are superfast-twitch. Endurance runners tend to have mostly type I fibers, which have denser capillary networks and are packed with more mitochondria. Sprinters, on the other hand, have mostly type II fibers, which hold lots of sugar as well as enzymes that burn fuel in the absence of oxygen. In the 1980s, Claude Bouchard's team at Quebec's Laval University took needle biopsies from the thigh muscles of white French Canadian and black West African students. They found that the Africans averaged significantly more fast-twitch muscle fibers—67.5%—than the French Canadians, who averaged 59%. Endurance runners have up to 90% or more slow-twitch fibers, Saltin reports.

Bouchard, now at Louisiana State University in Baton Rouge, says his team looked at two enzymes that are markers for anaerobic metabolism and found higher activity of both in the West Africans, meaning they could generate more ATP, the energy currency of the cell, in the absence of oxygen. The study suggests that in West Africa there may be a larger pool of people “with elevated levels of what it takes to perform anaerobically at very high power output,” says Bouchard.

Although training can transform superfast-twitch type IIb fibers into the hybrid type IIa, it is unlikely to cause slow- and fast-twitch fibers to exchange identities. Myburgh says there is evidence that, with extremely intensive long-distance training, fast IIa fibers can change to slow type I fibers. So far, however, there is no evidence that slow-twitch fibers can be turned into fast-twitch ones. As an athlete puts on muscle mass through training, new fibers are not created, but existing fibers become bigger.

## Running ACEs

The differences in physique and muscle makeup that underlie the dominance of Kenyan endurance runners and West African sprinters doubtless have a strong genetic component. But researchers are only just getting off the starting mark in the search for genes that influence running performance. Bouchard's group, for example, is collecting DNA samples from 400 runners and other top endurance athletes from the United States and Europe, but he says they haven't spotted any running genes yet.

There are a couple of intriguing possibilities, though. In 1999, a team headed by Kathryn North of the Children's Hospital at Westmead in Australia described two versions of a gene that affects production of α-actinin-3, a protein found only in fast-twitch muscles. They found the less efficient version of the gene—which results in poorer energy conversion—in 18% of the members of a group of Caucasians. In 2003, North's group reported in the American Journal of Human Genetics that only 6% of a group of sprinters had the gene defect; 26% of endurance runners had it. The authors surmise that α-actinin-3 helps muscles generate “forceful contractions at high velocity.”

Alejandro Lucia Mulas of the European University in Madrid is taking DNA samples from Eritrean runners to explore another candidate: different versions of the gene for angiotensin-converting enzyme (ACE). Lucia says the less active version, or I allele, of this gene is associated with less muscle, less fluid retention, and more relaxed blood vessels—which would enhance oxygen uptake—and appears to be more prevalent in endurance runners.

And in Scotland, sports physiologist Yannis Pitsiladis has launched a major onslaught on the Kenyans' secrets with the International Centre for East African Running Science. Headquartered at the University of Glasgow, the virtual center will bring together research on demography, diet, and socioeconomic factors as well as genes. Pitsiladis says he has spent the last 3 years in East Africa collecting DNA samples from the region's “living legends” and now has DNA from 404 Kenyan and 113 Ethiopian athletes. His team has found a higher prevalence of the I allele of the ACE gene in male marathoners compared with men from the general Ethiopian population. But Pitsiladis thinks his numbers may lack significance given the variability of the trait in African populations. “At the moment there is no evidence” that East Africans have a genetic advantage in running, he says.

None of the data negate the importance of cultural habits and training. But as Entine quotes anthropologist and sports science expert Robert Malina, who is retired from Michigan State University, “Differences among athletes of elite caliber are so small that if you have an advantage that might be genetically based … it might be very, very significant.”

Next month's Olympic games in Athens should demonstrate yet again that West African runners are built for speed and Kenyans built to endure.

17. # An Everlasting Gender Gap?

1. Constance Holden

For a while, female runners were closing in on their male counterparts. Now they're barely keeping the guys' taillights in sight

When the U.K.'s Paula Radcliffe ran the London Marathon last year in just over 135 minutes, shaving almost 2 minutes off the record she set in 2002, the whispers started again: Are women improving their performance so quickly that one day they may compete on the same tracks with men?

Expert opinion suggests that day will remain elusive—as long as women retain female bodies. The gap between the sexes appears to have plateaued, with women performing at about 90% of male levels. Apart from the marathon, “world records for women have been absolutely static” for more than a decade, notes Kenya-born journalist and running expert John Manners.

That plateau wasn't evident 12 years ago. In a letter to Nature published on 2 January 1992, provocatively titled “Will women soon outrun men?” Brian L. Whipp and Susan Ward of the University of California, Los Angeles, looked at the world records of five standard Olympic running events, from the 200-meter dash to the 26-mile (42-kilometer) marathon, from the 1920s through 1990. They found that women were improving their times at double the rate of men in the short distances and were narrowing the gap even faster in the marathon, in which record-keeping for women only started in 1955. “The gap is progressively closing,” the authors wrote. At that rate, they projected, marathon times could converge by 1998 and gender differences in all races could disappear by 2050. In 1996, a U.S. News and World Report poll found that two-thirds of Americans believed that “the day is coming when top female athletes will beat top males.”
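The convergence projection is simple trend-fitting: fit one straight line per sex to record performances over time and solve for the year the lines cross. A minimal sketch of that arithmetic, using invented marathon speeds for illustration rather than Whipp and Ward's actual data:

```python
import numpy as np

# Illustrative numbers only (not Whipp and Ward's data): average speed (m/s)
# implied by the marathon world record, sampled across the decades.
years = np.array([1955, 1965, 1975, 1985, 1990])
men = np.array([5.20, 5.30, 5.38, 5.45, 5.48])    # men improving slowly
women = np.array([3.80, 4.20, 4.60, 4.95, 5.10])  # women improving much faster

# Fit a straight line (speed vs. year) to each sex's progression.
m_slope, m_int = np.polyfit(years, men, 1)
w_slope, w_int = np.polyfit(years, women, 1)

# The steeper female slope makes the lines cross; solve for the crossover year.
crossover = (m_int - w_int) / (w_slope - m_slope)
print(f"projected convergence year: {crossover:.0f}")
```

The catch, as the plateau since the early 1990s shows, is that linear extrapolation assumes improvement rates never bend, which is exactly the assumption that failed to hold.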

Absent unforeseen genetic or hormonal interventions, men, it seems, will maintain an advantage. That's due largely to their steady supply of a performance drug that will never be banned: endogenous testosterone, which boosts muscle power and oxygen capacity. The typical young man has a maximum oxygen use capacity, or VO2 max, of about 3.5 liters per minute, compared with 2 liters for a woman, says physiologist Stephen Seiler of the Institute of Health and Sport at Agder College in Kristiansand, Norway. Although individual levels of testosterone vary widely, males tend to have at least 10 times as much of the stuff as women. The hormone stimulates the creation of red blood cells, which means that men's blood holds about 10% more of the oxygen-carrying protein hemoglobin. But oxygen is not the whole story. Kirk Cureton of the University of Georgia School of Health and Human Performance in Athens compared the performance of male and female athletes on an exercise bicycle after scientists had withdrawn blood, leaving the subjects with equal amounts of hemoglobin in circulation. That reduced but did not eliminate the sex difference in VO2 max, indicating that other factors, particularly musculature, play into the difference.

Men have more muscle and larger hearts in relation to body size, says Dirk Christensen, an exercise physiologist at the University of Copenhagen. This affects aerobic capacity: He says that a trained woman's heart can pump out the same volume of blood as a man's can, but it has to work much harder to do so.

Because testosterone spurs growth of muscle tissue, it also affects anaerobic capacity—the ability to produce energy quickly without oxygen—which gives males an edge in sprinting as well. The primary energy for the intense bursts of power required for sprints is generated anaerobically, explains retired Michigan State University anthropologist and sports expert Robert Malina. Indeed, after launching themselves for a 10-second sprint, some athletes “don't take another breath till it's all over,” he says. (Endurance running, by contrast, relies almost exclusively on aerobic energy.) More muscle means more of the two main anaerobic energy sources: phosphocreatine and glucose.

Recent records support the gender-gap plateau, Seiler says. A few years ago, he and writer Steve Sailer analyzed results from Olympic games and world championships of the International Association of Athletics Federations between 1952 and 1996, selecting events in which men and women ran under the same conditions. They found that if the marathon—which wasn't an Olympic event for women until 1984—were excluded, the mean performance gap for running events increased from 11% in the mid-'80s to 12% in the mid-'90s. They also observed that men's world records were broken far more often in the '90s than women's—largely due to the extraordinary performance of East African runners (see p. 637).

Seiler recently updated these numbers for Science. He reports that of the world records in the eight main running events, from 100 meters to the marathon, seven suggest an increasing gender gap. The marathon is the exception: The gap has narrowed from 11.9% to 8.4%, thanks to Radcliffe's new record. Seiler puts the current average gap at 11.01%, up from 10.4% in 1989. In short, he says, “at the highest levels of performance, the gender gap in running performance has actually widened over the last 20 years.”
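Seiler's “gap” is simply the percentage by which the women's record time exceeds the men's. A quick check with the marathon world bests of the time (Radcliffe's 2:15:25 and Paul Tergat's 2:04:55, both set in 2003) reproduces the 8.4% marathon figure:

```python
# Gender gap = (women's record time - men's record time) / men's record time.

def to_seconds(h: int, m: int, s: int) -> int:
    """Convert an h:mm:ss time to total seconds."""
    return h * 3600 + m * 60 + s

women = to_seconds(2, 15, 25)  # Paula Radcliffe, London 2003
men = to_seconds(2, 4, 55)     # Paul Tergat, Berlin 2003

gap_pct = 100 * (women - men) / men
print(f"marathon gender gap: {gap_pct:.1f}%")  # prints 8.4%
```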

Much of the female record is clouded by drug use, especially the records set in the 1970s and '80s by Eastern European women that have never been bested. In 1984, 38 women, mostly from the East Bloc, ran 1500 meters in under 4 minutes 5 seconds, according to Jon Entine in his book Taboo. In 1991, only nine did.

Although the impressive gains in female marathon performance have suggested to some observers that women have greater endurance than men, physiologist Henrik Larsen of the Copenhagen Muscle Research Centre says that's not so: “Women had not developed long distance; that's why the improvement is much greater on the marathon. We don't see any higher oxidative capacity in women.” Exercise physiologist Timothy Noakes of the University of Cape Town, South Africa, agrees. A smaller body frame gives women an edge on endurance, he says, but men can run 10% faster even when the difference in body size is controlled for.

Whipp says he's still keeping an open mind on the subject of male-female competition. He told Science that he and colleagues are currently working to extend their analysis of world running records to 2003. His team has looked for “a pattern of response that would suggest a physiological limit,” he says, but so far has found none. “There is no evidence at the beginning of the 21st century that the human athlete has reached the limit of [his or her] potential,” Whipp says.

But that appears to be a minority view. “We are approaching the limits of human performance in a lot of the one-dimensional events like the 100-meter sprint or marathon,” says Seiler. “Records will continue to be broken, but the price is extremely high. And the percentage of the population that has the genetic potential to excel at this level is infinitesimal.” As for the gender gap in running, he defers to Norway's marathon queen Grete Waitz, setter of world records in the 1970s and '80s, who said: “As long as women are women, I don't think they will surpass men.”

18. # Graceful, Beautiful, and Perilous

As gymnastics routines grow ever trickier, experts worry that children are being pushed beyond their limits—and are paying with their health

When Natalia Yurchenko introduced a new vault at the 1983 World Championships in Budapest, it helped her win a gold medal. The move is fiendishly difficult: Yurchenko would sprint 20 meters, cartwheel, land backward on a springboard, and launch herself into the air. Arching her back, she would reach for a padded apparatus called a horse and then propel herself into the air again, somersault one-and-a-half times, twist, and land facing backward.

Following that dizzying lead, gymnasts set out to conquer the Yurchenko vault—and injuries mounted. “Every single country had problems,” recalls William Sands, head of sport biomechanics and engineering at the U.S. Olympic Training Center in Colorado Springs, Colorado. The vault became notorious in 1988, when 15-year-old Julissa Gómez broke her neck while attempting it at the World Sports Fair in Japan. She was paralyzed, fell into a coma, and later died of complications. Within a year, the U.S. Gymnastics Federation, predecessor of USA Gymnastics, banned the move at levels below Olympic competition.

Deaths are rare in gymnastics. That's not the case for injuries, which Sands calls “the most pressing and serious problem faced by contemporary gymnastics.” Compared with other kinds of athletics, gymnastics stands out for its singular combination of bone-jarring impacts, intense training, young age, and ever-more-demanding skills. Similar injuries afflict both sexes, but more girls participate and they start training younger. Experts fear that elite teens and preteens, by pushing their bodies to the limits, might be raising their risk of osteoarthritis and other health problems later in life. “Kids may be sustaining injuries that will be with them for a long time,” says Lyle Micheli, an orthopedic surgeon at Children's Hospital in Boston.

There's no question that gymnastics is a punishing sport. Long hours of practice exact considerable wear and tear. An elite gymnast trains 25 to 40 hours a week, typically executing more than 250,000 “skills” a year. In the 1970s and '80s, elite gymnasts began competing at younger ages. Although that trend has stabilized, preteens are subjecting their bodies to sprains, fractures, and sometimes even deformities of growing bones.

Gymnastics is also unusual in that it has become more demanding as judges revise the “Code of Points” used to score national and international competitions. In 1996, established routines became worth fewer points, so gymnasts who wanted to outscore the competition had to swing higher into the air, execute more twists and somersaults, and otherwise raise their game. For a few years after that, the U.S. national team had problems with a serious kind of knee injury, a tear of the anterior cruciate ligament. Also raising the risk of injury, equipment has been modified for higher performance. For example, gymnasts can jump higher from new vaulting tables with more spring, allowing more air time for acrobatics.

Sports scientists say it is hard to nail down the health risks of competitive gymnastics for children. For starters, there is no reporting system for gymnastics injuries. In the one existing system, run by the National Collegiate Athletic Association, gymnastics usually ranks in the top three sports for injuries, behind football and hockey. But that doesn't shed light on the private clubs that train most gymnasts. Coaches of elite gymnasts may often be too busy to let scientists in for a close look, says Patrick O'Connor, an exercise scientist at the University of Georgia in Athens.

With fewer than 200 elite gymnasts in the United States, most research is done on less-skilled athletes. These studies back the impression that injuries are common. In a 3-year study of 79 female gymnasts aged 7 to 18, Dennis Caine, a sports-injury epidemiologist at Western Washington University in Bellingham, found that 60 girls suffered a total of 192 sprains, strains, and other injuries. The study also showed that the risk of injury was significantly higher for advanced gymnasts. That makes sense, because as gymnasts get serious, they put in longer hours, work harder, and attempt more difficult routines.

Caine and others have shown that the most commonly injured body parts in boys and girls include the lower back, shoulder, and ankle. Wrist pain is especially prevalent, says John DiFiori, chief of sports medicine and a team physician at the University of California, Los Angeles. In some cases, the distal end of the forearm bones can be damaged, stunting the radius relative to the ulna. “It can be career-ending for some kids, because weight-bearing is too painful,” says Caine. Such nagging injuries can continue to plague gymnasts in college, although he says it's unclear whether they flare up later in life.

For girls, another concern is delayed growth and sexual maturation. Elite female gymnasts tend to grow more slowly and go through puberty later than other girls. Robert Malina, a retired auxologist in Bay City, Texas, doesn't think intense training can be blamed; he says a confounding factor is that larger girls who mature earlier tend to drop out from competition. However, Caine points to several case and cohort studies that indicate at least a temporary halt in growth of some top-level gymnasts, likely due to intense training and poor nutrition, he says. When the gymnasts lightened their load, or retired, their growth rate accelerated.

Many of these issues came to a boil during the 1992 Barcelona Olympics, when European newspapers ran stories about the injuries of young female athletes and the extreme training they had undergone. That spurred a research project that many sports scientists call exceptional in its depth and quality.

The Federal Institute for Sport Science in Bonn, Germany, asked Gert-Peter Brüggemann, then director of the Institute for Athletics and Gymnastics at the German Sport University Cologne, to study the effects of high-level performance on gymnasts. His team examined various training regimens, using records from the once top-secret facilities of the former East Germany. Compared with Western gymnasts, the East Germans were put through more frequent repetitions of skill sets and more difficult maneuvers and got less rest. In total, their growing bodies had to endure a longer, harder pounding than those of West German athletes.

Those loads had severe consequences. After reviewing archived x-rays and reexamining 42 women and 26 men who once had been elite gymnasts, Brüggemann's group found a much higher injury rate among the East German team than in 23 West Germans who had competed between 1968 and 1985. Mild deformities and abnormalities of the spine were more than twice as common in the East Germans, Brüggemann and colleague Hartmut Krahl, then of the Alfried Krupp Krankenhaus in Essen, reported in 2000 in Belastungen und Risiken im weiblichen Kunstturnen (Load and Risks in Female Gymnastics).

Seeking to prevent such injuries, the group also carried out a 4-year prospective study of 135 young elite gymnasts on German national squads. Working with high-speed video cameras and force-measuring devices in the apparatus and landing mats, they analyzed roughly 100 exercises and measured mechanical loads exerted on the body over time. About half the spinal deformities could be explained by the amount of loading. They also discovered that the deformities tended to be more severe in gymnasts with weaker muscles and connective tissue. Stronger muscles absorb the shock of impacts and bad landings, protecting joints and the spine.

Based on those findings, Brüggemann's team devised a healthier training regimen in which girls spent less time learning fancy routines, instead logging more hours in the weight room. Far fewer gymnasts suffered pain or injury during the first 3 years of the strength-training program, they found, with ankle injuries alone falling more than 50%. One beneficiary of the program is Brüggemann's daughter Lisa, who competed in the 1999 and 2001 World Championships and will be heading to Athens on the German team. “As long as she's in this [training] system, I feel comfortable,” Brüggemann says.

A similar approach has helped the U.S. national team, which has suffered fewer knee injuries since putting a greater emphasis on strength training in 2000. “We work on fitness and health rather than just skills,” says team physician Lawrence Nassar of Michigan State University in East Lansing. Fitness and health are the best predictors of who will make the Olympic squad, he says.

Jill McNitt-Gray, who studies biomechanics at the University of Southern California in Los Angeles, is also working to make gymnastics safer. “I got into this because I was a gymnast and a coach,” she says. “I saw too many athletes getting hurt, and I thought there must be a better way.” In her lab, McNitt-Gray takes high-speed video of gymnasts, measures forces and muscle movements, and creates computerized stick models to test the effects of modifying the moves. The gymnasts are outfitted with tiny sensors to track their motions and muscle use. This system can be used to warn trainers if the gymnasts are nearing their limits, McNitt-Gray says: “If they can't control [the motions], they're more likely to get injured.”

Also on the agenda is improving the equipment. By filming the vault board and spring floor with a high-speed camera, Sands and his colleagues found that the board wobbles underfoot before heaving the gymnast up. “It looks like the gymnast has landed on a waterbed,” he says. That's part of what makes the equipment risky, because gymnasts must cope with boards that often compress unevenly and unpredictably. Boards have improved over the years but are still not good enough, says Sands. Similarly, the characteristics of the safety mats that gymnasts land on depend on their construction, age, and other factors. Brüggemann and his colleagues have recommended that safety mats be stiffened for better stability during landing.

Most competitive gymnasts cannot avoid an occasional injury, but little is known about the long-term damage to their health. Anecdotal evidence suggests that elite gymnasts risk developing osteoarthritis and chronic musculoskeletal pain. “In most sports, if you compete at a high level you're going to carry some baggage with you for a long time,” Sands says. The few studies that have tested this hypothesis have yielded conflicting results.

Experts insist that gymnastics can be made safer without diminishing its elegance and power. “It is not as much an art as people like to think. We've got plenty of science to do and plenty of well-understood tools to find out what we need to know,” says Sands. “The problem is lack of money, courage, and commitment.” Perhaps the most important message is “not to cross the pain threshold,” says Caine. “The old adage ‘No pain, no gain’ is inappropriate when it comes to kids.” In the pursuit of Olympic gold, however, that message can easily get lost.

19. # Engineering Peak Performance

Mechanical engineer Mont Hubbard can tell athletes how to do it faster, higher, and farther. But will they listen?

DAVIS, CALIFORNIA—Mont Hubbard handles a weighty discus the way an anthropologist might examine a 10,000-year-old skull, turning it gingerly in his fingertips. “It rolls out of the thrower's hands spinning this way,” he says, turning the shiny blue “artifact,” which might break your foot if it fell on you, in ultraslow motion. With his gray beard and gangly build, the 61-year-old looks nothing like an elite athlete as he rocks back in his well-worn chair in an office cluttered with baseball bats, Frisbees, and thick binders labeled “shot put,” “fly casting,” and “pole vault.” Yet Hubbard knows how to hurl a javelin as far as it can go, guide a speeding bobsled most efficiently through a hairpin turn, and knock the longest home run.

Whereas athletes rely on countless hours of practice to master their sports, Hubbard deduces winning ways from his prodigious understanding of mechanics. “Almost all sports is mechanics,” Hubbard says. “Always there are things that are moving, and almost always the motion is central. You want to know how to get something to move in a certain way, or how to get it to go farther.” When it comes to analyzing and optimizing motion, Hubbard and his group at the University of California, Davis, got game. Employing basic physics and sophisticated mathematical tools, they calculate what athletes try to feel: the best way to move.

Hubbard has studied, among other things, the mechanics of the high jump and the pole vault, the aerodynamics of the discus and the shot put, the flight of punted footballs, the bend of curveballs, the flexing of fishing rods and racehorse forelimbs, the swiveling of skateboards, and the rolling of basketballs around rims. In the 1980s, he developed a system to help javelin throwers launch their spears at just the right angles. In the '90s, he built a bobsled simulator for U.S. Olympians. Currently, Hubbard and his group are analyzing ski jumping, women's gymnastics, trampoline, bungee jumping, and Frisbee flight. Last year, he kicked up controversy by arguing that a baseball batter can knock a slower-moving curveball farther than a fastball (see sidebar on p. 644).

“Hubbard is perhaps the leading figure in determining optimal strategies in sports,” says William Stronge, a mechanical engineer at the University of Cambridge, U.K., who has studied the dynamics of bouncing balls. Neville de Mestre, an applied mathematician at Bond University in Gold Coast, Australia, who has done research on fluid dynamics in sports, says Hubbard is exceptionally eager to put his ideas to the test. “Many people talk about simulations,” he says, “but Mont actually builds experiments to see how it works.”

Hubbard's work may be of gold-medal quality to his peers, but he has had a hard time winning over coaches and athletes. “Most people in sports don't think it's useful,” he admits. That may change, however, as Hubbard and other practitioners of “sports engineering” develop more powerful and realistic analyses that can better predict the keys to victory.

## For the fun of it

Walking into Hubbard's Sports Biomechanics Laboratory on the Davis campus is like stepping into a strange high-tech equipment locker, minus the stench of sweaty socks. On the back wall of the windowless room hangs a rack of javelins, their menacing points ready to perforate the air. Toward the front of the room, a blue fiberglass tub is perched on a hulking metal frame—an early version of the bobsled simulator. Nestled next to it, looking something like a homemade respirator, stands a one-of-a-kind pitching machine that can launch baseballs at 240 kilometers per hour spinning in any direction. A dent in the wall attests to the punch that such supercurveballs pack.

Hubbard has toys that any weekend warrior would envy, but he is far from a natural-born jock. “I love sports,” he says, but “I was never a very successful athlete. I was always a scrawny kid.” Growing up in Altavista, a small town in central Virginia, Hubbard played baseball fanatically but never blossomed into a star. He didn't really develop physically until he entered the U.S. Military Academy at West Point, New York, from which he graduated in 1964. Later in life, Hubbard played squash avidly until his second back surgery sidelined him for good.

After a stint in the Air Force, Hubbard earned his doctorate in 1975 from Stanford University in California, doing his thesis research on control systems of automobile engines. For the first several years of his career, Hubbard stuck to conventional engineering. Then, during a sabbatical at the University of Cambridge in 1981, he became fascinated with the mechanics of walking. He soon began applying his engineering skills to other types of human motion, especially sports. “It would have been an awfully boring 30 years if I'd just been doing the same old stuff,” he says.

Following his heart has meant abandoning traditional funding sources. “Almost nobody pays us to do this,” he says. Hubbard and his students make do primarily with four-figure grants from the U.S. Olympic Committee and various sports federations. So they look for ways to learn more by spending less. For example, in one corner of the lab a tennis ball half-covered with aluminum foil hangs from two pieces of string. Albert Jordan, an undergraduate, sets the ball spinning and swinging in front of a radar gun. He hopes that radar waves reflected by the half-silvered ball will reveal the ball's spin—a first step toward tracking the spin of a batted baseball. It's also, Jordan says, “a way to do an experiment without spending any money.”

Despite the relatively spartan conditions, Hubbard has no trouble attracting students. Master's degree student John Kockelman once owned a bungee-jumping company and performed stunts for television commercials; he's studying the properties of bungee cords. Doctoral student Alison Sheets competed in gymnastics from childhood through college; she's studying the dynamics of the women's uneven bars. “My brain is far better at gymnastics than my body ever was,” Sheets says. “I can still do the skills, and sometimes I feel like I'm doing them better because I definitely understand them a lot better now.”

## Game plan

Although Hubbard has eclectic tastes in research problems, he approaches them all with a well-defined method. First, he tries to tease out the essential elements and features of an activity. For example, in analyzing the flight of a ski jumper, Hubbard and colleagues model the jumper as an assemblage of weighty links representing skis, legs, torso, and head. Using the mathematical methods of mechanical engineering, the researchers crank out differential equations that describe the forces and motions of the various parts, which they generally solve with computers. They systematically alter the variables that the athlete might control—such as the angle between a ski jumper's skis and the wind whipping past—to determine optimal values. Finally, if all goes well, Hubbard's team develops a methodology to help athletes reach peak performance.

Hubbard has applied this approach to myriad sports, most successfully to the throwing events in track and field. In the shot put, discus, javelin, and hammer throw, a thrower can adjust only a handful of angles and rates of rotation. Given an athlete's basic strength and speed, an engineer can calculate exactly how far the athlete will ever be able to heave the object. “That's a very beautiful concept to me,” Hubbard says. “In a sense, you're allowing people to achieve the very best they can, given their physical limitations.”
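In the drag-free limit, that calculation is elementary: range depends only on release speed, angle, and height, so the best angle falls out of a one-line search. A toy sketch of the idea (simple projectile physics with invented shot-put-like numbers, not Hubbard's aerodynamic models):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def throw_range(v: float, angle_deg: float, h: float) -> float:
    """Drag-free range of an object released at speed v (m/s), at
    angle_deg above horizontal, from height h (m) above the ground."""
    a = math.radians(angle_deg)
    vx, vy = v * math.cos(a), v * math.sin(a)
    # Flight time: positive root of h + vy*t - (G/2)*t^2 = 0.
    t = (vy + math.sqrt(vy**2 + 2 * G * h)) / G
    return vx * t

# Sweep the release angle for a shot-put-like throw: 13 m/s from 2 m up.
best = max(range(1, 90), key=lambda a: throw_range(13.0, a, 2.0))
print(f"best release angle: {best} degrees")
```

Because the object is released above the ground, the optimum comes out a few degrees below the textbook 45; real models like Hubbard's add air resistance and the coupling between release angle and achievable release speed.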

But such ruminations also underscore the appeal of muscle-building anabolic steroids, he says. Once an athlete has perfected a technique, the only way to throw farther is to boost strength and arm speed—for some, a pursuit that knows no bounds.

## Athlete attitudes

How athletes feel about Hubbard's methods depends on whom you ask. His javelin-training system provided valuable information, says Donna Mayhew, a seven-time U.S. national champion and an Olympian in 1988 and 1992. The javelin “is such a technical sport, any feedback you can get really helps a lot,” she says. “Little changes in technique are going to make big changes in distance.” Hubbard's analysis showed that Mayhew tended to throw the javelin with its nose pointed several degrees too high, she says.

Bobsled driver Brian Shimer, a five-time Olympian who steered the United States to a bronze medal in 2002, says that the bobsled simulator is “great for off-season training and for working on hand-eye coordination.” But the simulator cannot reproduce the feel of a bobsled ride, especially the crushing accelerations in the turns. So riding it might actually dull a driver's well-honed touch, says Shimer, who now coaches U.S. bobsled drivers. “Once the season starts,” he says, “I would not want to get in a simulator.”

And in the tradition-steeped world of baseball, scientific analysis takes a back seat to good old-fashioned coaching. Hubbard's study of hitting curveballs and fastballs makes “interesting conversation,” says Gary Matthews, batting coach for the Chicago Cubs, but it won't change his coaching style. “It's more important to the scientists than it would be to me,” he says. “What's important to me is that a guy has a lot of heart and can move the fastball.”

Still, Hubbard and other sports engineers say they are slowly making inroads with coaches and athletes—and their numbers, although small, are growing. “There's a society of sports engineers that just didn't exist 10 years ago,” says Rod Cross, a physicist at the University of Sydney, Australia, who studies tennis. This September, Hubbard and his colleagues at Davis will host the Fifth International Conference on the Engineering of Sport.

Hubbard says he doesn't worry how—or if—athletes use his ideas. He's too busy moving from project to project. “I don't want to be teaching people how to throw the javelin,” Hubbard says. “I want to be thinking about new stuff.” For engineers at the top of their form, that's the whole point of the game.

20. # Long Gone or Gone Wrong?

Last November, Mont Hubbard and colleagues argued in the American Journal of Physics that a well-hit curveball would sail farther than a perfectly struck fastball—even though the curveball moves slower and packs less energy, both before and after it's walloped. That's because a batted ball travels farther if it has more backspin to give it aerodynamic lift. The top-spinning curveball approaches the batter already turning in a direction that increases the backspin of the batted ball. On the other hand, the back-spinning fastball comes at the batter spinning the wrong way, which decreases the backspin of the outgoing orb. As a result, an optimally struck curveball will travel around 455 feet (138 meters), about 12 feet (3.5 meters) farther than a well-hit fastball.

Not so, contends Robert Adair, a physicist at Yale University and author of The Physics of Baseball. Although he can't say precisely what's wrong with Hubbard's calculations, he claims that balls moving at the speeds Hubbard quotes just don't go that far, so Hubbard and colleagues must have overestimated the lifting effect of spin. For his part, Gary Matthews, batting coach for the Chicago Cubs, says Hubbard's team may be right— especially if the curveball “hangs” high in the strike zone. “A hanging curveball will go a long, long way,” he says. Although Matthews may not be an expert in aerodynamics, he did belt 234 homers in 16 seasons in the major leagues.