# News this Week

Science  07 Aug 2009:
Vol. 325, Issue 5941, pp. 660
1. Public Health

# Type 2 Poliovirus Back From The Dead in Nigeria

1. Leslie Roberts

In 1999, the Global Polio Eradication Initiative (GPEI) scored an unequivocal victory: It wiped one of three serotypes of wild poliovirus, type 2, off the face of the earth, except for samples stored in labs for study or vaccine creation. That triumph left just two foes to battle, poliovirus types 1 and 3, which have continued to put up quite a fight.

But now a version of type 2 has returned. Springing back to life from a weakened form of the pathogen used in a vaccine, poliovirus type 2 is causing a runaway outbreak in Nigeria, where types 1 and 3 are also raging. This type 2 virus had already caused 124 cases of paralysis by 26 July, up from 30 this time last year, and the case count is headed straight up, already exceeding that of the most feared type 1. “[The type 2 outbreak] is very much not under control,” says Mark Pallansch, whose group does genetic analysis of poliovirus isolates at the U.S. Centers for Disease Control and Prevention (CDC) in Atlanta. In July, the World Health Organization (WHO) issued a global alert warning that type 2 poliovirus in Nigeria posed an “increasing risk of international spread.”

It's a stunning setback for the initiative, now already 9 years past its original deadline for vanquishing the virus. “Type 2 is an eradicated pathogen. … No one wants to see the world reseeded” with a virus “the world got rid of 10 years ago,” says Roland Sutter, who directs research at GPEI's headquarters at WHO in Geneva, Switzerland.

If type 2 was going to crop up somewhere, it's little surprise it happened in Nigeria, one of only four countries where transmission of types 1 and 3 has never been stopped. In 2008, nearly 50% of all global cases of types 1 and 3—roughly 1600—occurred in the chaotic, unstable northern part of the country, where opposition to the polio campaign continues and immunization rates are generally abysmal (Science, 6 February, p. 702).

The type 2 poliovirus in Nigeria is a slightly different beast than types 1 and 3, which are wild viruses that have never been quashed. Instead, it's a direct descendant of the weakened virus used to make the oral polio vaccine (OPV) that has mutated and regained its neurovirulent, dangerous state—in other words, a good virus gone bad.

Experts disagree on just how dangerous these circulating vaccine-derived polioviruses (cVDPVs) are. One school maintains that cVDPVs are wimpier than their wild counterparts, so any outbreak should be easier to control. But those analyzing the Nigerian VDPV outbreak—the largest and longest running ever observed—say it puts an end to that myth. “If there was any doubt that VDPVs pose risks, I think those doubts have essentially been dispelled,” says molecular virologist Olen Kew of CDC.

Any distinction between a wild and vaccine-derived virus is “artificial,” says Pallansch. “It is scientifically unacceptable to make a distinction. … This VDPV paralyzes children. It can only be stopped by immunization. Does it behave like a wild virus? Will it continue to spread indefinitely [without intervention]? Yes.”

No one knew that a vaccine virus could pose such a threat, regaining not only the power to paralyze but also the ability to spread from person to person, until 2000, when CDC scientists investigating a suspicious outbreak of type 1 poliovirus in Hispaniola traced it to the use of OPV—specifically, to a single OPV dose in 1998 or 1999. That meant the vaccine-derived virus had been circulating undetected for years—and could do so in other settings as long as OPV was used.

Some program critics began immediately calling for a global switch from OPV to the inactivated polio vaccine, IPV, made from a killed virus. But GPEI officials in Geneva stuck with OPV for the eradication campaign: Not only was it cheap and easy to administer in drops, it had a proven track record at stamping out outbreaks in tropical settings. Its benefits far outweighed the risks, which could be managed, officials said. And that seemed to be the case in the handful of VDPV outbreaks detected since—until Nigeria.

The first case of vaccine-derived type 2 polio was detected in Nigeria in late 2005 (Science, 28 September 2007, p. 1842). It limped along in characteristic VDPV fashion for a few years, until January 2009, when it “just took off, for reasons we don't pretend to understand,” says Sutter. The mutated vaccine virus could have become more transmissible, experts speculate, or, more likely, the pool of children susceptible to type 2 grew larger, providing a reservoir in which the virus could circulate.

Nigerians had been focused on fighting wild types 1 and 3, using monovalent vaccines that are more effective than trivalent OPV (tOPV) against these two serotypes. That was the right decision, says Kew, but as a result, population immunity to type 2 was low. In northern Nigeria, roughly 20% of children have never received a single dose of tOPV, which is very effective against type 2.

Despite the intensity of the outbreak, experts agree that because tOPV works so well against type 2, a couple of good rounds with that vaccine should “stop type 2 in its tracks,” as Sutter says. Unfortunately, he adds, in Nigeria, “the biggest problem is whether we can reach enough kids.” The country conducted one tOPV campaign in May, and another is scheduled for August. It's too soon to say how many children were immunized in May, but early signs are encouraging, says Kew. “I am still more worried about types 1 and 3,” in large part because type 2 viruses tend to stay put geographically. Sutter agrees. “In Nigeria, type 2 should go first. If we can't get rid of this one, there is little hope of getting rid of the others.”

Longer term, the Nigerian cVDPV outbreak is underscoring just how hard finishing the job of polio eradication will be. To prevent OPV from seeding new outbreaks, the world must stop using the vaccine, says GPEI head Bruce Aylward—when it figures out a way to do so safely. Plans are in flux, but that probably involves using a revamped IPV. With the vaccine-derived and wild-type viruses virtually indistinguishable, says Pallansch, “the job is not finished until all poliovirus is gone.”

2. Biosecurity

# Faulty Risk Analysis Puts Biofacility Plan in Jeopardy

1. Yudhijit Bhattacharjee

Ever since the U.S. Department of Homeland Security (DHS) announced plans to replace the aging Plum Island Animal Disease Center with a mainland facility 4 years ago, the proposal has been attacked by environmental groups and lawmakers worried about the risk to livestock of an accidental release of deadly pathogens like the foot-and-mouth disease (FMD) virus. Now, as DHS prepares to build the $450 million National Bio and Agro-Defense Facility (NBAF) in Manhattan, Kansas, it has run into its stiffest challenge yet: a damning report from the Government Accountability Office (GAO) that accuses DHS of relying on a flawed risk analysis to push the project through.

Released last week, the GAO report provides ammunition not just to those who oppose NBAF on environmental grounds but also to a Texas consortium that wanted the facility. The consortium filed a lawsuit against DHS this spring alleging bias in the selection of the Kansas site. The House Energy and Commerce Subcommittee on Oversight and Investigations plans to hold a hearing this fall. “It is puzzling that DHS wants to transfer the foot-and-mouth virus from the relative isolation of Plum Island into the heart of cattle country,” Representative John Dingell (D–MI), a subcommittee member, said in a press release announcing the GAO report.

The report points out a multitude of weaknesses in DHS's assessment of risk at the proposed facility. For one, the report says, the model DHS used to sketch the airborne spread of the FMD virus is suited to studying radiological dispersion, not the dispersion of viral particles. Nor did DHS model the risk of infectious spread from one farm to another, the report said. “Our impression is DHS simply went through the motions,” says Nancy Kingsbury at GAO. Based on the faulty analysis, she says, DHS concluded that all of the six sites under consideration—including Plum Island—were equally safe.
So DHS focused on other selection criteria, such as community acceptance—which Kansas lawmakers, including Republican Senator Pat Roberts, played up in their lobbying efforts to win the competition (Science, 12 December 2008, p. 1620).

DHS officials say they used the best scientific data available and will not back down. “We intend to place the lab [in Kansas],” DHS Secretary Janet Napolitano told the Associated Press last week.

FMD experts contacted by Science say they generally agree with GAO's criticisms. If DHS “did not consider the network of direct and indirect contacts in the region that would be responsible for the spread of the disease,” it is hard to put a lot of confidence in the agency's conclusions about the risks, says veterinarian Andres Perez of the University of California, Davis.

Matt Keeling, an FMD expert at the University of Warwick in the United Kingdom, thinks the GAO report may have underestimated problems with DHS's analysis, such as its failure to consider water-borne contamination of the environment. “I'd be worried about what we had in the U.K.,” he says, referring to the 2007 FMD outbreak in Surrey caused by a leak of virus-laden water from the Pirbright animal facility (Science, 14 September 2007, p. 1486).

That's why some scientists favor keeping research on FMD and other animal diseases off the mainland. If an incident like the Pirbright leak were to happen “in Kansas and in Plum Island, which of those two releases would result in the largest number of outbreaks?” asks Perez, adding that common sense, not models, should dictate the answer.

3. Swine Flu Outbreak

# Worries About Africa as Pandemic Marches On

1. Martin Enserink

In July 2002, more than 70% of the 2160 inhabitants of Sahafata, a small village in the rural highlands of southeastern Madagascar, came down with an acute respiratory illness, and 27 died.
A few patient samples tested positive for influenza, but the viciousness of the outbreak led health authorities to suspect something worse and ask for assistance. In August, researchers from Atlanta, Paris, and Geneva, Switzerland, descended on the region. They confirmed that the outbreak was caused by H3N2, one of the three influenza strains that circle the globe every year. Although there was nothing special about the Madagascar strain, a series of local factors—from an unusually cold winter to civil unrest and a poor health infrastructure—conspired to make the outbreak more intense and lethal than usual, they hypothesized.

That episode—and a similar one in the Democratic Republic of the Congo (DRC) in 2003—showed just how little researchers know about flu's patterns of spread and severity in Africa, says Jean-Claude Manuguerra of the Pasteur Institute in Paris, who helped investigate the outbreak. And it suggests that, with its myriad of problems, Africa—especially south of the Sahara—might be harder hit by the 2009 H1N1 pandemic virus than any other continent. The virus has been detected in 19 African countries so far.

With HIV/AIDS, TB, malaria, and a host of other diseases competing for attention, influenza has never been high on most African countries' priority lists. Whether that is justified is not clear, says World Health Organization (WHO) flu scientist Keiji Fukuda, because nobody knows how much of the massive total burden of respiratory infections in Africa is due to flu.

Getting a handle on the spread of influenza viruses in Africa has long been problematic because laboratories and surveillance have been lacking. A 2002 survey by Barry Schoub and colleagues at South Africa's National Institute of Virology showed that only a handful of countries in sub-Saharan Africa had labs able to isolate and characterize influenza viruses.
That has begun to change rapidly in the past few years, due mostly to the threat of H5N1 avian influenza, which has surfaced in 11 African countries. At so-called pledging conferences in 2006 and 2007 in Beijing, Bamako, and New Delhi, donors offered billions to fight avian influenza and promote pandemic preparedness in the developing world. Agencies such as the U.S. Centers for Disease Control and Prevention (CDC), the U.S. military, the Pasteur Institute, and WHO have begun providing African labs with training and equipment. Today, 13 countries have a National Influenza Center accredited by WHO (see map); at least seven more have labs capable of testing for the pandemic virus. WHO wants to see the network expand further, says Fukuda.

The new labs have already proven their worth in the current pandemic, says Mark Katz, head of the CDC-Kenya lab: Ethiopia and Tanzania, for instance, were able to identify their first cases themselves. Surveillance is also expanding. Lately, Rwanda, Kenya, Tanzania, Uganda, Zambia, and the DRC have joined a list of countries that have set up so-called sentinel flu surveillance systems, Katz says, in which doctors from selected sites send in samples from patients with flulike symptoms for testing. With that, “we should be able to pick up outbreaks of the novel H1N1 and document its spread,” he adds.

Still, most countries have no lab capacity whatsoever, and surveillance systems cover only a tiny fraction of Africa's massive geography. “We really won't have a good idea what's happening,” says Schoub, who is now executive director of the National Institute for Communicable Diseases in Johannesburg, South Africa. The absence of reported H1N1 cases in most African countries indicates that in most places, there's simply nobody looking.

Experts fear that the high HIV infection rates in many African countries could worsen the pandemic's impact.
Although data are limited, studies in Western countries have suggested that HIV-infected seasonal flu patients are at higher risk of severe disease and death than those without HIV; South African studies have shown that they are also more likely to have secondary infections such as bacterial pneumonia, says Schoub.

Meanwhile, African health systems' capacity to cope with the pandemic is limited. Again, Madagascar's H3N2 outbreak offered a preview of problems that may play out across the continent. Patients were hard to reach, antibiotics to treat secondary infections were in short supply, intensive care units were nonexistent, and general awareness about influenza was minimal.

An analysis of 35 African countries' national preparedness plans, published last year by Richard Coker and colleagues at the London School of Hygiene and Tropical Medicine, showed that preparation was wanting as well. Most plans were detailed on detecting flu in animals but thin on coping with a human pandemic. Operational details were scant, and some plans—such as Madagascar's seven pages—were rudimentary.

Vaccines could make a difference—in theory, at least. But setting up vaccination programs would be a massive logistical challenge, and besides, it's unclear whether developing nations will have much access to vaccines, despite efforts by WHO Director-General Margaret Chan. Manufacturers have reported disappointing yields, and most of their output is spoken for through deals with rich countries.

Add all of that up, and it seems inevitable that the death toll in Africa will be higher than elsewhere, says Manuguerra—but the patchy reporting and the difficulty of doing epidemiology suggest that we'll never know how much higher exactly. “We'll have some data on some spots here and there,” says Manuguerra, “but we won't have the entire picture.”

4. Parasitology

# Key Malaria Parasite Likely Evolved From Chimp Version

1.
Jon Cohen

For centuries, the origin of the main parasite that causes malaria in humans has remained murkier than the stagnant water loved by the mosquitoes that transmit the killer pathogen. Now an international research team has uncovered compelling evidence that the parasite, Plasmodium falciparum, evolved from a relative called P. reichenowi that infects chimpanzees. The data support a provocative theory that as human red blood cells evolved a way to dodge P. reichenowi, they became highly vulnerable to P. falciparum.

Much debate has surrounded the relationship of P. falciparum to P. reichenowi, in part because for decades, scientists had only one isolate of the chimp parasite. But working with collaborators in Africa, a team led by Nathan Wolfe of Stanford University in Palo Alto, California, and Stephen Rich of the University of Massachusetts, Amherst, identified eight new isolates of P. reichenowi by examining tissue samples from 10 wild chimpanzees that had died in Côte d'Ivoire and blood from 84 captive chimps living in sanctuaries in Cameroon.

A comparison of genes from all nine P. reichenowi isolates and 133 strains of P. falciparum showed that the chimp parasite has much more genetic diversity, indicating that P. reichenowi is older, the research team reports online 3 August in the Proceedings of the National Academy of Sciences. “It's the answer to a substantive mystery,” says Wolfe, a virologist who runs Stanford's Global Viral Forecasting Initiative. Malaria is “one of the oldest diseases in humanity, and it's amazing to us that it took this long to nail down where it came from.”

A phylogenetic analysis also shows that the two parasites are on the same branch of the Plasmodium family tree. “This is a huge step forward and places P. falciparum clearly in a cluster of chimp parasite species,” says parasitologist Julian Rayner of the Wellcome Trust Sanger Institute in Hinxton, U.K.

In 1920, German researcher Eduard Reichenow claimed to have found P.
falciparum—which in people causes what researchers call “malignant malaria”—in chimpanzees. But attempts to infect chimps with lab isolates failed, which led two British researchers, Donald Blacklock and Saul Adler, to inject themselves a few years later with blood taken from a Plasmodium-infected chimpanzee. Neither of them became ill, and they concluded that chimps had a different parasite, which they named reichenowi.

In the 1990s, Francisco Ayala, a co-author of the new paper and the former Ph.D. adviser of Rich, revived interest in the long-ignored chimp parasite. Not persuaded by a study that linked the human parasite's evolution to a Plasmodium found in birds, Ayala noticed that the analysis had ignored P. reichenowi. Three years later, his lab showed that it was actually the closest relative of the human parasite. But Ayala could not determine whether the two pathogens evolved from a common ancestor or if one came from the other—or when the split occurred. The new study answers the first issue but leaves unsettled when P. falciparum appeared in humans.

Its debut may be linked to a change in human red blood cells 2 million to 3 million years ago. In 2005, Rayner, working with a group led by Ajit Varki and Pascal Gagneux at the University of California, San Diego, School of Medicine, reported that a genetic mutation protects people from P. reichenowi and makes them more susceptible to P. falciparum. The mutation altered the surface of red blood cells, changing a sugar that P. reichenowi binds to during the infection process. As a result, a new sugar adorned human red blood cells that P. falciparum favors. “We escaped malaria from P. reichenowi, but we paid a price,” says Gagneux.

Varki now suggests that the chimp parasite might work as a vaccine against P. falciparum, much in the way that cowpox immunizes against smallpox. Any way you go about it, researchers clearly have a bevy of new P. reichenowi isolates to examine.
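The study's headline inference—that the lineage showing more genetic diversity among its isolates is likely the older one—can be illustrated with a toy calculation. This is a hedged sketch, not the team's actual analysis; the sequences below are invented stand-ins, and real studies use far longer alignments and more sophisticated diversity estimators.

```python
# Toy illustration: a lineage whose isolates differ at more sites has had
# more time to accumulate mutations, suggesting it is the older lineage.
from itertools import combinations

def pairwise_diversity(seqs):
    """Mean proportion of differing sites across all pairs of equal-length sequences."""
    pairs = list(combinations(seqs, 2))
    if not pairs:
        return 0.0
    total = 0.0
    for a, b in pairs:
        total += sum(x != y for x, y in zip(a, b)) / len(a)
    return total / len(pairs)

# Hypothetical isolates: the "chimp" set varies at more sites than the "human" set.
reichenowi_like = ["ACGTACGT", "ACGAACGT", "TCGTACGA", "ACGTTCGT"]
falciparum_like = ["ACGTACGT", "ACGTACGT", "ACGTACGA", "ACGTACGT"]

div_chimp = pairwise_diversity(reichenowi_like)
div_human = pairwise_diversity(falciparum_like)
print(div_chimp > div_human)  # True: the more diverse set reads as the older lineage
```

The same logic, scaled up to nine P. reichenowi isolates and 133 P. falciparum strains, underlies the team's conclusion that the chimp parasite is ancestral.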
For malaria researchers, says Wolfe, “it's like the solar system is suddenly filled with planets that we never knew about.”

5. ScienceNOW.org

# From Science's Online Daily News Site

## Gorilla Virus in Our Midst

Researchers are shaking up the HIV family tree again. For the first time, investigators have found what looks like a gorilla version of the AIDS virus in a person. They do not know how the woman became infected but suspect that other humans harbor a similar virus. The possibility that gorillas can transmit the virus to humans further underscores the danger of butchering the apes or keeping them as pets, which still occurs in some African communities.

## Dinosaur Study Backs Controversial Find

When scientists reported 2 years ago that they had discovered intact protein fragments from a 68-million-year-old Tyrannosaurus rex, the skeptics pounced. They argued that one of the main lines of evidence, signatures of the protein fragments taken by mass spectrometry, was flawed. But now a reanalysis of that mass-spec data from an independent group of researchers backs up the original claim that dinosaur proteins have indeed survived the assault of time.

## Undoing the Damage of Glaucoma

In people suffering from glaucoma, damage to the optic nerve can slowly degrade peripheral vision and, in the worst cases, eventually lead to blindness. But eye drops containing nerve growth factor—a protein that promotes the survival and growth of neurons in the developing brain—appear to have prevented nerve damage in rats and restored some vision in three human glaucoma patients, the authors of a new study claim. Not everyone thinks the reported effect is real, however.

## Researchers Grow New Teeth in Mice

If you've lost a tooth to decay or injury, you may not have to rely on that dental bridge or implant forever. Japanese scientists have found a way to bioengineer new adult teeth. So far, the method works only in mice, but experts say it may one day take hold in humans.
Read the full postings, comments, and more on sciencenow.sciencemag.org.

6. Science and Commerce

# Systematics Researchers Want to Fend Off Patents

1. Elizabeth Pennisi

When an e-mail last week alerted David Hillis to U.S. Patent Application 20090030925, he thought it was a joke. A graduate student had stumbled across the 2-year-old application while looking for phylogenetics material on the Web. It seemed to be claiming the invention of techniques that Hillis, an evolutionary biologist at the University of Texas, Austin, has been using for years to discern how organisms were related to one another through evolutionary time, based on similarities in genetic sequence. “I had a hard time believing [the claim] was true at first,” Hillis recalls. It sounded “like The Onion article about Microsoft trying to patent 1's and 0's.”

But this Microsoft application is for real, and as word about it spreads, the phylogenetics community is increasingly worried. They think the patent is unlikely to pass muster at the U.S. Patent and Trademark Office because it seems obvious. However, should the application be approved, some researchers may find that they are doing work that infringes it. “It's frankly terrifying,” says entomologist David Maddison of the University of Arizona, Tucson. “As patents enter this field, there is a very great danger that we will get bogged down in a legal morass.”

Patenting basic research tools generally goes against the grain of systematists and evolutionary biologists, who are accustomed to sharing methods freely. Yet those innocent days may be waning. Just last month, a proposed patent for a DNA bar code threatened to derail a carefully crafted agreement about short DNA sequences to discriminate plant species (Science, 31 July, p. 526).
“I suspect phylogenetics has simply been under the radar compared to things of interest to dot-coms, venture capitalists, and the tech industry,” says Tamara Munzner, a computer scientist at the University of British Columbia in Vancouver, Canada. “I am concerned that this [Microsoft] application marks the beginning of the end for that state of grace.”

Patent 20090030925 was filed by Microsoft researcher Stuart Ozer, an expert in databases, in July 2007. Ozer says he wanted to apply database technologies to complex problems in biological sciences: “I saw an opportunity to create a new approach in analyzing sequence data when phylogenetic information was available,” he says.

The patent application describes a way to use biological data that has been organized according to evolutionary relatedness. It includes methods for counting evolutionary events and grouping positions within molecules. However, “this patent is written in such broad language that it appears to swallow up any activity that involves understanding biodiversity through phylogenetics,” says William Piel, a phylogeneticist at Yale University. He points out that such analyses date back to Charles Darwin, who sketched the first evolutionary tree; today, more than 350 phylogeny software packages are available on the Web. “Microsoft might as well patent the multiplication tables,” Piel says.

Ozer stresses that his patent does not apply to creating phylogenies or determining evolutionary trees, only to a technique for making further use of the variation seen among sequences used in such an analysis. “It is a novel approach applying specific mathematical methods to existing, phylogenetically annotated data to draw conclusions about relationships within a molecule,” Ozer explains.

Several phylogenetics-related patents have cropped up before now, and they have met stiff resistance.
In the mid-1990s, Willie Davidson, a molecular biologist at Simon Fraser University in Burnaby, Canada, caused a fuss when he and his colleague sought to patent the use of a piece of mitochondrial DNA to identify unknown specimens by comparing that specimen's DNA sequence with a database of DNA of known organisms. The pending patent was licensed to a French company called Atlangene, which challenged the right of the CSL Food Science Laboratory in Aberdeen, U.K., to use that approach to identify tuna species. The laboratory's scientists appealed to the systematics community for support. A senior researcher at the Natural History Museum in London, with backing from the U.K. government, challenged the patent application, which Davidson subsequently withdrew, he says.

Three times, the Barcode of Life project has grappled with patent applications. The latest involved Vincent Savolainen of the Royal Botanic Gardens, Kew, in Richmond, U.K., and colleagues. Savolainen had been a member of an independent committee of plant scientists trying to decide which pieces of DNA would work best for bar-coding plants. Not until the committee was on the verge of publishing its plan did the rest of the committee discover that Savolainen and a few colleagues had filed a patent application to cover the use of one of the pieces of DNA selected. When the Consortium for the Barcode of Life heard about the pending patent, “we were immediate and emphatic about asking [Savolainen] to withdraw the application,” says consortium executive secretary David Schindel. Savolainen and his colleagues complied.

The interpretation—and fate—of Ozer's Microsoft patent is still undecided. But Hillis, Munzner, and others are keeping close watch on these and other patent developments. Says Schindel, “It's a constant threat requiring constant vigilance.”

7. Intellectual Property

# Drug Metabolite Patents Prompt Legal Battle

1. Eliot Marshall

U.S.
medical groups are lining up this week against a biotech company in a rare federal court battle over whether a specific drug test can be patented. One side says the patents apply to aspects of the human metabolism that should not become private property. The other side says such patents are vital to foster further medical advances. The outcome is likely to affect an ongoing debate over patenting natural phenomena, a theme already before the U.S. Supreme Court (Science, 12 June, p. 1374).

On 5 August, the U.S. Court of Appeals for the Federal Circuit in Washington, D.C., heard oral arguments in a case that pits Prometheus Laboratories in San Diego, California, against the Mayo Clinic in Rochester, Minnesota. Prometheus runs a testing service that monitors responses to the drug azathioprine, which is used to treat autoimmune disorders. The Mayo Clinic wants to do the same kind of testing but doesn't have a relevant patent.

The American Medical Association and the American College of Medical Genetics have sided with Mayo, arguing that Prometheus's patents are based on mere observations of nature and seek ownership of a common metabolic response. Such “observational patents,” according to Mayo, add to health-care costs and complicate research.

Prometheus and its allies respond that not only are the patents valid, but also that denying them could stifle innovation. The Biotechnology Industry Organization in Washington, D.C., warns in an amicus brief that voiding the patents could mean “promising innovative products that could benefit patients and consumers would be developed abroad, or not at all.” The American Intellectual Property Law Association also supports Prometheus's position.

The issue arose 5 years ago after Mayo and its profitmaking subsidiary, Mayo Medical Laboratories (MML), proposed to do azathioprine metabolite testing. Doing the test itself would be faster and give doctors more options than sending samples to Prometheus, administrators decided.
“We were not aware of Prometheus's intellectual-property claims,” says MML Chief Administrative Officer David Herbert. Prometheus sued in U.S. court, asking Mayo to stop work. Mayo complied. But “we decided to take a stand,” says Herbert. Mayo argued that the patents were invalid, and in 2008, the court agreed with Mayo. Prometheus appealed, leading to this week's hearing. Herbert says the legal fight has already cost Mayo more than it would have earned from selling its own test.

At issue is a procedure for adjusting the dose of azathioprine, an anti-inflammatory drug used to treat Crohn's disease and other autoimmune disorders. The drug has a narrow window of efficacy. Too little gives no benefit, but too much can be toxic to the liver or, in rare cases, suppress the bone marrow and cause death.

The response is driven particularly by thiopurine methyltransferase (TPMT) enzymes, which are produced by a gene that may be present in a single copy, two copies, or none. People without the gene are most at risk. It's possible to test patients for the TPMT gene, but Mayo doctors say it's often necessary to watch blood levels of azathioprine metabolites because TPMT isn't the only enzyme involved.

The Prometheus patents (issued in 2002 and 2004) cover a simple procedure for adjusting the drug dose according to safe azathioprine metabolite levels seen in previous patients. Although its critics say this approach is obvious, Prometheus defends the test's originality. After hearing arguments this week, the appellate judges may wait to act until the Supreme Court has ruled on a related case.

8. ScienceInsider

# From the Science Policy Blog

The United States should begin vaccinating its citizens this fall against the H1N1 influenza virus, says an expert panel that guides the Department of Health and Human Services (HHS).
The Advisory Committee on Immunization Practices proposed rationing the limited supplies of vaccine among high-priority groups representing roughly half the country's population. At the top of the list are people with underlying conditions that put them at risk for severe disease, pregnant women, those from 6 months to 24 years old, anyone who lives with infants under 6 months old, health-care workers, and emergency personnel. HHS hopes to have enough doses of the vaccine by October for everyone in these groups.

A mistake in handling manuscripts—not deliberate copying—led to an embarrassing use of borrowed text in a recent paper on the use of human embryonic stem cells, claims one of the authors, Karim Nayernia of Newcastle University in the United Kingdom. The journal's editor withdrew it after receiving an allegation of plagiarism, but Nayernia says the error occurred when a postdoc accidentally submitted the wrong draft for online publication.

A Senate appropriations panel has matched President Barack Obama's constrained funding request for the National Institutes of Health in 2010. The $442 million boost to $31.8 billion is less than half of the increase the House of Representatives approved last week. Legislators won't make a final decision on the spending bill until fall at the earliest.

In other news, epidemiologist David Michaels, known for studying nuclear workers' health risks, has been chosen to run the U.S. Occupational Safety and Health Administration, pending Senate confirmation. Meanwhile, Richard Besser, the former acting director of the Centers for Disease Control and Prevention, is moving to ABC News.

For more science policy news, go to blogs.sciencemag.org/scienceinsider/.

9. Origins

# On the Origin of Eukaryotes

1. Carl Zimmer

If the eukaryote cell hadn't evolved, we wouldn't be here to discuss the question of how it originated.
In the eighth essay in Science's series in honor of the Year of Darwin, Carl Zimmer describes one of the most important transitions in the history of life: the origin of cells with a nucleus, which gave rise to every multicellular form of life.

You may not feel as though you have much in common with a toadstool, but its cells and ours are strikingly similar. Animals and fungi both keep their DNA coiled up in a nucleus. Their genes are interspersed with chunks of DNA that cells have to edit out to make proteins. Those proteins are shuttled through a maze of membranes before they can float out into the cell. A cell in a toadstool, like your own cells, manufactures fuel in compartments called mitochondria. Both species' cells contain the same molecular skeleton, which they can break down and reassemble in order to crawl. This same kind of cell is found in plants and algae; single-celled protozoans have the same layout as well. Other microbes, such as the gut bacterium Escherichia coli, lack it.

All species with our arrangement are known as eukaryotes. The word is Greek for “true kernel,” referring to the nucleus. All other living things, which lack a nucleus, mitochondria, and the eukaryotes' LEGO-like skeleton, are known as prokaryotes. “It's the deepest divide in the living world,” says William Martin of the University of Düsseldorf in Germany.

The evolution of the eukaryote cell is one of the most important transitions in the history of life. “Without the origin of eukaryotes, we wouldn't be here to discuss the question,” says T. Martin Embley of Newcastle University in the United Kingdom. Eukaryotes gave rise to animals and every other multicellular form of life. Indeed, when you look at the natural world, most of what you see are these “true kernel” organisms.

The fossil record doesn't tell us much about their origin. Paleontologists have found fossils of prokaryotes dating back 3.45 billion years.
The earliest fossils that have been proposed to be eukaryotes—based on their larger size and eukaryotelike features on their surfaces—are only about 2 billion years old. Paleontologists have not yet discovered any transitional forms in the intervening 1.45 billion years, as they have for other groups, such as birds or whales. “One gets a bit of fossil envy,” says Anthony Poole of Stockholm University.

Fortunately, living eukaryotes and prokaryotes still retain some clues to the transition, both in their cell biology and in their genomes. By studying both, researchers have made tremendous advances in the past 20 years in understanding how eukaryotes first emerged. A key step in their evolution, for example, was the acquisition of bacterial passengers, which eventually became the mitochondria of eukaryote cells. But some scientists now argue that the genes of these bacteria also helped give rise to other important features of the eukaryote cell, including the nucleus. “It's been a really cool journey,” says Embley.

## Unexpected ancestry

Scientists first divided life into prokaryotes and eukaryotes in the mid-1900s, using increasingly powerful microscopes to see the fine details of cells. But they couldn't say much about how prokaryotes and eukaryotes were related. Did the two groups branch off from a common ancestor? Or did eukaryotes evolve from a particular lineage of prokaryotes long after the evolution of the first prokaryotes?

An important step toward an answer to these questions was taken in the 1970s. Carl Woese of the University of Illinois, Urbana-Champaign, and his colleagues compared versions of an RNA molecule called 16S rRNA in a wide range of prokaryotes and eukaryotes. They reasoned that species with similar sequences were closely related and used that reasoning to draw a tree of life.
Eukaryotes were all more closely related to one another than any were to prokaryotes, they found, which suggests that eukaryotes all belong to a single lineage and that the eukaryote cell evolved only once in the history of life. But Woese and his colleagues got a surprise when they looked at the prokaryotes. The prokaryotes formed two major branches in their analysis. One branch included familiar bacteria such as E. coli. The other branch included a motley crew of obscure microbes—methane-producing organisms that can survive on hydrogen in oxygen-free swamps, for example, and others that live in boiling water around deep-sea hydrothermal vents. Woese and his colleagues argued that there were three major groups of living things: eukaryotes, bacteria, and a group they dubbed archaea. And most surprising of all, Woese and his colleagues found that archaea were more closely related to eukaryotes than they were to bacteria.

Although archaea and bacteria may seem indistinguishable to the nonexpert, Woese's discovery prompted microbiologists to take a closer look. They found some important differences, such as in the kinds of molecules archaea and bacteria use to build their outer membranes. A number of scientists began to study archaea to get some clues to the origins of their close relatives, the eukaryotes.

Many scientists assumed that after the ancestors of eukaryotes and archaea split apart, eukaryotes evolved all of their unique traits through the familiar process of small mutations accumulating through natural selection. But Lynn Margulis, a microbiologist now at the University of Massachusetts, Amherst, argued that a number of parts of the eukaryote cell were acquired in a radically different way: by the fusion of separate species. Reviving an idea first championed in the early 1900s, Margulis pointed to many traits that mitochondria share with bacteria. Both are surrounded by a pair of membranes, for example.
Mitochondria and some bacteria can also use oxygen to generate fuel, in the form of adenosine triphosphate (ATP) molecules. And mitochondria have their own DNA, which they duplicate when they divide into new mitochondria. Margulis argued that mitochondria arose after bacteria entered host cells and, instead of being degraded, became so-called endosymbionts.

Many studies have bolstered this once-controversial hypothesis. The genes in mitochondria closely resemble genes in bacteria, not those in any eukaryote. In fact, a number of mitochondrial genes point to the same lineage of bacteria, part of the alpha proteobacteria. Additional evidence for the endosymbiont hypothesis comes from the genes in the eukaryote nucleus. Some of the proteins that carry out reactions in mitochondria are encoded in nuclear DNA. When scientists have searched for the closest relatives of these genes, they find them among bacterial genes, not eukaryote genes. It seems that after the ancestors of mitochondria entered the ancestors of today's eukaryotes, some of their genes got moved into the eukaryote's genome.

## Mitochondria everywhere

Although most eukaryotes have mitochondria, a few don't—or so it once seemed. In 1983, Thomas Cavalier-Smith of the University of Oxford in the United Kingdom proposed that these eukaryotes branched off before bacteria entered the eukaryote cell and became mitochondria. According to his so-called archezoa hypothesis, mitochondria first evolved only after eukaryotes had already evolved a nucleus, a cellular skeleton, and many other distinctively eukaryotic features.

But a closer look at mitochondria-free eukaryotes raised doubts about the archezoa hypothesis. In the 1970s, Miklós Müller of the Rockefeller University in New York City and his colleagues discovered that some protozoans and fungi make ATP without mitochondria, using structures called hydrogenosomes. (They named the structures for the hydrogen they produce as waste.)
In 1995, scientists discovered mitochondrialike genes in eukaryotes that only had hydrogenosomes. Further research has now confirmed that hydrogenosomes and mitochondria descend from the same endosymbiont. By 1998, Müller and Martin of the University of Düsseldorf were arguing that it was time to throw out the archezoa hypothesis. They maintained that the common ancestor of all living eukaryotes already carried an endosymbiont. They predicted that further study would reveal mitochondrialike structures in eukaryotes that seemed to be missing mitochondria at the time.

Based on the biochemistry of mitochondria and hydrogenosomes, Martin and Müller sketched out a scenario for how the original merging of two cells occurred. They pointed out that it is very common for bacteria and archaea to depend on each other, with one species producing waste that another species can use as food. “That sort of stuff is all over the bottom of the ocean,” says Martin. Martin and Müller proposed that mitochondria descend from bacteria that fed on organic carbon and released hydrogen atoms. Their partner was an archaeon that used the hydrogen to make ATP, as many archaea do today. Over time, the archaeon engulfed the bacteria and evolved the ability to feed its newly acquired endosymbionts organic carbon.

In the 11 years since Martin and Müller proposed their “hydrogen hypothesis,” scientists have come to agree that the common ancestor of living eukaryotes had an endosymbiont. “It is certain,” says Eugene Koonin of the National Center for Biotechnology Information in Bethesda, Maryland. One by one, exceptions have fallen away. Along with making ATP, mitochondria also make clusters of iron and sulfur atoms. While studying Giardia, a “mitochondria-free” eukaryote, Jorge Tovar of Royal Holloway, University of London, and his colleagues discovered proteins very similar to the proteins that build iron-sulfur clusters in mitochondria.
The scientists manipulated the proteins so that they would light up inside Giardia. It turned out that the proteins all clumped together in a tiny sac that had, until then, gone unnoticed. In 2003, Tovar and his colleagues dubbed this sac a mitosome. Scientists now agree that mitosomes are vestigial mitochondria.

## Mosaic genomes

In 1984, James Lake of the University of California, Los Angeles, and his colleagues challenged Woese's three-domain view of life. Lake and his colleagues took a close look at ribosomes, the protein-building factories found in all living things. They classified species based on the distinctive lobes and gaps in their ribosomes. Based on this analysis, Lake and his colleagues found that eukaryotes do not form a distinct group on their own. Instead, they share a close ancestry with some lineages of archaea and not others. In effect, they found that there are only two major branches of life—bacteria and archaea. Eukaryotes are just a peculiar kind of archaea. Lake dubbed the archaeal ancestors of eukaryotes eocytes (dawn cells).

Since then, a number of scientists have tried to choose among the three-domain hypothesis, the eocyte hypothesis, and several others. They've analyzed more genes in more species, using more sophisticated statistical methods. In the 12 August issue of the Philosophical Transactions of the Royal Society, Embley and his colleagues present the latest of these studies, comparing 41 proteins in 35 species. “It is the eocyte tree that is favoured and not the three-domains tree,” they concluded.

Embley and his colleagues selected proteins that preserved the clearest signal of the deep ancestry of life. They have been carried down faithfully from ancestor to descendant for billions of years. But eukaryote genomes also include genes that have been imported from other species through a process called horizontal gene transfer.
About 75% of all eukaryote genes are more closely related to genes found in bacteria than to ones in archaea. Scientists have tried to make sense of this genetic mélange by cataloging the kinds of jobs archaeal and bacterial genes do in our cells. Archaeal genes tend to be involved in information processing. Bacterial genes tend to be associated with metabolism and the structure of our cells. But the line is not always easy to draw between archaeal and bacterial genes. Koonin and his colleagues have found that the proteins that make up the walls of the nucleus are encoded by a mix of archaeal and bacterial genes.

One possible explanation for the mixed-up eukaryote genome is that the bacteria that gave rise to mitochondria didn't just shrivel up into ATP-producing factories. Instead, many of their genes were transferred to the nucleus of their archaeal host. Those genes then helped produce the eukaryote membranes, nucleus, and metabolism. Most of our genes, in other words, were transferred from an endosymbiont.

Having a second genome in such close quarters, Koonin and Martin have argued, may have posed a hazard to the survival of early eukaryotes. Along with protein-coding genes and other useful pieces of DNA, the genomes of many species also carry viruslike stretches of genetic material called mobile elements. Mobile elements can, on rare occasion, jump from one host genome to another. And once in their new host genome, they can make copies of themselves that are reinserted back into the genome. As mobile elements bombard a genome, they can disrupt the proper working of its genes. Koonin and Martin suspect that with an endosymbiont in their midst, early eukaryotes would have been particularly vulnerable to attacks from mobile elements. They propose that the nucleus—the structure that gives eukaryotes their name—evolved as a defense against this attack. After mobile elements are transcribed into single-stranded RNA, they are copied back into the genome.
With the invention of a nucleus, RNA molecules had to be moved across a barrier, out of the nucleus, in order to be translated into proteins. That wall reduced the chances of mobile elements being reinserted back into the genome.

Despite all the new insights into the origin of eukaryotes, scientists are far from agreed on all the details. In the July issue of BioEssays, for example, Yaacov Davidov and Edouard Jurkevitch of the Hebrew University of Jerusalem propose that the ancestors of mitochondria were not mutualists with archaea but predators that pushed their way into other prokaryotes and devoured them. Instead of killing their prey, Davidov and Jurkevitch argue, some predators took up residence there.

Scientists are also still debating how many bacterial genes eukaryotes got from the original endosymbiont. Prokaryotes sometimes pass DNA between distantly related species, with the result that their genomes have become mosaics of genes. It's possible, some researchers argue, that many genes were transferred this way into the eukaryote genome from a variety of bacteria.

Testing these ideas will demand a better knowledge of the diversity of both prokaryotes and eukaryotes. It may also require new methods for reconstructing events that happened 2 billion years ago. “These are some of the hardest problems in biology,” says Embley.

Whatever the exact series of events turns out to be, eukaryotes triggered a biological revolution. Prokaryotes can generate energy only by pumping charged atoms across their membranes. That constraint helps limit their size: as prokaryotes grow, their volume increases much faster than their surface area, and they end up with too little energy to power their cells. Eukaryotes, on the other hand, can pack hundreds of energy-generating mitochondria into a single cell. And so they could get big, expanding into an entirely new ecological niche.
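The surface-area-to-volume constraint described above can be made concrete with a quick back-of-the-envelope calculation. The sketch below is an illustration, not material from the article; the cell radii are assumed example values. It models cells as spheres, where volume grows as the cube of the radius but membrane area grows only as the square:

```python
import math

def surface_to_volume(radius_um: float) -> float:
    """Surface area / volume for a sphere: (4*pi*r^2) / (4/3*pi*r^3) = 3/r."""
    area = 4.0 * math.pi * radius_um ** 2
    volume = (4.0 / 3.0) * math.pi * radius_um ** 3
    return area / volume

# Assumed, illustrative sizes: a small prokaryote vs. a large eukaryote cell.
bacterium = surface_to_volume(0.5)   # ~0.5 micrometer radius -> 6.0 per micrometer
eukaryote = surface_to_volume(10.0)  # ~10 micrometer radius  -> 0.3 per micrometer

# The small cell has 20x more membrane per unit volume; a cell that generated
# all its energy at the membrane would be increasingly starved as it grew.
print(bacterium / eukaryote)  # prints 20.0
```

Because the ratio collapses to 3/r, every tenfold increase in radius cuts the membrane available per unit of cytoplasm tenfold, which is why internal, mitochondria-style energy generation removes the size ceiling.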
“You don't have to compete for the same nutrients,” says Nick Lane of University College London, author of Life Ascending: The Ten Great Inventions of Evolution. “You simply eat the opposition.”

The eukaryote cell also opened the way to more complex species. Single-celled eukaryotes could evolve into multicellular animals, plants, and fungi. Individual cells in those organisms could evolve into specialized forms, such as muscles and neurons. “As soon as you've got one prokaryote inside another prokaryote,” says Lane, “you've completely transformed the cell and what it can do.”

## References

T. Cavalier-Smith, "Archamoebae: the ancestral eukaryotes?" BioSystems 25, 25 (1991).
Y. Davidov and E. Jurkevitch, "Predation between prokaryotes and the origin of eukaryotes." BioEssays 31, 748 (2009).
P. G. Foster, C. J. Cox, T. M. Embley, "The primary divisions of life: a phylogenomic approach employing composition-heterogeneous methods." Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences 364, 2197 (2009).
J. Hackstein, J. Tjaden, M. Huynen, "Mitochondria, hydrogenosomes and mitosomes: products of evolutionary tinkering!" Current Genetics 50, 225 (2006).
E. J. Javaux, "The Early Eukaryotic Fossil Record." Advances in Experimental Medicine and Biology 607, 1 (2007).
N. Lane, Power, Sex, Suicide: Mitochondria and the Meaning of Life. Oxford University Press (2006).
N. Lane, Life Ascending: The Ten Great Inventions of Evolution. New York: Norton (2009).
W. Martin and M. Müller, "The hydrogen hypothesis for the first eukaryote." Nature 392, 37 (1998).
W. Martin and E. V. Koonin, "Introns and the origin of nucleus–cytosol compartmentalization." Nature 440, 41 (2006).
L. Sagan, "On the origin of mitosing cells." Journal of Theoretical Biology 14, 255 (1967).
J. Sapp, "The Prokaryote-Eukaryote Dichotomy: Meanings and Mythology." Microbiology and Molecular Biology Reviews 69, 292 (2005).
J. Tovar et al., "Mitochondrial remnant organelles of Giardia function in iron-sulphur protein maturation." Nature 426, 172 (2003).
C. R. Woese, "Interpreting the universal phylogenetic tree." Proceedings of the National Academy of Sciences of the United States of America 97, 8392 (2000).

10. Newsmaker Interview

# Space Telescope's Chief Scientist Recalls How Hubble Was Saved

1. Yudhijit Bhattacharjee

As he prepares to retire, David Leckrone reflects on the rescue effort that kept the observatory in orbit and on the prospects for humans in space.

One day in October 2008, David Leckrone's cell phone rang shortly before he was to board a flight from Baltimore, Maryland, to Houston, Texas, to watch the launch of a long-awaited final mission to refurbish the Hubble Space Telescope. It was a call to inform the Hubble chief scientist that the 18-year-old telescope's computer had developed a snag, which meant the mission would have to be postponed.

For Leckrone, the delay marked yet another heart-stopping moment on a roller-coaster ride that began with former NASA Administrator Sean O'Keefe's 2004 decision to cancel the final servicing mission. O'Keefe's decision was reversed, and the mission launched successfully on 11 May 2009. The seven astronauts on board completed all the planned improvements and repairs: swapping out the Wide Field and Planetary Camera 2 for the more advanced Wide Field Camera 3, installing a new computer and the new Cosmic Origins Spectrograph, replacing gyroscopes and batteries, and fixing the Advanced Camera for Surveys (ACS), which had lost two of its three channels in 2006.

Leckrone, who will retire this October after 17 years in the job, told Science last week that he expects all of Hubble's instruments to be doing science by early fall. He also shared some of the drama from the days leading up to—and during—the mission. His answers have been edited for brevity.

Q: How is the Hubble doing?
D.L.: We are still in the midst of calibrating some of the old and new systems to make sure they run in an optimal fashion. The ACS is already doing science; the Wide Field Camera [3] will be doing so in 2 to 3 weeks. The Cosmic Origins Spectrograph will become operational after that.

Q: Have there been any issues to resolve?

D.L.: We had a couple of little hiccups. A cosmic ray particle hit the circuitry of the backup computer a few weeks after the mission ended, and some of the data arrived in all zeroes and all ones. We rebooted it, and it has been fine since. We created a software patch in the main computer to intervene if this ever happens again. We had a similar hiccup on the ACS. It's routine to have some trouble here and there, but we never panic.

Q: How long do you think the Hubble will stay afloat? Five years?

D.L.: The 5-year time frame is a budget number. From an engineering and science standpoint, I think it could go at least 10 years before starting to degrade. You always find clever scientists and engineers to keep the instruments going. As the spacecraft's ability diminishes, the creativity of the team on the ground increases.

Q: Did you and others give up hope of extending Hubble's life after Sean O'Keefe canceled the mission?

D.L.: It seemed like the mission was for certain gone. We went through a mourning period. NASA wanted us to tie off the work on the instruments at a logical stopping point, and some had the idea that the new instruments might be usable on other missions. That was not credible because they had been designed for Hubble. The next thing that came along was this bright idea of servicing the mission robotically. It looked promising; our folks put a substantial effort into developing a design for a robotic mission, but the time and costs for pulling it together would have been substantial. Fortunately, after Mike Griffin took over, he said, “Let's scuttle the robot idea and go back to what we know how to do.”

Q: Was that difficult?
D.L.: We had to undo some of the steps from when NASA was thinking about using the instruments elsewhere. Also, many of our topnotch team members and contractors had gone off and taken other jobs. It took a little while to pull back the best people.

Q: Did the mission go smoothly?

D.L.: Mostly, yes. The worst moment came at the beginning of the first spacewalk, when astronauts had to remove the Wide Field Camera 2. We had power tools to loosen the single bolt that was holding it in place. The crew tried three different torque settings; none of them was strong enough. At that point, I could see the whole mission crumbling into dust around my head. Drew Feustel [one of the astronauts] saved our bacon. He went into manual mode, made sure that the tool was properly engaged, and applied some muscle. He was scared the bolt would break, which would have been a disaster. At the last minute, it popped loose. I felt like I came within 6 inches of being hit by a truck. It was a great example of what humans can do in space.

Q: Couldn't robots have done that?

D.L.: There's a limit to what robots can do; we are just not there technologically. One could argue that a robotic mission would have been only 50% as effective.

Q: What's your message to the panel that's reviewing the human space flight program [Science, 22 May, p. 999]?

D.L.: I have told the panel that it would be insane to do away with the human program. I hope the panel will recommend that whatever new architecture is developed will retain the capabilities that we have, including the ability to service large space observatories. That is the bedrock of our preeminence in space right now. Without humans in space, there is no NASA.

11. Neuropathology

# A Late Hit for Pro Football Players

1. Greg Miller

Emerging research suggests that hard knocks on the field may cause delayed brain damage in retired athletes.
As a professional wrestler, Chris Nowinski had what those in the business call “natural heat.” His ring persona, Chris Harvard, was an Ivy League snob who sometimes entered the ring reading a book. Fans loved to hate him. “I think I achieved greatness when I was in Mumbai, India, and had 20,000 people chanting ‘Harvard sucks!’” Nowinski says.

Nowinski had in fact attended Harvard University, where he'd majored in sociology, been a standout defensive tackle on the football team, and graduated in 2000. A stint on Tough Enough, a reality TV show, led to a contract with World Wrestling Entertainment. But Nowinski's wrestling career was short-lived. He quit in 2003 after a series of concussions left him prone to memory lapses, irritability, sleeplessness, and persistent headaches. “After the last one, I just didn't bounce back,” he says.

Nowinski took a job at a life sciences consulting firm near Boston, thinking he'd get back in the ring as soon as he felt better. But as weeks turned into months with little progress, he began combing the medical literature for information about the long-term effects of head injuries in athletes. What he found disturbed him. Studies dating back to the 1920s had documented dementia and neurodegeneration in boxers. But were other athletes at risk? Until recently, the literature had very little to say.

Now Nowinski is trying to use his knack for getting people riled up to change that. In 2007, he co-founded the Sports Legacy Institute, a nonprofit organization funded by donations that promotes research on sports-related head injuries. Last year, the institute announced the formation of a new research center in partnership with the Boston University School of Medicine to study neuropsychiatric symptoms in athletes and to examine donated brains for signs of pathology. Using his contacts in the sporting world, Nowinski has persuaded 134 football players and other athletes to donate their brains for research when they die.
So far, the Boston group and other researchers working independently say they have found signs of pathology in a total of 12 former players in the National Football League (NFL), plus at least four wrestlers and one soccer player, many of whom died in their 40s. Only one postmortem exam of an NFL player has so far turned up negative: that of Damien Nash, a running back for the Denver Broncos, who died at age 24. Although the number of cases is still very small, and most have yet to be published in peer-reviewed journals, the researchers insist their preliminary findings are cause for concern because this type of brain pathology is virtually unheard of in people this young.

Nowinski has used these cases to generate considerable media attention. At this year's Super Bowl, he and colleagues held a press conference to announce new findings, including early signs of pathology in an 18-year-old high school football player. As was the case in his wrestling days, not everyone likes his style. “Chris Nowinski is someone I don't want anything to do with,” says Bennet Omalu, the neuropathologist who published the first case report on neurodegeneration in an NFL player in 2005. Omalu, who is currently the chief medical examiner in San Joaquin County in California, initially collaborated with Nowinski and the Boston group, but he withdrew from the project because he felt Nowinski was putting publicity ahead of research and failing to give him proper credit for his contributions.

Despite their differences, both men are passionate about the issue, and both see growing evidence that contact sports put athletes at risk for neurological problems years after they quit playing. If they can prove their case, sports leagues like the NFL will likely face more pressure to protect athletes on the field and take care of them after they retire, and parents and coaches will face troubling questions about the long-term hazards for younger athletes.
## Troublesome tangles

At Ann McKee's office at the Veterans Administration Hospital in Bedford, Massachusetts, about a half-hour drive from Boston, the décor includes poster-sized images of diseased and injured brains and bobblehead dolls of football players. McKee is a neuropathologist on the faculty at Boston University and co-director of the new research center. She grew up in Wisconsin and says she's a loyal Green Bay Packers fan.

Rummaging through drawers, she pulls together a tray of glass microscope slides and sets them down with a clink next to a microscope with two sets of eyepieces. Placing a slide on the scope, she invites a visitor to take a look. At first the view is a constellation of purple dots: the stained cell bodies of neurons. There's a dizzying blur as McKee moves the slide and centers the view on a cluster of brownish dots surrounding a blood vessel. Another blur brings another dark cluster into view, this one lining the depths of a sulcus, as the grooves in the cerebral cortex are called. “None of this should be here, just none of it,” McKee says.

The dark spots indicate a high concentration of a protein called tau, a prime suspect in several neurodegenerative disorders, including Alzheimer's disease. But the brain tissue on the microscope didn't come from an elderly Alzheimer's patient; it belonged to a former NFL player who died at age 45. John Grimsley, a retired linebacker, shot himself in the chest while cleaning his gun in February 2008. His death was ruled an accident. After a call from Nowinski, Grimsley's wife agreed to allow McKee to examine his brain. Grimsley had started experiencing problems with memory and concentration at age 40, his family told the Boston researchers. Those symptoms worsened toward the end of his life, and he became emotionally volatile.
Grimsley's brain lacks the clumps of amyloid protein that are a hallmark of Alzheimer's disease, and the distribution of tau is different, McKee and colleagues reported in the July issue of the Journal of Neuropathology and Experimental Neurology. His pathology fits another diagnosis, chronic traumatic encephalopathy (CTE), which has previously been reported in boxers but only recently in other athletes.

Long-term neurological problems in boxers were first described in a 1928 paper in The Journal of the American Medical Association written by a New Jersey pathologist named Harrison Martland. After consulting with a fight promoter, Martland compiled a list of “punch drunk” fighters and described their symptoms. In the ensuing decades, researchers reported more cases and published neuropathological exams showing extensive brain damage in boxers. The condition became known as dementia pugilistica, and more recently as CTE, to acknowledge newer evidence that boxers are not the only athletes at risk.

Omalu was the first to document CTE in a football player. One Saturday morning in late September 2002, he recalls, he had the television on as he was getting ready to go to work at the Allegheny County medical examiner's office, near Pittsburgh, Pennsylvania. The morning news included the death of former Pittsburgh Steelers center Michael “Iron Mike” Webster. At his peak in the 1970s and '80s, Webster was considered one of the best to ever play his position. But after retiring in 1990, Webster fell on hard times, going through a divorce and periods of depression, drug abuse, and homelessness. He died at age 50.

Omalu wondered whether the hits Webster sustained in his 17 seasons in the NFL might have contributed to his troubles. He had no idea he was about to get a chance to investigate. “I went to work and lo and behold, Mike Webster was on my autopsy table,” says Omalu. Webster had died of a heart attack, but postconcussion syndrome was listed as a contributing factor.
At first glance, Webster's brain looked normal, Omalu says. But closer examination revealed protein deposits similar to those found in patients with Alzheimer's disease, including dense tangles of tau and a smattering of amyloid plaques. (Webster's case may be unusual; most subsequent studies of football players have found only tau.)

Omalu and colleagues' case report on Webster, published in Neurosurgery in 2005, elicited a flurry of letters to the editor. One was a lengthy rebuttal from members of the NFL's committee on mild traumatic brain injury, who argued that the paper should be retracted or substantially revised because the researchers had misinterpreted their neuropathological findings and failed to present an adequate clinical history. Other letters were more supportive and joined the call for more research. One of these was from Robert Cantu, a sports neurologist and leading authority on concussion in Concord, Massachusetts, who happened to be Chris Nowinski's doctor and later became a co-founder of the Sports Legacy Institute.

## Gaining ground

Although Omalu's report on Webster was initially greeted with skepticism, other evidence soon began to bolster the case that something was going wrong in the brains of some football players. In 2006, Omalu published a second Neurosurgery paper describing signs of CTE in another Steeler, Terry Long, who committed suicide by drinking antifreeze in 2005. Like Webster, Long had struggled in his postfootball life. In all, Omalu says he has found CTE in the brains of eight of the nine professional football players he's examined. (Nash was the sole exception.) In Boston, McKee says she has found CTE in all four former pros she has examined.

Something is clearly abnormal in these athletes' brains, says Daniel Perl, a veteran neuropathologist at Mount Sinai School of Medicine in New York City, who recently began collaborating with the Boston researchers.
In the most extreme cases, says Perl, “the extent of tau pathology is just unbelievable.”

Studies with living players hint at problems too. A survey of 2552 retired professional football players, led by Kevin Guskiewicz, a neuroscientist and research director of the Center for the Study of Retired Athletes at the University of North Carolina, Chapel Hill, found that those who reported three or more concussions in the course of their careers were five times more likely than those who reported no concussions to show signs of mild cognitive impairment, a frequent prelude to Alzheimer's disease. The researchers published the findings in Neurosurgery in 2005. In the same group, those with more concussions were more likely to have received a diagnosis of depression, Guskiewicz and colleagues reported in 2007 in Medicine and Science in Sports and Exercise.

Critics, including members of the NFL's committee on mild traumatic brain injury, dismissed the study because it relied on retirees' self-reported history of concussions and subsequent symptoms, Guskiewicz says. He and colleagues have begun a follow-up study in which retirees spend 2 days in Chapel Hill for a battery of tests to gauge their cognitive and psychological well-being more directly. They will also undergo magnetic resonance imaging scans to look for brain abnormalities. Guskiewicz says the preliminary work suggests that NFL players, particularly those who suffer more concussions, are at risk of depression and cognitive impairments later in life.

## Looking downfield

It's a possibility the NFL has been quick to dismiss in the past, Guskiewicz and others say. The league did, however, invite McKee and others to present their findings at a meeting of its committee on mild traumatic brain injury in June. She says the response was polite but skeptical. The use of performance-enhancing anabolic steroids was raised as an alternative cause of pathology, McKee says.
“I tried to make the case that the common denominator here is repetitive head trauma and that a number of these individuals never used anabolic steroids.”

“There is neuropathological evidence that there's something going on, but exactly what the etiology is is not all that clear,” says Ira Casson, the co-chair of the NFL committee on mild traumatic brain injury. He says the NFL is conducting its own study to examine 120 retired players; it will include detailed histories and cognitive and psychological tests and will use a variety of neuroimaging methods to look for signs of brain damage. As a control group, the study will include 60 players who tried out for the NFL but didn't see significant playing time—a comparison that would presumably distinguish any effects of playing football in general from effects of playing in the NFL per se.

One of the many major questions is whether all football players—or other people subjected to repeated blows to the head—are susceptible to CTE. After all, thousands of former hard-hitting athletes appear to be just fine. The number and severity of blows, and their proximity in time, are almost certainly factors, says Cantu. Drug use and genetics may be important, too. For example, a handful of studies have suggested that a certain version of the apolipoprotein E gene, called APOE4, may make people more susceptible to ill effects from brain injuries. (APOE4 also appears to up the risk of Alzheimer's disease in the general population.)

Exactly how repetitive brain trauma might cause delayed neurodegeneration is not known. Although tangles of tau protein contribute to neurodegeneration in a number of diseases, very little is known about what causes the protein to aggregate, says John Trojanowski, who studies mechanisms of neurodegeneration at the University of Pennsylvania.
He notes that Swedish researchers reported in the Annals of Neurology in 2006 that amateur boxers have elevated levels of tau in their cerebrospinal fluid 7 to 10 days after a fight. But why? Rodent studies suggest that inflammation and oxidative stress after brain injuries encourage the buildup of amyloid protein. The same may be true for tau, Trojanowski says, but the experiments have yet to be done.

Perhaps the most worrisome of the findings to date is the presence of tau in the brain of the high school football player who died. “An 18-year-old should have a pristine brain,” McKee says. For Nowinski, the case raises the question of how young is too young to play contact sports and underscores the urgency of studying CTE. “We want to figure out something to do before kids who didn't need to get this do [get it] because someone thought it was a good idea for them to start bashing their heads at 6 years old,” he says.

12. Theoretical Physics

# Can Gravity and Quantum Particles Be Reconciled After All?

1. Adrian Cho

One group of intrepid theorists thinks the answer to that question may be “yes,” and if they're right, one argument in favor of string theory unravels.

Zvi Bern's epiphany came in October 2005 when he read in the newspaper about that year's Nobel Prize in physiology or medicine. Barry Marshall and J. Robin Warren took the honors for discovering that peptic ulcers are caused not by stress or diet but by a bacterium. When skeptics scoffed at the idea, Marshall gave himself the bug—and an ulcer—and cured himself with antibiotics. From the tale, “it was clear that in science the big money is in overturning the accepted beliefs,” says Bern, a theoretical physicist at the University of California, Los Angeles. So he decided to return to a project that might upend a pillar of physics lore: the belief that standard quantum theory and gravity don't mix.
More precisely, many theorists think it is impossible to make a quantum theory of pointlike particles—a “quantum field theory”—that also incorporates Einstein's theory of general relativity. Try it, they say, and the theory will go mathematically haywire, spitting out meaningless infinities. That supposed inevitability has been a prime motivation for string theory, which assumes that every particle is actually an infinitesimal string vibrating in nine-dimensional space—and which is inherently immune to these infinities.

But point particles and gravity may be compatible after all. For several years Bern, Lance Dixon of SLAC National Accelerator Laboratory in Menlo Park, California, and colleagues have tried to show that one particular quantum field theory of gravity known as N = 8 supergravity gives sensible answers. In a paper in press at Physical Review Letters, they take another step to show that this theory works mathematically. “It's a remarkable result,” says Kellogg Stelle, a string theorist at Imperial College London, who had bet Bern a bottle of Italian Barolo wine that the latest calculation would “blow up,” as physicists like to say. “I thought they would find infinities, and I lost fair and square.”

The work doesn't disprove string theory, but it has string theorists backpedaling a bit in their criticism of quantum field theory. “At certain points, our understanding has been incomplete, and we may have said things that weren't right,” says John Schwarz of the California Institute of Technology in Pasadena. “That being said, the fact is that we still need string theory.”

Physicists have a “hand-waving” argument for why quantum field theory can't jibe with gravity, whereas string theory can. According to general relativity, massive objects like Earth bend space and time, or spacetime, to produce the effect we call gravity.
But quantum mechanics says that spacetime should suffer from quantum uncertainty so that at the smallest scales, it should erupt into a frothy “quantum foam,” in which where and when lose precise meaning. That should send any theory of point particles spiraling out of control. In contrast, strings would stretch over the bubbles in the foam. So calculations in string theory should work, whereas those in quantum field theory should fail.

Except that N = 8 supergravity seems to give finite answers anyway. To show that, Bern and Dixon, who first considered the problem in the late 1990s, analyzed the behavior of gravitons, massless particles that make up gravitational fields just as photons make up light. To prove that the theory avoids unwanted infinities, the team need only show that it gives a finite answer for the probability that one graviton will bounce or “scatter” off another.

In practice, that calculation is hugely complicated. Gravitons can deflect one another by exchanging a particle, such as another graviton (see diagram). But they can also exchange more particles in evermore complicated processes in which the intermediary particles form closed loops in the little “Feynman diagrams” that theoretical physicists use to sort through the mathematics. The more loops a process has, the more likely it is to produce infinities. In 2007, the team showed that together, the diagrams with three loops yielded a finite answer. In the new paper, they extend the result to four loops.

The numbers don't balloon out of control because N = 8 supergravity is laden with symmetries: ways in which the underlying variables can be changed without changing the overarching equations. In particular, it is the quantum field theory of gravity with the most “supersymmetry,” which relates particles of different inherent spins. The symmetries keep the infinities in check, Bern says.
“There are correspondences between the various pieces [of the calculation], and when you add them all up, there are cancellations.”

N = 8 supergravity is just a mathematical testing ground, not a realistic theory, Dixon cautions. “It's not the real world, it's a toy model,” he says. Still, if it gives finite answers, then more-realistic quantum field theories of gravity might, too.

But that's a big if, says Stelle, who notes that the researchers have to prove that the theory yields finite results for any number of loops, or “to all orders.” Stelle says he's willing to bet (again) against the proposition: “If N = 8 supergravity were finite to all orders, it would be a miracle.”

What's more, Schwarz says, even if N = 8 supergravity is finite to all orders, it still wouldn't be a fully coherent theory of gravity. Like any theory, it would still need an extra bit called a “nonperturbative completion,” and Schwarz and his colleagues argue that the nonperturbative completion of N = 8 supergravity is, in fact, string theory.

Still, physicists may need to rethink the assumption that quantum field theories of gravity are doomed to produce infinities as their pointlike particles sink into the quantum foam. “There was no proof, and it just got repeated over and over,” Bern says. Now, the notion may be fading like the idea of an enchilada-induced ulcer.

13. News

# A New Wave of Chemical Regulations Just Ahead?

1. Robert F. Service

The United States' aging flagship law for controlling toxic compounds is ripe for overhaul, and chemical companies—or some of them—want to play a major role in shaping what comes next.

In 1976, shortly after Congress passed a law designed to regulate toxic chemicals, Russell Train, then the administrator of the U.S. Environmental Protection Agency (EPA), called the new law “one of the most important pieces of ‘preventive-medicine’ legislation” ever passed by Congress.
The Toxic Substances Control Act (TSCA), Train said, would help regulators identify chemicals hazardous to human health and phase them out. TSCA led to restrictions on a handful of chemicals, including a ban on polychlorinated biphenyls and limits on certain uses of metalworking fluids.

But then came asbestos. In 1989, EPA used TSCA to ban the persistent ultrafine fibers that numerous studies had linked to lung diseases. But TSCA stipulates that chemicals should be restricted using the least burdensome regulations available. And because EPA couldn't convince a U.S. appeals court that banning asbestos was the least burdensome way to regulate it, in 1991 the court overturned the ban.

In the 18 years since, EPA has not banned a single chemical. “TSCA has essentially failed,” says Richard Denison, a senior scientist with the Environmental Defense Fund in Washington, D.C. The way TSCA has been interpreted has raised the bar for restricting a chemical so high that it has effectively gutted the law, Denison and others say.

It's a sentiment that has spread far beyond environmental organizations, prompting widespread calls—even within the U.S. chemical industry—for the law's reform. “There is general agreement that some change is needed in TSCA,” says J. Clarence Davies, a senior fellow with Resources for the Future in Washington, D.C., who also helped craft the original TSCA legislation. “But there is a big difference of opinion on how much.” Just how that question gets answered could have a major effect not just on chemical manufacturers and consumers but also on a broad range of scientists who could be called on to revamp the way chemicals are tested and gauge their risk.

TSCA was designed to regulate chemicals and chemical mixtures, with the exception of food, drugs, cosmetics, and pesticides. It gives EPA the authority to require reporting and testing, and then to restrict substances that present an unreasonable health risk.
But the law is widely viewed as having major loopholes. When TSCA was written, Davies notes, it allowed tens of thousands of existing compounds to be grandfathered in without comprehensive health and safety testing. Companies still had to report health and safety data for new chemicals. But they were able to claim much of their submissions as confidential to protect proprietary chemical recipes for their products. The provision made it nearly impossible for scientists and environmentalists to challenge the release of new chemicals.

Even when EPA does have the authority under TSCA to collect data on chemicals of potential concern, it rarely does so. To ask manufacturers to submit health data, EPA must first verify that the chemical poses a health risk to the public. “It is the epitome of a Catch-22,” Davies says.

The result, according to a 2006 report by the Government Accountability Office (GAO), is that of the 62,000 chemicals in use when TSCA went into effect in 1976, EPA has required testing of fewer than 200 and has either banned or limited production of only five. In a follow-up report earlier this year, GAO concluded: “Without greater attention to EPA's efforts to assess toxic chemicals, the nation lacks assurance that human health and the environment are adequately protected.”

That ineffectiveness isn't for lack of concern. Consumers have become increasingly anxious about several classes of compounds on the market. They include phthalates, commonly found in children's toys; flame retardants; and bisphenol A, an organic compound used in the plastic coatings that line food cans and other items, which has been found to have estrogenic effects. Numerous studies have raised red flags about all of these compounds. Biomonitoring studies have revealed that most people have trace amounts of many industrial compounds in their blood.
For example, a 2004 study by the Washington, D.C.–based Environmental Working Group, which fights children's exposure to toxic chemicals, found a total of 287 industrial chemicals in the umbilical cord blood of 10 newborn babies. They included chemicals used in fast-food packaging, stain repellents in textiles, flame retardants, and pesticides.

No one is sure whether those compounds are harmful in such small doses, but it is clear that toxics create a significant personal and financial burden. Last year, for example, a study by researchers at the University of California, Berkeley, and UC Los Angeles found that preventable chemical- and pollution-related diseases in California alone (not including air pollution) cost the state's insurers, businesses, and families $2.6 billion a year in direct and indirect expenses.

The effects are far-reaching. According to Maureen Gorsen, the former head of California's Department of Toxic Substances Control, 57% of landfills in California are leaking hazardous waste into groundwater. TSCA reform wouldn't stop all such leaks, as landfills contain many substances, such as used oil, that are unlikely to be restricted. Nevertheless, Gorsen notes that the problem is prompting expensive cleanups that must be paid for out of the cash-strapped state's general fund. “It's a losing battle,” Gorsen said at a recent meeting at UC Berkeley. “We can't keep going on the way we have been for the past 40 years.”

TSCA's perceived ineffectiveness, combined with growing public concerns about chemical exposures, has spurred government agencies in the European Union and several U.S. states to launch their own alternatives. By far the most comprehensive is the European Union's effort, abbreviated REACH (for “Registration, Evaluation, Authorisation, and Restriction of Chemical substances”). It requires chemical manufacturers and importers in Europe to submit health and safety data on all compounds they sell in the European Union in excess of 1 metric ton per year. Most notably, the act shifts the burden of carrying out health and safety screens from government regulators to companies themselves.

States are also jumping into the fray. California, Washington, Maine, Massachusetts, and Michigan have all recently passed laws increasing their control over toxics. These initiatives largely fall into two camps: one set banning chemicals such as phthalates, cadmium, and lead from children's products; the other promoting “green chemistry” to find safer alternatives to toxic compounds used in manufacturing processes and consumer products. “The states are way ahead of the federal government at this point,” says Joel Tickner, a chemical regulations policy expert at the University of Massachusetts, Lowell. Some industry representatives agree. “We look at [the state initiatives] in large part as a lack of public confidence in the safety of these materials that can be attributed to a lack of confidence in the regulatory process,” says Calvin Dooley, president and CEO of the American Chemistry Council (ACC), a trade group in Arlington, Virginia.

According to Dooley, Tickner, and Denison, the combination of the new state laws, REACH, and declining consumer confidence in the safety of household products has prompted many large chemical manufacturers (such as the multinationals that make up most of ACC's membership) to embrace TSCA reforms. The current patchwork of regulations that have arisen is becoming an increasing burden for companies doing business across U.S. state lines and national borders. And many are hoping that new TSCA reforms would preempt state laws. Smaller companies, however, have been less pleased with the idea of reforms. The Society of Chemical Manufacturers and Affiliates (SOCMA)—which is made up of small to midsize chemical companies—reflected the split in a position paper it issued in March: “Sweeping revisions of TSCA … could be highly detrimental to innovation and quality of life, yet paradoxically not produce major changes in our ability to protect health or the environment.”

Just how sweeping any change would be isn't yet clear. But at least one potential revision is on the table. In 2008, Senator Frank Lautenberg (D–NJ) and Representative Henry Waxman (D–CA) sponsored the Kid-Safe Chemicals Act, which is aimed at reducing children's exposure to toxic compounds. The proposed act would not initially evaluate all chemicals, but it would shift the onus to companies to demonstrate that chemicals they sell are safe rather than have the government prove they are harmful. The bill would also make safety data available to consumers through an Internet database in hopes of encouraging public oversight of the process. Last year, the Kid-Safe Chemicals Act made little progress in Congress thanks to a promised veto threat from President George W. Bush. But Lautenberg and Waxman have said they plan to reintroduce the bill this term, possibly as early as this fall.

In April, SOCMA President Joseph Acker said, “The Kid-Safe Chemicals Act would bring an unproven REACH-like system to the U.S.” that would be overly burdensome to chemical manufacturers. But Dooley argues that the law doesn't go far enough. “Let's not take a piecemeal approach,” he says. “Consumers would be much better served with a more-comprehensive reform of TSCA rather than focusing on one section: children.” Dooley says his organization is comfortable with shifting the burden of providing safety data on chemicals from EPA to companies and with other major changes such as allowing businesses to make fewer secrecy claims. SOCMA favors more cautious changes.

Both ACC and SOCMA argue that REACH goes too far in requiring complete health and safety data for all chemicals sold in the European Union. Instead, the groups say, chemicals should be ranked for review by a formula that considers both potential hazard and potential exposure. Tickner agrees that any federal TSCA reforms are likely to incorporate that type of risk-based approach.

So far, the Obama Administration's position on the specifics of toxics reform remains unclear. Lisa Jackson, the Administration's newly appointed EPA director, has listed reform of chemical regulation as one of her five top priorities. In a January memo to EPA employees, she wrote: “It is now time to revise and strengthen EPA's chemical management and risk assessment programs.” With such clear signals coming from a high level, there's a good chance TSCA in its current form will soon find itself phased out.

14. News

# Putting Chemicals on a Path To Better Risk Assessment

Industry and regulators are embracing new technologies to move beyond slow, expensive, and perplexing animal tests.

Practically every bottle of sunscreen contains ethylhexyl methoxycinnamate, a compound that blocks ultraviolet rays. But there's a slight risk that it could pose a health hazard of its own, because the compound is weakly estrogenic. Last summer, researchers with the U.S. National Toxicology Program (NTP), which is led by the National Institute of Environmental Health Sciences, proposed to investigate with a time-honored toxicological test: a large, multigenerational study of rodents. To get a dose comparable to what might harm humans, they would feed large amounts of ethylhexyl methoxycinnamate to rats and mice. “What does this mean, really?” wonders Kim Boekelheide, a toxicologist at Brown University, who reviewed the proposal for the study. “It's a difficult question.”

Many researchers and policymakers are asking similar questions about how chemicals are evaluated for safety. The current mainstay of toxicity testing—giving animals large doses of chemicals—is slow and expensive, and its relevance to humans is often unclear. “We have a system that many people regard as broken,” says Melvin Andersen of the Hamner Institutes for Health Research in Research Triangle Park, North Carolina.

As the U.S. Congress mulls legislation that might tighten regulations for chemicals (see p. 692), companies and government agencies are already seeking new approaches for testing them. In March, the Environmental Protection Agency (EPA) issued a 20-year strategic plan. It incorporates much of the advice of a major report* issued in 2007 by the National Academies' National Research Council (NRC): Use computers to predict toxicity and gather data from high-throughput, rapid assays of human cells rather than animal tests. The report has “created incredible momentum,” says Tina Bahadori of the American Chemistry Council, an industry trade group in Arlington, Virginia, that is spending $9 million this year on research related to interpreting data from new testing methods.

The transition won't be quick or easy. Basic research to figure out how cellular assays indicate signs of human disease could cost $1 billion to $2 billion over a decade or two. There are significant questions, such as how to ensure that cellular tests truly reveal information about the risk of human disease. “The ultimate, total replacement of existing animal toxicity tests with these new high-throughput methods is likely a long way off,” says James Bus, who directs toxicology and environmental research at Dow Chemical Co. in Midland, Michigan.

But EPA Administrator Lisa Jackson has made improving the assessment and management of chemicals a top priority. And that focus will likely end up affecting companies. Because EPA calls the shots on what kinds of data it requires, the agency's position determines what kinds of tests companies have to conduct, says Dan Newton of the Society of Chemical Manufacturers and Affiliates in Washington, D.C. “For the first time in a very long time, the prominence of attention being paid to chemical issues within the agency has risen significantly,” says Richard Denison of the Environmental Defense Fund, an advocacy group in Washington, D.C.
## Costly cornerstones

The current system of toxicity testing in the United States dates back to a 1937 tragedy, when a company advertised an antimicrobial drug called “Elixir of Sulfanilamide.” More than 73 people died from toxic side effects. In the following uproar, Congress passed the Food, Drug, and Cosmetic Act, which required toxicity testing in animals. A decade later, another law required animal testing of pesticides—a process that EPA has overseen since it was formed in 1970.

Most toxicity testing is done by companies. For example, under EPA's voluntary program called the High Production Volume (HPV) Challenge, companies have provided the agency with minimal safety data on more than 1400 chemicals that are manufactured in quantities of 450,000 kg or more a year. Denison and other environmentalists criticize the program for not gathering more data; Europe's REACH program requires safety and exposure data for any chemical manufactured in or imported into the European Union in quantities of just 10,000 kg per year, with more tests for HPV chemicals.

In 2005, EPA asked NRC how it could do a better job in assessing the health risk of chemicals. The resulting report envisions new streams of data coming from rapid tests of chemicals on human cells. Warning signs of toxicity would come from disturbances in cellular signaling pathways. The tests would be expensive to create but also fast and cheap to operate so that many kinds of chemicals and even mixtures could be run through them.

Making the vision real, of course, will require discovering all the pathways and the ways in which chemicals can perturb them. No one knows for sure how many such pathways exist, but estimates range up to 200. It's such a big job that the NRC panel, chaired by epidemiologist Daniel Krewski of the University of Ottawa, recommended a new national institute be founded with an annual budget on the order of $100 million.
The institute would also develop high-throughput screening tests that could identify these perturbations.

EPA began heading in that direction in 2005, when it started its National Center for Computational Toxicology (NCCT). One part of the center is EPA's ToxCast program, which has subjected 309 well-characterized chemicals, mostly pesticides, to 467 different high-throughput tests, examining various measures that could indicate toxicity (Science, 22 August 2008, p. 1036). NCCT is collaborating with NTP and the National Institutes of Health's Chemical Genomics Center, which is also running high-throughput tests. “The agency has made huge progress with ToxCast,” says Krewski. In July, it received from Pfizer the first of nearly 120 compounds that failed for toxicity reasons in clinical trials, along with data on animal studies and on toxicity in patients. The goal is to use these data to investigate how well cellular results can predict toxicity in humans. “It's an incredible thing that they're doing this,” says Robert Kavlock, who directs NCCT. “Normally, these are tightly held company secrets.” Kavlock says as many as four other pharmaceutical companies will also provide data.

It's not just the pharmaceutical industry that has invested in improving testing. Agrochemical companies, for example, also use high-throughput techniques to rapidly screen new chemicals for biochemical or genetic signs of toxicity. Computer models that analyze chemical structure, called quantitative structure-activity relationship models, are also commonly used to eliminate compounds likely to be toxic. George Daston, a toxicologist at Procter & Gamble, described at a May meeting at the National Academy of Sciences how researchers at the company are working on genomic tests to screen for compounds that might disrupt hormone signaling. “This is really where the action is in terms of taking these from simple preliminary exercises to being the cornerstone of assessment of potential toxicity,” he said.

Many questions remain about tests based on toxicity pathways in human cells. One is exactly which chemicals to examine. Because chemicals often break down inside the body into other compounds, researchers need a way to predict the metabolites. Dosage is another key variable; little is known about exposure to many chemicals, although biomonitoring can help narrow the data gap (Science, 25 June 2004, p. 1892). Another major issue is how to identify which cellular signals are true signs of an adverse effect. Some changes might be so-called adaptive responses, part of a natural coping mechanism that doesn't necessarily indicate harm. And the human body may have ways to manage toxic chemicals that aren't apparent if cells are tested in isolation.

Perhaps the biggest question of all is how to validate the new tests. Kavlock and other researchers say that rodent tests are a useful point of comparison. “I think we can use the animal data to build confidence,” Kavlock says. But others, such as Krewski, see a danger: “If we spend all our time focusing on trying to replicate what happens in animals, we miss the chance to focus on toxicity pathways in humans.” He envisions a comprehensive suite of data from cellular tests, epidemiology in some cases, and computational models—such as NCCT's nascent Virtual Liver—as a way to validate pathway-based toxicity tests of the future. Not everyone shares Krewski's enthusiasm. “I think it's a very long road to do that,” says Dale Hattis, a toxicologist at Clark University in Worcester, Massachusetts. “They're going to consume a huge amount of resources that might have better uses.”

Peter Preuss, head of EPA's National Center for Environmental Assessment, recognizes the pace of change at EPA is likely to be quite slow, comparing it with changing the course of an aircraft carrier. Kenneth Ramos of the University of Louisville in Kentucky says a large constraint is resources: EPA's research budget has declined about 30% in real terms in the past decade.

There are other challenges in moving new tests into the regulatory process, as Lynn Goldman of Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland, relates in what she calls a “cautionary tale.” In 1996, Congress passed two laws that required EPA to develop ways to test chemicals for the potential to disrupt the endocrine system. But it was only this April that the agency released a final proposal for 67 chemicals to be screened. The program was delayed not just by a lack of funding, Goldman says, but also by opposition from industry and lack of political will. Although there is more support overall for developing a radically new system of toxicity testing—animal-rights groups are enthusiastic about the concept, for example—the research challenge is also far greater.

• * Toxicity Testing in the 21st Century: A Vision and a Strategy, National Academy Press, 2007

15. Science Careers

# Young Industrial Chemists Find The Learning Curve Never Ends

1. Rachel Petkewich*

Academia teaches technical skills, but scientists in the private sector say landing a job and thriving there is an education in itself.

When materials chemist Daniela Radu started looking for a job in her field, she decided to stay flexible. Her Ph.D. adviser at Iowa State University in Ames told her she would be competitive for faculty positions, but her postdoc adviser at the Scripps Research Institute in San Diego, California, said her research questions were more suited to a career in industry. She interviewed for both kinds of jobs.

When you're looking for a job and a new place to live, Radu notes, at a certain point you just feel at home. “That is the way I felt when I came to the Experimental Station at DuPont,” says Radu, who was born in Romania. She has been working at the lab in Wilmington, Delaware, for almost 2 years, developing photovoltaic materials. She is, she says, “very happy” there.

Other young Ph.D. chemists we spoke to—at companies large and small, specializing in fields as wide-ranging as pharmaceuticals and petrochemicals—agree that life in the private sector has a lot going for it. Some say it offers a better work-life balance; others relish the challenge of developing real-world products designed to change people's lives for the better. All say their academic training prepared them well to ask good questions, set up experiments, and solve chemistry problems. But they say they still had much to learn about the other skills needed to succeed in an industrial career, and major adjustments to make on leaving the research environment and culture of their universities.

## Teamwork, deadlines, and bottom lines

Teamwork, a focus on products, and real deadlines were other important differences the chemists noticed in moving from academia into industry. Organometallic chemist Neil Strotman has worked in process catalysis at Merck in Rahway, New Jersey, for about 2 years. The atmosphere there, he says, is worlds away from the individual-oriented achievement he experienced while working on his Ph.D. at the University of Wisconsin, Madison, and postdoc at the Massachusetts Institute of Technology in Cambridge. In academic science, Strotman says, you generally work with one adviser on one project; in industry, “you are working with an entire team, and everything you do has a very collaborative nature.”

The driving force behind the work is different, too, says Dustin Levy, a physical chemist with Smiths Detection in Danbury, Connecticut: “The whole reason you are there is to make a profit, and you are not doing pure science anymore.” People who want to explore a subject completely can get very frustrated in industry, Levy says.

For solid-state chemist Michael Hurrey, a big chunk of the difference boils down to a technical phrase: “fit for purpose.” Hurrey earned a Ph.D. in analytical chemistry from the University of North Carolina, Chapel Hill, in 2004 and now works on physical and materials chemistry projects at Vertex Pharmaceuticals in Cambridge. In industry, he explains, “fit for purpose” means doing exactly what the project requires—and no more. For example, Hurrey says, his current project at Vertex “honestly could be a dissertation.” But “I will get to a certain point that is good enough and move on to something else.”

Industry's emphasis on patents brings with it other constraints, Catherine Faler says. At ExxonMobil Chemical's Baytown, Texas, plant, Faler uses her synthetic skills to improve catalysts that turn petroleum distillates into polymers. ExxonMobil permits its scientists to give presentations at conferences and publish in scientific journals but only after the intellectual property has been patent-protected. “Everything I do is proprietary,” she says. It's a stark contrast to her academic training at the University of Pennsylvania, where, she says, if she had an “interesting and cool” result, she would immediately share the details with her chemistry buddies.

## Tailoring the message

Knowing the differences among audiences and tailoring the message to each is a crucial part of an industry job and one that's difficult to learn before you start working in industry, Hurrey says. Even when communicating inside your company, what you say and how you say it depends on whom you're talking to; peer scientists, administrators, lawyers, and engineers each have their own information needs. Addressing people outside the company—scientists at conferences, competitors, and customers—requires altogether different messages, he adds.

Levy, a product manager at Smiths Detection, had to learn on the job how to communicate scientific and technical ideas to nontechnical customers. He completed a Ph.D. in physical chemistry at the University of Maryland, Baltimore County, in 2005, then did a postdoc in optical spectroscopy at the National Institute of Standards and Technology in Gaithersburg, Maryland. In graduate school, he says, his main communication challenge was explaining to microbiologists how a laser spectrometer operates. Today, his customers are hazmat technicians, firefighters, police officers, and postal workers, many of whom have no technical background.

## Scale it up and write it down

Strotman's job in process chemistry requires a different kind of translation: translation of scale. He develops catalytic methods aimed at reducing cost and environmental impact while increasing the efficiency and reliability of processes such as new drug syntheses for HIV and diabetes. His academic experience prepared him to do small-scale chemical reactions, but he had to learn large-scale development from more experienced colleagues at Merck. Issues such as purification and fire hazards become far more relevant as scale increases, he says.

The work involved in making drugs is highly regulated for patient safety and requires careful documentation. Christopher Ciolli, now a process chemist at Ricerca Biosciences, a contract research organization in Concord, Ohio, learned that firsthand. His first industry post, which he started just 4 days after finishing his Ph.D. at the University of Wisconsin, Madison, was in a “current good manufacturing practices” (cGMP) facility at Abbott Laboratories in Illinois, where the work is closely scrutinized by the U.S. Food and Drug Administration.

In graduate school, Ciolli says, he “picked a bottle off the shelf and recorded the amount used” in his lab notebook. In a cGMP environment, he says, the same chemical first had to have specifications established, be tested for purity, and be approved by quality assurance as meeting those specifications. Then he had to document the internally assigned reference number, the lot, the expiration/retest date, and the lab number in which the chemical was used.

Such restrictions aren't unique to pharmaceutical companies; industry in general requires more stringent safety, regulatory, and environmental training and documentation than academic facilities do, says Jinkun Huang, an organic chemist in process research and development at Amgen in Thousand Oaks, California.

## Postdocs and internships

Although an academic postdoc is always good experience to have, an industry internship may be better preparation for industrial work. To get a head start on skills unique to industry, several young chemists strongly recommend summer internships or co-ops in addition to postdoctoral research. Vertex's Hurrey completed an internship at Eastman Chemical in Kingsport, Tennessee, during the summer between college and graduate school. Being immersed in the “culture of a very large organization” was beneficial, he says, even though he ended up in a very different industry.

Adam Myers grew up in Indianapolis knowing several scientists at pharma giant Eli Lilly. He studied chemistry at Purdue University, interned for three summers at Eli Lilly, and earned his Ph.D. in organic chemistry at Purdue in 2005. As a result of his internships, he says he treated documentation “a little more from an industrial perspective during my graduate school than an academic perspective.” He also took opportunities to handle group finances when his Ph.D. adviser was on sabbatical, which gave him some valuable experience, he says.

After earning his Ph.D., Myers worked as a manufacturing chemist and chemical safety officer with a medical diagnostics start-up called Quadraspec in West Lafayette, Indiana, until a layoff in 2007. He is currently working on pharmaceutical analysis at a contract research organization called BASi in West Lafayette.

## Breaking in, staying on

The chemists we interviewed say there are many ways to land a job in industry. On-campus recruiting worked for DuPont's Radu and Merck's Strotman. Levy found an ad for the job at Smiths Detection in a chemistry magazine. Faler connected with ExxonMobil by contacting the manager of the catalyst group at Baytown after being tipped off by a friend. Myers and Ciolli searched broadly and networked their way to their first and second jobs.

Myers recommends “casting a wide net” when exploring companies and job postings. That means looking beyond obvious industry sectors and job-title keywords. “You can be surprised at what will actually be out there that is perfect for your skill set but may not have the particular label you thought it would,” he says.

Persistence pays. Ciolli interviewed with Abbott Laboratories during the fall on-campus recruiting process. The on-campus interview was a dead end. While at the recruiting event, however, Ciolli heard an Abbott human resources representative offer to return to his campus to give a seminar on resumé and interview preparation. “At the time, I was coordinating a student-run seminar series on campus,” Ciolli says. He arranged a seminar with the human resources rep, who passed his resumé on to a hiring manager for an internally posted position. “Without that little piece of networking, I wouldn't have gotten the job there,” he says.

For his second job, Ciolli wanted to move into process chemistry as applied to scale-up. He also wanted to move from Illinois to Ohio to be closer to extended family. “I'd been looking for about a year just to see if there was something that met both personal and professional needs,” he says. A local online jobs site listed only a very brief job description. “I didn't know who the hiring company was until they called me back,” he says. He knew the company, having researched Ricerca during his previous job search. He was “ecstatic” to start there last year.

## Help and humility

Once on board, Ciolli advises, new employees should be ready to learn. Everybody in industry, from receptionists to research managers, “knows more about that industry than you do walking in,” he says. If you keep the right attitude, people will show you the ropes. “Check the ego at the door,” he says, and “most people are willing to help.”

“One of the things I think was critical to me being successful at Abbott and earning the job here at Ricerca early on in my career was just to stay humble and ask questions and listen to what the people around me have to say,” Ciolli says. “But they aren't going to spoonfeed you everything. You have to get out there, … talk to people, and help move things along.”

• * Rachel Petkewich writes from Oakland, California.