News this Week

Science  24 Dec 2004:
Vol. 306, Issue 5705, pp. 2168

    Allegations Raise Fears of Backlash Against AIDS Prevention Strategy

    1. Jon Cohen

    Much to the dismay of AIDS researchers and clinicians around the world, the Associated Press (AP) ran a series last week that has reignited debate about the safety of one of the most heralded interventions in AIDS prevention: use of the drug nevirapine to prevent HIV transmission from an infected mother to her infant. This treatment likely has spared tens of thousands of children from the disease. Experts insist that, although the drug is not problem-free and some irregularities occurred during one clinical trial, nevirapine's benefit far outweighs the risks.

    The AP stories focus on a study in Uganda, which revealed in September 1999 that a single dose of nevirapine given to an HIV-infected mother in labor and to her infant could halve transmission rates. The finding, later confirmed by other studies, led to the widespread use of this cheap, simple intervention in poor countries. The AP series alleges that officials at the National Institute of Allergy and Infectious Diseases (NIAID), which funded the so-called HIVNET 012 study, downplayed problems that surfaced in 2002, did not promptly communicate them to the Food and Drug Administration (FDA) and the White House, and steamrolled over concerns of its staff, one of whom has gone to Congress with charges of an alleged “cover-up.”

    The study had “irregularities with record keeping” at its headquarters in Kampala, Uganda, acknowledges Clifford Lane, NIAID's deputy director. But he stresses that “there has been nothing to refute the claims of safety and efficacy with regard to single dose nevirapine treatment to prevent the transmission from mother to infant.” And he worries that “this particular news story may cause people to stop using nevirapine, and infants could be infected and die needlessly.”

    In the wake of the story, Rev. Jesse Jackson, a former U.S. Presidential candidate, decried NIAID's actions as “a crime against humanity” and called for Congress to investigate “this catastrophe.” In South Africa, where President Thabo Mbeki's government has been criticized for its slow adoption of nevirapine to prevent mother-to-child transmission (MTCT), the political online publication ANC Today said the AP stories proved the hesitation was “fully justified,” and it assailed NIAID for using Africans as “guinea pigs.”

    Nonprofit organizations that provide nevirapine to prevent maternal-infant transmission in developing countries have struck back on their websites. The Elizabeth Glaser Pediatric AIDS Foundation in Los Angeles notes that the drug has been used hundreds of thousands of times “without any significant toxicities for mothers or babies.” A statement from Global Strategies for HIV Intervention, based in San Rafael, California, says six other MTCT studies confirm the safety and efficacy of nevirapine and stresses that the problems at the Ugandan site have been known for years. “This is not new news,” says the statement.

    Center of controversy.

    A shack on the grounds of Kampala's Mulago Hospital served as the HIVNET 012 trial site.


    In fact, Boehringer Ingelheim, the drug's manufacturer, first uncovered problems with HIVNET 012, which involved 645 mother-infant pairs. Nevirapine is an FDA-approved drug to treat HIV infection, but the Uganda results led Boehringer to seek FDA endorsement for its use in preventing MTCT, explains principal investigator J. Brooks Jackson of Johns Hopkins University, which collaborated with researchers from Makerere University in Kampala. As part of the process, Boehringer audited the Uganda site in January 2002 and discovered discrepancies in the records. A Boehringer representative said the audit turned up “a lot of pin pricks but no show stoppers,” recalls Jackson.

    When advised of the problems later that month, NIAID's Division of AIDS should have informed FDA within 3 days but did not. “That was an error,” concedes Edmund Tramont, who heads NIAID's Division of AIDS and who did not learn about the discrepancies until March 2002. At that point NIAID informed FDA, shut down the site for new studies, and notified the public, triggering a flurry of press coverage. NIAID also hired a contractor to audit the site. That second audit revealed serious unreported incidents, including deaths and “thousands” of less serious “adverse events.” Tramont's worries were assuaged when he learned that the unreported deaths, which were not related to the drug, had in fact been recorded, and that the unreported adverse events were also unrelated to the drug and involved diseases like malaria and tuberculosis.

    Because an initial review of the discrepancies uncovered no safety issues, NIAID officials say they saw no reason to give the White House a detailed briefing about their concerns. That June, President George W. Bush announced a $500 million program to prevent MTCT in developing countries that would rely heavily on nevirapine. The AP alleges that NIAID “chose not to inform the White House” about its internal concerns for fear of “scuttling the use of nevirapine in Africa.”

    Tramont sent over yet another audit team. This third audit compared the hospital records of 80 mother-infant pairs to the information in the database—a statistically meaningful sample. It found discrepancies, but they were relatively infrequent. In early April 2003, when NIAID was wrestling with whether to reopen the Ugandan site for research, NIAID's Betsy Smith wrote a report for FDA that sharply criticized the study's adverse event reporting. “Subject records on site were of poor quality and below expected standards of clinical research considered at the forefront of medical research,” Smith concluded. Tramont edited the report and removed that detail and other critical aspects, a move the AP reported led to “disbelief” among some staffers. Tramont says he made the changes because he felt Smith relied too heavily on the misleading second audit.

    Jonathan Fishbein, the NIAID staffer who has gone public with his concerns, became embroiled in what was then a backroom dispute in July 2003, shortly after he was hired by the Division of AIDS to improve clinical trials. Fishbein wanted more time to review the issues before allowing the Ugandan site to reopen for new clinical studies, but Tramont was impatient. “I want this restriction lifted ASAP because the site is now the best in Africa run by black Africans,” Tramont e-mailed Fishbein. “The site was shut down for 15 months,” says Tramont. “It was stupid and bureaucratic not to reopen it.”

    In February 2004, with office tensions mounting, Fishbein received notice that he was being terminated for “non-performance.” He took complaints of what he viewed as his mistreatment and the scientific cover-up to many officials, including the head of the National Institutes of Health (NIH). He also sought whistleblower status. Although NIH will not discuss Fishbein by name, deputy director Raynard Kington says a research integrity officer reviewed what he called allegations of “scientific misconduct” and determined they were “erroneous.” NIH did ask the Institute of Medicine to review the scientific issues surrounding HIVNET 012, and that panel plans to issue a report in March 2005. Meanwhile, Fishbein says he is “not in disagreement” that nevirapine saves lives. “My issue is not nevirapine, but the process.”


    Report Slams SLAC's Safety Practices

    1. Adrian Cho

    Management at the Stanford Linear Accelerator Center (SLAC) routinely disregarded safety regulations in order to keep the scientific results coming. That's the conclusion of a Department of Energy (DOE) investigation into a serious electrical accident this fall at DOE's high-energy physics facility in Menlo Park, California (Science, 29 October, p. 788). The accident has led to the indefinite shutdown of the lab's accelerators, causing SLAC to lose ground to a Japanese laboratory engaged in the same type of research.

    Released on 15 December, the DOE accident report blasts SLAC management for fostering a culture in which “unsafe conditions have become a part of the everyday way of doing business.” SLAC spokesperson Neil Calder says the lab will take its comeuppance and do what's needed to fix the problems. “The report is the report,” says Calder. “We respect that, and now we can use [the report] as a means of going ahead” to improve safety.

    The 11 October accident occurred when an electrician tried to install a circuit breaker in a 480-volt power panel without shutting off the electricity, a practice known as hot work. The action presumably was a timesaving step. A short caused an explosion that set the electrician's clothes on fire. He suffered severe burns over 50% of his body and was hospitalized for several weeks. The accident automatically triggered the inquiry by DOE's Office of Environment, Safety, and Health. The lab's flagship PEP-II particle collider and other accelerators had been taken down for repairs and improvements in July but were scheduled to resume operations in mid-October.

    Busy as Bs.

    SLAC's BaBar detector is falling behind its Japanese counterpart in spotting B mesons.


    Investigators found plenty of blame to go around. There was no justification for installing the breaker with the power on, they concluded, and the SLAC field supervisor who ordered the work had not obtained the required hot work permit. The electrician, a contractor, lacked the face shield, hood, fire-resistant clothing, and insulated tools that would have protected him. Moreover, according to the report, local DOE officials had not been pressing the lab to follow its own safety regulations.

    But investigators directed their harshest criticism at laboratory management. “It appears that SLAC has consistently placed operations ahead of safety,” the report says. Investigators found that hot work was routinely performed without permits, and that management allowed such breaches of protocol in order to keep the lab's accelerators running and the data flowing. “SLAC's emphasis on the scientific mission as a means to secure funding from the [DOE] Office of Science and compete with other laboratories reached [the field supervisor's] level as direction to ‘just get the job done,’” the report states.

    SLAC's main competitor is the Japanese particle physics laboratory KEK in Tsukuba. Like SLAC, KEK has a collider designed to produce fleeting particles called B mesons, which may hold the key to understanding the subtle differences between matter and antimatter. In recent years KEK's collider has pumped out significantly more B mesons than SLAC's (see graph). SLAC researchers are still competitive, says Sheldon Stone, a physicist at Syracuse University in New York, but “it certainly doesn't help that they're shut down.”

    SLAC and local DOE officials must draw up a corrective action plan, to be submitted to DOE by early February. The lab's accelerators won't start up until DOE is sure that the lab can operate safely, says Milton Johnson, chief operating officer for DOE's Office of Science. “We'll take whatever time is necessary to assure that the employees and workers are safe,” he says. In the meantime, Stanford University, which runs the lab for DOE, has convened its own panel of experts to examine lab safety.


    Halt of Celebrex Study Threatens Drug's Future, Other Trials

    1. Jennifer Couzin

    Another COX-2 inhibitor is on the ropes. On 17 December, the National Cancer Institute (NCI) halted a 2000-person clinical trial testing whether Celebrex could inhibit colon polyps. Hours later, two more cancer trials and an Alzheimer's trial testing Celebrex and naproxen were suspended by the scientists overseeing them. In addition, dozens of other trials involving the drug were undergoing careful review amid a flurry of conference calls. As Science went to press, the National Institutes of Health (NIH) was trying to decide whether to halt its Celebrex trials—roughly 40 in all—and the Food and Drug Administration (FDA) was weighing whether to pull Celebrex off the U.S. market.

    The scenario was strikingly similar to what happened this fall to Vioxx, a COX-2 inhibitor manufactured by Merck. The company withdrew the drug on 30 September after a study of Vioxx's effect on colon polyps revealed a doubling of heart attacks and strokes from the drug after 18 months of use. That action triggered a painstaking review of cardiac events in the NCI study called Adenoma Prevention with Celecoxib (APC). Experts found a 2.5-fold increase in heart attacks and strokes for those taking a moderate dose of Celebrex, and a 3.4-fold increase for those taking a high dose. As with Vioxx, extended use of the drug seemed to correlate with cardiac hazards: Volunteers were taking Celebrex for an average of 33 months.

    Pfizer, the drug's maker, is so far hesitant to withdraw Celebrex. “The cardiovascular findings … are unexpected and not consistent” with a comparable colon polyp study that Pfizer is running, said Hank McKinnell, the company's chairman and chief executive officer, in a statement. Pfizer has stopped advertising Celebrex to consumers, however.


    Celebrex's side effects led to suspended studies.


    NIH director Elias Zerhouni said in a hastily called press conference last week that for now, the agency is leaving decisions about trial suspension up to individual investigators. But Zerhouni ordered a review of all NIH-funded studies of COX-2 inhibitors and requested that researchers send out revised informed consent forms to participants. In addition to cancer studies, NIH was funding a 2500-person trial of whether Celebrex can prevent Alzheimer's.

    “It may not be possible to get these trials done,” says Charles Geyer, director of medical affairs for the National Surgical Adjuvant Breast and Bowel Project (NSABP), a cooperative group funded by the NCI that runs multi-center trials. NSABP has suspended its two Celebrex studies while it reviews the APC data. One study, slated to enroll 1200 people, is testing whether Celebrex can prevent colon polyps; a second, slated to enroll 2700 women, is testing Celebrex as a treatment for breast cancer.

    “This is going to put a brick wall in the field,” says Richard Goldberg of the University of North Carolina, Chapel Hill, and the lead investigator on the NSABP colon polyp trial. “The COX-2 inhibitors have been an important therapeutic approach.” In addition to its use for arthritis pain, Celebrex is already approved to reduce intestinal polyps in patients with familial adenomatous polyposis, a hereditary condition that leads to colon cancer.

    Although no published Celebrex study is as extensive as the APC trial, many researchers were taken aback by the APC results. Historically, Celebrex has displayed fewer problems than Vioxx, perhaps because it targets the COX-2 enzyme less selectively. “We were dismayed” by the APC findings, says John Breitner, a psychiatrist at the VA Puget Sound and the University of Washington in Seattle and the lead investigator on the Alzheimer's prevention trial.

    Another COX-2 inhibitor made by Pfizer, Bextra, was also recently shown to cause cardiovascular problems in high-risk patients. That has added to concern about the whole class of drugs, although it's not clear if selective blocking of COX-2 explains everything. Scientists may need to reconsider other mechanisms, and whether long-term use of non-steroidal anti-inflammatory drugs in general can cause blood clotting.


    Editing No Longer Infringes U.S. Trade Sanctions

    1. Yudhijit Bhattacharjee

    Pushed into a legal corner, the U.S. Treasury Department last week removed all restrictions on editing manuscripts from authors in three countries under a U.S. trade embargo. Publishers hailed the step by the department's Office of Foreign Assets Control (OFAC). But some wondered why the same freedoms were not extended to music, films, and other forms of artistic expression, and others questioned whether the government should be exerting any control at all.

    Under the new ruling, U.S. citizens are no longer required to seek a license from OFAC for any transactions with individuals in Iran, Cuba, and Sudan that “directly support the publishing and marketing of manuscripts, books, journals, and newspapers.” It overturns two recent OFAC pronouncements that had sparked intense protests from publishers and led to a suit this fall by a coalition of organizations (Science, 1 October, p. 30). Iranian human-rights activist and 2003 Nobel Peace Prize winner Shirin Ebadi joined the lawsuit, claiming suppression of her memoirs.

    “This is a true victory for the freedom of the press,” says Marc Brodsky, executive director of the American Institute of Physics, which publishes 11 journals. “It's unfortunate that the bureaucracy couldn't get itself organized to change the rules until we went to court.” The plaintiffs have not yet decided whether to drop the suit.

    OFAC denies that the ruling, which applies to “academic and research institutions and their personnel,” was a response to the legal challenge. “OFAC's previous guidance was interpreted by some as discouraging the publication of dissident speech from within these oppressive regimes. This is the opposite of what we want,” says the Treasury's Stuart Levey.

    The 16 December statement may not be enough to end the controversy, however. Observers note that the new ruling retains OFAC's jurisdiction over publishing and also prohibits U.S. citizens from collaborating on manuscripts from government officials in the embargoed countries. Representative Howard Berman (D-CA), author of a 1988 amendment to the trade sanctions law that exempts informational materials, is unhappy that the new ruling exempts only publishing. “Why should it be OK for a publisher to commission a book from an Iranian dissident but not for a film studio to work with a Sudanese filmmaker?” he says. “The [decision] reflects the fact that these regulations were a desperate attempt to head off mounting legal and political pressure.”


    Long-Sought Enzyme Found, Revealing New Gene Switch on Histones

    1. Jennifer Couzin

    In the molecular biology equivalent of stubbing one's toe on King Tut's undiscovered tomb, a team of scientists, to its great surprise, has identified a genetic switch hunted by biologists for decades. The switch, buried deep inside a cell's nucleus, is an enzyme that chemically alters the protein spools around which a cell's DNA wraps. The enzyme's discovery, reported online last week in Cell and in the 29 December issue—along with related finds published this fall—has scientists racing to find more switches like it. The switches could reveal much about how cells control gene activity and illuminate cancer, multiple sclerosis, and other diseases that may be spurred by gene expression gone awry.

    “It's the sort of thing that everybody wanted to find,” says Tony Kouzarides, a molecular biologist at the University of Cambridge, U.K. In the last couple of years, though, hope had faded. “The feeling,” says Kouzarides, “was … that they didn't exist.”

    The newly discovered enzyme acts upon histones, the specialized proteins that strands of DNA loop around in order for a cell to condense its genetic material inside a nucleus. Rather than inert spools, histones are increasingly seen as active cogs in a cell's gene-regulation machinery. For example, certain enzymes can add methyl groups to tails that protrude from histones, which turns genes either on or off. But biologists couldn't find enzymes that did the opposite, leaving them wondering whether methylation was permanent.

    Mission Accomplished.

    Scientists have finally found an enzyme acting as a histone demethylase.


    Although many biologists had searched for these so-called histone demethylases, Harvard molecular biologist Yang Shi wasn't one of them. Rather, his group had become entranced by an unusual protein complex that performs a dizzying array of functions in cells. One component of the complex, an enzyme found in species from yeast to people, had an ability to quash gene expression on its own. Trying to discern how it acted, Shi and his colleagues spent a year ruling out every viable option but histone demethylation, which they left for last in part because few believed it existed.

    Eventually, the team conducted biochemistry experiments showing that the enzyme demethylated a specific amino acid, a lysine, on the tail of one kind of histone. Shi's group then used the technique of RNA interference to reduce levels of the enzyme in human cells. That led to methylation of various histones and increased the expression of nearby genes. This, says Shi, drove home that the enzyme, dubbed lysine specific demethylase 1 (LSD1), represses specific genes by maintaining unmethylated histones.

    Other scientists are struck by the work. “It opens up a whole new horizon,” says David Allis, a molecular biologist at Rockefeller University in New York City who has argued for the existence of a “histone code” in which methylation and other histone tail modifications control gene expression. A report published this fall in Science by Allis and Scott Coonrod at Cornell's Weill Medical College in New York City, and a separate paper published at the same time in Cell by Kouzarides's team, offered the first hints that cells could perform demethylation. The two teams independently found that part of a human protein could chemically transform amino acids on a histone, demethylating them in the process. But in those studies, demethylation took place amid other chemical reactions. Shi's paper describes “true demethylation,” says Kouzarides.

    Questions to be explored now include how demethylation is controlled and what role it might play in diseases. “We just have to understand what signals trigger this regulation,” says Stéphane Richard, a molecular biologist at McGill University in Montreal. Kouzarides and others predict that additional histone demethylases will be found. Some may activate genes instead of repressing them as LSD1 does, the researchers say.

    Several diseases, in particular certain leukemias and colon cancer, have been tentatively linked to faulty methylation, so histone demethylases could represent inviting drug targets. Indeed, Shi has already filed for a patent on LSD1, and Allis and a company with which Kouzarides is affiliated have done the same for their enzyme.


    A Ruff Theory of Evolution: Gene Stutters Drive Dog Shape

    1. Elizabeth Pennisi

    Evolutionary biologists like to go to exotic places for their studies. For his graduate work in evolutionary biology at the University of Texas Southwestern Medical Center in Dallas, John Fondon III simply headed to the local dog park. He wanted to sniff out DNA changes that enabled canines to evolve quickly into more than 100 breeds, and dog parks were a good source for the DNA of purebreds.

    Armed with DNA from more than 100 dogs, including their own, Fondon and his adviser Harold Garner have now shown that slight differences in the lengths of certain genes involved in development can transform a collie nose into a puglike one and even change the number of toes in one breed. Furthermore, their study, reported in the 28 December Proceedings of the National Academy of Sciences, drives home the potential evolutionary importance of repetitive DNA sequences called tandem repeats. Changes in the size of a tandem repeat within a gene can alter the gene's protein, making it work more or less efficiently. “We think the value and impact of these [repeats] on genetics and on phenotype is very much underestimated,” says Garner, a physicist. “They are resources in the genome for things to rapidly evolve,” not just in dogs but in other species as well.

    Less DNA, more toes.

    The sixth toe (x-ray) in Great Pyrenees dogs seemed to arise when a key gene lost some bases.


    That provocative proposal has received mixed reviews so far. Fondon and Garner have yet to prove that differences in the lengths of tandem repeats matter, says Robert Wayne, an evolutionary biologist at the University of California, Los Angeles, but the concept intrigues him. “Tandem repeats, generally regarded as junk DNA, offer a novel mechanism for evolutionary change,” agrees Wayne.

    Fondon began to chase down tandem repeats after a stint on the Human Genome Project. These genetic stutters are sequences of three or so DNA bases that are repeated over and over again. No one knows for sure what causes a particular stutter to double or triple in number. But once multiple copies exist, enzymes copying DNA can drop off repeats or add extra copies.

    With Garner, Fondon had come up with a program to identify tandem repeats in the human genome. “I was really dumbfounded [at] the number and types of repeats coded in the genes,” particularly developmental ones, Fondon recalls. Intrigued by the role these repeats might have in evolution, he turned to dogs. Most researchers assume that the DNA variation underlying evolutionary adaptations comes about by single base changes in a gene's sequence. But modern dogs have changed much faster than can be explained by these so-called point mutations.

    So, using human and mouse genes known to be involved in development as probes, Fondon and Garner tracked down 37 related canine genes and sequenced the repetitive regions in each one in 92 dog breeds. They initially tapped their own pets for blood samples: Fondon's Labrador retriever, and Garner's Weimaraner and Dalmatian. Next, Fondon headed to dog parks. He also tracked down canine DNA samples from kennel clubs and breeders he solicited on the Internet. Garner even persuaded one of the university's key donors to make an unusual gift: blood from her three dogs.

    The 142 dogs tested diverged significantly in the number of repeats in the various development genes. To determine if these tandem repeat variations translated into physical differences, i.e., altered phenotypes, Fondon and Garner used a high-resolution laser scanner that generated three-dimensional images of dog skulls. A program that morphed one breed's skull into another's helped quantify differences between breeds. The researchers then correlated the degree of change with variations in repeat length and in the ratios of different repeats.
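
    To make the kind of comparison described above concrete, here is a minimal Python sketch that correlates a breed-averaged repeat count with a breed-averaged shape score. The numbers and variable names are hypothetical placeholders, not the study's measurements, and the actual analysis used full three-dimensional skull morphometrics rather than a single score.

```python
# Minimal sketch with invented data: does a longer tandem repeat in a
# developmental gene track with a morphological trait across breeds?
import numpy as np

# Breed-averaged values, invented purely for illustration.
repeat_length = np.array([18, 19, 20, 20, 21, 22, 23, 24])                 # repeat units in the gene
snout_score = np.array([0.31, 0.35, 0.42, 0.40, 0.47, 0.55, 0.58, 0.66])   # relative muzzle length

# Pearson correlation between repeat length and the shape measure.
r = np.corrcoef(repeat_length, snout_score)[0, 1]
print(f"Pearson r = {r:.2f}")  # values near +1 or -1 indicate a strong linear association
```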

    Snout slip.

    In 65 years, changes in repetitive DNA may have caused the bull terrier's nose to point ever more downward.


    For example, the length of a breed's snout correlated directly with the number of repeats in a gene called Runx-2. But there was a twist, Garner notes. Runx-2's tandem repeat consists of two different three-base sequences, randomly ordered along the length of the repeat. If there's more of one threesome relative to the other, that breed's muzzle tends to be longer and straighter.

    The researchers found an intriguing connection with another gene, Alx-4. Most dogs have five toes on their hind legs, but members of the Great Pyrenees breed tend to have six. Knowing that mutations in Alx-4 cause mice to develop an extra toe, Fondon checked that gene in dogs. The tandem-repeat region of the six-toed Great Pyrenees was 51 bases shorter than in other breeds. In contrast, a five-toed Great Pyrenees had the full complement of bases.

    By comparing DNA and skulls of bull terriers from the 1930s and now, Fondon and Garner may have seen evolution in action. The older skull was less droopy, and DNA extracted from it also had one more repeat in the Runx-2 gene than did the modern terrier's gene.

    Sean Carroll from the University of Wisconsin, Madison, worries that Fondon and Garner overestimate the importance of tandem repeats in typical evolution, noting that dog owners have bypassed natural selection by breeding for physical characteristics without thought to how the resulting changes would impact a dog's survival in the wild. Intensive breeding may have prompted the rampant changes in tandem repeats, more so than would occur under natural conditions. But David King, an evolutionary biologist at Southern Illinois University in Carbondale, argues that it doesn't matter whether natural selection or artificial breeding is at work—the role of tandem repeats is now clearly important: “[Fondon and Garner] have shown that tandem repeats are effective for fine-tuning evolution.”


    Singapore Leads, U.S. Lags in Science, Math Student Achievement

    1. Yudhijit Bhattacharjee

    Singaporean students lead the world in math and science, according to the latest international comparison of student performance. Educators say that the top ranking, among elementary and middle school students from as many as 49 countries, also demonstrates how a nation's commitment to excellence can pay off fairly quickly.

    The findings come from the 2003 Trends in International Mathematics and Science Study (TIMSS) released last week by the International Association for the Evaluation of Educational Achievement. Singapore had excelled in two previous studies, but this time its fourth graders rose from fifth to first place in science after education officials revamped the small island nation's curriculum and strengthened teacher training. “The lesson here is that when you focus on a goal, you can produce measurable results within a short period of time,” says Patrick Gonzales, an analyst with the U.S. National Center for Education Statistics in Washington, D.C.

    More than 360,000 fourth- and eighth-grade students participated in the 2003 study, taking math and science tests designed to assess both knowledge and understanding and based on common elements of the various curricula (see box). The survey's top tier has a decidedly Asian flavor, with students from Japan, Chinese Taipei, and Hong Kong ranking among the top five countries in both math and science at both grade levels (see tables). Most European countries fall somewhere in the middle, whereas most Middle Eastern and North African nations lag. And although boys and girls have similar scores in math at both levels in most countries, boys show significantly higher achievement than girls in eighth-grade science.


    For U.S. students, the results send a mixed message. Eighth-graders did better in both subjects, rising from 28th (of 41 countries) to 15th (of 46) in math and from 17th in 1995 to 9th in science. But fourth-grade students stayed in the middle of the pack in math, placing 12th (of 26 countries in 1995 and of 25 in 2003), and lost ground in science, slipping from 3rd to 6th place.

    The decline in fourth-grade science is a result of less time spent on the subject, argues the National Science Teachers Association (NSTA). “We have been hearing from many elementary teachers that they are not teaching science because of the increased emphasis on literacy,” says NSTA executive director Gerald Wheeler. “Science is essentially being squeezed out of the elementary classroom.”


    One piece of good news for U.S. educators is a shrinking achievement gap between white and minority students in eighth-grade science. But the survey highlights the continued disparity in achievement along economic lines. Eighth-graders in schools with 75% or more students eligible for free or reduced-price lunch, a measure of poverty, scored 110 points below their peers in schools at which fewer than 10% of the students receive a subsidy, for example.

    “Poor kids are held to lower standards than more affluent kids,” says Jack Jennings, director of the Center on Education Policy in Washington, D.C. “We must bring higher quality teaching and resources to the poorer school districts.”

    The next TIMSS will be held in 2007.


    A Technical Fix For an Ethical Bind?

    1. Constance Holden,
    2. Gretchen Vogel

    Scientists and ethicists are taking a closer look at ways to create pluripotent human stem cells without involving embryos. But how close are such ideas to reality?

    “There are a lot of ways to skin the cat here,” says Robert Lanza of Advanced Cell Technology (ACT). In this case, the “cat” is the challenge of devising new and genetically tailor-made human stem cell lines while bypassing the creation of an embryo. Such an achievement would enable scientists to sidestep the ethical debate that has polarized the United States and triggered governments around the world to become involved to an unprecedented degree in regulating research.

    Last month, the President's Council on Bioethics heard two such proposals. One would allow scientists to determine that an embryo is nonviable before any cells are taken from it; the second would jinx DNA before it is transferred into an egg so that it could never develop into a viable organism. Out of the political limelight, other researchers are working on additional methods that might ease some of the controversy over whether embryos can be used to further potentially lifesaving medicine.

    The prize.

    Researchers hope pluripotent stem cells will help cure disease.


    Although public opinion polls show wide support for human embryonic stem (ES) cell research, it's likely that for some time stem cell researchers will be confronted with a patchwork of standards ranging from the permissive policies in some Asian countries to outright bans in Catholic countries such as Ireland and Austria.

    Many scientists feel that the possible benefits from human ES cells for understanding and curing disease far outweigh any ethical concerns about destroying a week-old embryo. But some are deeply conflicted. And even those who are not hope for new techniques that will be more effective as well as less ethically problematic. To find those, scientists are exploring various ways to derive ES-like cells from a variety of early cell types. Ultimately, everyone agrees, the Holy Grail of ES cell generation resides in finding a way to coax mature body cells to “dedifferentiate”—make the journey back to earlier, more plastic stages—with no use of eggs or embryos.

    But progress is slow. The ideas presented last month are theoretical and have not yet been tested in the lab. Although other approaches have been tested, they have not yet proven as efficient as the standard methods for deriving ES cells. And some of the new ideas raise troubling ethical questions of their own. Most important, some scientists say, the field has a lot of work to do to figure out exactly what ES cells are capable of, and they worry that the new proposals will divert attention or resources from that effort. “Suggesting experiments for political ends … is in itself simply another obstruction,” says Stanford University stem cell researcher Irving Weissman.


    Physician and ethicist William Hurlbut of Stanford is touting the jinxed DNA idea, which he calls “altered nuclear transfer,” as a comprehensive solution to the challenge of creating new human embryonic cell lines with specific genetic properties—the goal of human nuclear transfer or research cloning. By knocking out a key developmental gene before transferring the nucleus of a donor cell into an enucleated egg cell, he says, one could create a reprogrammed cell capable of forming ES cells but lacking the signals needed to form an organized embryo. No embryo created, he says, no embryo destroyed.

    But not everyone agrees, and Hurlbut's proposal is not, in fact, new. “These ideas have been floating around for years,” he acknowledges, although he takes credit for “reframing the moral argument.” Cloning and stem cell researcher José Cibelli, now at Michigan State University in East Lansing, filed for a patent on the technique in 2002, when he still worked for ACT in Worcester, Massachusetts. And developmental biologist Hans Schöler of the Max Planck Institute for Molecular Biomedicine in Münster, Germany, says he proposed the technique independently in 2002 as a way around Germany's embryo protection law. In a slight variation, Schöler has suggested injecting a snippet of RNA into the recipient oocyte to block expression of key developmental genes.

    Idea man.

    Ethicist William Hurlbut is promoting a worry-free way to clone.


    In the method patented by ACT and proposed by Hurlbut, scientists would genetically alter the donor nucleus to block the expression of a gene required for the proper organization of the early embryo. The resulting cell would lack the key organizational cues essential to form a fetus and would likely differentiate into a random assortment of cell types, not an embryo, Hurlbut argues.

    To date, the discussions have remained theoretical. Weissman thinks the idea is reasonable, but he has advised Hurlbut—who is not a researcher—“on how hard it is to make even reasonable ideas work.” Until someone spends long hours in the lab testing the idea, he says, he cannot take it seriously.

    But if Hurlbut or someone else could develop an efficient method, Weissman says, “he would be doing the medical and scientific world a great favor.” The president's bioethics group seemed to agree. And Hurlbut says he got a thumbs-up from the Archbishop of San Francisco, William J. Levada, who wrote President George W. Bush last summer commending the idea.

    Many seem to have doubts about it. If the knockout gene allows for several days of relatively normal development, then it would not solve the problem, says Richard Doerflinger of the U.S. Conference of Catholic Bishops in Washington, D.C. “A short-lived embryo is still an embryo.” “I think this is an abuse of cloning technology,” says Lanza of ACT—more troubling than nuclear transfer itself. “It will be a sad day when scientists use genetic manipulation to deliberately create crippled embryos to please the Church.” Cibelli concedes that his team “debated about whether we should file for a patent. We thought some would see this as creating a defective human for purposes of exploitation.”

    Eggs alone?

    Some researchers think that a type of disordered embryo created solely from an unfertilized egg cell is a better option. Researchers can trick an egg into dividing with either chemical or electrical signals that set off the same cascade as the penetration of a sperm does. The result is called a parthenote. In some insects and reptiles, parthenogenesis occurs naturally and can produce live offspring. In mammals, however, the lack of genes from the father invariably causes defects that kill the fetus before birth.

    A parthenote “is obviously not an embryo,” says developmental biologist Ann Kiessling of the Bedford Stem Cell Research Foundation in Somerville, Massachusetts, who with her colleagues has been working to derive pluripotent stem cells from human parthenotes. The only mammalian parthenote—a mouse—that has made it to term was the product of heavy genetic intervention by her creators, she notes (Science, 23 April, p. 501).


    Kiessling and others also argue that parthenogenesis may be more efficient than nuclear transfer because “primate eggs seem to activate pretty readily,” with roughly 25% of them surviving to the blastocyst stage—more than twice the success rate reported this year by a Chinese group using nuclear transfer (Science, 12 March, p. 1669). So this approach could provide a simpler way to get genetically tailored cell lines both for studying and treating genetic diseases—at least for women with viable oocytes, she says.

    Reproductive biologist Karl Swann of the University of Cardiff says he and his colleagues have found a chemical trigger that seems especially powerful at sparking division in oocytes—even those that have failed to fertilize when exposed to sperm. This could greatly add to egg availability, he says, because fertility clinics discard many thousands after they fail a second attempt at fertilization. Swann says he and his team have achieved reliable development to the blastocyst stage, but they have not yet derived any cell lines.

    If cell lines from parthenotes can be developed, says ACT's Lanza, they would offer another enormous benefit. They have only one set of genes, he says, which reduces the complexity of surface proteins responsible for immune rejection. Thus, he says, they would be ideal for stem cell banking: “With a few hundred lines, you could match the genetics of most of the population”—a practical goal that could never be reached with nuclear transfer because “you would need millions and millions of eggs” to treat individual patients.

    But Cibelli says that, contrary to Lanza's claims, the immunity issue has by no means been resolved: “The big question now is, will [cells from parthenotes] be recognized as cells or foreign tissue?” He and his colleagues are trying to find out with the ES-like cell line he derived at ACT in 2002 from a rhesus monkey parthenote—the only primate line created that way so far (Science, 1 February 2002, p. 779). George Daley of Harvard University is also intrigued but skeptical. He is using mice to check on the “engraftability” of cells from parthenotes, but he warns that the cells might trigger rejection by the immune system, which “not only recognizes foreign [cells] but absence of self.”

    No daddy.

    Five-day-old human parthenote.


    Hurlbut says many scientists are unenthusiastic about parthenogenesis. It “is of limited value because the genotypes would be restricted to those of fertile females, and it is hard to be certain that these cells would not carry genetic abnormalities,” he says. “Most of the scientists I talk to really want nuclear transfer.”

    Nor does this solution completely satisfy Catholic critics. “If [parthenotes] are organized enough to make a blastocyst, my concerns would still be there,” says Doerflinger. “The jury is out on what exactly a parthenote is, but I don't think it's been shown that it isn't an embryo.”

    Exploiting defunct embryos

    Donald Landry and Howard Zucker of Columbia University in New York City floated another proposal at the bioethics meeting: using cells from more or less defunct embryos. Up to 60% of the embryos created for in vitro fertilization (IVF) treatments are considered “nonviable,” meaning development has been arrested, but individual cells are still functioning. Drawing an analogy to brain death in human organ donors, Landry and Zucker propose that markers could be developed to determine “organismic death” in embryos.

    To test that idea, Landry and Zucker plan to monitor several hundred embryos that have stopped dividing. They will then characterize the chemical and genetic signatures of embryos that haven't shown any signs of development for 24 hours. Such signals could be used to declare embryos nonviable. “My expectation is that the analogy to brain death and the harvesting of organs will be so directly applicable … that this will be seen properly as falling within the guidelines of current federal policy,” says Landry.

    But Daley and others question whether such embryos could really yield cell lines. “We know the more intact the blastocyst, the better the cells,” he says. Cibelli also cautions that it will be “very hard to determine” an embryo's status—“we don't have an EEG machine we can plug into the embryo.” What's more, some ethicists are skeptical about this one too. Tadeusz Pacholczyk of the National Catholic Bioethics Center in Philadelphia, Pennsylvania, says: “I'm not convinced that an arrested embryo is the same as a dead embryo,” given the ability of single cells from early embryos to form entire organisms.

    Snatching the single cell

    Lanza, for one, is much more enthusiastic about another potential method for getting cells without destroying embryos: growing new ES cell lines from single cells that have been detached from embryos without damaging them. The procedure is already used for preimplantation genetic diagnosis, so the main trick would be to get the extracted cell to start dividing. ES cells are herd animals, and they often die when alone in culture. But Lanza says ACT expects to be able to cultivate new cell lines from single cells taken from either blastocysts or earlier-stage embryos called morulas without harming their development.


    Fusion with an ES cell nucleus prompts a nervous system cell to glow green—a sign a pluripotent gene has turned on.


    But possibilities also breed perils. At the morula stage, a cell may still be totipotent, which means it has the potential to develop into a full embryo. So in some eyes, destroying it to make an ES cell line is akin to destroying a complete embryo, observes Peter Braude, a stem cell researcher at Guy's, King's and St. Thomas' School of Medicine in London. Braude points out that scientists are now in a position to dream up countless unprecedented scenarios. For example, he says, what if you take an eight-cell embryo and separate it into two four-cell clumps? If one clump develops into a blastocyst, is implanted, and becomes a baby, and the other is used to start a cell line, no embryo was destroyed—but “there are those that would argue that a twin life has been destroyed.”

    The Holy Grail

    The best chance to circumvent the ethical dilemmas may be to find a way to reprogram mature cells into pluripotent stem cells while bypassing both egg cells and embryos. That would not only satisfy critics, it would also eliminate headaches involved in obtaining donated eggs and embryos. Technically as well, it would fulfill the promise offered by so-called therapeutic cloning: cell lines tailored to an individual's genes that would enable doctors to study complex diseases in the lab and provide potential donor cells that avoid the problems of immune incompatibility. And nothing would be created that could potentially be implanted and become a baby.

    That goal is many years away, however. Scientists must first figure out what really happens in the process of reprogramming during nuclear transfer. Researchers have yet to nail down exactly what factors in the egg cytoplasm manage to turn a differentiated nucleus back into one that can direct the development of a whole organism. Some clues about where not to look have come from experiments in which scientists sought to see if an ES cell could be used instead of an egg to reprogram the nucleus of a somatic cell. But ES cell cytoplasm does not seem to contain the magic ingredient.

    However, a new paper from Schöler and his colleagues suggests that the key ingredients may lie in ES cell nuclei. In the November issue of Stem Cells, the team reports on work that grew out of the observation that ES cells can fuse with mature cells in culture and apparently prompt them to turn on genes key to ES cells' plasticity. To track down the source of that reprogramming power, the team fused mouse neural cells with either ES cell cytoplasm or ES cell nuclei. The cytoplasm didn't seem to have any effect. But the neural cells that fused with ES cell nuclei turned on their own embryonic genes and formed ES cell-like colonies.

    The resulting cells have twice the normal number of chromosomes and therefore are not the kind of reprogrammed cell line scientists are aiming for. But the experiments home in on factors that apparently reside in the nucleus, Schöler says.

    Another approach to finding the magic ingredients has been taken by chemists Peter Schultz and Sheng Ding at the Scripps Research Institute in La Jolla, California, who are screening small molecules in a hunt for those that can turn the clock forward or back in a cell's development. They are on the trail of a small molecule they call “reversin” that will cause a muscle cell to dedifferentiate into a multipotent progenitor cell. “This really does open up the possibility that you could use your own cells to dedifferentiate to some kind of multipotent cell type” that could be used, say, to treat a heart patient, says Schultz.

    The entire subject has become so sensitive that someone undoubtedly will find ethical issues in any new technique. Some scientists are impatient with the philosophizing and want to get on with the work. As Daley says, “Doing a scientific experiment not for a scientific reason but to quell an ethical debate” is not his idea of science. Some say this is analogous to other early panics about new technologies, such as IVF, that have now become widely accepted. But as Braude notes, ES cell technology, more than any biological manipulation that has preceded it, “is challenging the very foundations of some ethical and religious beliefs” about what it is to be human.


    Heightened Security or Neocolonial Science?

    1. Richard Stone

    New restrictions on federally funded research involving the world's most dangerous pathogens are hampering foreign collaborations

    ALMATY, KAZAKHSTAN—Scott Weaver thought he had a green light for a great research partnership. After an expensive security upgrade of his labs and hours of paperwork, the director for tropical and emerging infectious disease research at the University of Texas Medical Branch (UTMB) in Galveston was ready to resume research on the Venezuelan equine encephalitis (VEE) virus in Colombia, Peru, and Venezuela. The mosquito-borne disease, endemic in all three countries, is not the worst of its kind: The alphavirus kills less than 1% of its human victims. But VEE's potential to incapacitate has landed it on a list of “select agents”: several dozen of the nastiest sorts of pathogens that the U.S. government fears could be turned into biological weapons. That designation has thrown up new hurdles for Weaver and his collaborators in South America—and for many other U.S. scientists working overseas.

    In August, the U.S. National Institute of Allergy and Infectious Diseases (NIAID) informed Weaver that under the terms of his two VEE grants, the laboratories of his foreign colleagues must have procedures in place for handling select agents that are equivalent to tough U.S. regulations imposed last year. “I seriously doubt whether my collaborators in Caracas or Bogotá could ever meet U.S. standards for select-agent security,” says Weaver. “These developing countries cannot afford the kinds of elaborate systems that labs in the U.S. have been required to install,” such as sophisticated security and inventory systems and background checks on employees. He's since had to alter his projects to avoid isolating the VEE virus in the labs south of the border. Because the new policy may force some foreign partners to serve as mere sample exporters, it resurrects “the stereotype of the ugly American: arrogant, demanding, and insensitive,” Weaver charges: “American collaborations will be unwelcome in many developing countries of the world.”

    Although his case may be one of the first, Weaver is not the only researcher feeling the chill. According to a prominent U.S. specialist on select agents, researchers with the U.S. Centers for Disease Control and Prevention (CDC) have seen a curtailment of foreign collaborations on avian flu and viral hemorrhagic fevers. (CDC officials declined to comment.) Scientists at the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) in Frederick, Maryland, are experiencing similar constraints on projects involving Congo-Crimean hemorrhagic fever and related diseases. “The important work we need to do will get done,” says USAMRIID public affairs officer Caree Vander Linden, although the details have not been worked out.

    No picnic.

    Venezuelan scientists draw blood from rodents to isolate VEE virus. New NIH rules have crimped projects on this and other select agents.


    U.S. inspectors will soon be heading out to assess lab standards overseas, scientists learned at a closed-door meeting last month. Paula Strickland, acting director of NIAID's Office of International Extramural Activities, told a group at the annual meeting of the American Society of Tropical Medicine and Hygiene (ASTMH) in Miami, Florida, that security teams will include senior microbiologists from CDC's select-agents program. An interagency committee chaired by Strickland with representatives from the U.S. State and Justice departments will determine whether foreign labs “meet minimum biosafety and biosecurity requirements.”

    The stepped-up regulations are the latest example of the clash between scientists' cherished ways of doing business and the urgent need to reduce the potential for bioterrorism, and some researchers say the rules make sense. “It would be very embarrassing for a U.S. collaborator and a U.S. agency to be funding a facility that had a major accident, or one that was involved in a bioterrorism event,” says Paul Keim, an anthrax specialist at Northern Arizona University in Flagstaff.

    But others fear that the tightened security could stifle cooperation. “One doesn't develop productive collaborative relationships with foreign counterparts by announcing upon arrival that ‘from now on we must do things the American way,’” says UTMB arbovirus specialist Robert Tesh. “Each country has its security priorities. The U.S. cannot demand that they conform to ours.” Adds Weaver: “By inhibiting research on the ecology and epidemiology of potential biological weapons in their natural settings overseas, we will be less prepared to respond optimally to the introduction of these agents by a terrorist.”


    After letters containing powdered anthrax were mailed to members of Congress and others in the fall of 2001, the U.S. government crafted tough requirements for scientists it funds to study dangerous pathogens. In addition to tightening security at facilities in which the microbes are kept and studied, U.S. regulations now demand rigorous protocols covering security assessments, emergency response plans, training, transfers of materials, and inspections.

    Under the new NIAID rules, which the institute began developing in 2003, U.S. grantees must submit a dossier on a foreign collaborating institution detailing its “policies and procedures for the possession, use, and transport of select agents.” For what NIAID calls “security risk assessments,” grantees “must be willing to provide the names of all individuals who will have access to the select agents.”

    Weaver says the new rules prompted him to drop his original plan to process field samples potentially infected with VEE virus in South America. Now, he says, he will have all the samples shipped to Galveston. “This seems to have gotten me off the hook for the time being,” he says, in that his colleagues at the National Institute of Health in Bogotá and the Central University of Venezuela and the National Institute of Hygiene in Caracas now won't have to adhere to the select-agent terms. But the change will reduce efficiency and timeliness, he says.

    “Basically, the NIH [U.S. National Institutes of Health] left me with little choice,” because it would have taken “months or years” to bring overseas labs into compliance, Weaver says. Already, the labs in Colombia and Venezuela store many VEE virus isolates in their freezers: Preventing the isolation of a few more strains, he says, will not deny the virus to a potential terrorist.

    Although security at foreign facilities working with select agents generally has been strengthened since the 9/11 attacks, most labs would still run afoul of the new U.S. rules. Many outside the United States appear to be unaware of the regulations. “I haven't heard much,” says Lev Sandakhchiev, director general of the State Research Center of Virology and Biotechnology, a former bioweapons lab near Novosibirsk, Russia, that collaborates with the United States on smallpox research.

    Foreign researchers say they hope to find a way to continue working with U.S. counterparts because it would bolster security in their home countries. “If collaborations will continue, that will inevitably bring the standards up,” says Bakyt Atshabar, director of the Kazakh Science Center for Quarantine and Zoonotic Diseases in Almaty, Kazakhstan, which specializes in studying endemic plague with Pentagon funding (Science, 17 December, p. 2021).


    ASTMH and other societies intend to lobby for a relaxation of the rules. “The approach to this will not be easy,” says Peter Weller, an immunologist at Harvard Medical School in Boston and ASTMH's most recent past president. For one, many agencies will want to weigh in on any change of policy. Second, Weller says, “the facile reply is that you scientists gave the Pakistanis nuclear secrets; how do we trust you on these issues?” In an e-mail response to questions from Science, NIAID officials say they expect no change to the select-agent terms “in the immediate future.”

    But some experts such as Keim say raising global security levels to U.S. standards makes sense. “We should not allow U.S. researchers to avoid regulatory oversight by going abroad. This would certainly apply to human subjects in clinical trials and animal-care standards in animal protocols. Why not security of dangerous pathogens?”

    Critics of the policy say they are not opposed to strengthened security overseas. Rather, they decry how the U.S. government is going about it. NIH “seems to be hell-bent on enforcing the regulations,” says Thomas Monath, chief scientific officer at Acambis in Cambridge, Massachusetts, and president of ASTMH. He wonders whether his company's research on Japanese encephalitis, a select agent, with colleagues in Thailand and Australia will be subject to such oversight. Monath fears that U.S. researchers might be held criminally responsible for violations by collaborators. When he raised this issue with Strickland at the ASTMH meeting, he says, it was apparent that “NIH had neither thought about this nor had any clear response.”

    NIAID officials say they are simply in step with the times; later they plan to adopt standards being developed by the World Health Organization. “We will do what we can to ensure that every possible avenue has been pursued that will allow our NIH-funded researchers to be able to conduct their research safely and securely,” the officials say. Much of that work, it appears, may well have to be done inside U.S. borders.


    Some Countries Are Betting That A Few Seconds Can Save Lives

    1. Dennis Normile

    Japan, Mexico, and Taiwan are investing in early warning systems that can offer precious seconds of warning before a major tremor

    TOKYO—What would you do with 5 to 50 seconds' warning of a major earthquake?

    It's not an academic question. Systems that can detect earthquakes near their source and issue warnings before the shaking starts are in place or being deployed in Mexico, Taiwan, and Japan and are being studied for locales from southern California to Istanbul. Enthusiasts are convinced that short-term warnings can save lives by stopping trains before they pass over damaged track, emptying out elevators, and alerting rescue units. It is an “epoch-making” advance in earthquake safety, says Masato Motosaka, a Japanese earthquake engineer at Tohoku University in Sendai.

    Not everyone agrees, however. Skeptics note that warning systems don't provide enough time to reduce casualties close to the epicenter of an earthquake. They also worry that such systems could divert spending from earthquake preparedness, which they say has the potential to do much greater good. “Warnings only help in some cases,” says Robert Olshansky, an urban planner at the University of Illinois, Urbana-Champaign. “Investing too much of one's money and hopes in a short-term warning system is a distraction from the hard and less sexy work, such as upgrading older structures, that is really needed to improve seismic safety.”

    Faster than a speeding S wave

    Early warning systems are not forecasts. Instead, they detect actual quakes near their source and issue warnings to automated systems and humans up to several hundred kilometers away. They work because electronic signals transmitted through wires or air travel faster than seismic waves moving through the earth. Warning schemes also take advantage of the two types of seismic waves that are generated when a fault ruptures. The first—and faster moving—primary (P) waves radiate directly outward from the epicenter. The secondary (S) waves, which cause the oscillating motions responsible for the most damage, lag by tens of seconds over a distance of a few hundred kilometers. “The P waves carry information; the S waves carry energy,” explains Hiroo Kanamori, a seismologist at the California Institute of Technology (Caltech) in Pasadena. Unfortunately, P waves and S waves would arrive almost simultaneously near the epicenter, making warning impossible where shaking is most intense.
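    As a rough illustration of that arithmetic, here is a minimal back-of-the-envelope sketch. All of the numbers in it are assumed typical values chosen for illustration, not parameters of any of the systems described: the alert fires when P waves reach a sensor near the source, and the usable warning at a distant site is the S-wave travel time minus the detection and processing delay.

```python
# Back-of-the-envelope sketch of early-warning timing.  Every number here is
# an illustrative assumption (typical crustal wave speeds, a nearby sensor,
# a few seconds of processing), not a figure from any real system.
V_P = 7.0           # km/s, assumed average P-wave speed
V_S = 4.0           # km/s, assumed average S-wave speed (the damaging shaking)
SENSOR_DIST = 20.0  # km, assumed distance from the epicenter to the nearest sensor
DELAY = 3.0         # s, assumed detection + processing + transmission time

def warning_seconds(site_distance_km: float) -> float:
    """Seconds between the alert and S-wave arrival at a site.

    The alert is triggered when P waves reach the nearest sensor; the
    electronic signal itself is effectively instantaneous by comparison.
    """
    s_arrival = site_distance_km / V_S
    alert_issued = SENSOR_DIST / V_P + DELAY
    return s_arrival - alert_issued

for d in (10, 50, 100, 280):  # 280 km is roughly the coast-to-Mexico City distance cited below
    t = warning_seconds(d)
    print(f"{d:3d} km from the epicenter: "
          + ("no usable warning" if t <= 0 else f"~{t:.0f} s of warning"))
```

    Close to the epicenter the result goes negative, which is the blind zone skeptics point to; a couple of hundred kilometers away, the same arithmetic yields roughly a minute.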

    On alert.

    Nowcast stations are being installed across Japan.


    Farther away from the epicenter, there is time to analyze the signals and automatically generate warnings. After the October 1989 Loma Prieta earthquake in California, the U.S. Geological Survey (USGS) deployed a temporary array of three seismometers that warned workers demolishing a collapsed highway viaduct in Oakland about aftershocks. The system gave workers 23 seconds' notice of S waves from 12 aftershocks stronger than magnitude 3.7.

    Two permanent early warning systems were put in place in the early 1990s in Mexico and Japan. In 1991 the Centro de Instrumentación y Registro Sísmico (CIRES), a private Mexican nonprofit organization, set up a network of 12 instruments along the country's Pacific coast near Acapulco, where seismologists think a magnitude 8 earthquake is overdue. If the system works as intended, residents of the capital city, 280 km away, could get 70 seconds' warning. Schools and some government offices are served by dedicated transmission lines, and citizens have access to automated radio broadcasts. Two years ago, a similar system was set up for the city of Oaxaca, in southern Mexico.

    Likewise in Japan, the country's early warning systems are likely to prove most useful for the most devastating earthquakes, those that occur off the Pacific coast, where the Pacific and Philippine Sea plates are being forced beneath the plates carrying the Japanese islands. For example, Motosaka says that the Sendai area would receive 15 seconds' warning that the effects of a magnitude 7 to 8 offshore earthquake were about to hit; seismologists give such an earthquake a 40% chance of occurring in the next 10 years.

    In 1992, railway operators started deploying the Urgent Earthquake Detection and Alarm System (UrEDAS) along the country's bullet train lines. After detecting P waves, UrEDAS cuts power to trains in nearby sectors if the anticipated shaking will exceed a given threshold. In February, the Japan Meteorological Agency began deploying what will be the world's most comprehensive early warning system, featuring more than 200 stations throughout the four main islands. Installation of the $90 million network, called Nowcast, began in 2003 and could be completed in 2 years if the money keeps flowing. In December 2000, Taiwan's Central Weather Bureau switched on an islandwide network of 86 seismic stations that alerts the bureau's central office and a hospital, both in Taipei.

    Authorities are still trying to figure out the best way to use early warning systems. Officials at Taiwan's weather bureau receive warnings on their computer screens, “allowing staff to move to disaster response stations a few seconds quicker than if they wait for the shaking to start,” says Yih-Min Wu, a seismologist at National Taiwan University who was involved in setting up the system. Taiwan's high-speed rail line will likely be added to the system once train service begins next fall.

    Call ahead.

    Early warning systems could save lives in elevators and operating rooms.


    Japan's system, partially operational, sends warnings to a select group of regional disaster response centers, private companies, an elementary school, and a university hospital in the Tohoku region northeast of Tokyo. Tohoku University's Motosaka, who is leading a government study of potential warning uses, says earthquake education and drills can be worked into the school curriculum, as is now being done at the Nakamachi Elementary School in Sendai. Pupils have been taught to duck under their desks to avoid falling ceiling tiles and lighting fixtures, and teachers to open doors so they don't jam shut and hinder a postquake evacuation. In a hospital, the warnings could allow surgeons to pause during delicate procedures and give rescue teams extra seconds to prepare.

    The list of possible applications is endless, says Thomas Heaton, a Caltech earthquake engineer and longtime proponent of early warning systems. It includes switching all traffic lights to red, closing valves in oil and gas pipelines, shutting down nuclear power plants, and preparing tsunami warnings. “I don't think anybody knows right now what all the potential applications will be,” says Heaton.

    One unresolved issue is whether to broadcast warnings to the general public. The Mexican system has generated 11 warnings of strong (magnitude 6 or greater) earthquakes in 14 years without a hitch, according to Juan Espinosa-Aranda, director general of Mexico's CIRES. “Contrary to what many expected, we have never had any indications that the warnings resulted in panic,” he says. Part of the reason, says Heaton, may be their benign content: “Ninety percent of the time, the message will be ‘This will be light shaking, relax and enjoy it.’”

    Without warning

    To date, the payoff from early warning systems is scant, proponents admit. In 12 years, operators of Japan's UrEDAS can cite only one case in which the warning headed off a potentially dangerous situation. That occurred in May 2003, when a magnitude 8 earthquake struck northeast of Tokyo: The system halted two trains headed toward a viaduct that had suffered cracks in 23 columns.

    In contrast, a bullet train derailed during the country's most recent severe earthquake, on 23 October in Niigata Prefecture, because the train was too close to the epicenter for a warning to arrive in time. Likewise, no early warning system would have mitigated the devastating 1995 Kobe earthquake, which claimed more than 5000 lives, because the fault that ruptured runs right under the city. “Warnings don't work” in such cases, admits Motosaka.

    That fact of life, say scientists, means early warning systems should never replace seismic preparedness. “We need to spend money on mitigation and preparedness,” says the University of Illinois's Olshansky. “Making promises of prediction or warnings distracts from this task.”

    Skepticism about earthquake warnings seems greatest in the United States, in part because the most dangerous faults are close to urban areas. Caltech's Heaton says that federal agencies have rejected several of his proposals to test a prototype early warning system for southern California after they received mixed reviews. “Half the reviewers said it was a great idea, and the other half said it's not very useful,” he says.

    To find out who's right, seismologists need hard data. Although they don't wish for misfortune, they know that earthquakes are inevitable. And they are counting on Mexico, Taiwan, and Japan to serve as test beds.


    A Plasma Too Far? Researchers Hunt for Early State of Matter

    1. Charles Seife

    Brookhaven scientists think they've seen evidence of the long-sought quark-gluon plasma. But something's not quite right

    In 2000, scientists at CERN, Europe's high-energy physics lab near Geneva, Switzerland, thought they were on the brink of creating a state of matter not seen since a few fractions of a second after the universe was born. Their colleagues at Brookhaven National Laboratory in Upton, New York, working on a new and more powerful accelerator, were even more confident of success. But nearly 5 years later, no one has claimed credit for making a quark-gluon plasma, an extremely high-energy state in which the fundamental components of protons and neutrons roam free.

    Something interesting is certainly happening within the giant detectors that record the high-energy collisions of heavy particles that scientists hope will lead them to their goal. But what? Researchers confess that they don't understand their prey well enough to know if they've snared it. And what they have captured doesn't behave as it should. Such is life on the frontiers of the quark-gluon plasma. “I'm a little baffled and not sure exactly what to do,” says Thomas Kirk, Brookhaven's associate laboratory director for high-energy and nuclear physics. “If it sounds like I'm frustrated, it's because I am.”

    A near meltdown

    In our frigid universe, quarks, which make up most known matter, are frozen inside hadrons. They are never seen alone, unbound, or roaming free. But in the very, very hot early universe, scientists believe that quarks and gluons, which bind quarks together, swirled and danced for a brief moment before they “froze out” and formed hadrons. Researchers have been trying to recreate that brief moment—the era of the quark-gluon plasma—for years.

    Little bang.

    The particle tracks from a high-speed collision of two gold atoms provide clues about whether quarks and gluons had roamed free.


    A particle collider is like a time machine; the higher its energy, the further back in time it can see. At CERN's Super Proton Synchrotron (SPS), the 3.5-TeV collisions brought scientists to within a few millionths of a second after the big bang. Using enormous magnets, SPS, buried under farmland outside Geneva, smashed lead atoms together at such speeds that the nuclear components—protons and neutrons—cracked open and their contents spilled out. Scientists analyzed the resulting spray of particles, some of which were born out of the quarks and gluons of the lead nuclei, others of which sprang forth from the enormous vacuum-searing energy of the collision itself, looking for signs that quarks and gluons had melted once more and roamed free of their usual constraints.

    The evidence was suggestive, if not conclusive. One promising indicator was a striking lack of a particular type of particle known as the J/Ψ, which is made up of a relatively rare quark known as charm and its antimatter counterpart. The dearth might be a sign of free-roaming quarks, scientists argued. Here's why: The charm quark and antiquark are born near each other, out of energy rather than nuclear matter. (Protons and neutrons don't contain any charm quarks or antiquarks.) Neighboring quarks are quite likely to bind to each other during freezing, creating J/Ψs. But quarks that roam about before freeze-out will in all likelihood bind to more common quarks, such as ups and downs, and create particles such as D mesons rather than J/Ψs.
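    One way to see why roaming quarks would starve the J/Ψ supply is a deliberately crude toy model, sketched below purely for illustration and not drawn from the SPS or RHIC analyses. The 50% nearby-binding chance and the 200 light quarks per charm quark are made-up parameters; only the qualitative contrast matters.

```python
# Toy Monte Carlo of J/psi formation under two made-up scenarios.
# All numbers are illustrative assumptions, not physics inputs.
import random

N_PAIRS = 100_000
LIGHT_PER_CHARM = 200   # assumed number of up/down quarks near a charm at freeze-out

def jpsi_per_pair(plasma: bool) -> float:
    jpsi = 0
    for _ in range(N_PAIRS):
        if plasma:
            # The charm has roamed: it picks a binding partner at random from
            # its own anticharm plus the surrounding light quarks.
            bound_to_anticharm = random.randrange(LIGHT_PER_CHARM + 1) == 0
        else:
            # No plasma: the charm freezes out next to its anticharm and
            # binds with it about half the time (assumed).
            bound_to_anticharm = random.random() < 0.5
        jpsi += bound_to_anticharm
    return jpsi / N_PAIRS

print("J/psi yield per charm pair, no plasma:", jpsi_per_pair(False))
print("J/psi yield per charm pair, plasma:   ", jpsi_per_pair(True))
```

    The absolute numbers mean nothing; the point is simply that letting the charm quarks roam before freeze-out drives the J/Ψ count sharply down, which is the signature the SPS team went looking for.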

    Liquid center.

    At low energies, particles often stream off collisions in two back-to-back jets, represented by two peaks in this graph (red dots). The disappearance at high energies of one of those jets (blue stars) could represent passage through a liquidlike quark-gluon plasma.


    Sure enough, the SPS team saw many fewer J/Ψs in high-energy collisions than expected. For some physicists, that was a signal that they might have created a quark-gluon plasma. But SPS fell by the wayside in 2000 as CERN shifted its attention to building a much more powerful particle-physics accelerator, the Large Hadron Collider. Until LHC starts up, CERN is pretty much out of the quark-gluon plasma game.

    Luckily, an even higher-energy collider was coming online: the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. RHIC can slam together nuclei of atoms at roughly five times the energy levels of SPS, and early results seemed to confirm that scientists were close to creating the elusive plasma. Although RHIC's four detectors didn't yet have the capability to spot a lack of J/Ψ particles, scientists were seeing other favorable signs. According to Brookhaven physicist Miklos Gyulassy, these signs are “striking” evidence that quarks had been liberated from their shackles. “The data are in for me,” he says.

    One piece of supporting evidence is a phenomenon known as “jet quenching.” When two nuclei collide, scads of particles fly out from the center of the collision, where the temperature is highest. In a low-temperature collision, these particles would carom off one another like billiard balls and spray away from the nucleus in jets. The RHIC collisions created fewer jets than expected. Physicists argued that this “jet quenching” happened because the particles were behaving more like melted clumps of sticky wax than solid billiard balls. By clinging and transferring energy to one another before shooting away with diminished vigor, the contents of the colliding nuclei were behaving like a liquid or a gas rather than a solid. And that behavior is exactly what scientists had forecast when protons and neutrons melt, setting their quarks and gluons free.

    The manner in which particles flew away from the collision zone was also characteristic of a liquid. Instead of acting like hard billiard balls and scattering in all directions after a collision, the particles behaved as if they were in one large, expanding puddle. This effect, which is quantified with a parameter known as “elliptic flow,” showed that the postcollision matter was closer to a collection of melted objects than a clump of solid ones.

    More recent experiments involving the collision of deuterium and gold atoms point in the same direction. Physicists predicted that the effects stemming from a quark-gluon plasma, such as jet quenching, would disappear because the lesser mass of the deuterium doesn't impart enough energy into the smashup to make the nucleons melt. And, indeed, the strange effects disappeared as predicted. In gold-gold collisions, says Gyulassy, there was a marked decrease in the number of twinned jets from the collisions, but with “deuteron on gold, [the twin jets] came back to life again.”

    Frozen out?

    The conclusion seemed obvious: Scientists had created a quark-gluon plasma. So why hasn't RHIC announced the discovery? The answer is that the quark-gluon plasma isn't behaving at all the way physicists expected it would.

    For one, there's no nice, neat phase transition as quarks and gluons change from their ordinary condensed state into some kind of quark-gluon plasma. If you add heat to a chunk of ice near zero degrees Celsius, its temperature rises for a while. Then, all of a sudden, its temperature stops climbing as the ice changes phase from solid to liquid. It resumes its climb once all the ice has melted.
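    The ice analogy can be made concrete with a few lines of arithmetic. The sketch below is a minimal illustration using standard textbook values for water, and has nothing to do with any RHIC analysis: as heat is added, the temperature climbs, stalls at 0°C while the latent heat of melting is paid, and then climbs again.

```python
# Minimal sketch of a "neat" first-order phase transition: heat 1 g of ice
# from -10 degrees C and watch the temperature plateau at 0 degrees C.
# Constants are standard textbook values for water.
C_ICE = 2.1       # J/(g*K), specific heat of ice
C_WATER = 4.18    # J/(g*K), specific heat of liquid water
L_FUSION = 334.0  # J/g, latent heat of melting

def temperature_after(heat_joules: float, t_start: float = -10.0) -> float:
    """Temperature of 1 g of ice, initially at t_start, after absorbing heat_joules."""
    q = heat_joules
    q_to_melt_point = C_ICE * (0.0 - t_start)
    if q <= q_to_melt_point:              # still warming the ice
        return t_start + q / C_ICE
    q -= q_to_melt_point
    if q <= L_FUSION:                     # melting: temperature is pinned at 0 C
        return 0.0
    q -= L_FUSION
    return q / C_WATER                    # warming the resulting water

for q in (0, 10, 21, 150, 300, 355, 440):
    print(f"{q:3d} J added -> {temperature_after(q):6.2f} C")
```

    The long plateau in the middle of that table is the kind of unambiguous signal theorists hoped the quark-gluon transition would provide; RHIC has seen nothing so clean.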

    Not all phase transitions are so nice and neat. “But there was an expectation that we could observe a direct signal of the phase transition that the system would undergo as it cools,” says Jean-Paul Blaizot, a physicist at France's Center for Atomic Energy in Saclay. No such luck. Instead, scientists are left with a handful of phenomena—jet quenching, elliptic flow, and a handful of other atypical observations—that indicate something new is happening but fail to constitute a smoking gun. “None by itself show a completely new state of matter,” he says.

    At the same time, theorists have been shocked by what is spewing forth from RHIC's collisions of two heavy nuclei at high energies. They had expected that the nucleons would evaporate into something resembling a gas. That would give the quarks and gluons a chance to roam about for a few moments before recondensing when the temperature dropped.

    Enormous microscope.

    The PHENIX detector, one of four that adorn Brookhaven's RHIC accelerator ring, explores phenomena on the tiniest scales accessible to humans.


    This isn't at all what has happened, however. The observations of elliptic flow—the very data that helped convince scientists that the nucleus was no longer behaving like a solid—show that the nucleus isn't behaving like a gas, either. Instead of streaming past each other without interacting much, quarks and gluons feel one another's presence quite strongly. As a result, the melted material at the heart of a gold-gold collision behaves like one collective object, like a drop of water, rather than a collection of individual quarks and gluons. In fact, physicists have concluded that the stuff at the center of a gold-gold collision is the most perfectly fluidlike fluid ever discovered.

    This finding undermines one of the original lines of evidence for a quark-gluon plasma. The models that accounted for the lack of J/Ψ particles implicitly assumed that quarks and gluons weren't strongly interacting with each other and that the barely interacting charm quarks would fly away from each other rapidly and recondense with other noncharm quarks. But in a collectively moving liquid—and one that behaves like a liquid very shortly after the collision—the charm quarks don't have the same chance to zoom apart. In other words, if the goop at the center of a collision is behaving like a strongly interacting liquid rather than a weakly interacting gas, the lack of J/Ψ particles is a quandary.

    So although it's clear that something new is happening at the center of the high-energy collisions at RHIC, it's not at all the gas that scientists expected. “Have we created a weakly interacting gas of quarks and gluons? The answer to that question is an emphatic no. We have not,” says Jamie Nagle, a physicist at the University of Colorado, Boulder, and member of one of the RHIC collaborations. However, Nagle says that the data show that the quarks and gluons melt into a strongly interacting liquid whose properties are not yet understood: “That is the reason why I would say that we have not made a discovery yet.”

    Is this liquid the fabled quark-gluon plasma? Blaizot argues that it's difficult to answer the question, “Are we there yet?” when you don't yet know where “there” is. “When you have a not-well-defined problem, it's hard to give a well-defined answer,” he says.

    That's also the dilemma facing officials at Brookhaven, who for half a decade have been poised to proclaim a major discovery. “We can't march people to the lectern and force them to make an announcement,” says Kirk. “Maybe we have a really nice discovery that will just dribble out.”


    Taming the Hyperbolic Jungle by Pruning Its Unruly Edges

    1. Dana Mackenzie*
    * Dana Mackenzie is a freelance writer based in Santa Cruz, California.

    Answering decades-old questions, two new theorems set limits on how wild some universes can get

    For mathematicians, one of the joys of three-dimensional topology is the chance to study not only the shape of our universe but also the shape of all the universes that might have been. This requires some radical shifts in perspective, because geometric rules familiar in our universe won't hold in universes where the “shape” of space is different. To explore these alien territories, mathematicians study manifolds—abstract spaces that resemble our universe on a small scale but may connect up differently in the large. This has been a banner year for manifolds, thanks to the proof of two major conjectures that date from the 1970s. “The combination of [the two proofs] is indeed a fantastic piece of news,” says Francis Bonahon, a topologist at the University of Southern California in Los Angeles.

    In 1973, a Yale University mathematician named George Mostow proved that many three-dimensional manifolds, or 3-manifolds, can take only one possible shape, provided that every part of the universe is equally “curved.” Measure the curvature of space in your back yard, and you know the shape of the universe. “It was the most influential piece of mathematics in geometry in the last 35 years,” says Howard Masur, a geometer at the University of Illinois, Chicago. But Mostow's theorem applied only to manifolds of finite size. Topologists have struggled ever since to come up with a similar principle for a much more interesting class of manifolds in which, as Frank Sinatra sang about clear days, “you can see forever.”

    Now they have succeeded. Two groups—Ian Agol of the University of Illinois, Chicago, and (jointly) Danny Calegari of Caltech and David Gabai of Princeton University—have shown that a major category of 3-manifolds always has orderly edges, or “tame ends.” A third group, consisting of Yair Minsky of Yale, Jeffrey Brock of Brown University, and Richard Canary of the University of Michigan, Ann Arbor, has shown how to classify the shapes of those tame ends and proved that once you know the shape of the end, you know the shape of the manifold. (The proofs are posted on the arXiv preprint server, including preprints 0407161 and 0412006.) The two theorems fit together like a mortise and tenon and resolve several other conjectures from the 1960s and 1970s. Minsky orchestrated the proof of the so-called Ending Lamination Conjecture over a period of 13 years, with major contributions from Masur as well as Canary and Brock. By contrast, the Tame Ends Conjecture began to appear solvable only last year, when Agol, Calegari, and Gabai began working on it.

    Mirror World.

    A “hall of mirrors” in hyperbolic space produces a fantastically filigreed fractal.


    “In my mind, the real achievement is Brock-Canary-Minsky, and Agol-Calegari-Gabai puts a cherry on top—although a very impressive cherry,” says Bonahon.

    Both conjectures—now promoted to theorems, assuming that the proofs check out—pertain to a class of universes known as hyperbolic manifolds. Their geometry is unlike the familiar Euclidean or “flat” geometry, which has been known since ancient Greece. Euclidean geometry is characterized by a property called zero curvature: Parallel lines stay a constant distance apart. In spherical geometry, which has positive curvature, lines that start out in parallel converge, like meridians on a sphere. Hyperbolic geometry has negative curvature; in it, parallel rays splay apart like the flowers in a bouquet. Hyperbolic manifolds, which are the most common type, also tend to be by far the most unruly and mathematically interesting ones, which makes their taming even more remarkable.

    The central question in both of the new results is how to describe the shape of manifolds that don't close up nicely, like a ball or an inner tube, but instead have one or more loose ends. (Most of the manifolds that Minsky and his colleagues considered have two ends, just like a string.) These ends come in various shapes—they can be narrow like a tunnel, or they can flare outward like the bell of a trumpet. Usually geometers prefer to think of the loose ends, or tunnels or arms, as being infinitely long. Thus, someone living in such a universe would never reach the end of the universe but would always see more space stretching out in front of him. It may seem bizarre that mathematicians have chosen the term “end” to describe something that is literally endless. Yet one remarkable feature of hyperbolic space—reminiscent of medieval views of the universe—is that it comes with a “sphere at infinity.” Here the infinite end culminates in a specific shape, such as a disk, in much the way the infinitely long decimal fraction 0.33333… culminates in the number 1/3. Although an inhabitant of the manifold could not see or reach the sphere at infinity, its culminating shape nevertheless has a profound influence on his space.

    The story behind these universes begins more than 100 years ago, when the German geometer Felix Klein began pondering questions like this one: Suppose you stand in a room with perfectly reflecting mirrors—what would you see? Of course, with flat mirrors you would expect a “hall of mirrors” effect, with infinitely many copies of yourself as far as the eye could see. But now imagine that the mirrors are cylindrical (or convex). You would still see infinitely many copies of yourself, but the convex mirrors would make your reflections skinnier and skinnier. After enough reflections, the images of your head would shrink to point-sized dots. Now imagine somehow reaching through the mirrors and connecting all the dots. With just three cylindrical mirrors, you would get a circle, with the real you standing in the middle. With four mirrors, depending on their size and arrangement, you might get a circle, or you might get a jagged, contorted figure that defied Klein's attempts to draw it. Klein was born a century too early to see such figures—now called fractals—become an icon of pop art, thanks to computer graphics. You can think of these fractals as being drawn on the “sphere at infinity,” the canvas at the end of the universe.
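    Klein's mirror game can be played numerically. In the sketch below, built purely for illustration, inversion in circles stands in for reflection in the convex mirrors: a point is bounced at random off three mutually tangent circular mirrors, and its orbit piles up on the limit set, which for three mirrors is exactly the circle described above. The particular circles, starting point, and iteration count are arbitrary choices.

```python
# Toy "chaos game" for a reflection group: invert a point repeatedly in three
# mutually tangent circles (the mathematician's stand-in for convex mirrors).
# The orbit accumulates on the limit set, which for three tangent mirrors is
# itself a circle -- here, the circle through the three tangency points.
import random

CENTERS = [complex(-1, 0), complex(1, 0), complex(0, 3 ** 0.5)]  # three mutually tangent unit circles
RADIUS = 1.0

def invert(z: complex, c: complex, r: float) -> complex:
    """Inversion (mirror reflection) of z in the circle |w - c| = r."""
    return c + r * r / (z - c).conjugate()

z = complex(0.0, 0.6)   # a point in the gap between the three mirrors
last = None
for _ in range(5000):
    k = random.choice([i for i in range(3) if i != last])  # never undo the last reflection
    z = invert(z, CENTERS[k], RADIUS)
    last = k

# Predicted limit set: the circle through the tangency points,
# centered at (0, 1/sqrt(3)) with radius 1/sqrt(3).
limit_center = complex(0, 1 / 3 ** 0.5)
limit_radius = 1 / 3 ** 0.5
print("distance of a late orbit point from the limit-circle center:", abs(z - limit_center))
print("predicted limit-circle radius:                              ", limit_radius)
```

    With more mirrors, or mirrors arranged differently, the same loop traces out the jagged, contorted limit sets that defeated Klein's pen and that computers now render as fractals.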

    In 1881, Henri Poincaré discovered a far-from-obvious connection between Klein's fractals and hyperbolic geometry. Each fractal corresponds to a particular manifold, and each open-ended manifold has a particular Kleinian fractal at each of its infinite ends. Nearly a century later, William Thurston (then at Princeton University, now at Cornell) showed that the Kleinian fractal defines a geometric structure on the ends of its corresponding manifold. He called that structure an “ending lamination.” Perhaps, he suggested, a manifold's ending lamination would in fact determine the geometry of the whole manifold—in effect, rigidifying it or crystallizing it from the outside in.

    Minsky sneaked up on the Ending Lamination Conjecture in stages. He started with a special case, in which the manifold's ends culminate in an inner tube with a puncture through it. Inner-tube shapes, or tori, are as basic to topologists as quarks are to physicists or cells to biologists. He chose this punctured torus because it was the first case complicated enough to be interesting but simple enough to be solvable.

    Minsky discovered a way to organize all the possible Kleinian fractals for the manifolds that have this punctured-torus-shaped end (see figure, below). You can think of this picture as a sort of dictionary or map of all the possible Kleinian fractals in question. Each point in the map corresponds to a fractal with a different shape. In the colored region, the relatively simple fractals, such as the two at left in the figure below, correspond to geometries in which the punctured-inner-tube-shaped end grows exponentially fast as it moves out to infinity. The boundary of the colored region in the figure looks like a scalloped coastline. Each “inlet” corresponds to cusped fractals like the four shown on the right in the figure. In these cases, the inner tube shrinks to a point in some places as it moves out to infinity. Likewise, the fractals themselves pinch down simultaneously in infinitely many places and fragment the “sphere at infinity” into infinitely many pieces. Each cusped fractal can also be assigned its own address, a simple fraction, with the 3/5 cusp lying between 1/2 and 2/3, 5/8 between 2/3 and 3/5, and so on.
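    The fraction “addresses” in that map are not arbitrary: the examples in the text follow the mediant rule familiar from Farey sequences, in which the cusp between the p/q cusp and the r/s cusp gets the address (p+r)/(q+s). The few lines below are a sketch of that bookkeeping only, not code from any of the proofs.

```python
# Farey-mediant bookkeeping for cusp "addresses": the cusp between the
# addresses p/q and r/s gets the address (p+r)/(q+s).  This reproduces the
# examples in the text: 3/5 between 1/2 and 2/3, then 5/8 between 2/3 and 3/5.
from fractions import Fraction

def mediant(a: Fraction, b: Fraction) -> Fraction:
    return Fraction(a.numerator + b.numerator, a.denominator + b.denominator)

half, two_thirds = Fraction(1, 2), Fraction(2, 3)
three_fifths = mediant(half, two_thirds)          # -> 3/5
five_eighths = mediant(two_thirds, three_fifths)  # -> 5/8
print(three_fifths, five_eighths)
```

    Irrational addresses then arise as limits of ever-finer chains of such fractions, which is where the most intricate fractals live.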

    Organizing principle.

    In this “map” of Kleinian fractals, cusped fractals lie on the “coastline.”


    But amazingly, even irrational numbers like (√5 − 1)/2 = 0.6180339887…, which cannot be expressed as a fraction and therefore lie “between” all the inlets, also correspond to Kleinian fractals. These most monstrous fractals are ferociously difficult even for a computer to draw. (See the bottom box in the figure.) In the corresponding geometry, the punctured-torus end neither grows nor shrinks but oscillates in a perpetual state of indecision as it moves out toward infinity. With his collaborators Brock and Canary, Minsky showed that Thurston's ending laminations play the same role as the irrational “addresses” in the above description. But the numbers are more than mere addresses: They are a genetic code. Two ending laminations provide all the information needed to assemble a perfect replica of a two-ended manifold.

    One problem was left to be cleared up. The Ending Lamination Theorem works only for certain hyperbolic manifolds: the ones with “tame ends.” It assumes that each end of the manifold keeps the same general shape as it stretches out toward infinity. A tame end can be visualized as a tunnel that can grow or shrink in width but cannot split into separate tubes or have tubes that merge together. But “wild ends” can change their shape or split into pieces as they go. Agol and the team of Calegari and Gabai each found a way to eliminate even the trickiest of these wild ends from consideration, as long as the manifold in question is hyperbolic.

    Infinite fascination.

    Spiralling (left) and cusped (right) Kleinian fractals bound hyperbolic universes.


    According to Alfred Marden of the University of Minnesota, who proposed the Tame Ends Conjecture in 1974, “it was just pie in the sky. No one had the vaguest idea how to prove it. If you ever questioned whether there is linear progress in mathematics, this is a very clear example. This proof could not have been done 30 years ago.”

    Together, the two theorems resolve several other problems that had been open for decades. For instance, one consequence is that Kleinian fractals, if drawn with an infinitely thin line, are either invisible or solid black, in the same mysterious way that all the points in a line color the line black, even though each point is infinitely small. Agol and Minsky believe that the taming of the hyperbolic ocean will lead to progress on other problems as well. Calegari, on the other hand, is afraid that three-dimensional topology will fragment. “The key conjectures tied together people who didn't have a lot in common—specialists in Kleinian groups, hyperbolic geometry, knot theory, foliations, and quantum invariants,” he says. “Now there's less incentive for them to be in the same field.”
