News this Week

Science  01 Sep 2000:
Vol. 289, Issue 5484, pp. 1442

    Researchers Get Green Light for Work on Stem Cells

    1. Gretchen Vogel

    The biomedical community is moving quickly to take advantage of new guidelines from the National Institutes of Health (NIH) for use of human pluripotent stem cells. And so far there are no signs that opponents plan any immediate action to stop the first round of research proposals from being reviewed by an NIH panel.

    The final guidelines, issued last week, allow NIH-funded researchers to derive pluripotent stem cells from fetal tissue, but not from embryos. Scientists may also work with embryonic stem cells, but may obtain them only from private sources and must ensure that derivation meets certain ethical conditions (see box). For example, embryos used to derive cell lines must be freely donated to research as excess embryos created during fertility treatments.

    The NIH spent nearly a year finalizing the guidelines, which researchers hope will allow work leading to the improved treatment of diabetes, Parkinson's, and other diseases. Because the cells are derived from human embryos or fetal tissue, groups who oppose fetal tissue research and abortion have lobbied to block federal funding for such research. NIH received 50,000 public comments on its draft—including thousands of preprinted postcards from opponents.

    Indeed, federal law prohibits NIH from funding work that harms or destroys a human embryo, but a lawyer for the Department of Health and Human Services, NIH's parent agency, ruled in January 1999 that stem cell lines derived from embryos by privately funded scientists could be eligible for funding (Science, 22 January 1999, p. 465). The final guidelines, issued on 23 August, spell out the ethical requirements for scientists who hope to work with such cells.

    Scientists will need to submit evidence to NIH that the cells they wish to use comply with the guidelines. A committee called the Human Pluripotent Stem Cell Review Group will decide whether the cells qualify for funding. At the same time, the grant application will be judged for scientific merit by a scientific review board. NIH officials say the stem cell committee will meet in December to review applications received by 15 November. Approved applications that receive high marks in peer review will be passed along to the appropriate institute for funding decisions. Despite the multiple layers of review, NIH associate director for science policy Lana Skirboll says that scientists who apply by November could receive funding as early as January.

    Patient advocacy groups, many scientists, and even President Bill Clinton praised the new guidelines. In remarks to reporters last week, Clinton said stem cell research will have “potentially staggering benefits.” Tim Leshan of the American Society for Cell Biology said the guidelines “will certainly allow federally funded scientists to do the work that they want to do.” However, some legislators said they were appalled and vowed to fight the guidelines. Representative Jay Dickey (R-AR) said the guidelines show “obvious disregard of the moral conscience and the laws of our nation.” The guidelines are illegal, he says, and will be opposed either through the courts or through legislation next year to block NIH from funding any research involving the cells.

    The guidelines require researchers to present documentation with their grant application that the stem cells were derived properly. The embryo must have been left over after fertility treatments, the donors cannot receive any compensation for their donation, and they may not designate specific recipients of the cells. To ensure that embryos are surplus, eligible cell lines must be derived from embryos that were frozen. Donors must be informed that the cells derived from the donated embryo may be used indefinitely, possibly even for commercial purposes.

    The new rules also address several problems raised by researchers reviewing the earlier draft, including a requirement that anything that might identify the donors of the embryo be removed from the records. Scientists pointed out that such cells would not pass Food and Drug Administration requirements for cell therapies, which require extensive documentation of a cell line's history. The new guidelines require the donors to be informed of whether identifiers will be kept with the cells.

    James Thomson of the University of Wisconsin, Madison, the first to derive human embryonic stem cells, says his donations were anonymous. So there is no way to trace the precise origins of the cells, some of which may have been derived from embryos that were not frozen. If his current cell lines are not approved, he says, he will derive new ones, a process that could take months. John Gearhart of the Johns Hopkins University in Baltimore, who derived pluripotent stem cells from fetal tissue concurrently with Thomson, says he also will ask NIH to approve his cell lines. He says he received more than 150 requests for collaboration on the day the guidelines were released. Both researchers derived their cells with funding from Geron Corp., a biotech company in Menlo Park, California.

    The University of Wisconsin has set up a nonprofit institute called WiCell to distribute Thomson's cell lines (Science, 11 February, p. 948). However, in its first 10 months of existence, the institute has made only a “half-dozen” agreements with researchers, according to Carl Gulbrandsen, president of WiCell. He says the institute has about 60 agreements pending, which can take months to navigate through the recipient researcher's institution. Although contamination problems also slowed the process down at the beginning, Gulbrandsen says WiCell has sufficient stock on hand to meet the anticipated demand over the next few months.

    WiCell may soon have company. In July, the Juvenile Diabetes Foundation (JDF) announced a request for applications for stem cell research, specifically including derivations of human stem cell lines from embryos. JDF's chief scientific officer, Robert Goldstein, says the foundation will also fund researchers who want to use cells from WiCell or Gearhart, but there is a chance that one cell line will work better for certain experiments than others.

    Roger Pedersen of the University of California, San Francisco, who has been working on human embryonic stem cells with funding from Geron, calls NIH “courageous” for opening the door to further research. He notes that human cells are quite different from the mouse cells that have shown tantalizing promise—becoming pancreaslike cells and even dopamine-producing brain cells. No one has reported keeping the cells alive without a “feeder” layer of supporting cells, he notes, nor can anyone grow a cell line from a single pluripotent stem cell. “There's a lot of work to be done,” he says—and apparently plenty of people eager to get started.


    NIH-funded researchers can work with pluripotent stem cells derived from embryos if privately funded researchers have established the cell line, provided that the following conditions are met:
    • Embryonic stem cell lines are derived only from frozen embryos created for fertility treatment;

    • The decision to donate embryos is separated from fertility treatment; and

    • Embryo donors are told they cannot accept financial or other compensation.

    NIH-funded researchers must also avoid the following:
    • Deriving pluripotent stem cells from embryos;

    • Using stem cells from embryos created specifically for research;

    • Using stem cells from nuclear transfer technology;

    • Combining stem cells with an animal embryo;

    • Using stem cells to create or contribute to an embryo.


    New Report Triggers Changes in the NRC

    1. Andrew Lawler

    Shape up or risk losing customers. A panel of eminent science and engineering administrators has delivered that stern advice to the National Research Council (NRC), the operating arm of the National Academy of Sciences (NAS), in a report on how the council does its business.

    The review, led by Purnell Choppin, president emeritus of the Howard Hughes Medical Institute in Chevy Chase, Maryland, and Gerald Dinneen, a retired Honeywell manager, is the first hard look at the structure of the NRC in 2 decades (Science, 28 April, p. 587). It concludes that the council takes too long to produce many of its reports, is not responsive enough to its sponsors, lacks clear lines of authority, and leaves its staff too often frustrated and stressed. To fix these problems, the 15-member panel urges the academy “to reduce unnecessary layers of approval,” delegate more authority, appoint a chief management officer, and create “a service-oriented culture.” If NRC leaders don't act, the panel warns, “sponsors may look elsewhere for advice.”

    The academy's senior leaders don't quibble with the recommendations, which were blessed by the NRC's governing board at a meeting earlier this month in Woods Hole, Massachusetts. Indeed, “many of the recommendations are being followed through already,” notes Mary Jane Osborn, a member of the panel and a biologist at the University of Connecticut Health Center in Farmington. “We want all of our reports to be done well, on time, and on budget,” says NAS President Bruce Alberts.

    The proposals would affect not only the 1000 NRC staffers but also the nearly 6000 outside scientists and engineers who serve each year as volunteers on the council's committees, boards, and commissions. The most radical idea would revamp the council's internal structure by merging the 11 commissions that oversee the boards, which in turn oversee the production of reports, into six new divisions. The commissions, arranged largely by clusters of discipline, have been criticized as a bottleneck in the arduous and complex process of approving NRC studies.

    The new divisions would have more authority and responsibility and share one administrative system. They would be organized around broad themes: education and social matters; physics, astronomy, engineering, and energy; food and health; biology, earth sciences, and environment; policy; and transportation. That grouping, panel members say, will allow greater synergy among disciplines. The scores of boards and committees would remain the backbone of the organization, with NRC managers striving over time to reduce their overall number.

    The task force is blunt in its assessment of the council's effectiveness at satisfying its customers—typically federal agencies. “Poor project management and delays in the review process,” it notes, too often result in late delivery of the reports, which are the NRC's bread and butter. The solution, says the panel, is “a more service-oriented approach” reinforced by incentives to meet budget and time goals. One option is more fast-track studies, although Alberts says that reports done in 6 to 8 months “are unlikely to become routine.” The panel also suggests that the council consider holding roundtables as a substitute for the lengthy review process.

    Model organization.

    Changes at the National Research Council will precede completion of a new National Academy of Sciences headquarters, set to open in 2002.


    The governing board should look at the bigger picture and leave the details to others, according to the panel. In particular, the panel says Alberts should shift some duties to his fellow presidents, who lead the National Academy of Engineering and Institute of Medicine, and give responsibility for daily operations to a chief management officer, who will be current Executive Officer William Colglazier. “As president, I plan to rely on a more focused staff management structure, reporting through [Colglazier],” says Alberts.

    The panel had more trouble with the issue of broadening the pool of volunteers. It found that “there is too much reliance on a limited number of known individuals,” and too few women and minorities are tapped early in their careers. Yet only eight of 128 people who responded to a question about expanding participation in NRC studies suggested adding minorities, women, or young researchers to council bodies. Despite some carping, volunteers seem pleased with how the NRC operates. A survey of nearly 1500 people found that 87% would serve again, and 92% were satisfied or very satisfied with the quality of the NRC work.

    With regard to staff, Alberts says he will emphasize professional development and improving communication “so that help can be provided before things go wrong.” The initial reaction to the proposals by staff seems positive. “People aren't jumping up and down,” says one staffer who requested anonymity, “but we're optimistic.” Colglazier says the plan will be finalized in November and implemented by the end of the year.


    Chemists Toy With the Preprint Future

    1. Robert F. Service

    After watching their physics colleagues explore the digital landscape of electronic preprints over the past decade, chemists are sending out a survey party of their own. Last week, the giant publishing house Elsevier Science launched the first electronic archive for chemistry preprints through its ChemWeb subsidiary. The new site will be a common repository for reports on a wide range of chemistry topics and a forum for authors and readers to discuss the results. But ChemWeb could face an uphill battle in convincing authors to post their papers on the site, as many of the field's premier journals decline to accept papers that have already been posted on the Web.

    Physics envy.

    Elsevier is hoping its chemistry preprint archive will prove as popular as the Los Alamos physics archive, use of which by U.S.-based users is shown above.


    ChemWeb's new preprint service is modeled closely on the physics preprint archive started in 1991 by Paul Ginsparg at Los Alamos National Laboratory in New Mexico, which today serves as a storehouse for some 146,000 articles. Although readers of the new chemistry preprints will be able to rank the papers, there will be no formal peer review, says ChemWeb's preprint manager James Weeks. The service is free to both authors and readers. (They need only register with ChemWeb, which is also free.) ChemWeb, says Weeks, hopes that its new service will generate enough Internet traffic to lure advertisers to fund the site.

    For now, about all the site is attracting is heated debate. “A preprint server is highly controversial among chemists,” said Daryle Busch, president of the American Chemical Society (ACS), speaking at the society's national meeting in Washington, D.C., last week. Busch, a chemist at the University of Kansas, Lawrence, says he and his colleagues are lured by the Web's speed, wide dissemination, and low cost of publishing new scientific results. But many researchers fear that the absence of peer review will reduce the quality of submissions and force readers to wade through electronic mounds of poor-quality results in search of tidbits of worthwhile science. Says Peter Stang, a chemist at the University of Utah, Salt Lake City, “It's a dilemma.”

    Apparently, it's one that a broad cross section of chemists is struggling with. According to Robert Bovenschulte, head of ACS publications, the association conducted a survey of some 8000 of its members last summer on the question of non-peer-reviewed electronic preprints. The results “are a very mixed bag,” Bovenschulte says. “A lot of people were in favor of it. A lot of people were against it.”

    Nevertheless, the new preprint archive likely faces a tough future, because ACS journal editors themselves are lined up against it. ACS, the world's largest scientific membership organization, with 161,000 members, also publishes many of the premier journals in the field, including the flagship Journal of the American Chemical Society. But nearly all ACS journal editors consider posting results on the Web to constitute “prior publication,” says Bovenschulte. (Science maintains the same policy.) As a result, Bovenschulte says, those ACS journals will not publish papers that appear first on ChemWeb's preprint server. And that, says Ralph Nuzzo, a chemist at the University of Illinois, Urbana-Champaign, would convince him and most of his colleagues not to post their articles on ChemWeb. “If I couldn't publish my paper [in a conventional journal], I probably wouldn't do it,” Nuzzo says.

    In an effort to find a compromise, Weeks says ChemWeb will remove the full text of papers from the site when they are published in a print journal, keeping an abstract and a link to the journal article. But Bovenschulte says ACS journals would still not consider such papers, because the results would already be public knowledge.

    Not all journals are playing hardball. Ginsparg points out that American Physical Society journals, including the prominent Physical Review Letters, not only publish articles already posted on the Los Alamos preprint server, but even provide the electronic connections for authors to submit to the journals at the click of a button.

    Elsevier's own journals will publish articles that appear first on ChemWeb. Indeed, Elsevier—which is ACS's chief competitor in the chemistry journal publishing business—may be counting on ChemWeb to give its journals an edge among some chemists. Elsevier officials may be hoping that researchers interested in distributing results quickly will then send their articles to Elsevier journals, says Bovenschulte. For Elsevier, he says, “this could be considered a cost of attracting the best authors.”

    Whatever the motivation, chemistry preprints are long overdue, says R. Stephen Berry, a chemist at the University of Chicago. The culture among chemists—with their history of close ties to industry—is more conservative than that among physicists, says Berry. Still, Berry believes that chemistry preprints have a shot. “We just have to wait and see if it works,” he says. “But this is the kind of experiment we should be doing.”


    Possible New Way to Lower Cholesterol

    1. Dan Ferber*
    1. Dan Ferber is a writer in Urbana, Illinois.

    Clinicians may soon be able to mount a multipronged attack against cholesterol, the artery-clogging lipid whose buildup in the body is a major contributor to heart attacks and other cardiovascular diseases. Millions of people take drugs that lower cholesterol levels by blocking the body from making it. But we also consume the lipid in our diet, and today's drugs don't do much to keep our body from taking it in; nor do they take advantage of our body's ways of getting rid of excess cholesterol. New results could change that.

    In work reported on page 1524, a team led by molecular pharmacologist David Mangelsdorf of the University of Texas Southwestern Medical Center in Dallas has pinpointed a biological master switch in mice that controls three pathways that work together to both rid the body of excess cholesterol and prevent its absorption from the intestine. “This is a real tour de force,” says Steve Kliewer, senior research investigator at Glaxo Wellcome Inc. in Research Triangle Park, North Carolina. “It's exciting because it suggests an entirely new mechanism for reducing cholesterol.” This might be done, for example, with drugs that turn up the activity of the master switch, a protein known as the retinoid X receptor (RXR).

    Three ways to go.

    The drug LG268 fosters cholesterol elimination from the body by stimulating ABC1-mediated export of the lipid from macrophages and intestinal cells and also by inhibiting CYP7A1, a key enzyme needed for bile acid formation by liver cells.


    The findings are a serendipitous outgrowth of previous test tube experiments by several groups showing that RXR teams up with any of several other proteins to turn on genes involved in cholesterol metabolism. For example, the Texas team found 3 years ago that RXR and a protein called the liver X receptor (LXR) work together to activate genes whose protein products are needed in the liver to break down cholesterol to bile acids, which are then excreted into the gut. This suggested that drugs that boost the activity of LXR might help the body rid itself of cholesterol.

    To test this idea, postdoc Joyce Repa turned to a drug called LG268, which is a so-called rexinoid. These drugs bind to, and activate, RXR, which then teams up with its partner proteins, including LXR. Thus, the researchers expected that LG268 would boost LXR activity and stimulate bile acid formation.

    To test that expectation in mice, Repa gave the drug to animals fed a high-cholesterol diet, which would ordinarily cause cholesterol accumulation in the liver. Sure enough, LG268 reduced these high liver cholesterol levels. But the researchers got a surprise when they conducted a second test. They redid the experiments on mice that cannot make LXR, expecting to see cholesterol pile up in the liver. Instead, the cholesterol content of the animals' livers plummeted. “We couldn't figure out why that was happening,” Mangelsdorf says.

    Further tests pointed to the explanation: Rather than speeding cholesterol breakdown to bile acids, LG268 exerts a powerful block on cholesterol absorption from the gut. At first, the researchers had no idea how the drug does this. They tested its effects on about 100 different genes involved in various aspects of lipid metabolism, but the experiments came up empty. Then, about a year ago, a clue appeared.

    Other researchers discovered that people with Tangier disease, a rare hereditary condition that causes high blood cholesterol concentrations and severe atherosclerosis, have a defect in a protein called ABC1. They also have very low levels of high-density lipoprotein, which helps rid the body of cholesterol by carrying it back to the liver, the organ where most cholesterol breakdown occurs. “It was just like a light went on,” Mangelsdorf recalls. “Bingo! Maybe [ABC1] was sitting in the intestinal cell and pumping [the cholesterol] back out” so that it wasn't absorbed into the blood, and LG268 was assisting in that process.

    That's exactly what seems to be happening. The researchers found that LG268 ups production of ABC1 in cells of the intestinal wall, causing the lipid to pass right through the intestine without being absorbed. What's more, the drug turned out to activate cholesterol transport out of immune cells called macrophages. That's important, because cholesterol-laden macrophages help trigger the formation of artery-blocking atherosclerotic plaques. Activating ABC1 might thus help reverse the early steps of plaque formation, Mangelsdorf says.

    The Texas group also found that LG268 stimulates ABC1 production by specifically boosting the activity of RXR-LXR pairs, and it has another surprising effect as well. The drug also boosts the activity of RXR paired with a protein called FXR, a partnership that reduces the production of bile acids by the liver. That should also help inhibit cholesterol absorption, because the bile acids dissolve cholesterol and other lipids in the gut, thus facilitating the absorption of these otherwise water-insoluble materials. Bile acids and cholesterol that fail to be absorbed or reabsorbed by the gut are excreted in the feces.

    Despite the cholesterol-lowering potential of the rexinoids, drug researchers caution that the current drugs may not be usable because of their side effects. For example, a rexinoid derived from LG268 is approved for treating certain types of late-stage cancer and is being tested on others, but it raises levels of lipids called triglycerides in the blood, which could worsen obesity and cardiovascular disease. That may be acceptable for people with late-stage cancer who “have no other choice,” says Vincent Giguère, a molecular biologist at McGill University Health Centre in Montreal. But “side effects become a big issue” for otherwise healthy people who may take cholesterol-lowering drugs for decades. Drugs that target LXR rather than RXR might be safer, because they would activate a smaller group of genes, Giguère suggests. Still, he adds, “these findings augur well for the future of cholesterol-controlling drugs.”


    'Ultimate PC' Would Be a Hot Little Number

    1. Charles Seife

    If gigahertz speeds on a personal computer are still too slow, cheer up. Seth Lloyd, a physicist at the Massachusetts Institute of Technology, has calculated how to make PCs almost unimaginably faster—if you don't mind working on a black hole.

    Lloyd has used the laws of thermodynamics, information, relativity, and quantum mechanics to figure out the ultimate physical limits on the speed of a computer. His calculations show that, in principle, a kilogram of matter in a liter-sized container could be transformed into an “ultimate laptop” more than a trillion trillion trillion times as powerful as today's fastest supercomputer. Although presented in whimsical terms, other scientists say Lloyd's work marks a victory for those striving to figure out the laws of physics by investigating how nature deals with information.


    At 10^51 operations per second, Seth Lloyd's black-hole laptop would be the last word in computing.


    “It's incredibly interesting—bold,” says Raymond Laflamme, a physicist at the Los Alamos National Laboratory in New Mexico. In addition to its theoretical importance, Laflamme says, the study shows what lies ahead. “Right now we are on roller skates. [Lloyd] says, ‘Let's get on a rocket.’”

    Lloyd's unconventional calculations are based on the links between information theory and the laws of thermodynamics, specifically entropy, a measure of the disorder of a system. Imagine dumping four balls into a box divided into four compartments. Roughly speaking, entropy is a measure of the probabilities of how the balls can land. “Ordered” outcomes (such as all four balls landing in a single compartment) are rare and have low entropy, while “disordered” outcomes (such as two balls in one compartment and a single ball in each of two others) are more common and have higher entropy.

    In 1948, Bell Labs scientist Claude Shannon realized that the thermodynamic principle of entropy could also apply in the realm of computers and information. In a sense, a system such as a box with balls in it or a container full of gas molecules can act like a computer, and the entropy is related to the amount of information that the “computer” can store. For instance, if you take your box and label the four compartments “00,” “01,” “10,” and “11,” then each ball can store two bits' worth of information. The total amount of information that a physical system can store is related to entropy.
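
    In standard notation (the article leaves the formulas implicit), Shannon's correspondence can be written compactly; the relations below are the textbook definitions, restated for the four-compartment box, with $k_B$ denoting Boltzmann's constant:

    \[
    S = k_B \ln W, \qquad I = \log_2 W \ \text{bits},
    \]
    so a single ball free to land in any of $W = 4$ labeled compartments stores $I = \log_2 4 = 2$ bits, and a system's storable information grows with its entropy.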

    In the 31 August issue of Nature, Lloyd uses this principle to show that a 1-kilogram, 1-liter laptop could store and process 10^31 bits of information. (A nice-sized hard drive holds about 10^11 bits.) Then he figures out how quickly it could manipulate those bits, invoking Heisenberg's Uncertainty Principle, which implies that the more energy a system has available, the faster it can flip bits. Lloyd's ultimate laptop would convert all of its 1-kilogram mass into energy via Einstein's famous equation E = mc^2, thus turning itself into a billion-degree blob of plasma. “This would present a packaging problem,” Lloyd admits with a laugh. The computer would then be capable of performing 10^51 operations per second, leaving in the dust today's planned peak performer of 10^13 operations per second.
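
    A back-of-the-envelope check of that figure, assuming (the article does not spell this out) that the speed limit Lloyd invokes is the standard Margolus-Levitin bound of roughly $2E/\pi\hbar$ operations per second for a system with total energy $E$:

    \[
    E = mc^2 \approx (1\ \mathrm{kg})\,(3\times 10^{8}\ \mathrm{m/s})^2 \approx 9\times 10^{16}\ \mathrm{J},
    \qquad
    \frac{2E}{\pi\hbar} \approx \frac{2\,(9\times 10^{16}\ \mathrm{J})}{\pi\,(1.05\times 10^{-34}\ \mathrm{J\,s})} \approx 5\times 10^{50}\ \mathrm{s}^{-1},
    \]
    consistent with the quoted 10^51 operations per second.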

    But processing speed is only half of the story. If you really want to speed up your computer, Lloyd says, you must also slash the time it takes to communicate with itself—that is, to transfer information back and forth. The trick, he says, is to squeeze the computer down to the most compact possible size. Lloyd shows that a computer made of the most compressed matter in the universe—a black hole—would calculate as fast as a plasma computer. It would also communicate in precisely the same time that it takes to flip a bit—the hallmark of the ideal computer. Coincidence? Perhaps not, Lloyd says: “Something really deep might be going on.”

    At present, scientists have no idea how to turn a laptop into a black hole (Windows 98 jokes aside). But Laflamme says that just thinking about such extreme scenarios might illuminate deep physical mysteries such as black holes. “It's not just what insight physics brings to information theory, but what information theory brings to physics,” he says. “I hope that, in the next 10 or 15 years, a lot of insight into physics will be due to quantum computing.”


    Neutron Stars Imply Relativity's a Drag

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    Matter warps space; space guides matter. That, in a nutshell, is Einstein's general theory of relativity. Now three astronomers in Amsterdam may have confirmed a much subtler prediction of Einstein's: warped space-time with a twist.

    The general theory explains how the sun's gravity curves the surrounding space (actually space-time), bending nearby light waves and altering the orbit of Mercury. The new finding, based on x-rays from distant neutron stars, could be the first clear evidence of a weird relativistic effect called frame dragging, in which a heavy chunk of spinning matter wrenches the space-time around it like an eggbeater. “This is an extremely interesting and beautiful discovery,” says Luigi Stella of the Astronomical Observatory in Rome, Italy.

    Peter Jonker of the University of Amsterdam, the Netherlands, and his colleagues Mariano Méndez (now at the La Plata Observatory in Argentina) and Michiel van der Klis announced their results in the 1 September issue of The Astrophysical Journal. To describe such exotic behavior of space-time, Jonker goes beyond the astrophysicist's standard image of a bowling ball resting on a stiff sheet.

    “Frame dragging is comparable to what happens when you cover the ball with Velcro and rotate it,” Jonker says. The effect occurs only in the immediate neighborhood of very massive, swiftly rotating bodies. To study it, astronomers have to observe distant neutron stars—the extremely compact leftovers of supernova explosions, whose near-surface gravity is so strong that they make ideal test-beds for general relativity.

    Using data from NASA's Rossi X-ray Timing Explorer, Jonker and his colleagues found circumstantial evidence for frame dragging in the flickering of three neutron stars in binary systems. The flickering spans a wide range of x-ray frequencies. According to theoretician Frederick Lamb of the University of Illinois, Urbana-Champaign, the most prominent “quasi-periodic oscillations” probably come from orbiting gas that a neutron star tears off its normal-star companion. The hot gas accretes into a whirling disk and gives off x-rays as it spirals toward the neutron star's surface at almost the speed of light.

    The new evidence comes in the form of less prominent peaks close to one of the main frequency peaks. These so-called sidebands showed up only after the researchers carefully combined almost 5 years' worth of data. The Amsterdam astronomers say the peaks could be due to frame dragging, which would cause the accretion disk to wobble like a Frisbee. The wobble frequency would imprint itself on the main frequency peak, just as amplitude modulations do on the carrier wave of a radio broadcast.
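
    The radio analogy can be made concrete with the standard amplitude-modulation identity (a textbook relation, not something taken from the paper): modulating a carrier at frequency $f_c$ with a slower wobble at frequency $f_m$ splits power into sidebands offset by exactly $f_m$:

    \[
    [1 + m\cos(2\pi f_m t)]\cos(2\pi f_c t)
    = \cos(2\pi f_c t) + \tfrac{m}{2}\cos\!\big(2\pi (f_c + f_m) t\big) + \tfrac{m}{2}\cos\!\big(2\pi (f_c - f_m) t\big),
    \]
    so a disk wobbling at the frame-dragging precession frequency would stamp sidebands on the main quasi-periodic oscillation at that same offset.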

    Some physicists, however, are unconvinced. Lamb says calculations done with his Illinois colleague, Draza Markovic, show that the frequency separation between the main signal and the sidebands is probably too large for the sidebands to have been caused by frame dragging. A similar false alarm occurred 3 years ago, he says, when Stella and Mario Vietri of the Third University of Rome cited a low-frequency, 60-hertz x-ray flicker in a couple of neutron stars as evidence of frame dragging (Science, 7 November 1997, p. 1012). The frequency of that earlier flicker clashed with theoretical calculations by Lamb's group and others. Lamb suspects that the flicker arises from a neutron star's intense magnetic field interacting with the accretion disk. Although the sidebands aren't as far out of step with theory, he says, “it's unlikely that [they] are produced by frame dragging.”

    Even so, the sidebands are “a very important result,” Lamb says. “The discovery of sidebands is a real breakthrough, regardless of what causes them. This may be the key to unlocking what is generating the main oscillations.” They may also provide information on the mass, the radius, and the physical makeup of neutron stars.

    But Stella says frame dragging can't be so lightly dismissed. Taken as a whole, he says, the sidebands and his earlier evidence “fall together in a very nice fashion. The frequency differences pose no problem at all.” Indeed, in a paper submitted to The Astrophysical Journal, Dimitrios Psaltis of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, presents a model of a relativistically oscillating disk that overcomes the frequency problem.

    The Amsterdam astronomers hope to use the Rossi satellite to study the neutron stars in more detail and look for sidebands in other sources. If the sidebands are indeed caused by frame dragging, Van der Klis explains, their frequency should shift along with that of the main oscillation in a specific way that will provide a decisive test of the hypothesis. “In principle,” he says, “these kinds of observations could prove Einstein right or wrong.”


    Forest Fire Plan Kindles Debate

    1. John S. MacNeil

    Forest fires burning in the western United States have already scorched over 2.5 million hectares this summer. Now a federal proposal to prevent them by paying loggers to cut smaller trees is generating heat among ecologists, who say the approach may not be right for all forests—or all fires.

    Leaders of western states have sharply criticized the Clinton Administration for not doing enough to prevent the blazes, the worst in nearly a century. They say that recent policies, including suppressing wildfires and logging only mature trees, have allowed western forests to grow unnaturally dense with young trees and made them more vulnerable to fire. Reacting to that criticism, the Administration said last week that it will soon release a plan to dramatically expand an experimental approach to fire prevention that emphasizes aggressive cutting of smaller trees. Although officials of the Interior and Agriculture departments are still working out the plan's details, it is expected to include paying loggers nearly $825 million a year to remove trees too small to be commercially valuable from 16 million hectares of western forests.

    The plan draws heavily from insights into fire control on federally managed lands made by ecologist Wallace Covington of the Ecological Restoration Institute at Northern Arizona University in Flagstaff. In one case, for example, the Forest Service paid professional loggers to remove 90% of the trees from a 36-hectare swath of low-altitude ponderosa pine in the Kaibab National Forest near Flagstaff. When a wildfire unexpectedly swept through the area last June, it burned the sparsely populated stand far less severely than the denser surrounding forest.

    Pete Fulé, a member of Covington's team, says that drastic thinning of the plot is the reason. With less fuel, the flames could no longer leap from treetop to treetop, he says, and when the fire spread along the ground it ignited only the underbrush. Mechanical cutting is necessary, Fulé says, because thinning forests with controlled burns “has not proven effective, at least in many instances.”

    But environmentalists say the widespread logging would harm forests, not help them. And some scientists say other combinations of cuts and burns may achieve the same results with less disruption. Covington's approach “doesn't use as wide an array of possible tools as we're using,” says Phil Weatherspoon of the Forest Service's Pacific Southwest Research Station in Redding, California. He is involved in an 11-site project that is examining various fire prevention schemes, from mechanical cutting alone to just prescriptive burns. Forest managers, he says, should get data on the potential costs and ecological consequences of various approaches before proceeding.

    Heavy thinning also may not address other causes of the recent fires, says Bill Baker, a geographer at the University of Wyoming in Laramie. Before settlers began grazing livestock in western forests, he notes, grasses competed with the young trees that now clog the landscape. “What's missing [from Covington's approach] is an emphasis on restoring grasses,” says Baker. “Without it I don't think it's going to work.” And Tom Swetnam, an ecologist at the University of Arizona in Tucson, thinks hot, dry weather brought on by La Niña climate patterns may have contributed to the severity of this year's fires—not just the accumulation of combustible young trees. As a result, he says, “there is some danger that [Covington's model] might be overextrapolated in the West.”

    Covington and his supporters agree that it would be a mistake to treat all forests the same. “We've got a score of forests, all of which burn differently,” says Steve Pyne, an environmental historian at the University of Arizona who is involved with Covington's project. But Pyne defends the Arizona site as representative of a common western ecosystem. “I think we understand why [ponderosa pine forests] are burning and what to do about it,” says Pyne.

    Despite their disagreements, both sides say that federal officials need to do more to prevent future wildfires. “The problem is not that we're doing too much, but that we're not doing enough,” says Craig Allen, an ecologist with the U.S. Geological Survey in Los Alamos, New Mexico. The challenge is to come up with a plan flexible enough to fit all the nation's hot spots.


    Homegrown Quartz Muddies the Water

    1. Erik Stokstad

    Next to volcanoes or earthquakes, mudstones are hardly a glamorous subject for geologists. But these widespread strata are an important source of hydrocarbons that migrate into petroleum deposits, and they can reveal much about Earth's history—if they are read correctly. Now a team of geologists has found that a telling feature of many mudstones may have been misinterpreted, throwing into question conclusions about everything from climate to ocean currents.

    Mudstone consists mostly of clay, washed from the land to the sea. It also contains fine grains of quartz. The size and distribution of these grains can reveal how far they traveled from shore, the strength of the currents that carried them, or even whether they took an airborne journey from a desert. Such inferences assume that quartz silt, like the clay, came from the continents. However, Jürgen Schieber of the University of Texas, Arlington, and his colleagues show in this week's issue of Nature that in some mudstones, most if not all of the quartz silt may have formed in place, probably from the dissolved remains of silica-bearing organisms.

    If this kind of homegrown, or authigenic, quartz silt is common, geologists may need to reexamine some of their reconstructions of past environments, including climate. A new “silica sink” could also affect the calculations of how much dissolved silica drifts between mudstone and sandstone. This migration is a prime concern of petroleum geologists, because silica can plug up the pores in rock that might otherwise hold oil. The finding “makes life more complicated,” says Kitty Milliken, a geologist at the University of Texas, Austin, who studies mudstones, “but it gives us the tools to be clear and figure it out.”

    The main evidence for the local origin of quartz silt comes from an analogy with authigenic quartz sand that Schieber observed several years ago. The quartz had precipitated inside sand-sized, hollow algal cysts—tough, protective bodies that algae commonly form when they reproduce. These cysts had been partially compressed by overlying sediment, leaving them with characteristic dents and projections. The same shapes turned up in quartz silt when Schieber and Dave Krinsley of the University of Texas and the University of Oregon examined slices of late Devonian (370-million-year-old) laminated mudstone, called black shales, from the eastern United States. The grains have concentric rings that look as if they were precipitated sequentially. Bordering the quartz grains are amber-colored rims that resemble the walls of algal cysts. Taken together, these characteristics distinguish authigenic from continental quartz, Schieber says.

    To double-check the diagnosis of authigenesis, Schieber and Lee Riciputi of Oak Ridge National Laboratory in Oak Ridge, Tennessee, focused an ion microprobe at quartz silt in the shale samples. Quartz silt they had pegged as authigenic from its appearance had oxygen isotope values typical of other kinds of quartz precipitated at low temperatures—and three times higher than that of quartz silt that was not homegrown. They knew that this “imported” quartz had come from metamorphic rocks in distant mountains, because it has a mottled texture typical of metamorphic quartz.

    What's most surprising, experts say, is the amount of authigenic quartz in these shales. In some samples, Schieber found that all the silt had grown in place. By volume, the authigenic silt may make up 40% of the shale. The presence of so much homegrown silt may have skewed geological interpretations of mudstone, Schieber says. Mistaking authigenic quartz silt for windborne silt, for example, might lead one to postulate desertlike conditions on land, when in fact the climate may not have been particularly dry. Authigenic quartz could also make it hard to estimate distance from the ancient shore, especially in broad expanses of mudstone that accumulated slowly, such as the late Devonian shales of North America.

    How important these findings are depends in part on whether other times and places typically produced shales similarly rich in homegrown quartz. Lee Kump, a geochemist at Pennsylvania State University, University Park, points out that algal cysts tend to be most abundant during particular periods, such as times of stressful environmental conditions, so fewer of these hosts may be deposited in mudstone during happy times. Schieber believes that quartz grains might form in other fossil pores or the spaces between particles. In any case, he's already shown that the truth behind even the most ordinary rocks can be clear as mud.


    Physicists Glimpse How Quasicrystals Boogie

    1. Mark Sincell*
    1. Mark Sincell is a science writer in Houston.

    If you have ever tapped a fine wineglass with a fork, you know crystals sing. Now, scientists have proved that quasicrystals, the slightly unpredictable cousins of crystals, can also dance. A new series of rapid-fire photographs has finally captured the expected do-si-do of atoms in the changing latticework of a quasicrystal. Although scientists had observed defects in quasicrystalline structures left behind by the flip-flops, called phasons, this is the first time anyone has spotted a real phason in action.

    Unlike humans, molecules shiver less when they get cold. And as the molecules chill out, they are more amenable to bonding with their neighbors. The usual result is a crystal—a periodic pattern of identical clusters of atoms, in which every distance is an exact multiple of the size of the fundamental atomic cluster. It is an elegant picture, and for more than 150 years scientists believed that crystallization was the inevitable result of dropping temperatures.

    Rhombus rumpus.

    Phasons tear through a quasicrystal, shifting the irregular latticework shown in this electron micrograph.


    They were wrong. In 1985, Danny Schectman of the Technion-Israel Institute of Technology in Haifa, Israel, discovered an aluminum alloy that cools to form a stable quasi-periodic structure that never exactly repeats. He called the structure a quasicrystal. In contrast to crystals, a quasicrystal has two length scales, says physicist Michael Widom of Carnegie Mellon University in Pittsburgh, Pennsylvania. Some quasicrystals, for example, mix two distinct three-dimensional structures, one hexagonal, the other pentagonal.

    Quasicrystals know how to jump and jive. If you pluck one of the wires of a regular crystal, a vibration called a phonon hums through the entire crystal. The single crystalline length scale implies that the phonon is the only possible distortion of the crystal. Extending the connection between length scales and distortions to quasicrystals, theorists predicted that quasicrystals support an extra kind of oscillation called a phason. Phasons rearrange the quasicrystal structures by making individual atoms jump as much as a few angstroms. But no one had ever seen the wiggles caused by a passing phason.

    Now, physicist Keiichi Edagawa and his collaborators at the University of Tokyo have for the first time used a high-resolution electron-tunneling microscope to capture the metamorphosis of a quasicrystal on film. They first heated an aluminum-copper-cobalt mixture to 1173 degrees Celsius, then cooled it to room temperature to form a quasicrystal of interlocking hexagonal and pentagonal rhombi. A series of photographs revealed a column of atoms jumping approximately 1 nanometer, the team reports in the 21 August Physical Review Letters. The jump changes a hexagonal rhombus to a pentagonal one and makes an adjacent pentagonal rhombus become hexagonal. Within minutes, the column jumps back and flips the rhombi back to the original configuration.

    “This is a breakthrough, because we can now see the dynamical effects of phasons,” says physicist Paul Steinhardt of Princeton University. But it leaves an important question unanswered: Why do quasicrystals form? Most scientists believe that quasicrystals are the lowest available energy state, so cooling molecules must eventually settle into that state, just as a marble must roll to the bottom of a bowl. Widom, on the other hand, supports the so-called “entropy model” that says quasicrystals continuously flip through a nearly infinite number of equally likely and constantly changing configurations. The new imaging technique may help scientists decide between the two.


    Tracking the Human Fallout From 'Mad Cow Disease'

    1. Michael Balter

    An Edinburgh task force studies cases of variant Creutzfeldt-Jakob disease, trying to find out just how the patients got infected and how many of them there may ultimately be

    EDINBURGH, SCOTLAND— When neurologist Andrea Lowman is called in on a case, the news is seldom good. The patient she had come to see earlier this summer was no exception. A young woman in her early 20s had been admitted to a hospital in England after her speech became increasingly slurred and she began having difficulty walking. By the time Lowman examined her, she was almost totally incoherent, her body jerked with involuntary movements, and she was suffering from ataxia, a loss of motor coordination.

    After looking over the young woman's medical charts and talking with her parents—who were keeping a sorrowful vigil by their daughter's bedside—Lowman confirmed the preliminary diagnosis the woman's own physician had arrived at: Creutzfeldt-Jakob disease (CJD), an incurable malady of the brain and nervous system. Moreover, because of the patient's youth and the pattern of her symptoms, Lowman suspected that she was suffering from a new form of the affliction—called variant CJD (vCJD)—which has been linked to eating beef or other products from cattle infected with bovine spongiform encephalopathy (BSE), or “mad cow disease.”

    Two or three times each week, Lowman travels from her office at the National CJD Surveillance Unit in Edinburgh to visit another suspected victim of CJD. U.K. health authorities created the unit in May 1990 in the wake of the BSE epidemic, which erupted in the mid-1980s and affected thousands of cattle each year for more than a decade. BSE had been linked to an abnormal, apparently infectious protein called a prion, which may have entered the bovine food chain when ground-up carcasses of prion-infected sheep were included in animal feed. And despite the insistence at the time by agricultural officials and farm industry organizations that British beef was safe, health experts were worried that the disease might spread to humans—a nightmarish possibility that came true in 1996 when the surveillance unit reported the first cases of vCJD.

    In the years since, the unit has continued to study the vCJD epidemic closely, looking for clues about exactly how the disease was transmitted to humans. On her travels across the United Kingdom, for example, Lowman is accompanied by a research nurse, who asks the patients' families detailed questions about what their relatives ate, down to the brand of baby food they consumed. This job has only increased in importance as the death toll continues to climb. During the past few weeks, the team's work has been making new headlines. In the 5 August issue of The Lancet, the researchers, along with other U.K. collaborators, reported for the first time that they are seeing a real increase in vCJD incidence, amounting to a 23% annual rise between 1994 and the present.

    The number of confirmed or probable vCJD cases in the United Kingdom is still relatively small—a total of 80 as Science went to press—but “this is the first time we have had good statistical evidence of an upward trend,” says neurologist Robert Will, the surveillance unit's director. Where that upward trend will ultimately lead is, however, highly uncertain. A new estimate by epidemiologist Roy Anderson's team at Oxford University, published in the 10 August issue of Nature, now puts the maximum number at 136,000, far less than their previous estimate of 500,000—and, the authors note, the actual toll could turn out to be much lower.

    Equally unclear is the exact source of those infections. Although most scientists believe that human consumption of BSE-contaminated meat products is the most likely explanation for the rise of vCJD, they are still unsure about which products were responsible. Some researchers are now hoping that an unusual “cluster” of five vCJD cases centered on the Leicestershire County town of Queniborough, which is currently under intense scrutiny by epidemiologists, will provide some answers. Knowing what kinds of food products were infected “might be important for correctly modeling the epidemic and knowing how many cases to expect,” says Philip Monk, the county's public health consultant.

    Watching and waiting

    When the Edinburgh team, which is funded by the U.K. Department of Health and Scotland's Executive Health Department, was formed, there were as yet no signs that BSE had infected humans. But health experts had good reason to be concerned. They already knew that BSE-infected cattle had been slaughtered for food—indeed, some 750,000 infected animals eventually entered the human food chain. And research during the previous decade had strongly implicated prions in some human neurodegenerative diseases such as kuru, a CJD-like disease discovered in the Fore people of New Guinea and thought to be transmitted directly or indirectly through cannibalism.

    The government asked Will, one of the United Kingdom's leading experts on CJD, to head the new unit. He recruited James Ironside, a highly respected neuropathologist, to join him, and together with a small team of staff and consultants the pair set about monitoring every case of CJD or CJD-like symptoms in the country. “The aim was to look at the incidence and pathological features of CJD in the U.K.,” says Ironside. “We wanted to see if anything was changing that might be attributable to BSE. But at that stage we had no idea of what we might be looking for—an increase in typical cases, a different type of disease, or nothing at all.”

    For 5 long years the team watched and waited, logging in more than 200 cases of CJD. But every case turned out to be a previously recognized variety of the disease. Most were the so-called “sporadic” form, which has no known cause and usually appears in older patients. Then, in late 1995, the vigilance paid off. From the nationwide network of neurologists and pathologists Will and Ironside had organized, they learned that two teenagers had been diagnosed with CJD, followed soon afterward by a case of CJD in a 29-year-old patient.

    These cases were striking for a number of reasons. The patients were unusually young. They showed an atypical clinical pattern, including psychiatric symptoms and ataxia very early in the course of the disease. And microscopic examination of their brain tissue revealed that it was studded with clumped deposits of prion protein, called “florid plaques,” reminiscent of those seen in kuru and very distinct from the more diffuse pattern of brain damage usually seen in sporadic CJD.

    By 6 April 1996, when the surveillance unit and its collaborators published this bad news in The Lancet, 10 cases of vCJD had been identified. The onset of a new disease hard on the heels of the BSE epidemic, and at that time restricted to the United Kingdom (although there are now several vCJD cases in France), led the researchers to conclude that infection with BSE was “the most plausible interpretation” of the findings. This view soon received considerable support when researchers at the Institute for Animal Health in Edinburgh reported that the prion strain apparently responsible for vCJD was nearly identical to that identified in cattle infected with BSE.

    Sticking to the data

    The news that humans had likely been infected with BSE hit the United Kingdom like a bombshell. It led to the near-bankruptcy of the British cattle industry and was a key factor in the defeat of the Conservative government, which had generally downplayed the danger from BSE, by the Labor Party in the 1997 parliamentary election. With the media frenzy and occasional public panic swirling around them, Will and his team have painstakingly collected the data needed to shed light on how the epidemic got started and where it may be going. Simon Cousens, a statistician at the London School of Hygiene and Tropical Medicine who collaborates closely with the surveillance unit, describes the team as constantly walking a tightrope between “scare mongering and creating panic, or being accused of covering things up.”

    On the rise.

    A new analysis shows that the vCJD incidence and death rate are going up steadily. (The dotted line is the fitted underlying trend calculated from the actual data points.)

    CREDIT: ANDREWS ET AL., THE LANCET 356 (9228) (5 AUGUST 2000)

    The team has consistently shied away from making predictions about the future course of the epidemic, preferring to stick to the data it already has in hand and taking care not to exaggerate the numbers. So far, says Will, “there are more farmers who have committed suicide because of vCJD than people who have actually been victims of the disease.” The study reported last month in The Lancet, which concludes that the incidence is going up, is based on a statistical reanalysis of existing data, using the date of onset of disease rather than date of death to define when the case occurred. Because some patients live longer than others after diagnosis, this provides a more sensitive indicator of vCJD incidence, says surveillance unit epidemiologist Hester Ward. As for making projections of the eventual case toll, Ward says, “I don't think we will be able to tell the size of the epidemic until we've reached the peak and started coming down.”

    Those researchers bold enough to make projections, such as Anderson's Oxford team, have had to continually adjust their figures. The researchers, who had earlier predicted by mathematical modeling a maximum toll of 500,000 cases, have now capped their estimate at 136,000 over the coming several decades—while emphasizing that the real numbers will probably be much lower.

    In making their predictions, the team assumes that the slaughtering of infected herds and other safeguards have put a stop to new human infections with the BSE prion. And the maximum estimate of 136,000, says Oxford mathematical biologist Neil Ferguson, is based on another assumption: that the incubation period for vCJD—that is, the time between initial prion infection and the development of symptoms—is 60 years or more. But this, he adds, is highly unlikely. “We can't say what the incubation period really is, but it is unheard of that a disease has an incubation period that long,” Ferguson says. A more realistic maximum is likely to be about 10,000 cases.

    Yet, although the number of potential cases might be lower than once feared, researchers remain determined to try to solve the riddles posed by vCJD. In particular, they want to know why the disease occurs almost entirely in younger people—the average age of the victims identified so far is some 30 years less than that for sporadic CJD—and what food products might have transmitted it. So far, the only clue is the finding that vCJD incidence in the northern half of the United Kingdom is about twice that in the south. “We have no explanation for this,” says Ward. However, the team is considering a number of hypotheses, including the possibility that northerners eat more “mechanically recovered meat,” a major ingredient in products such as hotdogs and sausages—and a suspected source of BSE infection because it contains much more nervous-system tissue than would be found in a nicely trimmed steak.

    New hope of getting an answer has been raised by a cluster of five vCJD cases diagnosed over the past few years in people living either in the town of Queniborough or within a 5-kilometer radius of it. Such clusters are the meat and potatoes of epidemiological work, because they provide researchers with the opportunity to identify risk factors common to all the cases. A previous suspected cluster, in Kent County, evaporated when it turned out to be due only to chance. But the cluster in Queniborough—a town of only 3000 people—seems different. “The probability of getting that many cases so close together in that size population by chance is extremely small, about 1 in 500,” says Cousens. “These cases are linked in some way.”
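    The kind of calculation behind such a statement can be illustrated with a naive binomial tail probability, sketched below. This is not the surveillance unit's actual analysis, which among other things must allow for the fact that the cluster was noticed after the fact rather than specified in advance; the case count, local population, and national population in the example are hypothetical placeholders.

```python
# Illustrative only: a naive binomial tail calculation of the kind used to ask
# whether a local disease cluster could plausibly have arisen by chance.
# All inputs are hypothetical placeholders, not the surveillance unit's data.
from math import comb

def prob_at_least_k(n_cases: int, k: int, local_pop: int, national_pop: int) -> float:
    """P(at least k of n_cases fall in an area of local_pop people), assuming each
    case independently lands there with probability local_pop / national_pop."""
    p = local_pop / national_pop
    return sum(comb(n_cases, j) * p**j * (1 - p)**(n_cases - j)
               for j in range(k, n_cases + 1))

# Hypothetical example: 100 cases nationwide, an area of 10,000 people,
# a national population of 59 million.
print(prob_at_least_k(n_cases=100, k=5, local_pop=10_000, national_pop=59_000_000))
```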

    Even so, identifying the source of these infections may be difficult. Although the families of the victims have been given the surveillance unit's standard questionnaire, Will says that “trying to get dietary habits secondhand from relatives is notoriously unreliable. There is a potential for bias in the study. Everyone knows the hypothesis we are testing”—that meat or meat products were responsible. Nevertheless, Monk told Science, he has developed his own hypothesis about the source of infection in the town, which he declines to state publicly at this point to avoid bias in the study. Monk is now testing his hypothesis by asking every parent in Queniborough with children aged 19 to 35 to fill out a new questionnaire about what they fed their offspring between 1975 and 1990, the period during which most exposure to BSE is likely to have taken place. “I am confident that we will find the link between these cases,” he says.

    Will says that although this knowledge would come too late to help victims of vCJD, it could be important to their families, many of whom are worried that the brothers and sisters of their stricken children might have eaten the same products and thus also face a risk of dying from the disease. And this information might help Lowman comfort the distraught family members she sees each week, by convincing them that they could not possibly have known that the food they gave their offspring was infected. “The parents often feel very guilty,” Lowman says. “They are terribly upset that they might have exposed their own children to something that made them ill.”


    How to Produce Better Math and Science Teachers

    1. Jeffrey Mervis

    In two new reports on improving science and math education in the United States, National Research Council panels call on universities and school districts to share responsibility for educating teachers and suggest that new Ph.D.s are an untapped source for high school teachers.

    Schools, Universities Told to Forge Links

    Universities train most of the nation's science and math teachers. But it's the job of local school districts to ensure that they keep up with their field once they enter the classroom. That bifurcated system needs to be ended, says a new report* from the National Research Council (NRC), if the country hopes to improve student performance in math and science. That message is likely to be repeated next month, sources say, when a high-profile commission issues its recommendations on how to improve the quality of the nation's math and science teachers—and puts a price tag on the reforms.

    “Universities have to attract students to their education departments, but after they graduate and find jobs as teachers they are no longer a client of the university,” says panel member Mark Saul, a teacher at Bronxville High School outside New York City and an adjunct professor of mathematics at City College of New York. “And school administrators have to deal with so many noneducational crises that they're happy if the kids are in their seats and there's a licensed teacher in each room. As a result, attention to the actual act of instruction gets lost.”

    The NRC panel says that the best way to improve teacher education is to make it a continuum, with school districts taking more responsibility for the initial preparation of new teachers and university faculty playing a bigger role in ongoing professional development. The change will require both sectors to work together more closely. It also recommends that universities improve the content of undergraduate science and math courses for prospective teachers, model appropriate practices for teaching those subjects, and do more research on the art of teaching and how students learn. In turn, school districts should make better use of teachers who have mastered these skills, giving them more opportunities to share their knowledge with their colleagues and with student teachers.

    Such a partnership already exists in Maryland, notes panelist Martin Johnson, a professor of mathematics education at the University of Maryland, College Park, in the form of four Professional Development Schools (PDSs). PDSs bring together prospective teachers and experienced staff in a formal arrangement that goes beyond both regular student teaching and standard after-school workshops. “In the past, we would send students to a school and they'd be assigned to one teacher,” says Johnson. “We're asking the school to incorporate the student teacher into a broader range of experiences, with input from other faculty members as well as other teachers.”

    Jim Lewis, head of the math department at the University of Nebraska, Lincoln, and co-chair of the NRC committee, compares this approach to training doctors. “Medical students take courses from both research and clinical faculty,” he explains, “and their residencies are overseen by practicing physicians. Likewise, an experienced classroom teacher may be a better mentor [to a prospective teacher] than an education professor who focuses on research.” That shift, says Lewis, will allow research faculty to devote more attention to helping experienced teachers stay on top of their field through advanced courses, summer research projects, and other professional activities.

    The National Science Foundation, which paid $425,000 for the report and two related activities, has already begun to support the types of partnerships the NRC panel calls for. It has asked for $20 million next year to expand a program on university-based Centers for Learning and Teaching with teacher training as one of three primary foci.

    The NRC report also dovetails with the pending recommendations of a blue-ribbon federal commission headed by former U.S. senator and astronaut John Glenn. “I was struck by the amount of overlap,” says Linda Rosen, executive secretary to the commission, whose report is due out on 3 October. “There's a growing sense that we have to break down the barriers between elementary and secondary schools and higher education and bring all the available talent to bear on the problem of math and science teacher education.” Rosen says the commission will flesh out the NRC's findings “by laying out a set of strategies and price tags that makes clear who needs to do what.”

    Although Lewis welcomes the heightened attention on teacher education, he says that reports won't help unless they are backed up by a national consensus that teachers count. “The schools [in Lincoln, Nebraska] start this week, but they'll close early if it gets too hot because they lack air conditioning,” he says. “I'll bet that you work in an air-conditioned building. So why can't teachers? Because we aren't willing to pay what it would cost.”

    Can New Ph.D.s Be Persuaded to Teach?

    U.S. schools will need to hire 20,000 math and science teachers a year for the next decade to handle a growing student population and high rates of retirement, according to government estimates. Where they will come from is anyone's guess, as schools are already having trouble finding qualified people. To help fill the gap, a National Research Council (NRC) committee suggests tapping a talent pool that is relatively underrepresented among teachers: newly minted Ph.D.s.

    In a report* issued last week, the committee says many more recent science Ph.D.s would be willing to teach high school science and math if the government helped with the transition, if the certification process were compressed, and if they could retain ties to research. The committee recommends that the NRC help states with pilot projects that, if successful, could be expanded nationwide. But some educators are skeptical, noting that Ph.D.s may not be properly trained and that the research and teaching cultures are very different.

    “If public schools could place an ad that read: ‘Good salaries, good working conditions, summers off, and tenure after 3 years,’ I think they'd get a good response from graduate students,” says Ronald Morris, a professor of pharmacology at the University of Medicine and Dentistry of New Jersey in Piscataway and chair of the NRC panel, which last summer surveyed 2000 graduate students and postdocs and interviewed professional educators. “But most Ph.D.s don't know about the opportunities, because they are generally far removed from the world of K-12 education.”

    The report notes that while 36% of respondents said they had considered a K-12 teaching job at some point in their training, only 0.8% of the scientific Ph.D. workforce is actually working in the schools. “That's a significant pool of talent that we're ignoring,” says Morris, who acknowledges that none of his 40 postdocs over the years has chosen to go into high school teaching.

    Professional educators, however, warn that several issues must be resolved, including the teaching skills of recent Ph.D.s and how well they would fit into a high school environment. “I think it's a great idea,” says Mike Lach, a high school physics teacher in Chicago who just completed a sabbatical year in Washington, D.C., working on federal legislation to improve math and science teaching (Science, 4 August, p. 713). “But teaching is hard, and those in higher education traditionally don't have much respect for classroom teachers.” Mark Saul, a Ph.D. math teacher in Bronxville, New York, as well as an adjunct professor at City College of New York, puts it this way: “Ph.D.s are a peg with a different shape than the current hole for schoolteachers.”

    Morris agrees that high school teaching isn't appropriate for all Ph.D.s. But he believes that an array of incentives, including federally funded fellowships for retraining and summer research projects, might be just the ticket for those looking for a way out of a tight academic job market.

    • *Educating Teachers of Science, Mathematics, and Technology: New Practices for the New Millennium, 2000.

    • *“Attracting Science and Mathematics Ph.D.s to Secondary School Education,” National Academy Press.


    Transposons Help Sculpt a Dynamic Genome

    1. Anne Simon Moffat

    These mobile elements cause considerable reshaping of the genome, which may contribute to evolutionary adaptability

    More than 50 years ago, geneticist Barbara McClintock rocked the scientific community with her discovery that maize contains mobile genetic elements, bits of DNA that move about the genome, often causing mutations if they happen to land in functioning genes. Her findings were considered so outlandish that they were at first dismissed as anomalies unique to corn. But over the years, transposons, as the mobile elements are called, have proved to be nearly universal. They've turned up in species ranging from bacteria to mammals, where their movements have been linked to a variety of mutations, including some that cause diseases and others that add desirable diversity to genomes (Science, 18 August, p. 1152). Only in the past few years, however, have researchers been able to measure the rate at which transposons alter the composition of genomes, and they are finding that the restructuring they cause is more extensive than previously thought.

    Researchers have known for about 20 years that transposons can expand the genome, resulting in the repetitive DNA sequences sometimes called “junk,” but the new work indicates that transposons can also contribute to substantial DNA losses. What's more, these changes can be rapid—at least on an evolutionary scale. “The level of genomic dynamism is way beyond what was thought,” says geneticist Susan Wessler of the University of Georgia, Athens.

    The rate of transposon-mediated genomic change can vary, however, even among closely related organisms. The findings may thus help explain the so-called “C-value paradox,” the fact that the size of an organism's genome is not correlated with its obvious complexity. Plants, for example, are notorious for having a 1000-fold variation in their genome sizes, ranging from the lean 125-million-base genome of Arabidopsis to the extravagant genome of the ornamental lily Fritillaria, which at 120 billion bases is about 40 times the size of the human genome. There are also hints that the environment can influence transposon activity, which in turn may help an organism adapt to environmental changes.

    Until recently, researchers tended to focus on the stability of the genome over evolutionary time. There is ample evidence, for example, that sequences of many key genes, such as those that determine body plan, are conserved across diverse genera. The discovery, about 10 years ago, of synteny (the finding that many genes remain grouped together in the same relative positions in the genome no matter its size) also suggested that genomes were models of stability. The potential for significant fluidity in the genome was largely ignored until a few years ago when a small number of groups began to take a different perspective, using molecular techniques to probe genomes on a large scale.

    For example, work done 2 years ago by Purdue University molecular biologist Jeffery Bennetzen and Phillip SanMiguel, who is now at the University of California, Irvine, suggests that maize used amplification of retrotransposons, elements that copy themselves with the aid of RNA, to double its genome size from 1.2 billion to 2.4 billion bases 1 million to 3 million years ago—a very short period in evolutionary time. They based this conclusion on their finding that maize carries many more retrotransposons than its close relative, sorghum. The threat of “genomic obesity” was often mentioned. “It's remarkable the genome doesn't explode,” says Bennetzen.

    New work shows that plants have ways of counteracting transposon expansion, however. University of Helsinki retrotransposon specialist Alan Schulman and colleagues at the John Innes Centre in Norwich, U.K., report in the July issue of Genome Research that retrotransposons can also be eliminated from the genome. The most common retrotransposons in plants carry duplicated sequences on each end called long terminal repeats (LTRs), and these can lead to something called intrachromosomal recombination, in which the LTRs temporarily join up and the DNA between them is excised. When this happens, one of the LTRs is left behind. Schulman and his colleagues analyzed the barley genome for these molecular “scars,” and they found a lot of them, indicating that many transposons had been lost. In a commentary in the same Genome Research issue, molecular biologist Pablo Rabinowicz of Cold Spring Harbor Laboratory in New York says these results suggest that “recombination between LTRs is an efficient way to counteract retrotransposon expansion, at least among certain grasses.” He cautions, however, that it's not clear how widespread the phenomenon is.

    Evolutionary biologist Dmitri Petrov, first as a graduate student in the Harvard lab of Daniel Hartl and, most recently, at Stanford University, has also found evidence of significant genome fluidity in insects. In work begun in the mid-1990s, Petrov and his colleagues used the Helena group of transposons from Drosophila virilis and other fruit fly species as tools for studying genomic juggling. By monitoring sequence changes in Helena transposons in eight Drosophila species, the researchers learned that copies of this element lose DNA at a high rate—20 times faster than in mammals.
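    In outline, such estimates come from comparing "dead" transposon copies with an inferred ancestral sequence and counting how many bases have been lost to deletions for every point substitution, with substitutions serving as a rough molecular clock. The sketch below is a simplified illustration of that idea, not the actual pipeline used by Petrov and his colleagues, and the toy alignment is invented.

```python
# Simplified sketch (not Petrov and colleagues' actual pipeline) of estimating a
# DNA-loss rate from "dead" transposon copies: align each copy to an inferred
# ancestral element, then express bases lost to deletions per point substitution.
def loss_per_substitution(ancestor: str, copy: str) -> float:
    """Both sequences are pre-aligned, with '-' marking gaps.
    Returns bases deleted per substitution for this copy."""
    deleted = substitutions = 0
    for a, c in zip(ancestor, copy):
        if a != '-' and c == '-':
            deleted += 1          # base present in the ancestor, lost in the copy
        elif a != '-' and c != '-' and a != c:
            substitutions += 1    # point substitution, used as the "clock"
    return deleted / substitutions if substitutions else float('nan')

# Invented toy alignment, for illustration only.
ancestor = "ACGTACGTACGTACGT"
copy     = "ACGTAC--ACGAACGT"  # one 2-base deletion, one substitution (T -> A)
print(loss_per_substitution(ancestor, copy))  # -> 2.0 bases lost per substitution
```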

    Petrov does not know what causes the shrinkage, although he suggests that it might be due to spontaneous mutations or errors in copying the DNA. But whatever the cause, he says, “I was extremely surprised by the Drosophila data. I thought the rate [of genome loss] would be the same as for mammals.” That wasn't the only surprise, however.

    Last February, Petrov, J. Spencer Johnston, an entomologist at Texas A&M University in College Station, and Harvard colleagues showed that Hawaiian crickets (Laupala) lose DNA more than 40 times more slowly than Drosophila does, even though the two insect species are closely related (Science, 11 February, p. 1060). In this work, the researchers used the same analytic technique with a different transposon, Lau1, in nine Laupala species. Because the Laupala genome is 11 times larger than that of Drosophila, Petrov hypothesizes that its slow loss of DNA may account for its bulk. He is now testing whether that idea holds up by measuring the rate of DNA loss in various insects, including flies, ants, butterflies, mosquitoes, damselflies, and grasshoppers.

    The big question mark, however, is what does all this genomic restructuring do for the organism? A small genome may be helpful because it can replicate faster, resulting in a faster cell cycle and shorter generation time. But work reported in the 5 June issue of the Proceedings of the National Academy of Sciences by Schulman, along with colleagues at the Agricultural Research Centre in Jokioinen, Finland, and the University of Haifa in Israel, suggests that large genomes may have their own advantages.

    The researchers collected specimens of the wild ancestor of cultivated barley from various microclimates in “Evolution Canyon,” Mount Carmel, Israel. When they then looked at the plants' content of a particular type of retrotransposon, called BARE-1, they found that it is up to three times more abundant in barley plants growing at the canyon rim than in those grown near the bottom of the canyon. Their evidence suggests that this may be because plants at higher elevations lose their transposons more slowly than plants farther down. The fact that plants at the top of the canyon both gain more copies and lose fewer suggests, Schulman says, that the elements may confer some advantage.

    He and his colleagues speculate that a larger genome, achieved through the ample presence of retrotransposons, may help plants deal with the more stressful high and dry areas of the canyon, for example, by influencing the physiological machinery that enables plants to seek or retain water.

    Consistent with this idea, Stanford University plant scientist Virginia Walbot showed last year that shorter wavelength ultraviolet light can activate a particular Mutator transposon in maize pollen, a result that suggests that sunlight, likely more plentiful at higher elevations, may also be an environmental force involved in genomic restructuring. That remains to be demonstrated, but plant scientists say that Schulman's identification of the BARE-1 element, numerous copies of which exist in the barley genome, as an agent of genomic restructuring opens the way for a new level of experimental studies.

    One possibility is to test whether plants with more elements are able to thrive in more stressful conditions. Another is to see whether transcription of the BARE-1 element changes under different environmental conditions. Georgia's Wessler says there is now “a clean molecular system to get at the important questions.” The results that come from such studies of BARE-1, and other mobile genetic elements, should help to explain how and why some plants and animals have come to have genomes of extraordinary size, often much larger than that of humans.


    A Ruckus Over Releasing Images of the Human Brain

    1. Eliot Marshall

    A plan to have brain scientists deposit data in a public center at Dartmouth has drawn a flurry of objections; researchers are drafting data-sharing principles

    For most of this summer, leading brain researchers have been fuming over a plan to force them to share raw data. They became upset when Michael Gazzaniga, a psychologist at Dartmouth College in Hanover, New Hampshire, told researchers publishing functional magnetic resonance images of the brain in the journal he edits—the Journal of Cognitive Neuroscience (JCN)—that they are expected to submit their raw data to a public database he is developing at Dartmouth. They became more agitated when a representative of the Dartmouth database implied that JCN may not act alone: Other editors, he told a meeting of brain mappers, would also insist that authors submit their raw data to Dartmouth.

    Those events touched off a rebellion. Galvanized by the Dartmouth project, brain scientists have spent the past 10 weeks e-mailing one another and organizing detailed responses. They complain that the Dartmouth archive—which is getting under way this fall—is not ready for prime time. They warn that if the project goes forward as planned, it could compromise the privacy of research subjects, get tangled up in technical knots, and rob authors of the credit they deserve. But even as they rattle off these complaints, a few brain scientists also concede that Gazzaniga's preemptive move may have done some good: It has got everyone talking about how to build a public database that really works. Such a database would be useful for combining results from different studies.

    Last month, the Organization for Human Brain Mapping (OHBM)—a coalition of scientists around the world interested in imaging the brain—responded to the commotion by establishing a task force under the leadership of Jonathan Cohen, a psychologist at Princeton University. His task: Elicit a consensus and draw up a set of data-sharing “guidelines” supported by the entire field. This will be their response to the Dartmouth initiative, laying down ground rules for cooperation. “For the journals,” says Cohen, “we want a list of things they might want to consider before they decide to endorse any database.” For authors, the panel will try to establish guidelines on such incendiary issues as how long it's reasonable to withhold data. Cohen plans to have a draft ready for review by the OHBM executive council in “late October,” before the Society for Neuroscience meeting in November.

    Many leaders in the brain-imaging community say the task force will have a tough job finding an approach to data-sharing that people can agree on. The complexities of reporting experimental results from brain scans, they note, are greater than in fields such as genome sequencing and crystallography, where the experimental protocols are standardized and the data are far more concrete. Many feel that the Dartmouth group doesn't appreciate these difficulties. According to one prominent leader who requested anonymity: “It was a political tour de force that they got the money [to establish the database],” but “they're totally clueless about what they're up against. Hopefully, they're learning.”

    The scientists who started the rumpus seem to be taking the flak in stride. Gazzaniga, a founder of the Cognitive Neuroscience Society and reputed by peers to be a scientific impresario and skilled fund-raiser, says: “I actually was blindsided by this whole thing. I was talking to people who think this is a great idea and were trying to help make it work. Then, bingo, we get the other side.” Although he has recently softened his demand for immediate data release, he says friends have advised him that the backlash he's seeing is normal: “People yell and scream and demand a hold on the data,” he says, and “I understand their concerns. … There will be a few bumps and noises, and then it will smooth out.”

    Marcus Raichle, a brain-imaging researcher at Washington University in St. Louis and chair of Gazzaniga's database advisory board, adds that the government “has provided the money for us to generate this valuable data, and it ought to be used in the most efficacious way. … If the people doing the human genome and chemists and others do this kind of databasing, we should be doing it as well.”

    Build it, but who will come?

    The Dartmouth project began, Gazzaniga says, when he seized an opportunity to fund an old idea. The notion of creating a shared archive of brain-imaging data “had been kicking around the community for a long time, and nobody was doing anything about it,” according to Gazzaniga. When the National Science Foundation (NSF) showed an interest in making “infrastructure” grants to beef up the biology end of social and cognitive science, Gazzaniga moved. He proposed a public archive of magnetic resonance imaging (MRI) of the human brain. After clearing an NSF technical review, the project won a 5-year, $4.5 million grant, including a small contribution from the National Institute of Mental Health, and an additional $1 million from the Keck Foundation (Science, 29 October 1999, p. 880).

    Computer scientists are enthusiastic about the project, Gazzaniga says. They believe they can use the archive to “come up with new ways to do meta-analyses, new ways of mining the data” to discover connections in the brain that aren't detectable in a single experiment or set of studies. Gazzaniga also says graduate students at universities that can't afford to run a sophisticated brain-scanning laboratory will be able to tap into and use high-quality data at the new center.

    Money in hand, Dartmouth assembled the machines and the staff in 1999, and Gazzaniga prepared to launch the National Functional MRI Data Center (NfMRIDC) in the fall of 2000. But when Gazzaniga asked for submissions, many scientists balked, arguing that the whole project was premature. The field hasn't even agreed on a standard format for reporting data, they say.

    Cohen and others note that archiving has long been a “knotty issue.” OHBM members have sparred over proposals for a single data file format, and a decade-old effort—a consensus brain map begun by neuroscientist Peter Fox at the University of Texas Health Sciences Center in San Antonio—has had difficulty getting useful input. Cohen, for example, says that because of these challenges, the Texas project “has not been an unmitigated success.” Images are often made to assess brain changes in subjects performing various behavioral tasks, and one U.S. government researcher who asked not to be named says: “The big problem was how to describe the behavioral task in sufficient detail that the data would be meaningful.”

    John Mazziotta, editor of the journal NeuroImage and leader of another consensus-building effort called the Probabilistic Atlas of the Human Brain at the University of California, Los Angeles (UCLA), agrees that “we need technical tools first” before creating a common database. For 7 years, he says, his group and other major brain-imaging centers have been trying to create a toolkit to describe the architecture of the brain. “It still isn't ready,” he concedes. He notes that even within a lab, there are great variations in the behavior examined, the types of stimuli used, the methods of recording responses, and the analytical software used.

    Dartmouth's solution to the compatibility problem is to finesse it, at least for now. Staff engineer Jeff Woodward says the database will receive data in any format authors want to offer. “Methods of converting from one format to another are pretty well known,” Woodward says, and the center will convert archived files to the format requested by the user. “At this point, we don't want to try to impose any standard,” he adds, as the technology is changing so rapidly.
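    The design Woodward describes, in which each file is archived in whatever format the author supplies and converted to the format a user requests, can be sketched as a small registry of readers and writers that pass through a common in-memory representation. The format names and converter functions below are hypothetical stand-ins, not the center's actual software.

```python
# Minimal sketch of "store as submitted, convert on request". The formats,
# readers, and writers are hypothetical stand-ins, not the Dartmouth center's code.
from typing import Callable, Dict, List

# Common in-memory representation: a flat list of voxel intensities.
Readers: Dict[str, Callable[[str], List[float]]] = {
    "csv":   lambda text: [float(x) for x in text.split(",")],
    "space": lambda text: [float(x) for x in text.split()],
}
Writers: Dict[str, Callable[[List[float]], str]] = {
    "csv":   lambda vals: ",".join(str(v) for v in vals),
    "space": lambda vals: " ".join(str(v) for v in vals),
}

def convert(raw: str, src_fmt: str, dst_fmt: str) -> str:
    """Convert archived data from the submitted format to the requested one."""
    return Writers[dst_fmt](Readers[src_fmt](raw))

print(convert("1.0,2.5,3.0", src_fmt="csv", dst_fmt="space"))  # -> "1.0 2.5 3.0"
```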

    Compulsory sharing?

    The skirmishing over technical standards pales in comparison to the fighting over whether authors should be compelled to release their raw data to a database. Raichle believes that past efforts like the Texas project suffered because data submission was “totally voluntary.” He likes Gazzaniga's solution: Ask everyone to adhere to a new norm of releasing their data to the archive as a condition of getting a paper published.

    To advance this policy, Gazzaniga says, he consulted leading journal editors by e-mail. He says most responded favorably. And to set an example, he adopted the policy for JCN. He commissioned a dozen papers by leading researchers for a special edition of JCN and asked authors to submit supporting data to the NfMRIDC. All agreed. Gazzaniga also wrote to recently published JCN authors inviting them to submit source data.

    One of those who received Gazzaniga's invitation, Isabel Gauthier, a psychologist at Vanderbilt University in Nashville, Tennessee, responded with a public dissent. She and about 40 colleagues co-signed a letter to leading journals opposing release of data on publication. (Gauthier's letter and responses from Gazzaniga and others are posted on her Web site.)

    Gauthier stresses the author's right to control her own work, noting in her letter that the raw data from a set of experiments may produce more than one paper and shouldn't be released with the first publication. “The nature of fMRI data,” Gauthier writes, is that it's hard to separate what's “relevant to a published paper from data that is destined to another manuscript.” She argues that authors should decide when data are made public.

    Gazzaniga's hope that other journals would follow JCN's lead was already beginning to dissolve. When computer scientist Javed Aslam of the Dartmouth center briefed a group of brain mappers in Bethesda, Maryland, in June, he said that major journals endorsed Gazzaniga's data-release policy. But two journal editors in the room got up, according to scientists present, and said they'd never heard of it.

    Other editors, including Nature Editor Philip Campbell and Science Editor-in-Chief Donald Kennedy, after receiving petitions from brain mappers, have decided to avoid any fixed policy for now. Kennedy says: “We have not endorsed the JCN policy, nor is data release required for publication in Science. We … have decided to wait for a consensus to develop in the imaging community. …” Campbell has written that the Nature journals do not have “any immediate intention of imposing conditions of deposition on fMRI data,” as this would be “premature.” Arthur Toga, Mazziotta's colleague at UCLA and an editor of NeuroImage, adds: “Any individual or autocratic suggestion as to how this should be done is absurd. … We live for the people who read the journal” and wouldn't try to impose unwanted standards.

    Gazzaniga has now amended JCN's policy to state that authors may hold their data private for an undetermined amount of time after submitting an article. But he says he has not retreated from the view that the data must be shared after a reasonable delay.

    Seeking a consensus

    Over the next few weeks, Cohen's task force will try to determine what the norms should be. Among other issues, the group will consider how to deal with claims that the Dartmouth data-sharing scheme could put personal privacy at risk because raw brain-scanning data can be used to reconstruct a skull surface—even the outlines of a face. Gazzaniga responds that all personal data will be stripped from submissions, and that his team is “working on” a software block that prevents facial reconstruction.

    But the lack of a common data format remains a major barrier, one that will not be solved without the cooperation of the entire field. OHBM past president Karl Friston of the Wellcome Department of Cognitive Neuroscience at University College, London, U.K., says that OHBM leaders recognized long ago that establishing analytical comparability is the toughest issue to resolve. He believes that if all researchers had the software needed to analyze experimental results from other laboratories, data sharing would occur spontaneously. For that reason, he says, Cohen and other leaders of OHBM have been working with the National Institutes of Health to create publicly available software tools.

    It seems risky to try to create a shared database before a set of common analytical tools is in hand, Cohen says. But for the moment, he must deal with the “acute” issue of deciding whether—and how—the field should help the new Dartmouth data center get under way. And he says he feels a heavy responsibility: His entire field, and people in fields far removed, are watching to see how the brain mappers respond.

    Tissue Engineers Build New Bone

    1. Robert F. Service

    Bone repair may be one of the first major applications of tissue engineering; efforts to encourage the growth of new bone using novel matrices, bone morphogenetic proteins, gene therapy, and stem cells are all showing promise

    Mending broken or damaged bones is a hit-or-miss business. Orthopedic surgeons have become adept at manipulating, pinning, and immobilizing fractures, giving the body's natural bone-healing processes an opportunity to knit the broken pieces together. In recent decades, they have also learned to graft bone from elsewhere in the body to repair major damage from accidents or disease: Every year doctors in the United States alone perform about 450,000 surgical bone grafts. But some fractures simply refuse to heal, and bone grafting adds to the pain of recovery. At times, this procedure can't even be attempted because “in many patients the quality and quantity of bones you can harvest is not sufficient,” says Scott Bruder, a bone tissue engineering expert at DePuy, a Johnson & Johnson company based in Raynham, Massachusetts. Now, however, many researchers believe bone repair is entering a new era that could make painful grafts and unmended bones a thing of the past.

    In several clinical trials now under way or nearing launch, researchers are testing novel ways to replace damaged bone. Research teams, primarily in the United States and Europe, are implanting biomaterials laced with molecular signals designed to trigger the body's own repair mechanisms. They are also culturing a class of bone marrow stem cells—versatile cells that can develop into bone, cartilage, and other tissues—and transplanting them into the damaged area. And they are attempting to repair damage by gene therapy, transfusing cells carrying genes that produce key bone-repair proteins.

    These trials mark the latest wave of progress in the burgeoning field of tissue engineering, in which researchers are trying to grow replacement tissues to repair damaged organs such as livers, hearts, and bones. Although the field is still maturing, tissue engineers working with bone are beginning to pull ahead of the pack. “Tissue engineering has made great strides,” says Steven Goldstein, who directs orthopedic research at the University of Michigan, Ann Arbor, “but lots of tissues are not ready for prime time.” That's not the case with bone, says Goldstein: “There has been more success in bone than anyplace else.” Adds David Mooney, a tissue engineer at the University of Michigan, Ann Arbor, “If you compare it to the challenge of engineering a complete internal organ, bone is thought to be realizable in a much nearer time scale.” Tissues such as the kidney and lung consist of numerous cell types that must be arranged in the proper three-dimensional structure and coaxed to express particular genes at different times. Structural tissues such as bone and cartilage are not as complex, Mooney notes. Goldstein adds that because the body naturally replaces, or “remodels,” old bone with new, all that is needed is to get this regenerative process up and running smoothly. “If you can kick off repair, the normal process of remodeling helps you quite a bit,” Goldstein says.

    That promise has sparked intense commercial interest in bone engineering. Companies ranging from biotech start-ups to traditional orthopedic powerhouses are jumping into the field. And although most of their efforts remain in the research stage, one company, Stryker Biotech in Hopkinton, Massachusetts, already has a product. It has applied to the Food and Drug Administration (FDA) for approval to market a collagen matrix composite infused with a natural protein that signals bone marrow cells to turn on the process of bone regeneration. Indeed, the commercial stakes are so high that some researchers are worried that patent claims, and a reluctance to test competing technologies in combination, could delay progress in the field.

    Molecular scaffolding

    Like civil engineers building a new structure, bone engineers start by erecting scaffolding: They insert a matrix of special material into gaps in bone. This molecular scaffolding lies at the heart of all the new tissue engineering approaches.

    Surgeons have used matrices made from materials such as collagen and hydroxyapatite for decades to coax the patient's own cells to colonize the damaged area and form new bone. The technique has been particularly successful in filling small divots, but it often has trouble fixing larger defects, says Mooney. So he and others have been looking for better materials. Antonios Mikos at Rice University in Houston, Michael Yaszemski at the Mayo Clinic in Rochester, Minnesota, and their colleagues, for example, have been working on a plastic precursor that can be injected into the repair site, where it quickly polymerizes and hardens into a porous matrix capable of holding new bone cells. As new bone grows in, the plastic matrix breaks down into natural metabolites that are then excreted from the body. Thus far, says Yaszemski, work in animals has shown that the biodegradable polymer not only sparks new bone growth over time, but also provides needed mechanical strength and appears fully biocompatible.

    Building on such successes, tissue engineers have recently achieved more dramatic results when they give the matrix a helping hand—by seeding it with bone growth factors. The approach owes its early progress to a bit of serendipity. In 1965, Marshall Urist, an orthopedic surgeon at the University of California, Los Angeles (UCLA), was studying how minerals deposit on the collagen-based matrix on which bone naturally forms. When he implanted demineralized fragments of rabbit bone in muscle tissue, he found that new bone was created at the site. Something in the bone matrix itself, it seemed, was coaxing cells in the muscle to start producing new bone at this unusual site. That something turned out to be a class of proteins called bone morphogenetic proteins (BMPs). But “it took 25 years to purify [BMPs],” says A. Hari Reddi, the director of the Center for Tissue Regeneration and Repair at the University of California, Davis.

    Reddi's lab was one of several that set out to track down these chemical signals. In the mid-1970s, Reddi and his colleagues showed that proteins in natural bone matrix first attract stem cells from the bone marrow, then spur them to proliferate and become bone-producing osteoblasts. A few years later, Reddi's group isolated the first of these proteins, which later came to be known as BMP-7. But it wasn't until 1989 that researchers at Creative Biomolecules in Hopkinton, Massachusetts, cloned the gene for BMP-7, a development that opened the door for researchers to produce a recombinant version of the protein that they could then add to matrix implants. Shortly thereafter, researchers at Genetics Institute in Cambridge, Massachusetts, cloned the gene for BMP-2—a similar cell signal.

    These signaling proteins quickly proved that they could kick start the bone-regeneration process. Throughout the early 1990s, researchers at Genetics Institute and Stryker Biotech—which owned the rights to Creative Biomolecules' work with BMP-7 for orthopedic applications—completed a series of animal studies showing that their BMPs seeded on simple collagen matrices prompted rapid healing of bone defects, while similar defects remained unhealed in control animals. Stryker Biotech launched the first human clinical trial in 1992 for troublesome “nonunion” fractures that had not healed in over 9 months. According to Stryker president Jamie Kemler, the trial's results show that implants of BMP-7 on a collagen matrix generate new bone as well as, or better than, autografts of healthy bone transplanted from another part of the patient's body. The company is currently awaiting FDA approval to begin selling its matrices. Genetics Institute, too, is nearing the end of similar clinical trials with BMP-2.

    But every great promise has its fine print, and this method of bone building may have limitations, too. Some researchers point out that when BMPs are released naturally by cells, mere nanogram quantities of the proteins per gram of bone matrix are enough to trigger the bone repair cascade. Yet microgram quantities of BMP per gram of matrix material—some three orders of magnitude higher—seem to be needed to produce the same effect with an artificial matrix. Although there are no known health problems associated with such high BMP concentrations, the cost may be high, potentially thousands of dollars per treatment.

    Gene therapy

    In an effort to get signaling molecules to the cells they trigger, researchers have turned to a field that has had its problems lately: gene therapy. Gene therapists have had a struggle delivering on the field's early promise in part because cells carrying therapeutic genes express them only for a short time. But short-term expression may be enough for remaking bone, Michigan's Goldstein notes. In a flurry of papers last year, researchers from labs in the United States and Germany reported promising early results. In the July 1999 issue of the Journal of Bone and Joint Surgery, for example, orthopedic surgeon Jay Lieberman and his colleagues at UCLA reported using an adenovirus carrying a gene that produces BMP-2 to transfect bone marrow cells. They then seeded and grew the transfected cells on a demineralized bone matrix, which they implanted into surgically produced gaps in the leg bones of rats. The treated bones healed normally, while those that received control preparations—either with a non-BMP-producing gene or just the matrix alone—did not heal.

    Using a simpler approach, Goldstein and his Michigan colleagues have produced similar results in dogs. Instead of using cells infected with a transgenic virus, Goldstein's team uses circular fragments of DNA called plasmids containing a gene that codes for a protein called human parathyroid hormone, which, like BMPs, helps stimulate the natural bone repair cascade. They trap the plasmids in a polymer matrix, which they implant into a surgically made gap in the leg bones of dogs. In the July 1999 issue of Nature Medicine, Goldstein's team reported that surrounding cells picked up the plasmid DNA and expressed it for up to 6 weeks. The treated bones were fully repaired. Again, no effect was seen in control animals. Bone tissue engineering, says Goldstein, “looks to be an area where gene therapy can have one of its earliest, greatest successes.”

    Based on this and earlier successes with their plasmid gene therapy approach, the Michigan group formed a San Diego-based start-up called Selective Genetics to move the technique into the clinic. The company says that after showing widespread success in animals, it is gearing up to launch a phase I safety trial of the approach in humans.

    New cell sources

    Some researchers worry that these promising techniques may ultimately hit a roadblock: a shortage of stem cells. Although transplanted signaling molecules attract stem cells to the repair site and cause them to differentiate, the supply may not be sufficient to repair major damage. So several groups are trying to supplement natural stem cells with cells grown in culture.

    Unlike embryonic stem cells, which can differentiate into any one of the more than 200 cell types in the body, bone marrow stem cells have a more limited repertoire. They are already committed to develop into cells that form a broad class of tissues, including bone, cartilage, and tendons, as well as heart, muscle, and neural tissues. And although they are produced throughout the life of animals, their numbers appear to decline with age, says Arnold Caplan, who directs the Skeletal Research Center at Case Western Reserve University in Cleveland. In newborns, bone marrow stem cells—also called mesenchymal stem cells (MSCs)—account for 1 out of every 10,000 bone marrow cells. That number drops to 1 in 100,000 in teens, 1 in 400,000 in 50-year-olds, and 1 in 1 million to 2 million in 80-year-olds.

    That's bad news for anyone who has lost large sections of bone in an accident or through cancer. Animal studies show that BMP therapies and other cell-signaling approaches have trouble mending gaps larger than about 2.5 centimeters because they can't recruit enough stem cells to the area, says Annemarie Moseley, president and CEO of Osiris Therapeutics, a Baltimore, Maryland-based tissue engineering start-up. In these cases BMPs begin by recruiting stem cells to the ends of the healthy bone and regenerating new tissue toward the center of the gap, but “if you look at the center of the matrix you don't see any evidence of bone growth,” says Moseley. The same problem hampers a related approach of simply harvesting healthy bone marrow from a patient and transplanting it in the repair site. “You can put as much marrow in there as you want, but it won't help” if there aren't enough stem cells, says Caplan.

    For that reason, Caplan, Osiris, and others have been working to implant stem cells directly into bone repair sites. Caplan's lab helped launch the field about 12 years ago when they first isolated MSCs and came up with a means to expand cell numbers in culture. Since then, Caplan, DePuy's Bruder, Moseley, and others have experimented with a variety of MSC-based implants. In 1989 and 1990, for example, Caplan's group published papers showing that MSCs seeded on a porous, calcium-based ceramic substrate could heal 8-millimeter gaps in the leg bones of rats. They have since reproduced these results for larger bone defects in larger animals. These and other successes prompted Caplan in 1992 to launch Osiris Therapeutics, which aims to carry the approach to humans.

    Since the early 1990s, Osiris has shown that the MSC-based therapy works in rats, rabbits, and dogs. And today the company is preparing to launch a phase I safety trial with MSCs in humans. Pamela Robey, a cell biologist with the National Institute of Dental and Craniofacial Research (NIDCR) in Bethesda, Maryland, has made similar progress. Robey says her group has shown that stem cells seeded on a matrix—hydroxyapatite in this case—work to seal large bone gaps in mice, rats, rabbits, and dogs. She is also awaiting FDA approval to launch human clinical trials.

    Still, MSCs have their own drawbacks. The biggest concern is time. The current procedure involves extracting stem cells from a patient, growing them in culture, and transplanting them back into that same person, a process that takes weeks. Not only does this rule out emergency repairs, but it also makes the procedure expensive, says Bruder. To get around this problem, Osiris has been experimenting with implanting MSCs from one animal into another, hoping to come up with cell-based implants that surgeons can simply remove from the freezer and implant in a patient's body. The approach has potential, says Moseley, because MSCs don't express the cell surface markers that T cells recognize in rejecting implanted tissue. Thus far, studies on about 40 dogs and “untold numbers” of rats have shown that the transferred cells not only do not spark an immune reaction, but go on to form normal bone, she says.

    Putting the pieces together

    As researchers push different approaches to tissue engineering and companies stake out their claims on technologies, commercial competition is heating up. And that worries some researchers, who fear that it may make it hard to determine which strategies work best. “I don't think it's clear to me or the field in general which of these techniques is useful for different applications,” says Michigan's Mooney. Adds Bruder: “Companies are worried that combination therapies will be superior to their single bullet” and are therefore reluctant to test their products along with those of their competitors.

    So strong is this concern, says Robey, that it has kept her from working with BMPs. “One of the reasons I turned to stem cells was because I couldn't get BMPs to do my work,” she says. And the result is that progress on determining the most effective combinations is slow. Last year, for example, researchers at Osiris and Novartis collaborated to transfect MSCs with the gene for BMP-7, seed them on matrices, and implant them in rats. The results were excellent, says Moseley, but she says the research has since been dropped because Stryker Biotech owns key rights to BMP-7.

    Stryker's Kemler says his company is not trying to quash competition but is pursuing its own “proprietary” combination therapies, which he declines to specify. Nevertheless, Robey and others say the balkanized landscape of intellectual property in tissue engineering prevents them from testing novel therapies. “I do consider that to be a real logjam, and I am not sure how that will be broken,” says Robey. Moseley says she believes the logjam will eventually give way as the field matures over the next few years. Says Caplan: “Tissue engineering is just getting off the ground.”
