News this Week

Science  11 Sep 1998:
Vol. 281, Issue 5383, pp. 1578



    Should Engineer Witnesses Meet Same Standards as Scientists?

    1. Jocelyn Kaiser

    Five years ago the U.S. Supreme Court gave trial judges more authority to throw out testimony from scientists that doesn't meet strict tests of scientific validity. Now the court may be ready to rule on whether judges should apply the same rules to testimony from other kinds of technical experts.

    The high court has agreed to rule on a case, Kumho v. Carmichael, involving the testimony of an engineer who claimed that a defective tire led to an accident. At issue is whether his testimony should have to meet scientific standards. Late last month the National Academy of Engineering (NAE) filed a brief in support of the tire company, urging the court to set the same rules for engineers in this case that it does for scientists. But the case is likely to extend far beyond the engineering community to everyone from accountants to forensics experts. “The extension to engineering is an important clarification, but in the background is the whole question of how medical testimony is going to be treated,” says Joe Cecil, a researcher at the Federal Judicial Center in Washington, D.C., who found in a 1991 study that 40% of expert witnesses in federal civil cases are from medical and mental health fields and only 10% are scientists.

    Although pro-business groups have lined up in support of the principle that technical testimony must be grounded in rigorous science, organizations that represent people who bring product liability suits argue that crucial evidence from many kinds of experts who do not publish their findings could be shut out. “It could really undermine the ability of experts to testify based on their experience and knowledge,” says Sarah Posner of Trial Lawyers for Public Justice, a group in Washington, D.C.


    The backdrop for Kumho is a 1993 decision, Daubert v. Merrell Dow Pharmaceuticals, in which the Supreme Court called for trial judges to act as “gatekeepers” and screen out unreliable scientific testimony (Science, 2 July 1993, p. 22). Until then, the prevailing standard was whether testimony was generally accepted by the scientific community. The court said judges should instead use four criteria: empirical testability, peer review and publication, rate of error of a technique, and its degree of acceptance. In some cases this has helped to get novel technologies into courtrooms, including DNA evidence, notes Cecil. But more often it has allowed judges to exclude testimony, especially in product liability cases, deemed to lack scientific validity.

    The Supreme Court left open whether Daubert could be used to assess other kinds of expert testimony, and circuit courts have been split on the issue. In Kumho, a minivan owned by the Carmichael family of Alabama blew a tire in 1993, leading to an accident that killed one of their children. The family sued Samyang Tire Inc. (now Kumho Tire Co.), the tire's manufacturer, offering testimony from a mechanical engineer who claimed a defect had caused it to fail. A trial court rejected the testimony, saying it didn't meet the four Daubert factors, and dismissed the case. But the 11th Circuit Court found that it was wrong to apply the Daubert principles, ruling that the engineer's testimony was “more like a beekeeper['s]” than a scientist's because it relied on observations and experience.

    Kumho's lawyers argue that expert engineers should meet the Daubert standard and that this would “drive the quality of such expert evidence in the right direction by ensuring the reliability of their analyses and methods before admitting their testimony.” Washington, D.C., attorney Richard Meserve, who filed the NAE's amicus brief, agrees: “Should engineering [be subject to the same] reliability call? The brief says yes … especially where something failed.”

    The families have yet to file their brief, but they argued in a response to Kumho's petition that the tire expert's testimony shouldn't be judged by the Daubert criteria because it was “based upon technical and specialized knowledge as opposed to his application of scientific principles and theories.” Their attorney, Robert Hedge of Mobile, Alabama, says that although Daubert may apply to some types of nonscientific testimony, there are “literally thousands of areas of expertise,” from tire analysis to a surgeon's assessment of a herniated disk, where an expert's opinion is based on experience and “there's no error rate, no peer review, and it can't be tested.”

    Some legal observers say that requiring judges to apply Daubert to all technical experts could cause confusion. “Peer review and publication in some careers just doesn't make any sense,” says Margaret Berger of Brooklyn Law School. The reliability of the testimony is more important than whether it meets Daubert criteria, she says.

    Berger adds that “I think a lot of this is, ‘My discipline is as good as your discipline.’” In a sense, NAE agrees. It asserts in its brief that engineering “is founded on scientific understanding” and can be judged by the same principles.


    Cattle Diet Linked to Bacterial Growth

    1. Jennifer Couzin

    Food safety experts have been losing ground against bacterial contamination. The most threatening strains, like Escherichia coli O157:H7, continue to pop up in spite of increasingly stringent food safety standards, whether in beef from a Nebraska-based company, Japanese radishes, or Wyoming tap water. On page 1666, a research team from the U.S. Department of Agriculture (USDA) and Cornell University offers findings that support a novel explanation for the increased numbers and virulence of E. coli outbreaks over the past decades. The problem, they say, may stem in part from diet changes among beef cattle.

    The digestive tracts of cattle nurture some of the most virulent strains of E. coli, which can later find their way into beef and also into other foods that come in contact with infected manure. Since the Second World War, cattle diets have shifted from hay to starchy grain feed. And the Cornell team, including USDA microbiologist James Russell, postdoc Francisco Diez-Gonzalez, graduate student Todd Callaway, and undergraduate Menas Kizoulis, now shows that the digestive systems of cows fed hay generate less than 1% of the E. coli found in the feces of grain-fed animals. What's more, bacteria from the grain-fed animals were much more resistant to acid, making them more likely to survive in the human stomach and cause infection.

    “This [research] is in a class by itself,” raves Gary Schoolnik, chief of the infectious disease division at Stanford University Medical School. “[It] opens the door to a whole field of research that needs to be done.” Schoolnik suggests deliberately infecting cows with the O157 strain, so that researchers can directly compare its incidence in animals fed hay and grain diets rather than focusing broadly on the bacteria as Russell's team did. More work will also be needed to test a practical implication of the new finding: that switching cattle to hay a few days before they are slaughtered could limit the frequency of dangerous E. coli outbreaks.

    The researchers began by surveying 61 Cornell-owned cows that were consuming different types of feed. One group was eating hay or grass, which is naturally rich in fiber, while the other two received either 60% or 80% corn diets. After at least 3 weeks on the diets, the three students tackled the not-so-pleasant task of removing fecal samples from the cows' rectums and determining their E. coli counts.

    They found that E. coli flooded the digesta of the high- and midlevel grain groups, with more than 6 million cells in every gram. But among animals fed hay, researchers logged a mere 20,000 cells per gram. When the samples sat for an hour in acid similar to that in the human stomach, virtually all E. coli in the hay-group digesta were destroyed; in the 80% grain division, 250,000 per gram survived—more than enough to sicken an individual if the O157 strain is present. “We were absolutely shocked by the difference,” says Russell. “We never found an animal that didn't agree with the trend.”

    Russell attributes this dramatic variance to the digestive tract of cattle, which has a hard time breaking down starch. Consequently, large amounts of grain can pass into a cow's intestines undigested. This triggers a fermentation process that provides more nutrients for the bacteria to grow on, as well as releasing acid, thus exposing the E. coli to an environment that selects in favor of acid-resistant strains. This theory got a boost when Russell's team found that the colonic contents of grain-fed cattle were up to 100 times more acidic than those of animals given hay.

    Not all microbiologists were convinced by the data in the paper, however. Michael Doyle, who directs the Center for Food Safety and Quality Enhancement at the University of Georgia, Griffin, argues that lauryl sulfate broth, used to determine the numbers of E. coli by dilution, is no more selective for E. coli than for other bacteria and so would not yield an accurate count. “The methods as they're written” don't make sense, he says. Russell counters that although lauryl sulfate isn't a foolproof selection method for E. coli, “the results were confirmed by other tests.” For example, the researchers showed that, as expected for E. coli, the bacteria could grow in a medium containing lactose, releasing carbon dioxide gas as an end product.

    If further work confirms the connection between diet and bacterial growth, the cattle industry might help keep E. coli O157:H7 out of the food supply by switching cattle off grain before slaughter. Russell says their work showed that “in 5 days on hay, you can eliminate all acid-resistant E. coli.”

    It may not be easy to persuade the cattle industry, however. “I think people in feed lots are going to be hesitant to institute a change” in cattle diet, says Fred Owens, a ruminant researcher at Optimum Quality Grains in Des Moines, Iowa. Owens cites logistical problems, such as having to transport and store large quantities of hay, as well as a potential drop in market value should the cows' weight fall while on hay.

    But many microbiologists believe the costs might be worth it. “I think whatever steps we think make sense we ought to consider doing,” says John La Montagne, deputy director of the National Institute of Allergy and Infectious Diseases. He adds, “E. coli O157 is a big problem, potentially a very big problem.”


    Senate Committee Votes Boost for NIH

    1. Eliot Marshall

    Biomedical researchers can chalk up another big advance on Capitol Hill: The Senate Appropriations Committee last week approved a bill that would raise the National Institutes of Health (NIH) budget by almost $2 billion, to $15.6 billion, a massive increase of 14.7%. This is much more than Congress has offered other research agencies, and $800 million more than the NIH increase proposed by the White House. If the bill is approved as written, it would put NIH on track for doubling its budget within 5 years, an ambitious goal set by health research advocates and congressional leaders early this year (Science, 10 April, p. 196). The bill would also establish a new earmark: At the behest of Appropriations Committee Chair Ted Stevens (R-AK), it includes a $175 million set-aside in NIH's budget for prostate cancer research. This year, NIH is spending about $114 million.

    But before any of these plans come to fruition, congressional aides say, a few roadblocks must be cleared away. Written by the Labor and Health and Human Services subcommittee chaired by Senator Arlen Specter (R-PA), the Senate bill proposes to spend more money on jobs and education programs than was allocated to the subcommittee by budget chiefs. The bill gets around this problem by deferring costs and recalculating accounts in ways that leave even seasoned congressional hands befuddled.

    One academic lobbyist who attended the bill's markup on 3 September says that Senator Pete Domenici (R-NM), chair of the budget committee, seemed ready to go along with a “rescoring” process that would make available about one-third of the money needed to float this bill. But it's not clear how Specter and the subcommittee's top Democrat, Tom Harkin (IA), will find the remainder.

    The political roadblocks could be formidable, too. Mainly because conservatives and moderates differ so sharply, the House has not yet acted on an NIH funding bill drafted by a subcommittee chaired by Representative John Porter (R-IL). This proposal would give NIH a $1.2 billion increase (9.1%). But other parts of the bill would end funding for popular summer jobs and home heat subsidy programs. Even moderate Republicans have refused to support these cuts, and President Clinton has said he would veto the bill. This problem must be solved before the House and Senate can agree.

    Congress has only a couple of weeks left to resolve these issues before the fiscal year ends on 1 October. Already, Republicans are talking about the need to pass “one or two” stopgap funding resolutions to keep the government afloat as they wheel and deal.


    RNA-Splicing Machinery Revealed

    1. Dan Ferber*
    * Dan Ferber is a science writer in Urbana, Illinois.

    For proteins in human cells, teamwork often beats working alone. Many proteins gather in complexes that contain up to dozens of components and help cells replicate DNA, turn on genes, and perform other key tasks. It can take years of work to isolate and identify the proteins in these complexes, then track down their genes. But teaming up pays off in the biology lab, too. By pairing a new high-speed technique for analyzing proteins with a database of partial gene sequences, a European group fingered nearly all of the parts of a critical piece of protein machinery in one fell swoop.

    In the September issue of Nature Genetics, Matthias Mann of Odense University in Denmark, Angus Lamond of Dundee University in Scotland, and their colleagues report that they have identified 44 components of the human spliceosome, a multiprotein machine that splices the noncoding sequences out of newly minted RNAs to produce messenger RNAs, the cell's templates for protein production. Having an almost complete parts list for the spliceosome should help researchers figure out how it works. The feat, achieved while Mann and Lamond were both at the European Molecular Biology Laboratory (EMBL) in Heidelberg, Germany, also proved the worth of the database of human gene fragments called “expressed sequence tags” (ESTs), which some genome experts once dismissed as a poor substitute for the complete gene sequences to come from the Human Genome Project. “They have leapfrogged over what would have been years of work,” says Francis Collins, director of the National Human Genome Research Institute. “The significance goes beyond spliceosomes, although that's significant enough.”

    Although researchers had been working on the human spliceosome for 2 decades, they had only identified about half of its proteins, Lamond says. To find the remaining ones, the team fished out intact spliceosomes from cultured human cells and separated them into what appeared to be 69 individual proteins. With a protein-splitting enzyme, they digested each protein component into shorter pieces. They then analyzed each piece by a technique called nanoelectrospray mass spectrometry, pioneered by Mann's group, which rapidly and accurately identifies amino acid sequences by shattering the protein fragments and comparing the mass of the resulting pieces.

    Next, the EMBL team compared the amino acid sequences with the growing public EST database, known as dbEST, searching for gene fragments that might code for them. They managed to find matches for 65 of the 69 potential spliceosome components. They then sequenced the full lengths of DNA represented by the ESTs to get the complete genes, which yielded insights into the nature and function of the proteins. Some of the 65 proved to be variants of the same protein, so in the end the team was left with 25 known spliceosome proteins and 19 new ones. To confirm that all the new proteins were really part of the spliceosome, the researchers linked the gene for each one to the gene for a fluorescent protein. They showed that the hybrid genes produced proteins that glowed in parts of the nucleus where spliceosomes were expected.

    The spliceosome analysis “is a technical tour de force,” says spliceosome researcher Thoru Pederson of University of Massachusetts Medical School in Worcester. Although it took 3 years, the researchers say they could perform it in a matter of months now that they have proven the techniques. Their success shows, for example, that the dbEST database now contains fragments of almost all the genes for the spliceosome—and, most likely, for many other multiprotein complexes, says molecular biophysicist Charles Cantor, of Sequenom Inc., a genomics company in San Diego. That had been in doubt because no one knew exactly how many of the more than 50,000 human genes were represented by the database.

    The spliceosome work is also a pioneering example of “proteomics”—the effort to get from genome sequence data to protein function by analyzing many proteins at once. “We've all been excited by the genomics project,” Pederson says. “This is the beginning of the proteomics project.”


    Leptin Sparks Blood Vessel Growth

    1. Marcia Barinaga

    You can hardly find two hotter biomedical research areas these days than angiogenesis, the growth of new blood vessels, which has emerged as an exciting new target for anticancer drugs, and obesity, a field that was energized 4 years ago by the discovery of leptin, an appetite-suppressing hormone made by fat cells. Now, in a curious twist, biochemist M. Rocío Sierra-Honigmann of Yale University and her colleagues have forged a direct link between those two fields. They report on page 1683 that leptin triggers angiogenesis in experimental animals.

    “No one would have thought that leptin has anything to do with angiogenesis. This is a paper that is going to change people's thinking,” says angiogenesis pioneer Judah Folkman of Harvard Medical School in Boston. Just what leptin's double duty means for the workings of the body isn't clear yet. But findings by Sierra-Honigmann's team and others suggest several intriguing possibilities. One is that leptin contributes to the formation of the new blood vessels needed when fat increases in volume. Leptin may also spur blood vessel growth in the maturing egg and early embryo and in healing wounds as well.

    Sierra-Honigmann decided to see whether leptin promotes angiogenesis because of a chance discovery she made a year and a half ago while helping out her husband, Jaime Flores-Riveros, who was then at the Bayer Research Center in West Haven, Connecticut. He had engineered cultured cells to make the cell surface receptor through which leptin exerts its effects—a receptor then known to be found mainly in the brain. Sierra-Honigmann was using antibodies to confirm that the cells actually contained the receptor. As a negative control for the antibody test, she used endothelial cells—the type of cells that form blood vessels. To her surprise, those cells scored positive, indicating that they naturally contain the leptin receptor. “It kept me awake at night,” says Sierra-Honigmann. “If I were an endothelial cell, why would I want leptin receptors?”

    To find out, Sierra-Honigmann enlisted the help of two postdocs from a nearby lab, Guillermo García-Cardeña and Andreas Papapetropoulos, who study angiogenesis. They found that leptin causes cultured endothelial cells to aggregate, forming tubes that resemble the early stages of blood vessels. Then with the help of Peter Polverini of the University of Michigan School of Dentistry in Ann Arbor, the team tested whether leptin would cause new blood vessels to form in the corneas of rats, the “gold standard” for an angiogenic molecule. Leptin passed the test.

    Apparently, says Folkman, “nature used one molecule for two functions.” In addition to its well-known role of controlling appetite and metabolism, leptin may, he suggests, “drive the blood vessels to match the fat.” It does not appear to be essential for that second job, however, because the copious fat tissue in mutant mice that completely lack leptin manages to recruit an adequate blood supply.

    But embryologist Jonathan Van Blerkom and his co-workers at the University of Colorado, Boulder, found a hint of another role for leptin-induced angiogenesis last year. They discovered that leptin is made in human ovarian follicles, which is where eggs mature until they are ready to be released and fertilized. What's more, Van Blerkom's team found that the protein is packaged with two known angiogenic factors in the follicle and in parts of the egg that develop into the cells responsible for forming the placenta. These findings led Van Blerkom to wonder if leptin is angiogenic. The discovery that it is, he says, suggests that it could help the follicles generate the many new blood vessels they produce as they mature and help the young embryo itself induce the mesh of blood vessels in the placenta.

    Wound healing also depends on blood-vessel growth, and researchers had noted that healing is slow in leptin-deficient mice. In preliminary experiments, Sierra-Honigmann and her colleagues have now shown that extra leptin can speed healing. “A normal wound in a mouse heals in 5 to 7 days,” she says, but “it is completely healed by day 3 or 4” when treated topically with leptin.

    The leptin-angiogenesis connection raises another possibility as well: that, like all other known angiogenic factors, leptin may be deployed by some cancers to recruit blood vessels. Folkman's team is checking to see if any tumors make leptin, which could then serve as a target for controlling cancer growth. Leptin made by tumors could in some cases also contribute to the appetite and weight loss that are common in cancer, he says. With all these new potential roles for leptin, this already famous protein is poised for even wider stardom.


    NSF Draws Up Plans for $70 Million Plane

    1. Jennifer Couzin

    A new $70 million jet designed to probe Earth's upper atmosphere has been cleared for a budgetary takeoff by the policy-making body of the National Science Foundation (NSF). Last month's approval by the National Science Board puts the new plane in line for inclusion in the agency's fiscal year 2000 budget request, to be submitted later this month to the White House.

    Backers of the project say the aircraft, called the High-Performance Instrumented Airborne Platform for Environmental Research (HIAPER), will provide much-needed capability to explore the tropopause, the area between the upper and lower atmospheres that features a vital exchange of solar energy and contains the tops of thunderstorms and hurricanes. Little is known about this key region, says Ron Phillipsborn, a commander in the National Oceanic and Atmospheric Administration (NOAA) corps. “It's like the dark side of the moon.”


    But some scientists worry that the cost of outfitting and operating the aircraft could drain money from smaller, university-based research programs, as well as from important upgrades to existing platforms. “I'd like to see more money put into low-altitude aircraft,” says Judy Curry, a professor of aerospace and atmospheric sciences at the University of Colorado, Boulder, who has flown on both low- and high-altitude planes. She and others worry that NSF will concentrate resources on HIAPER to the detriment of other low-altitude craft operated by the National Center for Atmospheric Research (NCAR) in Boulder, which is also expected to manage HIAPER.

    The idea for HIAPER grew out of a series of workshops on U.S. experimental aircraft capabilities. Scientists complained that the existing NCAR planes could not carry heavy equipment at high altitudes nor fly in icy cloud conditions or through most violent storms. They asked for a sophisticated high-altitude aircraft that could perform these tasks and more. NSF's answer, nearly a decade later, is HIAPER.

    A modification of a top-of-the-line corporate jet, HIAPER would offer a total package not available in a single existing aircraft. “I think it's fantastic,” says Naomi Surgi, mission manager of NOAA's weather services. “NSF can benefit tremendously from this kind of platform.” NCAR's stable includes a 30-year-old Lockheed Electra owned by NSF and a newer, leased C-130. HIAPER would be able to fly almost twice as high, for almost twice as long, as the Electra, which is scheduled to be retired from service in 2005 if HIAPER takes off. The proposed jet is also much more powerful than the C-130 (see chart). Among other agencies, NASA's high-altitude jets are already heavily oversubscribed, while NOAA's new Gulfstream must be available when needed for hurricane study. That means a scientist flying a mission between June and November could be forced to abandon a research project suddenly if a major storm developed.

    The projected costs of HIAPER are causing some concern, however. A proposal to seek more than half the money in the first year of a 4-year construction cycle must first earn a spot in a special account for new research facilities that is part of NSF's upcoming budget submission. If the project is funded, NSF estimates annual operating costs at $3 million, the same level as the Electra's. “We do not expect to have any [cost] surprises,” says Cliff Jacobs, whose section oversees NCAR, which receives about 65% of its $80 million annual budget from NSF.

    Still, says Curry, “people shouldn't have their blinders on as to how much this is going to cost.” Joined by other academic and NOAA scientists, Curry wonders if NCAR can afford the additional personnel and instrumentation that HIAPER will require. And although many researchers welcome the idea of having access to a more capable plane, they nevertheless say that NSF is flying into uncharted skies. “There's not a lot of experience” in maintaining a research jet of this caliber, says Michael Rodgers of the Air Quality Lab, part of the School of Engineering at Georgia Institute of Technology in Atlanta, making any cost estimates unreliable. Whether NCAR will manage to operate HIAPER without diverting staff and funds from other projects, he says, remains to be seen.

    A bigger problem than routine maintenance is the cost of making full use of HIAPER's capabilities. NSF has budgeted only about $100,000 a year for upgrades to the aircraft, but some scientists say the figure could soar far higher if several new instruments were to be purchased and the airframe modified to accommodate them. “An experimental airplane has to keep evolving,” says meteorologist Ed Eloranta of the University of Wisconsin, Madison.

    Despite those issues, the science board had no qualms about approving the request at its August meeting. “This was a relatively clean item,” says recently retired board member John Hopcroft, dean of engineering at Cornell University, who led a review of the proposal. He said the board was convinced that the plane would require few modifications to the initial package of Doppler radar, air-probing sensors, spectrometers, and other equipment to be installed. HIAPER's price tag, he added, represents only a tiny slice of NSF's annual $3.5 billion budget. And NSF has reviewed its cost projections for HIAPER with officials from the Air Force and NOAA, says Jacobs.

    NCAR officials are also confident they can handle HIAPER, which they see as the inevitable next step for atmospheric research. “Scientists want to go higher, further, and stay up longer,” says Warren Johnson, assistant director of NCAR's Atmospheric Technology Division. “We believe it's time for a high-performance jet aircraft.”


    Report Paints Grim Outlook for Young Ph.D.s

    1. Constance Holden

    In what surely will make depressing reading for aspiring researchers, a report released this week by the National Research Council (NRC) argues that the supply of newly minted Ph.D.s in the life sciences vastly outstrips the availability of desirable jobs. Putting the imprimatur of authority on the well-known plight of those laboring in the trenches, the report states that young life scientists these days are trapped for years in low-paid and transitory postdoc positions. “I call it the La Guardia effect,” says panel chair Shirley Tilghman, a molecular biologist at Princeton University. She has a vision of “a lot of trained scientists who are circling, burning up very important and useful fuel, and waiting for their turn to land.”

    Every young life scientist knows colleagues who have struggled to find jobs, and the report* sees no reason to expect that the hard times will soon come to an end. “There is no sign in the data that this [problem] is going to peak,” says Tilghman. So the panel recommends a painful remedy: To trim the swelling Ph.D. ranks, it calls on universities to freeze the size of their programs and to develop no new ones “except under rare and special circumstances, such as a program to serve an emerging field or to encourage the education of members of underrepresented minority groups.”

    Untenurable position.

    The percentage of life scientists with faculty appointments 9 to 10 years after receiving their Ph.D.s has plummeted.


    The current Ph.D. glut appears to have begun building about a decade ago. Until 1987, the number of new Ph.D.s in the life sciences increased at an annual rate of roughly 1%. Since then, however, the rate has averaged about 4% a year, climbing to 5.1% in 1996. Overall, the number of new life sciences Ph.D.s has grown from 5399 in 1987 to 7696 in 1996, a 42% increase. If such a growth rate is sustained, the report says, the number of new life sciences Ph.D.s each year could double in just 14 years. Swelling the ranks “could adversely affect the future of the research enterprise,” the report says, by breeding “destructive” competition and suppressing scientific creativity by causing scientists to play it safe.

    The Ph.D. surge has already deeply chilled job prospects for today's grads. The proportion of Ph.D.s holding permanent jobs 5 or 6 years out has decreased from 89% in 1973 to 62% in 1995. “The average life scientist [nowadays] is likely to be 35 to 40 years old before obtaining his or her first permanent job,” says the report. As a result, morale is sagging: “The feelings of disappointment, frustration, and even despair are palpable in the laboratories of academic centers.”

    The report takes a dim view of alternative careers as a means to ease the plight of young life scientists. Competition for science-related jobs in law, journalism, business, or precollege teaching is stiff and the pay is often low, the panel states. “I wish I had a dollar from every graduate student who said they wanted to be a science writer,” says Tilghman. Says the report: “Our analysis suggests that opportunities in these fields might not be as numerous or as attractive as advocates of alternative careers imply.”

    Instead, the NRC panel advocates some old-fashioned belt-tightening. It recommends that federal agencies take greater control over the number of Ph.D. students by supporting graduate study through training grants and individual fellowships, rather than through research grants. Limiting the number of grad students a principal investigator can hire could help constrict the pipeline, Tilghman explains. The panel also recommends that the government subsidize “career transition” grants so some postdocs can set up their own research projects even before they have obtained permanent posts. The Ph.D. degree itself, the committee affirms, should neither be diluted nor redesigned: It should “remain a research-intensive degree, with the current primary purpose of training future independent scientists.”


    Transfer of Protein Data Bank Sparks Concern

    1. Eliot Marshall

    On 19 August, structural biologist Joel Sussman got a call no manager wants to receive: A federal official phoned to say that funding would soon be withdrawn from the Protein Data Bank (PDB), a catalog of molecular images and structural data Sussman runs at the Brookhaven National Laboratory on Long Island, New York. The National Science Foundation (NSF), he was informed, had decided to shift the contract for managing the database to Rutgers University in New Brunswick, New Jersey. As news of the decision—agreed to by PDB's other sponsors, the National Institutes of Health and the Department of Energy—began to filter out last week, it kicked up a ruckus among crystallographers. As one of them says: “We feel it was done behind our backs.” Some want the decision reviewed.

    The contract at the center of this tussle is small, about $2 million per year. But as Sussman says, its impact has been “huge.” Thousands tap into the database daily via the Internet, logging 1.5 billion hits per year. (Some journals, including Science, require that crystal structures be deposited in the PDB at the time of publication.) Sussman, who also holds a half-time appointment at the Weizmann Institute of Science in Rehovot, Israel, says he was “surprised” and “shocked” by the decision to yank funding for PDB, which he views as “an international resource held in trust” by Brookhaven. He claims that Brookhaven has sharply improved the efficiency and user-friendliness of the system, after a rocky period about 5 years ago. He and others who use PDB are asking: Why tinker with a system that seems to be working well?

    The move shouldn't have come as a complete surprise, however. NSF announced in 1994 that it would put the PDB contract up for renewal in 1998. It chose this year's winner after a confidential peer review and a series of site visits that began last spring. The winning team is an experienced three-member coalition headed by Rutgers structural biologist Helen Berman. It includes Philip Bourne of the University of California, San Diego, and Gary Gilliland of the National Institute of Standards and Technology in Gaithersburg, Maryland. Berman says she cannot comment until a contract is awarded. But she notes that the coalition has already created a database that integrates PDB files and other structural data in a single format; her group demonstrated it at the Protein Society meeting last July. This new team, which is expected to take over from PDB on 1 November 1999, is proposing to implement the new system rapidly.

    Nevertheless, when news about the decision leaked last week to Long Island's daily paper, Newsday, NSF officials and members of the peer panel were bombarded with calls and e-mails. One bioinformatics group, for example, posted an exchange between a French researcher who questioned the decision and Mary Clutter, NSF's assistant director for biological sciences. Without identifying the winner, Clutter wrote that “the decision was based on plans for the future and not on current or past performance.”

    Crystallographers read this to mean that Brookhaven was doing an excellent job, but that Rutgers promised more exciting new software. They worry that the review panel may have been wowed by promises of new technology, at the risk of losing reliability. Although Brookhaven is also planning to install a new database next year, its top priorities, says biology chair William Studier, were to improve efficiency and make PDB more accessible.

    “We are concerned about the potential damage in terms of stability” during a transition to a new manager, says Axel Brunger, a structural biologist at Yale University. He says a dozen Yale colleagues—including Paul Sigler, Thomas Steitz, Donald Engelman, and Donald Crothers—signed a letter asking NSF for more information and possibly a second review. He is upset that the six-member review panel appears to have included only two crystallographers. But Brunger concedes that he hasn't seen the winning proposal, which may be excellent.

    In a phone interview with Science, Clutter declined to elaborate on her comments. But she acknowledged that “I've been getting e-mails from all over the world … asking if we're out of our minds.”

    Like NSF staffers, members of the review panel, chaired by bioinformatics researcher Sylvia Spengler of the Lawrence Berkeley National Laboratory in Berkeley, California, declined comment. But one panelist, speaking on condition of anonymity, said the review involved a “very difficult choice between two very competent groups of structural biologists.” And in this case, NSF appears to have opted for the more adventurous course.


    So Far, So Good for SOHO

    1. Alexander Hellemans*
    1. Alexander Hellemans is a science writer in Naples, Italy.

    Engineers who have been gingerly trying to bring the Solar and Heliospheric Observatory (SOHO) back from suspended animation have so far detected no permanent damage to the $1 billion spacecraft. The long process of thawing out the frozen satellite, which spun out of control and lost power after a series of ground-control errors in June, will take several more weeks. But officials from NASA and the European Space Agency (ESA) are now hopeful that they can bring the spacecraft back to life. That optimistic assessment was provided at a press briefing last week by NASA and ESA officials, who also released a final report confirming that errors by an overworked control team caused the spacecraft's problems.

    Controllers reestablished contact with SOHO last month and directed the spacecraft to begin recharging its batteries (Science, 14 August, p. 891). That allowed them to turn on electric heaters to thaw the hydrazine propellant, which froze when the spacecraft's solar panels were turned away from the sun. The main tank is now thawed, and controllers are warming the pipes that connect the hydrazine tank to the thrusters outside. It is a delicate operation that may take up to 2 more weeks, since a quicker thaw could burst the pipes. But “so far the recovery [has been] fairly smooth,” says Bernhard Fleck, ESA deputy project scientist for SOHO at NASA's Goddard Space Flight Center in Greenbelt, Maryland.

    Fleck says the telemetry, power, and control systems appear to be undamaged. Once the thawing is completed, controllers will test the mechanism that adjusts the craft's position and attempt to stop the spacecraft's slow spin. “Finally,” says Fleck, “we will point the spacecraft with thrusters back to the sun.”

    The final report on what caused the mishap contained few surprises. A panel of NASA and ESA scientists confirmed that the spacecraft spun out of control during routine maintenance procedures largely because of two software errors in preprogrammed command sequences and the decision by ground controllers to turn off one of the craft's gyroscopes, which detect roll, because they thought it was faulty (Science, 24 July, p. 499). “What should have been done instead was really to stop the operation and go into detail in the telemetry to identify exactly what caused the loss of the configuration of the spacecraft,” says ESA Inspector-General Massimo Trella. The controllers were, however, under pressure to find a quick solution: “Any downtime for the scientific mission was considered to be a very heavy penalization,” Trella says.

    The report also pointed to several underlying factors that contributed to the accident. The computer display of telemetry data was not user-friendly, a situation recognized in 1994 but still not remedied. And when the SOHO mission was extended from 2000 to 2003, NASA agreed to the extra time, “provided that there is some streamlining of the operations so that costs can be lowered,” says Trella. This led to certain procedures being rewritten, with some simplifications. “By modifying certain procedures, which were correct for the last 2 years, they introduced some errors,” says Trella.


    Five Researchers Die in Plane Crash

    1. Michael Balter

    Jonathan Mann, former director of the World Health Organization's (WHO's) Global Program on AIDS (GPA), and his wife, Mary Lou Clements-Mann of Johns Hopkins University in Baltimore, were killed last week in the crash of a Swissair jet on its way to Geneva from New York. Mann was a charismatic and outspoken epidemiologist who earned high marks for his dedication to the fight against AIDS, even from those who did not always agree with his sometimes scathing criticisms of public health leaders. Clements-Mann, a virologist, was an expert on AIDS vaccine development.

    Also killed in the crash were two physicists from Brookhaven National Laboratory on Long Island, New York: Klaus Kinder-Geiger, a German citizen who specialized in the quark-gluon plasma—the state of matter that existed moments after the big bang—and Per Spanne, a Swedish citizen who had helped pioneer techniques for medical x-ray imaging and cancer therapy. Spanne, a guest researcher at Brookhaven, was head of the medical x-ray facility at the European Synchrotron Radiation Facility in Grenoble, France. A fifth scientist on board, Roger Williams, was en route to chair a WHO meeting on early identification of heart disease. He was a cardiovascular geneticist at the University of Utah, Salt Lake City.

    Mann and Clements-Mann were traveling to attend a meeting in Geneva on AIDS vaccines convened by UNAIDS, the United Nations' special program on AIDS, which replaced the GPA in 1994. In a press statement, UNAIDS Executive Director Peter Piot praised Mann as “a visionary global leader in the fight against AIDS” who “tirelessly promoted a response to the epidemic based on respect for human rights and human dignity.” Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases in Bethesda, Maryland—an institute of the National Institutes of Health (NIH) that coordinates federally funded AIDS vaccine trials—said the deaths of Mann and Clements-Mann are “a double great loss to the HIV/AIDS effort.” Mann was the dean of Allegheny University School of Public Health in Philadelphia.

    Appointed GPA's first director in 1986, Mann quit 4 years later, publicly accusing then-WHO chief Hiroshi Nakajima of a lack of commitment to fighting the disease. Earlier this year, he drew fire from many of his colleagues after accusing NIH of dragging its feet on AIDS vaccine research. “He said things that rattled some people,” says Fauci. “But he did it to push a cause he believed in. He was the conscience of the field.”


    Bad Economy Is Good News for R&D

    1. Dennis Normile

    Tokyo—Japan's recession is once again proving to be a boon to science. Last week government R&D ministries put in their bids for a slice of a $71 billion pie that the government has promised to distribute over the next 18 months to revive a stagnant economy. At the same time, along with their regular 1999 budget requests, they submitted proposals that include tapping into a one-time $1.1 billion appropriation to boost the nation's research and development infrastructure. However, the requests offer no relief for existing scientific facilities that have been forced to reduce operating expenses to shrink a large budget deficit (Science, 1 May, p. 669).

    The latest developments reflect the impact of an 8-year economic slump. While ministries have been told to cut spending, politicians have looked for ways to stimulate the economy and reward constituents. The result has been a succession of sizable supplemental spending packages that provide a vehicle for funding projects rejected in the annual budget cycle. This summer, in an effort to create jobs, the Diet also created the R&D infrastructure fund that, although targeted at information technology, environmental facilities, and other science projects, seems open to all comers.

    The Science and Technology Agency (STA), for example, is asking for a $2 billion slice of the government-wide supplemental pie in addition to a 5.3% increase, to $5.6 billion, in its regular budget. Big winners include brain, information science, and genetics research programs already under way, as well as a planned research vessel for ocean drilling. The Ministry of Education, Science, Sports, and Culture (Monbusho) has requested a jump of 22%, to $1 billion, in funding for its competitive Grants-in-Aid for Scientific Research program that supports small teams at universities and national labs.

    The proposed budgets are likely to be trimmed in negotiations with the Ministry of Finance before they are finalized by the end of the year. And the details of the supplemental spending bill won't be resolved until next month. Still, the country's overall R&D spending seems certain to rise sharply from its current level of $25.7 billion.

    In particular, STA is asking for $41 million to begin construction of a $500 million ship capable of drilling up to 3500 meters into the sea floor. “It's the project's official ‘go’ from the STA,” says Takeo Tanaka, ocean drilling program officer for the Japan Marine Science and Technology Center, which is overseeing the project slated for completion in 2003. A 59% boost, to $197 million, in information science includes adding a genome informatics component to a new Genetics Frontier Research Center at the Institute of Physical and Chemical Research (RIKEN) in suburban Tokyo and simulating how the brain processes information as part of RIKEN's Brain Science Institute.

    Although scientists welcome the increased funding, some worry that these special programs and supplemental budgets are distorting Japan's scientific portfolio by emphasizing applied fields over basic science and hardware over actual research. Akiyoshi Wada, a biophysicist at the Science Council of Japan, the nation's most prestigious scientific association, says government officials tend to focus on high-profile new buildings and equipment while providing too little for intangible operating expenses. “[Government] officials don't understand how real science is carried out,” he says.


    The Brain's Engine of Agility

    1. Ingrid Wickelgren

    Although the cerebellum's role in simple movements has long been appreciated, only recently have scientists begun pinning down how it coordinates complex, multijoint movements

    When neurologist Helge Topka sees a patient at University Hospital in Tübingen, Germany, the patient often stumbles toward him with a wide, irregular gait, sometimes taking tiny steps and other times large ones. The patient may tell Topka, in a voice wavering between a shout and a whisper, that cashiers will no longer accept her checks because they can't recognize her signature. Colleagues, she laments, accuse her of drunkenness as she stumbles around the office, slurs her words, and fumbles with cups and pens.

    But the patient is not an alcoholic. To the contrary, she never drinks alcohol, because it makes her movements even more uncoordinated. She suffers instead from a degenerative disease of the cerebellum, two apple-sized lobes at the back of the brain that house more than 100 billion neurons. Neurologists have long known that the cerebellum helps coordinate movements, but how it performs that task has been a foggy mystery. Now the fog is lifting: In the past few years, neuroscientists have begun to decipher the cerebellum's precise role in creating smooth, accurate movements. “It's an exciting time for the cerebellum,” says James Houk of Northwestern University Medical School in Chicago. “After neglecting it for a long time, people are starting to appreciate it again.”

    In the past, neurobiologists focused mainly on the cerebellum's role in controlling single-joint actions, such as a simple flex of an elbow or ankle. But those studies provided only vague clues about how the brain coordinates the vast majority of our movements, which involve several joints. Until recently, these complex movements were considered too difficult to study. In the past few years, however, improved technology, such as video-tracking devices combined with computers to help analyze movements, has enabled researchers to get a much better fix on just what the cerebellum does to coordinate such movements.

    These studies, involving both humans with cerebellar lesions and lab animals, show, for example, that the cerebellum predicts and adjusts for the multiple forces on a limb during a complex movement, including those propagating from one joint to another. If a person picks up a hammer, say, the cerebellum will activate the extra muscle force needed to operate the arm under the new physical conditions. It also controls the relative timing of various muscle contractions to ensure the speed and accuracy of a maneuver, so that when a person performs an act such as eating, the fork enters the mouth and not the eye. “It's fascinating,” says Jonathan Hore, a neurophysiologist at the University of Western Ontario. “Only recently have we gained insight into what's really involved in making a multijoint movement.”

    Less clear at the moment are the neural mechanisms underlying cerebellar control of movement. But even so, researchers expect that the work may yield insights into how to improve coordination in the hundreds of thousands of people with cerebellar damage due to genetic disease, viral infections, strokes, alcoholism, or normal age-related loss of cerebellar neurons.

    Soldiers of misfortune

    Although studies of the cerebellum's role in complex movements were rare until recently, British neurosurgeon Gordon Holmes provided some early clues about its function in work he did in the 1930s on men who had suffered cerebellar lesions while serving in World War I. Holmes would attach a light bulb to a former soldier's finger and then take long-exposure photographs to outline the path of the finger as the man tried to bring it to his nose. Typically, he would overshoot the target several times, a problem that Holmes attributed to loss of synchronization of the man's shoulder and elbow movements as a result of the cerebellar damage.

    Holmes didn't have the tools to determine exactly why the movements were so out of kilter, and in the following decades, most scientists focused on the cerebellum's role in the easier-to-study single-joint actions. Such studies hinted that the cerebellum deals with timing and force issues during movements. They showed that people and monkeys with cerebellar lesions display 25-millisecond delays in reacting to a command to make a single-joint movement. The test subjects also had problems in adjusting for changing forces on a joint.

    But by the early 1990s, some researchers began to have trouble understanding how such small deficits in single-joint movements could account for the severe disabilities displayed by patients like Topka's in performing tasks such as walking, where more than one joint has to move. Indeed, experiments with both monkeys and human stroke patients showed that cerebellar lesions could disrupt complex movements while leaving single-joint movements relatively unaffected.

    In 1993, for example, neurologist Thomas Thach, Howard Goodkin, and their colleagues at Washington University School of Medicine in St. Louis found that a man who had suffered a stroke that destroyed the part of the cerebellum controlling his right hand could flex and extend his right wrist, and move single joints of his right thumb and fingers. But he could not combine those individual movements to write, pick up an object, or reach his hand to a target. This indicated, Thach says, that the cerebellum is “specifically wired to bring the parts [of the body] together.”

    Exactly how the cerebellum does that didn't become clear, however, until 2 years ago when Amy Bastian, Thach, and two colleagues videotaped seven healthy subjects and seven patients with cerebellar damage rapidly reaching toward a small ball suspended in front of them. The hands of the normal subjects went straight to the ball, but the patients' hands first reached beyond the target, and then looped back. The problem resembled what Holmes had seen in his World War I soldiers, but the videotapes enabled the Washington University team to analyze the motions more precisely.

    The researchers computed the rotational forces, or torques, on each joint—wrist, elbow, and shoulder—needed to produce those motions. The net torque at each joint is the sum of torques from gravity, the muscle acting at the joint, and interaction torques, which arise when two joints move relative to each other and can either assist or impede a movement. In the cerebellar patients, the interaction torque turned out to dominate the rotational forces on each joint, suggesting that the patients were having trouble balancing forces on their joints during a multiple-joint movement. “One of the things it looks like the cerebellum is doing is controlling interaction torques across multiple joints,” Thach concludes.
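The balance the researchers computed can be sketched as a single equation per joint (the notation here is illustrative, not the paper's):

```latex
\tau_{\text{net}} = \tau_{\text{muscle}} + \tau_{\text{gravity}} + \tau_{\text{interaction}}
```

In healthy subjects, the muscle term is modulated to offset the interaction term as the joints move relative to one another; in the cerebellar patients, the interaction term dominated the net torque, producing the overshooting, looping reaches.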

    A recent study by Topka and his colleagues at Tübingen supports that idea. They traced the excessive motion seen in nine patients with cerebellar disease to their inability to generate muscle torque patterns that compensate for either interaction torques or gravity while the limb is moving. And that result ties in with 1996 findings by Steve Massaquoi and Mark Hallett at the National Institute of Neurological Disorders and Stroke.

    When these researchers asked nine patients with cerebellar atrophy to rapidly draw a straight line on a tablet, they found that the abnormally curved lines the patients drew could be explained by too little torque at the shoulder relative to the elbow. Hallett suggests that the patients had “difficulty in the rapid creation of force,” making it impossible to balance the forces produced by different muscles during a fast movement.

    Timing it right

    Balancing forces is not the only problem the cerebellum solves in preserving agility and grace. It also coordinates the timing of joint movements. The first quantitative evidence for this came last year in experiments performed on cats by neurobiologists James Bloedel, Vlastislav Bracha, and their colleagues at the Barrow Neurological Institute in Phoenix. First the researchers trained the animals to reach for a vertical bar and use it to trace an L-shaped template.

    Using software that correlates the speeds of the various joint movements, the researchers found that under normal conditions, all the joints of the cats' “arms” moved in synchrony. But when the Phoenix team temporarily inactivated sections of the cats' cerebellums with a drug, this coordination was lost; the joints tended instead to move in sequence—first one joint, say, a shoulder, then another, and so on.

    New results from Hore's team at Western Ontario also emphasize the importance of the cerebellum in timing movements. Two years ago, the researchers determined what it takes for 10 healthy male recreational softball players to hit a target 3 meters away by throwing tennis balls at it. Coils taped to the men's fingers, hands, and arms relayed joint positions to a computer, and a microswitch on their middle finger was triggered as soon as they released the ball. To hit the target, the researchers found, a thrower had to open his fingers to release the ball within a particular 2-millisecond time frame in the course of his throwing movement. The subjects often missed because their release times varied by about 10 milliseconds.

    Now, in as-yet-unpublished work, Hore's team has shown that in nine patients with cerebellar damage from strokes, the timing of ball release varied by as much as 50 milliseconds, resulting in “major disorders in throwing accuracy.” These patients also showed major timing irregularities at other joints, indicating that “the cerebellum is of great importance for accurate timing in complex multijoint movements,” Hore says.

    Wires, clocks, and loops

    The neurological underpinnings of the cerebellum's actions are not yet clear, but recent results point to at least three possibilities. Based mainly on what's known about the neuronal anatomy of the cerebellum, Washington University's Thach suggests that a neural network in the cerebellum's outer layer, the cerebellar cortex, governs movement coordination. He notes that in the cortex, millions of nerve axons called parallel fibers link the Purkinje cells, which carry signals from the cerebellar cortex to other parts of the brain and body. Thach believes each Purkinje cell governs a specific muscle and that the parallel fibers function as wires that bind different muscles together for coordinated movements. The strength of the connections, or synapses, between the parallel fibers and Purkinje cells, the theory goes, would determine the force and timing of muscle contraction.

    Bastian, Thach, and their colleagues provide some indirect evidence for this hypothesis in a study that will appear in the October Annals of Neurology. When the researchers studied motor skills in five children whose cerebellar cortexes were cut in the center to remove brain tumors, they found that the children could reach, pinch, speak, and kick normally, and could even hop on either leg. However, none of them could coordinate their two feet to walk a straight-line tandem gait, the classic drunk drivers' test. “This is the kind of deficit you'd expect if parallel fibers crossing the midline—which have been cut—coordinate the two sides of the body,” says Thach.

    Neurophysiologist and cerebellum expert Rodolfo Llinás of New York University Medical Center isn't convinced. For one thing, he asserts, the connections between parallel fibers and Purkinje cells are too weak to produce the muscle-combining effects Thach is postulating. Llinás, along with NYU colleagues Eric Lang and John Welsh, has been amassing support for a different hypothesis: that the cerebellum's coordination of complex movements is orchestrated by a structure called the “inferior olive” located just below the cerebellum.

    This idea is also based partly on neuronal anatomy. The olive sends so-called climbing fibers up to the cerebellum where they wrap around Purkinje cells like vines around a tree, exerting a powerful influence on them, Llinás says. In addition, since the early 1980s, Llinás's group has gathered clues that the inferior olive cells in rats fire at regular, clocklike intervals every 100 milliseconds. This ticking occurs in concert with natural actions including tongue-flicking and whisker-twitching. Such findings have led Llinás to propose that the olive acts like a clock to set the pace of every move we make. “Movements are generated in steps of 100 milliseconds,” he contends. “You hiccup your way through life.”

    In this view, subgroups of olive cells that become simultaneously active determine which muscles contract or relax on a given tick. These subgroups are selected by linked groups of Purkinje cells in the cerebellar cortex that send inhibitory messages through the cerebellar nuclei to the olive. These messages prevent the entire olive from “ticking” by temporarily walling off particular groups of olive neurons, or so the theory goes.

    But this hypothesis is also controversial. In studies published in 1995 and 1997, Thach and Jeff Keating, now at the University of Pennsylvania, saw no evidence of clocklike activity when they recorded from inferior olive nerve endings in the monkey. “Our problem with the clock is, we couldn't hear it tick,” Thach says.

    Of course, neither Thach's nor Llinás's theory may be entirely right. For one thing, both ignore a possible role for the motor cortex, a part of the brain's surface that initiates voluntary movements. Indeed, researchers such as Houk at Northwestern believe that communication between the motor cortex and the cerebellum powers coordination.

    In support of this idea, the Northwestern team cites experiments in which they've shown that inactivating parts of either structure in animals can produce coordination deficits. In addition, their recent data suggest that the neurons of the cerebellar nuclei, which control various body parts, are organized in a way that would make it impossible for the parallel fibers to recruit them in groups for various actions, as the other theories hold.

    Houk proposes instead that muscle groups are recruited as neurons in the motor cortex and cerebellum alternately talk to each other, causing excitation to spread from neuron to neuron in each brain structure. As a neuron controlling a hand, say, excites one for the wrist and then the shoulder, a person might reach forward to grab something. “You need pathways to the cerebellum to amplify what the motor cortex is doing and coordinate the muscles,” Houk says.

    All these theories still require a few leaps of faith—and more work to determine which is correct. But researchers are finally getting close to understanding just what the cerebellum does to keep our joints in sync.


    Where Are Our Motor Memories?

    1. Ingrid Wickelgren

    When Mark McGwire sends a home run into the stands, give some credit to his cerebellum, which fine-tunes the swing of his bat as he whips it through the air (see main text). But what role the cerebellum played in learning and improving that fearsome swing—or any other kind of complex motor skill such as writing or driving a car—is not clear.

    There's little doubt that the cerebellum is needed for the simplest types of motor learning. More than a decade of research by neuroscientist Richard Thompson of Stanford University and others has shown that animals with impaired cerebellar functioning can't learn conditioned reflexes, such as associating a tone with a puff of air so that eventually the tone by itself can cause them to blink their eyes.

    And evidence by Mark Hallett's group at the National Institute of Neurological Disorders and Stroke, as well as that of Thomas Thach at Washington University School of Medicine in St. Louis, strongly suggests that the cerebellum is involved in simple adaptive learning, such as adjusting one's body movements to a visual world that has been shifted by distorting spectacles. Healthy people can adapt to this change well enough to hit a target with a ball. Patients with cerebellar damage cannot. Moreover, Thompson's and Thach's teams have shown that damage to the cerebellum impairs memories for conditioned reflexes or adaptive behaviors learned before the damage occurred. This suggests that the cerebellum stores the neural changes that result from learning these actions—the memories.

    But when it comes to complex actions like drawing a picture, there's little proof that the cerebellum plays this storage role. In fact, when James Bloedel, Vlastislav Bracha, and their colleagues at the Barrow Neurological Institute in Phoenix set out to test this idea, they found some evidence to the contrary.

    In work in press in the Journal of Neurophysiology, the Phoenix team showed that cats could remember a complex task they had learned—tracing an L-shaped template—after their cerebellums had been temporarily inactivated. And even untrained cats with inactivated cerebellums could learn this task—albeit very clumsily. But when their cerebellums had recovered, the animals set about relearning the task, presumably to bring their cerebellums into the act and make their movements more accurate. The researchers conclude that the cerebellum does not store memories for complex motor skills, but it is involved in learning the ideal motion for performing them.

    These studies imply that the cerebellum fine-tunes complex motor skills rather than storing the memories for them. Still, the cerebellum might direct some other brain structure to form the proper memories. Clearly, more experiments will be needed to pin down the roots of McGwire's swing.


    Tipping Off Arabidopsis

    1. Dennis Normile

    Beijing—Some 2000 scientists gathered here from 10 to 15 August for the Eighteenth International Congress of Genetics, which was sponsored by the International Genetics Federation. Among the topics discussed were a new gene that may provide insights into stem cell division and recent progress in understanding plant development.

    Although researchers have been making progress in identifying the many genes that control plant development, there's still a big gap in their knowledge. Currently, they have little information about how plants coordinate the activities of the cells that give rise to the various plant tissues so that they are made in the right proportions at the right time. But at the meeting, plant geneticist Elliot Meyerowitz of the California Institute of Technology (Caltech) in Pasadena reported new results in which he and his colleagues have identified two genes from the plant Arabidopsis thaliana whose protein products apparently help coordinate the activities of cells in the shoot apical meristem, the undifferentiated cells at the growing tip of a plant that produce the stems, leaves, and flowers.

    The work indicates that the genes are part of a communication pathway in which one group of cells tells another when to start and stop dividing, as well as when to start differentiating into the different plant tissues. “Before this, we knew that [cell-to-cell] communication must be happening [in plant development], but we didn't know how it happened,” says Xing Wang Deng, a Yale University geneticist who also works with Arabidopsis. Meyerowitz has “convincingly shown” the molecular mechanisms involved, Deng adds.

    The Meyerowitz team came to its conclusion by working with a variety of mutants in which normal meristem organization is disrupted. Ordinarily, the meristem consists of three different populations of cells, arrayed in three layers, each with a somewhat different function. Cells in the uppermost layer only replicate themselves, helping maintain the proportions of the three meristem layers. Cells in the middle layer also contribute to maintaining meristem proportions but have the additional role of helping to make the leaves and flowers. And cells in the innermost layer begin the production of reproductive organs and the stem. To coordinate these activities, Meyerowitz says, “all the cells have to know where they are and communicate with other cells.”

    One of the mutants that helped the Caltech team get a handle on these communication pathways is CLAVATA1, which features a meristem that grows up to 1000 times the normal size while maintaining the correct proportions of the different layers. That outcome indicates that the layers are not getting the signal to stop dividing but somehow are still coordinating their relative growth rates. As a result of this enlarged meristem, plants produce flowers with multiple organs and multiple nested ovaries. In 1997 the Caltech team cloned one gene, CLV1, that can, when mutated, produce these effects, and more recently cloned a second gene, CLAVATA3 (CLV3), that appears to be a partner of CLV1. Mutations in both genes produce the same changes, a result suggesting that the proteins made by the two genes must be on the same communication pathway.

    The structures of the two proteins, as well as the expression patterns of their respective genes, provided further evidence for that idea. The Caltech team found that CLV1 is expressed only in the cells at the center of the innermost layer of the meristem and is apparently located in the cell membrane. Its structure suggests that it is a so-called kinase receptor, a membrane protein that picks up signals from other molecules and transmits them to the cell interior by adding phosphate groups to one or more intracellular proteins. In contrast, CLV3 is only expressed in cells in the layers above the group of cells expressing CLV1, and its structure suggests that it may be secreted by those cells.

    Based on these results, Meyerowitz proposes that the cells at the center of the top two layers communicate with those of the innermost layer by releasing the CLV3 protein, which then binds to CLV1 on the cells in the inner layer of the meristem, triggering its activity. As a result, CLV1 presumably phosphorylates a protein that signals the nucleus to tell the cell to stop dividing or start differentiating. Although Meyerowitz and his colleagues haven't yet identified that protein, the team has found that a protein phosphatase, known as KAPP, originally identified by a separate group, counters CLV1 activity, possibly by removing phosphates from that same target. Meyerowitz has hypothesized a yet unknown feedback mechanism from the inner cells to the upper layers that coordinates their division and differentiation.

    Meyerowitz speculates that there may also be different kinase receptors that receive signals from neighboring cells, not only for each of the three layers but for the central and peripheral zones of each layer. He cautions, however, that he is just beginning to understand this mechanism. “If we can understand this communication network better, we will then have control over the aboveground morphology of plants and be able to design the type of plants we want,” he says.

    Deng agrees that such knowledge should prove very useful. “It's the first time [for someone] to determine the two counterparts [in plant cell signaling], the receptor and the ligand,” he says. He adds that Meyerowitz's work should provide a foundation not only for elucidating the rest of this particular signaling pathway but also for setting out the approach needed to uncover additional pathways.


    The Dividing Line for Stem Cells

    1. Dennis Normile


    Stem cells, although few in number, are the workhorses of developmental biology. As the only cells capable of both renewing themselves and spawning differentiated progeny, they're needed to maintain and restore our tissues. That task requires them to divide asymmetrically, producing exact copies that will continue to divide as well as cells that differentiate for specialized functions, say, to build muscle, intestinal, or liver tissue.

    How stem cells accomplish this asymmetric division has long been a mystery, but at the Beijing meeting, cell biologist Haifan Lin reported that his team at Duke University Medical Center may have identified a key element for at least one type of stem cell division. They've cloned a new gene that is necessary for the asymmetric division of the cells that produce the eggs in the fruit fly ovary. The discovery may have wider significance as well, because other organisms, ranging from plants to humans, have related genes that might also be involved in stem cell division.

    The Lin team's discovery is an outgrowth of previous work by his group and others on the anatomy of the Drosophila ovary. This has shown that the germ line stem cells are located in a tubular structure called a germarium, where they are in contact with the terminal filament cells at one end. There the cells divide asymmetrically. The daughter cell formed adjacent to the terminal filament cells becomes a stem cell, while the daughter formed at the opposite pole eventually develops into the oocyte and its associated cells. For this asymmetric division into stem cells and egg cysts to occur, Lin's group found, the stem cells have to be properly aligned and in contact with the terminal filament cells of the germarium.

    The Duke team then went on to clone one of the genes thought to be responsible for producing this alignment. They began by generating mutants, randomly inserting disruptive DNA fragments into the genome and then screening for flies in which germ line stem cells had lost their normal position in contact with the terminal filament cells. Tracing the location of the inserted elements then allowed the researchers to identify and clone the disrupted gene that caused the abnormality. They confirmed the role of the gene, which they called piwi, by inserting wild-type piwi into mutant embryos and showing that it restored the normal stem cell alignment and division.

    They also found that piwi is expressed in the terminal filament cells during stem cell division. They inactivated piwi in the germ line stem cells and still got normal asymmetric division. Lin says, “piwi is only required to be active in the terminal filament cells [for germ line stem cell division].” This implies that piwi is involved in some cell-to-cell signaling that serves to align the germ line stem cells with the terminal filament cells.

    The bigger question, however, is whether piwi homologs are involved in similar mechanisms in other organisms. Lin and his colleagues believe they are. They have found that piwi codes for a novel protein and have identified structurally similar genes in organisms as diverse as the roundworm Caenorhabditis elegans and the plant Arabidopsis, as well as in human testes. “These structural similarities suggest functional homology,” he says.

    Scott Emmons, a molecular geneticist who studies C. elegans at Albert Einstein College of Medicine in New York City, says the results are interesting and important. But he cautions that piwi's activity “may be particular to [Drosophila's] germ line.”

    Lin admits his team has yet to prove that piwi has a wider role. Although homologous genes in Arabidopsis are known to be involved in division of cells in the meristem, which contains plant stem cells, their role in that division is not clear. For their part, the Duke workers want to see whether piwi is expressed in C. elegans, and if so where, as a first approach to determining whether it's involved in stem cell division. Ultimately, Lin adds, “we also have a lot of hope about [verifying] this function in mammals.”


    Early Start for Lumpy Universe

    1. Govert Schilling
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    A survey of the early universe just completed with the Hubble Space Telescope reveals a cosmos that looks strikingly mature considering its youth. In today's universe, gravity has swept galaxies together into vast clusters. The new survey, by a team from Carnegie Mellon University, suggests that millions of clusters could have already formed when the universe was half its current age.

    The findings, reported in a paper to appear in the Astronomical Journal, are not the first or even the strongest evidence that giant structures had formed in the early universe. But while earlier observations had identified single clusters or at most a handful of them, the Carnegie Mellon team has found 92 cluster candidates. “This is the largest sample available,” says Rogier Windhorst of Arizona State University in Tempe, opening up the possibility of carrying out detailed comparisons with theoretical predictions. And like other observations of primordial structures, the finding suggests that the universe has a lower density of matter than cosmologists once suspected, because structure formation in a dense universe would be “retarded” by the gravitational attraction of surrounding material.

    Using Hubble's Wide Field and Planetary Camera, Carnegie Mellon's Richard Griffiths, Kavan Ratnatunga, Eric Ostrander, and Robert Nichol photographed more than 800 small patches of sky over the past 6 years. The project, known as the Medium Deep Survey, found 92 “overdensities” of galaxies in the vicinity of large elliptical galaxies, which are known to populate the cores of dense clusters. Based on the colors of the elliptical galaxies—the expansion of the universe reddens light from distant objects—the team believes a quarter of these cluster candidates are at distances of more than 7 billion light-years. Because the Medium Deep Survey covered a mere 0.00125% of the sky, the sample implies that the total number of distant galaxy clusters must be more than 7 million.
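    The team's extrapolation is simple enough to check. As a back-of-the-envelope sketch (our arithmetic, not the team's published calculation), scaling the 92 candidates up from the survey's 0.00125% sky coverage gives the all-sky figure quoted above:

```python
# Back-of-the-envelope check of the survey's extrapolation: scale the
# 92 cluster candidates found in 0.00125% of the sky up to the full sky.
sky_fraction = 0.00125 / 100   # survey coverage as a fraction of the sky
candidates = 92                # galaxy overdensities found in the survey

total_clusters = candidates / sky_fraction
print(f"implied all-sky total: {total_clusters:,.0f} clusters")
```

    Any distant clusters the survey missed would only push the true number higher, which is why the article says “more than” 7 million.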

    While Windhorst calls the results “an important piece of work,” others are cautious. “One has to be quite careful to ensure that these are ‘real’ clusters,” says theorist Neta Bahcall of Princeton University. Because the team has not measured all the galaxies' redshifts, a clearer indicator of distance, some of the “clusters” could actually be chance juxtapositions of nearer and farther galaxies. “Redshift observations of many galaxies are needed, which will show if each cluster is real and will yield … its mass,” she says. Griffiths and his colleagues have made their sample publicly available so that others can begin measuring redshifts and determining whether the universe was as precocious as it looks.


    Neutrinos Throw Their Weight Around

    1. David Kestenbaum

    The discovery that these ghostly particles have a trace of mass raises startling possibilities about the universe on the smallest and largest scales

    From the beginning, neutrino physics has been a slow-motion drama. Wolfgang Pauli reluctantly postulated the elusive particles in 1930, but it was another 25 years before the late Frederick Reines and his colleague Clyde Cowan succeeded in detecting one. And the crucial question of whether these tiny particles have a small mass, or no mass at all, has endured for half a century. “Neutrino mass has been discovered about four times,” says Oxford University physicist Donald Perkins, “and undiscovered about twice.”

    This summer, however, the plot took a decisive twist when physicists working with the giant Super-Kamiokande detector in Japan announced that they had seen solid evidence for neutrino mass: There were too few muon neutrinos streaming in from the upper atmosphere, presumably because they were switching to a different, undetectable kind. The switching, and the way it varied depending on the path from the upper atmosphere to the detector, was a trick only neutrinos with a tiny allotment of mass could pull off (Science, 12 June, p. 1689). The Super-Kamiokande data are “quite convincing,” says Perkins. “For the first time,” agrees Harvard University physicist Sanjib Mishra, “one of these flimsy [pieces of evidence] has become extremely compelling.” And now comes the second act: working out what this trace of mass could mean for how the universe is put together on the very largest and smallest scales.

    Theorists are speculating, for example, about how the gravitational pull of swarms of neutrinos might have gently sculpted the distribution of galaxies in the early universe. They are also puzzling over the peculiar fact that neutrinos are so light, which may hint at the existence of unseen, heavier particles that could help sew up some holes in the so-called Standard Model—physicists' current picture of subatomic particles and how they interact. And they are worrying about how to reconcile the Super-Kamiokande result with the weaker hints of mass from other experiments—a picture, say some, that could only be made consistent by assuming the existence of another neutrino, even more ghostlike than known neutrino types. The papers are appearing at a pace rarely seen in the slow-moving world of neutrino physics. “I can't even keep up with them all,” says Edward Kearns, a Boston University physicist and Super-Kamiokande collaborator.

    The unreasonable lightness of neutrinos

    The Standard Model, it is sometimes said, predicts that neutrinos should weigh exactly nothing. But that was purely to save ink, says Lincoln Wolfenstein, a physicist at Carnegie Mellon University. “We knew the mass was very small,” he says, noting that experiments would have revealed a sizable electron neutrino mass decades ago. “And zero is a very nice small number. That's really the level at which it was done.” It's easy to expand the Standard Model's equations to embrace the neutrino's newfound heft, but that doesn't explain the deeper questions. “The real mystery is why neutrinos are so light” compared to other particles, says physicist Paul Langacker of the University of Pennsylvania, Philadelphia. “That's what I'm dying to know.”

    Super-Kamiokande's measurement pins down only the difference in mass between two neutrino types and so indicates only that one of them must have a mass of at least 0.07 electron volts, less than a millionth of the puny electron's mass. But earlier experiments had set an upper limit, showing that the electron neutrino, for instance, must have a mass less than 1/30,000 of the electron's.
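    Because oscillation experiments measure only the difference of the squared masses, Δm², the heavier of the two oscillating neutrinos must weigh at least √Δm². A minimal sketch of that arithmetic, assuming the value of roughly 5 × 10⁻³ eV² that Super-Kamiokande reported (the number behind the 0.07 eV floor quoted above):

```python
import math

dm2 = 5e-3          # assumed Super-Kamiokande mass-squared difference, eV^2
m_electron = 511e3  # electron mass, in eV

# The heavier neutrino in the oscillating pair must weigh at least sqrt(dm2).
m_min = math.sqrt(dm2)
print(f"minimum neutrino mass: {m_min:.2f} eV")
print(f"as a fraction of the electron mass: {m_min / m_electron:.1e}")
```

    The ratio confirms the article's comparison: the mass floor is indeed less than a millionth of the electron's mass.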

    Despite years of contemplation, physicists have come up with only one good way to explain why neutrinos should have a mass so close to zero. The idea, called the “seesaw mechanism,” requires the existence of an elusive superheavy neutrino, sometimes called the “N” particle. Without this particle, the three ordinary neutrinos (the electron, muon, and tau neutrinos) would be massless. But the N, if present, could mingle with them and, in a sense, share a bit of its mass. Mathematically, the heavyweight leverages the three neutrinos out of masslessness as on a seesaw. The heavier the fourth neutrino, the lighter the three everyday ones.
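    The seesaw's arithmetic can be sketched directly. In its simplest form the light neutrino's mass comes out as roughly m_light ≈ m_D²/M_N, where m_D is an ordinary particle mass scale and M_N is the N's mass; the numbers below are illustrative assumptions, not measured values:

```python
# Seesaw relation m_light ~ m_D**2 / M_N, rearranged to ask how heavy the
# N particle would have to be. Both input numbers are illustrative.
m_D = 100e9      # assumed ordinary (Dirac) mass scale, in eV (~100 GeV)
m_light = 0.07   # light neutrino mass suggested by Super-Kamiokande, in eV

M_N = m_D**2 / m_light
print(f"implied N mass: {M_N:.1e} eV (~{M_N / 1e9:.1e} GeV)")
```

    With these inputs the implied N mass lands near the enormous energy scales where grand unified theories are expected to operate, which is one reason GUT builders find the seesaw attractive.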

    Some physicists see this hint of a superheavy neutrino as the first step to a long-sought grand unified theory (GUT) that would unite all the forces and particles in a single framework. (Current theories have yet to yoke the strong nuclear force to the electromagnetic force and the weak force responsible for radioactive decays.) The unification plans inevitably invoke heavy particles to pull everything together.

    “It's fantastically exciting,” says Pierre Ramond, a theorist at the University of Florida, Gainesville. “This is the first time in 20 years that we've had something beyond the Standard Model.” Others are less willing to let light neutrinos catapult them so far into the unknown. “It's a beautiful idea,” says Mishra, “but the predictions are about as accurate as the military's bookkeeping. … Which is to say, poor.”

    The seesaw has a number of fearless advocates, however. “It fits like a glove,” says Frank Wilczek, a theorist at the Institute for Advanced Study in Princeton, New Jersey. Wilczek champions a particular GUT that goes by the name of SO(10). In this framework, the heavy neutrino appears naturally and gives light neutrino masses in line with the data from Super-Kamiokande, he says. “All the ugly ducklings turn into beautiful swans.”

    Others complain that SO(10) introduces some ugly ducklings of its own. In its simplest form, SO(10) predicts over 100 weighty new particles. “You get a ridiculous number of particles,” says physicist Joseph Lykken of Fermi National Accelerator Laboratory in Batavia, Illinois. Although these heavy particles would have been seen routinely only in the first superhot instants after the big bang, they would pop up at awkward moments even today. In particular, under the uncertainty principle of quantum mechanics, one could momentarily appear at any time and catalyze the decay of the normally stable proton. Super-Kamiokande and other experiments have looked for the telltale flash of light that would signal a proton's demise in their underground tanks of water but have seen nothing. Wilczek is undaunted: “It would be churlish not to take [neutrino mass] as a hint that these wild ideas are right on track.”

    One puzzle, too many pieces

    The superheavy neutrino isn't the only strange guest that the discovery of neutrino mass is introducing to physics. Physicists are also toying with an additional lightweight neutrino, inspired by what may be a conflict between the Super-Kamiokande results and other, weaker hints of neutrino mass. These come from two other kinds of experiments—one looking at neutrinos from the sun and another studying neutrinos generated by particle accelerators. Both kinds of experiments, like Super-Kamiokande, look for signs that neutrinos experience a periodic identity crisis and transform back and forth from one type to another as they travel.

    The shifting can happen because neutrinos as we observe them are actually mixtures of quantum mechanical waves—musical chords, in effect—in which each wave corresponds to a different neutrino type. Mass differences between the neutrino types result in wavelength differences, so the notes in the chord beat against each other. The result is that the identity of the observed neutrino can switch back and forth, in time with the beating. Thus the frequency of this “oscillation,” and hence the probability of observing an identity switch at a particular point, depends in part on the mass difference between neutrino types.
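    For two neutrino types, this beating is captured by the standard two-flavor formula (textbook physics, not spelled out in the article): the switching probability is sin²(2θ) · sin²(1.27 Δm² L/E), where θ measures how thoroughly the two types mix, Δm² is in eV², the distance traveled L is in kilometers, and the energy E is in GeV:

```python
import math

def oscillation_probability(dm2_ev2, theta, L_km, E_gev):
    """Standard two-flavor transition probability: the 'beat' between two
    mass states after traveling L_km at energy E_gev."""
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2

# Illustrative numbers only: maximal mixing, a Super-Kamiokande-like dm2,
# and a 1 GeV atmospheric neutrino crossing the Earth (~13,000 km) versus
# one produced nearly overhead (~15 km).
dm2, theta = 5e-3, math.pi / 4
print(oscillation_probability(dm2, theta, 13000, 1.0))  # well into its beats
print(oscillation_probability(dm2, theta, 15, 1.0))     # has barely begun
```

    The contrast between the two distances is exactly the path-length dependence that made the Super-Kamiokande deficit so convincing.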

    Because there are three kinds of neutrinos, it's only possible to form two independent mass differences. Unfortunately, the data from solar, atmospheric, and accelerator neutrinos together point to three distinct mass differences. “I think nearly everyone agrees we can't explain all three experiments,” says Wolfenstein. Either one experiment is wrong, or something very strange indeed is going on.
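    The counting argument is easy to make concrete: any three masses yield three pairwise squared differences, but the third is always the sum of the other two, so only two are independent. A small check with arbitrary, made-up masses:

```python
# Three masses give three pairwise squared differences, but the third is
# always the sum of the other two - so three experiments reporting three
# unrelated values cannot all be explained by only three neutrinos.
m1, m2, m3 = 0.001, 0.01, 0.07   # arbitrary illustrative masses, in eV

d21 = m2**2 - m1**2
d32 = m3**2 - m2**2
d31 = m3**2 - m1**2

assert abs(d31 - (d21 + d32)) < 1e-15   # the constraint that must hold
print(d21, d32, d31)
```

    A fourth, sterile neutrino adds a third independent difference, which is why the LSND result pushes theorists in that direction.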

    Many physicists say that of the three, the accelerator-based work by the LSND collaboration at Los Alamos National Laboratory in New Mexico is the most vulnerable. LSND watches for muon antineutrinos to switch to electron antineutrinos. By 1996 the team had captured what looked like 20 or so such events. More recently, however, a similar experiment in England, Karmen, looked but saw no evidence for such a transition. The Karmen team now claims to have largely ruled out the possibility that LSND has seen oscillations. “[Karmen] sees no hint at all,” says Langacker. But many observers interviewed by Science felt that Karmen's limited data did not support such strong conclusions. “I think the jury is still out,” says Harvard's Mishra.

    “The LSND issue is pivotal,” says George Fuller of the University of California, San Diego. If the result holds up, he and others say, it points somewhere no one wants to go—to the existence of an extra, lightweight “sterile” neutrino. “There's no alternative,” Fuller says. Some of the neutrinos from the sun or the upper atmosphere could then transform into sterile neutrinos, so called because they essentially never interact with matter. With that extra outlet, all the data could be accommodated. “I think we'd best tread lightly in that direction,” Fuller cautions. “This is a totally made-up particle.” And it's one that could never be detected.

    Cosmic ovens

    Sterile neutrinos might, however, act as an unseen hand shaping the abundance of heavy elements in the cosmos. Some astrophysicists speculate they may be necessary to explain how the stellar explosions called supernovae can cook up heavy nuclei in the observed quantities. “It works extremely well,” says Baha Balantekin, a physicist at the University of Wisconsin, Madison, who studies supernovae. “It's almost too convenient.”

    A supernova begins as a heavy star that runs out of fuel and collapses under its own weight. The enormous pressure is thought to force neutrons to join up with existing atoms and forge heavy elements such as gold. At the same time, a shock wave spawned by the collapse heats the material, creating a storm of neutrinos in the star's core. The neutrinos flee, carrying away some 99% of the energy of the explosion, says Balantekin. On their way out, however, some of the electron neutrinos would convert neutrons to protons and stall the formation of heavy elements. But if these neutrinos can oscillate into sterile neutrinos, models of the element-forming process more easily produce the observed abundances. He cautions, though, that the models are still too uncertain to make definite predictions.

    One way out of some of these theoretical quagmires would be to get a firmer idea of exactly how massive ordinary neutrinos are. A raft of current or planned experiments should sharpen the picture of neutrino mass (see table). And Joel Primack, a physicist at the University of California, Santa Cruz, suggests an astrophysical gauge. He points out that because massive neutrinos would exert a gravitational tug, they should leave their imprint on the density and arrangement of galaxies. By charting galaxy clusters and the cosmic voids in between, the Sloan Digital Sky Survey—a mapping effort that will take in a region more than a billion light-years across—may be able to roughly discern the average mass of the neutrinos. Their effect on cosmic structure would be small, however, because other kinds of invisible matter are also thought to have sculpted the universe.


    A nearby supernova explosion could also yield clues to the mass, in the timing of the neutrino burst that would likely be picked up in Super-Kamiokande and other detectors. If neutrinos have mass, they travel slightly slower than light, at a speed that depends on their energy. Ejected at a variety of energies from the supernova, the neutrinos would spread out as they crossed the galaxy, arriving over a period of time rather than in a single burst. But because such explosions happen only a few times a century, physicists will again have to be patient to learn new facts about the neutrino.
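    The size of that spread can be sketched with the standard time-of-flight relation (again textbook physics, not from the article): a neutrino of mass m and energy E falls behind light by roughly Δt ≈ (L/2c)(mc²/E)². Assuming a galactic supernova 10 kiloparsecs away, a 1 eV neutrino, and a typical 10 MeV energy:

```python
# Rough time-of-flight delay for a massive neutrino from a galactic
# supernova: dt ~ (L / 2c) * (m c^2 / E)^2. All inputs are illustrative.
c = 2.998e8       # speed of light, m/s
kpc = 3.086e19    # one kiloparsec in meters

L = 10 * kpc      # assumed distance to the supernova
m = 1.0           # assumed neutrino mass, in eV
E = 10e6          # typical supernova neutrino energy, in eV

dt = (L / (2 * c)) * (m / E) ** 2
print(f"delay relative to light: {dt * 1e3:.1f} ms")
```

    Because the delay scales as 1/E², neutrinos spanning a range of energies would arrive smeared out over milliseconds or more, and the shape of that smearing would encode the mass.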
