News this Week

Science, 28 Feb 2003:
Vol. 299, Issue 5611, p. 1290



    AIDS Vaccine Trial Produces Disappointment and Confusion

    1. Jon Cohen

    On Saturday, 15 February, a team of data analysts from VaxGen checked into a hotel a short drive from their offices in Brisbane, California, using fictitious names. The scientists had come to “unblind” the results of the first-ever, real-world test of an AIDS vaccine, a 4-year study involving 5000 people. VaxGen was so concerned that no one outside this team—and in particular no investors—would learn anything about the results until the company formally released them that the scientists agreed to speak to their families and other outsiders during their weeklong stay only through an intermediary. “There were concerns that people would read something into the tone of our voices or the spring in our walks,” says Phillip Berman, the company's chief scientist. Had outsiders spotted the scientists around midnight, they might well have detected that the results were bitterly disappointing.

    Late that evening, Berman and his colleagues saw the first comparison of HIV infection rates in those who received placebo shots versus the two-thirds of trial participants who received the vaccine itself. “We saw absolutely no difference between the vaccine and placebo groups,” says Berman. “Our first reaction was, ‘There's a mistake.’” He adds: “Everyone was pretty depressed.”

    The next day, however, the mood brightened. When the scientists broke the data down into racial groups—which they say was part of the original design—the vaccine appeared to have worked in black, Asian, and mixed-race participants. “The numbers were small, which concerned us,” says Berman, who with Donald Francis, VaxGen's president, helped launch the company in 1995 from the ashes of an AIDS vaccine program at the biotech powerhouse Genentech. “But the result was highly statistically significant,” adds Berman. Subsequent analyses of the immune responses in the vaccinated group further buttressed the scientists' confidence. “They were pretty incredible results,” Berman reports.

    Berman and his colleagues released the results in a Webcast on 24 February. Francis, a high-profile AIDS epidemiologist since the earliest days of the epidemic, opened the unusual briefing. “This is a very interesting time that we're in,” said Francis. Indeed, VaxGen's analyses touched off a debate that promises to reverberate around the community for weeks, if not months, to come.

    The trial, which took place in the United States, Puerto Rico, Canada, and the Netherlands, predominantly involved gay men at high risk of becoming infected. The researchers reported that 5.7% of the vaccinated group and 5.8% of those who received the placebo became infected with HIV, strongly indicating that the vaccine offered no benefit whatever. But VaxGen CEO Lance Gordon argued that the analyses of racial subgroups showed “clear evidence of vaccine efficacy.” In particular, VaxGen showed data indicating that the vaccine—which contains a genetically engineered version of HIV's surface protein—protected 67% of the black, Asian, and mixed-race participants. An analysis of blacks alone revealed that the vaccine worked a startling 78% of the time (see table).

    Black and white?

    Despite overall negative results, VaxGen's Donald Francis sees hope in the subgroup analyses.
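    The headline figures follow from the standard relative-risk-reduction formula for vaccine efficacy. A minimal sketch using the infection rates quoted in the article (note that VaxGen's published 3.8% point estimate came from the exact infection counts, so this rates-only version only approximates it):

    ```python
    def vaccine_efficacy(rate_vaccinated, rate_placebo):
        """Relative risk reduction: 1 - (infection rate with vaccine / rate with placebo)."""
        return 1.0 - rate_vaccinated / rate_placebo

    # Trial-wide rates from the article: 5.7% of vaccinees vs. 5.8% of placebo recipients.
    overall = vaccine_efficacy(0.057, 0.058)
    print(f"{overall:.1%}")  # about 1.7%, i.e. essentially no protection
    ```

    By the same formula, the 78% figure for black participants means they became infected at roughly one-fifth the placebo rate in that subgroup; with only 13 infections in total behind that estimate, the statistical fragility that critics point to below is easy to see.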


    Hours after VaxGen made the data public, AIDS researchers began pasting asterisks and red flags all over the results. The subgroup analyses “are very interesting and provocative,” says Anthony Fauci, head of the U.S. National Institute of Allergy and Infectious Diseases. “I was quite surprised.” But Fauci—whose institute, over the strenuous objections of Francis and Berman, all but abandoned this vaccine in 1994 when early human trials proved disappointing—stresses that, overall, “efficacy did not occur.” And he adds: “I'd like to have hard-core statisticians really pore over the data and let us know what it means.”

    Steven Self, a biostatistician at the University of Washington, Seattle, who specializes in AIDS vaccines (and who once advised VaxGen about the trial's study design), has strong reservations about subset analyses in general. “Subset analyses are notoriously difficult to interpret, and they're doubly difficult when the overall result is nil, which is the case here,” says Self.

    Seth Berkley, head of the International AIDS Vaccine Initiative, a nonprofit organization that bankrolls development of AIDS vaccines, is more blunt. He notes that VaxGen's subanalysis hinged on “just 13 infections” among black participants. “Pull out 13 people from a large study—no matter whether they're left-handed homosexuals or whatever—and I certainly worry more about the results.” Four AIDS advocacy groups—the AIDS Vaccine Advocacy Coalition, Project Inform, the Treatment Action Group, and the Gay Men's Health Crisis—issued a joint statement calling VaxGen's analysis “misleading and premature.” The groups urged the company to submit the results to an outside panel of experts.

    VaxGen argues that the differences may be explained by the fact that vaccinated blacks had higher levels of anti-HIV antibodies than their white counterparts did, but the company has not yet released the data to back that assertion. John Moore, an HIV antibody expert at Cornell University who has long voiced strong criticisms of the vaccine, is skeptical. “I don't buy this,” says Moore. “I don't think there's a biological mechanism that will explain this.”

    “It's not implausible that there would be differences in immune responses in different populations,” says geneticist Stephen O'Brien of the U.S. National Cancer Institute. “But I wouldn't take this as proof.” He notes that African Americans have on average a 20% admixture of Caucasian genes, which “makes genetic association studies with small numbers suspicious.” HIV geneticist Richard Kaslow of the University of Alabama, Birmingham, says he's especially dubious about the huge differences: 3.8% protection in the overall study group, but 78% among blacks. “That this would select a marker in blacks that has a 20-fold difference in vaccine efficacy is not impossible, but it's hard for me to imagine,” says Kaslow.

    Berman readily concedes the need for further study. “You can't analyze a 4-year trial in a week,” he says. VaxGen plans to discuss its results in detail next month at an AIDS meeting in Banff, Canada. The company also has an efficacy trial of its vaccine under way in Thailand, which should yield results by this fall.


    Britain to Cut CO2 Without Relying on Nuclear Power

    1. Daniel Bachtold

    CAMBRIDGE, U.K.—Britain and the United States may be marching side by side to war in Iraq, but their energy policies could not be more different. Prime Minister Tony Blair's government announced this week that it wants to up the ante on reducing carbon emissions over the next half-century, without building any new nuclear power stations. Lauded by environmentalists as “a crucial landmark,” the Energy White Paper is nonetheless taking heavy flak from energy experts.

    In what seems the death blow to nuclear energy in this country, the white paper outlines a plan to reduce levels of carbon dioxide in the atmosphere by increasing funding and incentives for companies to invest in renewable energy sources, such as wind, wave, and tidal power. “Climate change is a clear and present danger,” says Trade and Industry Secretary Patricia Hewitt. “The government is serious about cutting carbon emissions, but we know this cannot be achieved without a fundamental review of the way we produce and consume energy.”

    Over the next 50 years, the U.K. aims to cut its carbon dioxide output by 60% from today's levels, substantially more than is required by the Kyoto Protocol. It intends to do so by setting tougher standards for energy efficiency and by boosting renewable energy from its current 3% of total energy capacity to 10% by 2010 and 20% by 2020. If achieved, this ramping up of renewables will offset the decline of nuclear power as the country's 33 nuclear reactors—which now produce 26% of Britain's energy—reach the end of their working lives over the next 30 years.

    Gone with the wind.

    Over the next few decades, wind, waves, and tidal power will replace the U.K.'s aging nuclear stations.


    Prior to publication of the white paper, several scientific bodies, including the Royal Society and the Institute of Physics, as well as the government's own chief scientific adviser, David King, all warned the government against abandoning nuclear power entirely. And the government has not shut the door: If renewables do not fill the gap, new nuclear stations could be built.

    Energy experts consulted by Science were generally skeptical of the government's plans. “To try to reduce carbon dioxide by 60% is a fantastic thing to do. But I don't think it is remotely achievable,” says Ian Fells, an energy consultant and professor of energy conversion at the University of Newcastle upon Tyne. And electrical engineer Mike Laughton of London's Queen Mary College believes that a 20% share of renewable energy is wishful thinking: “It is totally aspirational and not realistic at all.”

    The government has put several measures in place to achieve its 20% ambition. There is $95 million in new money for renewable projects, raising spending on renewable energy to $550 million over 4 years. Further tax breaks will endow the renewables industry with an estimated $1.6 billion a year by 2010. In addition, planning regulations will be loosened to speed approval of onshore and offshore wind farms.

    Although critics of the white paper concede that renewable energy needs to be pushed, they argue that a mix of nuclear and renewables is more realistic. Wind is a notoriously unreliable power supply, they say, so nuclear energy or conventional gas-fired power stations are still needed as a backup. “A wind policy is not an emission-free policy in total,” says Laughton. “[The white paper] will be taken to pieces gradually and sorted out.”


    Java Skull Offers New View of Homo erectus

    1. Ann Gibbons

    Since the first fossil of the long-legged human ancestor called Homo erectus was found on the island of Java in 1891, researchers have studied several dozen H. erectus skulls from all over Africa and Asia. But this ancient human species is still capable of springing surprises.

    On page 1384, a team of Japanese and Indonesian researchers presents another find from Java—a beautifully preserved skull that offers researchers their first glimpse of the base of the cranium in H. erectus. The unexpectedly modern anatomy casts doubt on one hypothesis about the development of the large brains of our own species.

    “This is an important find because it is the first H. erectus with a reasonably complete cranial base, and it looks modern,” says paleoanthropologist Dan Lieberman of Harvard University. The new skull also links the early and late forms of H. erectus on Java, suggesting that the species lived continuously on this remote outpost for about a million years, rather than arriving in distinct, successive migrations, as some researchers had proposed.

    Java has yielded an unusual wealth of human fossils, including 23 skulls, as well as teeth and bones from more than 100 individuals. Most of the skulls come from two different areas, with the oldest from sites in central Java and the youngest from east Java.

    The new find was made on 1 October 2001, when an Indonesian construction worker collected sand and a skull from the Solo River at Sambungmacan in central Java. The construction company alerted paleontologist Fachroel Aziz of the Geological Research and Development Centre in Java, who contacted the paper's lead author, Hisao Baba, of the National Science Museum in Tokyo and the University of Tokyo.

    Because the skull was found underwater, there is no associated sediment with which to date it. But the skull's anatomy looks intermediate between ancient and recent Javanese H. erectus in central and eastern Java. “We have some kind of missing link,” says Baba.

    Heads up.

    Modern anatomy in the interior of this skull surprised researchers.


    The new skull is flat on top, like skulls from central Java dated at 1 million to 1.5 million years ago or even older. Yet it also shares features (such as the shape of an opening for a nerve near the temple) with the youngest fossils from eastern Java, dated anywhere from 50,000 to 400,000 years ago.

    “This is important because it takes away the suggestion that the later H. erectus in Java are something substantially different from earlier H. erectus,” says paleoanthropologist Susan Anton of Rutgers University in New Brunswick, New Jersey. Baba adds that the Java people probably were an isolated population, not ancestral to modern humans, based on several idiosyncratic features found only on Javanese skulls.

    But the skull does possess one strikingly modern feature: the delicate bony platform behind the eyes on which the brain rests, called the cranial base. In modern humans, the cranial base is bent, or flexed, at a much sharper angle than it is in Neandertals and other archaic Homo fossils. This was thought to allow the face to grow tucked under the braincase during development, rather than jutting forward as in chimpanzees or H. erectus.

    Lieberman proposed that this change might also have allowed a shift from the oblong, American football-like head shape seen in both H. erectus and Neandertals to the characteristic globular or volleyball shape of the modern human skull. Thus, he proposed that cranial base flexion was developmentally linked to the unique expansion of the modern human brain (Science, 15 February 2002, p. 1219).

    But the new skull does not support that view. When the researchers used computerized tomography scans to peer inside the skull, they found a strongly flexed cranial base and a “teeny” brain, as Lieberman puts it. Even he says that if the flexion is verified in other H. erectus fossils, his hypothesis may prove baseless. In that case, researchers will have to find a new angle on the developmental changes that led to the modern human brain.


    New Front-Runner for Carving Martian Gullies

    1. Richard A. Kerr

    The discovery of gullies on Mars 3 years ago electrified planetary scientists. Water seemed to have gushed from the planet's rocky cliffs, and where there's liquid water, there could be life. But how could water ever flow on the frigid surface of the Red Planet?

    At a NASA press conference here last week and in a paper published online by Nature on 19 February, planetary scientist Philip Christensen of Arizona State University in Tempe presented intriguing new images from Mars. They suggest that slow melting of snowfields left from the last martian ice age could have eroded gullies beneath the snow. If so, lingering snow patches might still harbor life. The snow-patch hypothesis has shot to the front of a large pack of gully explanations.

    The idea's prominence doesn't necessarily make it a winner, however. Researchers are welcoming the new entry but, given the difficulties of interpreting planetary remote sensing, they're stopping far short of declaring it the answer. “It's probably more plausible than anything I've seen so far,” says Mars geologist Michael Carr of the U.S. Geological Survey in Menlo Park, California, “but no one's come up with a really plausible way of making these gullies. They're a puzzle.”

    Christensen pieced together his model for gully formation from ideas that go back as much as 30 years. His inspiration came from the striking images returned by the camera he operates on the orbiting Mars Odyssey spacecraft. It has a unique combination of capabilities: sufficient resolution to make out gullies, but enough breadth of view to include their surrounding terrain. In these just-right images, Christensen could see that gullies at various stages of development are closely associated with patches of “pasted-on” or mantling material first seen in the early 1970s and now thought to be rich in ice or even mostly ice. Some gullies run beside these snow patches or seemingly begin to form in them, suggesting to Christensen that one feature may have caused the other.

    Cause and effect?

    Lingering ice (upper right) may hide gullies that it helped form, such as those now exposed on this crater wall.


    In Christensen's scenario, snow fell on the martian midlatitudes tens of thousands of years ago, when the increased tilt of the planet made water likely to sublimate from polar ice and fall as snow at lower latitudes. When Mars again became more upright, more sunlight fell on the snowfields, penetrated just beneath the surface, and warmed the ice there past the melting point. The dust-tinged snow acted like a greenhouse, letting in sunlight but holding in the heat. The resulting meltwater eventually worked its way down to erode the loose soil on any steep slope beneath, while the snow protected the water from evaporation. Where the snow is now gone, gullies show up. Beneath remaining patches of dirty snow, liquid water—and, conceivably, ice-loving algae—may remain.

    The snow-patch model has plenty of competitors, such as water gushing from ice-sealed aquifers, brine seepage, and carbon dioxide bursting from dry ice in the rock, among others (Science, 22 June 2001, p. 2241). Christensen sees a clear advantage in snow patches being able to both promote melting and protect the liquid water as it erodes. Such snow patches would also work on the central peaks of impact craters, he notes, where gullies are seen but aquifers are unlikely.

    Planetary scientists concede that Christensen's snow patches have some advantages, at least compared to the alternatives. “We're finding that some models work a little better than others,” says Michael Mellon of the University of Colorado, Boulder, “but none of them works well,” including his own, which involves seepage from an aquifer.

    A particular concern with Christensen's model is just how much meltwater would be produced. It might be “really difficult to get things to melt and produce runoff under current atmospheric conditions,” says Gary Clow of the U.S. Geological Survey in Denver. He did the melting calculations published in 1987 that Christensen depends on. “Everything had to be just right,” Clow adds. Even so, “it's an interesting idea that's worth pursuing,” says Mellon. But until a clear winner emerges, “I don't know if we need to have more press conferences.”


    Yale Grad Students Prepare to Strike

    1. Jeffrey Mervis

    Chronic labor problems at Yale University have taken a turn for the worse in a dispute that revolves around a decade-long attempt to organize more than 2000 graduate students.

    Last week, the Graduate Employees and Students Organization (GESO) at Yale voted 482 to 141 to join two recognized unions representing support staff in a strike unless school administrators by 3 March “agree to a fair negotiating process” over the right to unionize. About 30% of GESO members are pursuing degrees in the life and physical sciences, a larger share than in most graduate-level labor organizations. Student organizers have vowed to stay out of their labs and classrooms for 5 days, forgoing their duties as research and teaching assistants in hundreds of undergraduate courses. “We deserve a say in our working conditions,” says GESO activist Maris Zivarts, a doctoral student in molecular biology.

    Labor relations are sufficiently sensitive at Yale to warrant a prominent link on the home page of its office of public affairs. In 1996, the two unions representing support staff struck for 4 weeks before winning their current contracts, which expired at the end of February. Although 27 U.S. universities bargain with unions representing graduate students, Yale president Richard Levin has declared repeatedly that “the unionization of graduate students would not be in the best interest of the students” or the Yale community. Yale has said it would not recognize a graduate student union, and GESO has not pushed for a federally supervised election on the issue.

    GESO plans to set up picket lines and hold teach-ins during the strike, set for a week before spring break. Yale's administration has urged faculty members to maintain normal class schedules and to pick up the slack left by absent teaching assistants, although dining hall services have been cancelled. “Many graduate students will continue what they're doing,” deputy provost Charles Long predicted to the Yale Daily News. “I don't expect classes will be very disrupted.”


    Recipe for Rocket-Free Space Travel: Dive In and Paddle, Patiently

    1. Charles Seife

    His ideas may seem to violate basic laws of motion, but a physicist has figured out a way to propel a spacecraft without expelling hot gas, gravitational waves, or anything else in the opposite direction. The discovery, published online this week by Science, won't lead to a practical propulsion system, but gravitational physicists are nonetheless excited about the notion of swimming in spacetime.

    “The whole thing is fascinating, actually,” says Frank Wilczek, a physicist at the Massachusetts Institute of Technology (MIT). “I don't know if it will really help us with any fundamental problem in physics, but it can pose questions that are almost philosophical—simple-seeming questions that have profound answers.”

    At first glance, the idea of propellantless propulsion in space seems self-contradictory. Newton's third law of motion, which states that every action must have an equal and opposite reaction, seems to imply that something must compensate for the spacecraft's propulsion. Rockets do this by spewing gas out of their engine, driving the machines forward; solar sails do it by bouncing particles of light back in the direction opposite to the spacecraft's acceleration.

    But as Jack Wisdom, a physicist at MIT, explains in his paper, there's another way to move about in space. Although it took some complicated math to prove it, the central idea is analogous to something you can do while kneeling in an office swivel chair. By thrusting your arms in and out and moving them about, you can slowly turn the chair around. Angular momentum is conserved—you can't make the chair spin on its own—but by distributing the angular momentum in different parts of your body at different times, you can change your orientation. Similar maneuvers explain how a cat dropped upside down from a height can twist its body and retract and extend its legs so that it flips over and lands on its feet.

    According to Wisdom, the same sort of idea can work when a body is sitting on a curved surface, such as the four-dimensional surface of spacetime. A cat (or any other object bigger than a point) can stretch its limbs, move them, and retract them again, over and over and over, and slowly move forward. “It really is a swimminglike motion,” says Wisdom. As in the swivel-chair experiment, in which you can change orientation but not spin, the spacetime swimmer can change its position but cannot affect its overall velocity. Before and after each “stroke,” the swimmer will be moving at the same speed.

    Kick start.

    By stretching and contracting a pair of moving limbs, a “swimmer” could theoretically traverse curved spacetime.


    “In general relativity, you are accustomed to the idea that spacetime has a dynamical existence, that it's a medium, not just nothingness,” says Wilczek. “Space is not homogeneous, and you can push against the bumps, so to speak. Once you've swallowed Einstein, this doesn't seem so startling after all.” Unlike an Olympic swimmer, though, the spacetime swimmer isn't forcing the medium to flow in the opposite direction—just taking clever advantage of the universe's built-in conservation laws.

    Unfortunately for would-be interstellar travelers, the effect is very tiny. A meter-sized object gyrating in a spacetime curved as dramatically as the surface of Earth—far more sharply than most regions of spacetime—could make headway of only about 10⁻²³ meters per contortion. At one stroke per second, it would take the swimmer hundreds of thousands of years to move the width of an atom.
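    The "hundreds of thousands of years" figure is just the ratio of an atomic width to the per-stroke displacement. A back-of-the-envelope check using the article's numbers:

    ```python
    stroke_gain = 1e-23       # meters gained per contortion (the article's estimate)
    atom_width = 1e-10        # meters, the rough diameter of an atom
    seconds_per_year = 60 * 60 * 24 * 365

    # At one stroke per second, the number of seconds needed equals the number of strokes.
    years = (atom_width / stroke_gain) / seconds_per_year
    print(f"{years:,.0f} years")  # about 317,000 years
    ```

    That is, crossing a single atom's width takes on the order of 10¹³ strokes, which at one per second works out to a few hundred millennia.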

    Nevertheless, the effect has scientists excited. “It's a really pretty idea, beautiful,” says David Finkelstein, a physicist at the Georgia Institute of Technology in Atlanta. According to Wilczek, Wisdom's work raises questions about what it would mean to be “moving” against the pure fabric of spacetime if there were no stars and galaxies to help you judge your motion—an issue philosophers of science have been debating ever since physicist Ernst Mach raised it in 1893. Figuring out whether a swimmer could still move in a matterless universe might help probe the meaning of motion. “You can ask that kind of question sensibly,” Wilczek says—although answering it might still be an upstream swim.


    Smaller Station Crew Would Put More Pressure on Research

    1. Andrew Lawler

    The grounding of the U.S. space shuttle fleet is taking a toll on research aboard the international space station. U.S. and Russian officials are thinking about sending up two rather than three astronauts when a Soyuz capsule—the only link now to the orbiting laboratory—arrives in May to relieve the current three-person crew. The smaller crew would allow officials to minimize food, air, and water consumption while NASA searches for the reasons behind the recent Columbia disaster.

    Russian space chief Yuri Koptev told reporters in Moscow last week that the replacement crew would focus on station maintenance until the shuttles resume their construction duties. “The maintenance of the station is a no less complicated task than its completion,” Koptev said. A reduced crew and limited launch capacity could restrict the time available for science to one-third of the amount planned before Columbia's destruction, NASA estimates.

    NASA and Russian space agency staffers are considering a two-member crew, confirms NASA spokesperson Debbie Rahn: “We're looking at it very closely to see if it makes sense.” Recently, NASA designated one of the three crew members as a science officer to emphasize the importance of station research.

    The change would still allow human physiology research, because much of the data collection occurs before and after the mission. But research involving large amounts of material, such as protein crystal growth, would not be possible, says NASA's Peter Ahlf, and there will be less room for product development. “We'll be happy to get [done] one-third or more” of the research planned before the accident, he says, adding that a halt in construction could free up some crew time for research.

    Koptev said that Russia may add another unmanned Progress mission, an automated spacecraft that ferries materials, to its scheduled launch this year of two Soyuz and three Progress supply ships. If the shuttle is still grounded next year, he said, a half-dozen Progress launches would be necessary to maintain station operations.

    Crew cut.

    The next Soyuz mission may ferry only two astronauts to the space station to conserve supplies.


    Koptev is hoping that U.S. aid will bolster Russia's perennially cash-strapped space program. But a U.S. law prevents NASA from purchasing additional vehicles unless the White House can certify to Congress that Russian companies are not selling ballistic missile technology to Iran. It also prevents a third party, such as the European Space Agency, from purchasing hardware on behalf of NASA. Although Koptev denies that any sales to Iran are taking place, U.S. officials say that his assertion would be difficult to confirm. At the same time, congressional staffers note that the law provides an exception for safety needs.

    Meanwhile, NASA officials and outside researchers continue sorting through data from the 80 experiments aboard Columbia. “We estimate that anywhere between 50% and 90% of the data was acquired” from experiments that beamed their data from the shuttle to satellites to Earth, says David Liskowsky, program scientist for the mission. Most of that research involved combustion, materials science, and fluid experiments.

    For example, about half of the data from an experiment designed to test the mechanics of granular materials, such as sand, was successfully downlinked. And nearly all of the data from an effort to measure changes in the viscosity of xenon—in order to understand complex fluid movement—was returned via satellite.


    NCI Goal Aims for Cancer Victory by 2015

    1. Jocelyn Kaiser

    The nation's cancer chief, National Cancer Institute (NCI) director Andrew von Eschenbach, has announced a startling new goal in the battle against cancer. His institute intends to “eliminate death and suffering” from the disease by 2015. The cancer research community is abuzz over the announcement. Some say that however well intended, the goal is clearly impossible to reach and will undermine the director's credibility.

    Von Eschenbach, who has headed the $4.6 billion NCI for a year, announced the 2015 target on 11 February to his National Cancer Advisory Board. He told board members that he did “not say that we could eliminate cancer.” Rather, he continued, his goal is to “eliminate suffering and death due to this disease.” NCI is working on a strategy to do that by discovering “all the relevant mechanisms” of cancer, developing interventions, and getting treatments to patients.


    NCI director Andrew von Eschenbach sets the tempo with an optimistic theme.


    Many researchers have been skeptical of such deadline setting ever since the War on Cancer, President Richard M. Nixon's plan to cure cancer by 1976, failed to deliver. Von Eschenbach's announcement, which reached many people through a report last week in the newsletter The Cancer Letter, drew mixed reactions. Setting a target date is not “necessarily a bad thing,” says molecular biologist Phil Sharp of the Massachusetts Institute of Technology. But given the 7 to 10 years it takes to test new therapies in the clinic, “I doubt we're going to be able to accomplish that” in 12 years. Science historian Robert Cook-Deegan of Duke University in Durham, North Carolina, who suggests that deadlines are more suitable for technological challenges such as computing and sequencing the human genome, notes: “I like the spirit of what he's trying to do,” but “making outlandish promises damages a leader's credibility” among researchers. Biologist David Baltimore, president of the California Institute of Technology in Pasadena and, like Sharp, a Nobel Prize winner, says that cancer is a “notoriously slippery disease,” and the 2015 target is “a remarkable goal to make as a public statement.”

    Others who preferred not to be identified were less charitable. One researcher at a major cancer center said that setting a goal that is “patently impossible to accomplish … serves no useful purpose” and “[substitutes] pious wishes for realistic planning.” Von Eschenbach defended the plan in a brief interview with Science. “We have not made this goal capriciously,” he said, but only after a year of deliberation with NCI leaders. He said that a number of targeted initiatives—from the Human Genome Project to National Institutes of Health Director Elias Zerhouni's plan to “reengineer” clinical trials—will make it possible to “accelerate the process.” “This is a milestone for the entire cancer community to collectively work towards,” von Eschenbach said.

    Sharp suggests that the public is more sophisticated about the challenges of cancer than it was 30 years ago. “Maybe in some people's minds it will discredit the effort, but people have been disappointed before”—and so they won't pin too much hope on the announcement.


    Graduate Training, Research Councils Are Big Winners

    1. Wayne Kondro*
    1. Wayne Kondro writes from Ottawa.

    OTTAWA—The Canadian government last week unveiled a program to lure elite students into science Ph.D. programs on their way to becoming university professors and government researchers. The carrot: money. The program, Canada Graduate Scholarships, will offer fellowships of $23,100 a year—double the size of current federal awards. The 4000 scholarships to be funded would more than double the number currently offered by Canada's three research councils.

    The new program is one of several goodies for Canadian scientists contained in the 2003–04 budget. In contrast to the deficit-plagued U.S. budget, Canada's budget is showing a healthy surplus. That has allowed Prime Minister Jean Chrétien to include 10% hikes for the country's three research granting councils; a permanent $150-million-a-year program to pay universities for the cost of supporting research (Science, 27 October 2000, p. 687); $50 million for new applied and clinical genomics projects; and annual contributions to the Atacama Large Millimeter Array in Chile, the Extended Very Large Array project in New Mexico, and other international astronomy projects. The budget would boost research spending, now roughly $5 billion, by $1 billion over 5 years.

    Canadian officials expect an estimated 5000 professors a year to retire over the next decade, and the new scholarships are a way to refill the pipeline. Although they can help only a fraction of the 100,000 students now pursuing master's and doctoral degrees, the scholarships and indirect cost reimbursement are seen as two shots in the arm for graduate education. “These are major steps,” says Robert Giroux, president of the Association of Universities and Colleges of Canada. “It's probably not fulfilling total [graduate training] needs, but that doesn't mean there won't be more in future budgets.”

    The $70 million budgeted for the graduate fellowship program will support 2000 students each at the Ph.D. and master's levels. The three granting councils currently offer a combined 1600 fellowships for Ph.D. students, and the Natural Sciences and Engineering Research Council (NSERC)—the only one of the three councils with a master's program—awards 1100 master's fellowships. Officials also hope that the increased doctoral stipends (those for master's degrees would hold steady at $11,500) will lower attrition rates that reach 50% in some disciplines.
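    As a rough check on the reported figures (an illustrative back-of-the-envelope calculation of our own, not from the budget documents), the $70 million is consistent with funding 2000 students at each level at the stated stipend rates:

    ```python
    # Illustrative check of the fellowship budget reported in the article.
    # Stipend and slot figures come from the article; the arithmetic is ours.
    phd_stipend = 23_100      # new doctoral stipend, CAD per year
    masters_stipend = 11_500  # master's stipend, held steady, CAD per year
    phd_slots = 2000
    masters_slots = 2000

    total = phd_slots * phd_stipend + masters_slots * masters_stipend
    print(f"Total annual cost: ${total:,}")  # → $69,200,000, close to the $70 million budgeted
    ```

    The roughly $800,000 gap between this total and the $70 million figure would presumably cover administrative costs or rounding in the reported numbers.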

    A brighter picture.

    University of Toronto grad student Brad MacIntosh, who studies how the brain recovers from a stroke, hopes to apply for one of the new fellowships.


    Despite that largesse, the government's plan could also create problems for students and their universities. The new scheme could put pressure on the research councils to raise stipends for other fellowships, as well as for graduate students funded through research grants. “Otherwise, we'll have two graduate students working side by side in a lab, one making $13,800 and one making $23,100. That won't be tolerated,” says Canadian Institutes of Health Research president Alan Bernstein. But NSERC president Thomas Brzustowski doesn't foresee a problem: “It'll just add another layer, a prestigious layer, of scholarships.”

    Students are more concerned that cash-strapped universities will use the new program as an excuse to hike graduate tuition fees, particularly in light of the budget's failure to increase federal cash transfers to the provinces for postsecondary education. In most disciplines, those fees have already more than doubled since being deregulated in 1998, notes Jesse Greener, head of the Canadian Federation of Students graduate caucus and a doctoral student at the University of Western Ontario in London.

    Nor will funding an elite number of students at a higher level necessarily translate into better completion rates or raise the overall quality of graduate education, argues Brad MacIntosh, a first-year graduate student in medical biophysics at the University of Toronto. “What's the point of having a couple of excellent students if a department is crumbling?” asks MacIntosh, referring to the precarious financial condition of many universities. Even so, he plans to apply for the scholarship, which would nearly triple his current university-funded stipend of $7900.

    In contrast to the debate over how best to strengthen graduate education, there's unanimous support for boosting the budgets of the three research councils, which now total $915 million. Bernstein and Brzustowski expect higher success rates in basic grants competitions. Bernstein also looks for an increase in strategic initiatives at each of his 13 institutes, while Brzustowski plans to expand university-industry programming. Marc Renaud, head of the Social Sciences and Humanities Research Council, says the money will permit a new round of grants under a popular program that encourages university researchers to tackle community needs (Science, 13 November 1998, p. 1237).

    The government's investment will have a ripple effect beyond Canada's borders, predicts David Strangway, president of the Canada Foundation for Innovation: “You put all of these things together, and it gives Canadian universities a real capacity to compete on the international front.”


    ITER Negotiations Heat Up as All Sites Pass Muster

    1. Dennis Normile*
    1. With reporting by Wayne Kondro in Ottawa.

    TOKYO—For 17 years, backers of the International Thermonuclear Experimental Reactor (ITER) have been itching to demonstrate that the process that powers the sun can be used to generate energy on Earth. Now, as a decision nears on where and how to build the $5 billion experiment, negotiations among the partners are starting to generate a little heat themselves.

    Last week in St. Petersburg, Russia, all four proposed sites were judged equally good on technical merit. That has led one of the competitors—Canada—to up the ante in hopes of keeping pace with the financial packages of its competitors. Alongside that competition is another one, equally fierce, to build the machine's various components, with the most commercially lucrative ones at the top of every partner's list. “We're not going to be able to resolve these issues just among the ITER negotiators,” admits Robert Aymar, the project's international team leader, who is based in Garching, Germany. He acknowledges the need for some high-level geopolitical horse trading.

    First proposed in 1986 as a $10 billion project to demonstrate the feasibility of harnessing nuclear fusion to generate power, ITER has survived a major redesign and a reshuffling of key players. The latest lineup includes three countries and regions that would like to host the facility—Japan, the European Union (E.U.), and Canada—and another three seeking to be full partners but not hosts: the United States, Russia, and China (see table).


    The heart of ITER is a mammoth vacuum vessel surrounded by several types of superconducting coils, some weighing as much as 900 tons, that will magnetically confine a hydrogen plasma in the shape of a doughnut. The plasma will be heated to the point of igniting fusion. Such experiments are considered the next step toward learning how to exploit fusion as a source of energy. In addition to the main reactor, the complex will include a dozen or so buildings and structures spread over 40 hectares. Construction is expected to take 10 years.

    Last week in St. Petersburg, a panel reviewing the technical suitability of each bid reported that “ITER may be successfully implemented at any of the candidate sites.” That leaves the would-be hosts scrambling to differentiate their bids. Satoru Ohtake, head of the Office of Fusion Energy at Japan's Ministry of Education, emphasizes that Japan plans to build Western-style housing, an international school, and various cultural amenities at its Rokkasho site at the northern tip of Honshu island. The French government is touting the weather and ambience of southern France in support of Cadarache, one of the E.U.'s two proposed sites. And Murray Stewart, president of ITER Canada, crows over the site-assessment committee's praise of the cultural and social advantages of his country's Clarington site near Toronto, “a major cosmopolitan city.”

    Hot idea.

    Japan hopes that its proposed site will carry the day with the partners on the International Thermonuclear Experimental Reactor.


    Aymar emphasizes that financial capacity will be a major consideration. Japan and the E.U., which has proposed a Spanish as well as a French site, have both promised to finance the cost of the buildings and surrounding infrastructure, including electric- and water-supply lines, wharves, and roads capable of supporting heavy loads. These items are expected to account for 20% to 25% of ITER's total costs. Canadian officials are now revising their original bid, a private-sector initiative, to make it more competitive by adding an as yet undetermined level of financial support from the federal government.

    Site selection is not the only thorny issue facing negotiators. Aside from the host, each of the other partners is expected to pick up at least 10% of the total cost by supplying components. A list of 85 packages includes the superconducting coils, the plasma-heating system, vacuum pumps, cryogenics, and diagnostic devices. Japan, the E.U., and Russia have already informally identified their preferred packages on a scale from 100% interest to zero interest.

    Not surprisingly, “everybody wants to supply the fusion-related high-technology packages, and no one is interested in the conventional components,” such as power transformers, says Yasuo Shimomura, deputy ITER team leader. Industry prizes such elements as the superconducting coils, the vacuum vessel, and the diagnostic devices, says Shimomura, because their development and production could have spinoffs in other high-tech areas.

    Fortunately, some of the most desirable packages can be split up. Shimomura says that ITER officials want multiple suppliers for the superconducting wires and at least two makers for the 19 critical toroidal field coils that will be spaced around the plasma doughnut. But other components, such as the various diagnostic devices that will monitor the x-rays and neutrons emitted by the plasma and its magnetism, are best supplied by one party. None of the partners wants to speak publicly about its negotiating strategy, which Ohtake likens to “a poker game where we make up the rules as we go along.”

    The overarching goal is to make sure that every partner receives enough chips to stay in the game. Under the most optimistic timetable, the participating governments will ratify an agreement in early 2004 on how and where to build ITER.

    AGING

    The Wisdom of the Wizened

    1. Laura Helmuth

    New data indicate that some of the mental declines that accompany aging aren't as bad as researchers once thought. And in many cognitive domains, the old have a lot to teach the young

    The study of how the mind changes with age can be pretty grim. Classic performance tests show similar trajectories across the life-span: Movement speed, visual acuity, and several types of memory steadily and inexorably decline, starting as early as one's 20s. People who behaved foolishly during their sophomore year of college would rightly be chagrined to learn that they were then at their peak mental performance.

    But in the past few years, researchers have developed a much more optimistic view of the aging mind. They're motivated, in part, by the observation that most older people fare well in the world—living independently, contributing to family members' lives, and solving crossword puzzles into their 80s and beyond. “The physiological and cognitive declines measured in the lab don't map perfectly or even arguably well onto everyday life,” says psychologist Laura Carstensen of Stanford University.

    Some of this revisionism comes straight from a reexamination of lab testing conditions. Many skills improve with age, it turns out, but the gains aren't picked up by standard cognitive screens. And certain testing conditions have exaggerated age-related declines in performance.

    When cognitive abilities are divided into those that build on existing knowledge and those that require new learning, clear differences emerge between youngsters and oldsters. With advanced age, it takes longer to pick up new skills. Although this message may be frustrating to a 60-year-old trying to program the darn VCR, it means that people retain or even improve their performance in domains they practice regularly—that is, in the skills they care about most.

    Older people outperform the whippersnappers in some spheres. For instance, elders have better social wisdom: They can evaluate a stranger's personality more accurately. They also have better verbal abilities.

    This isn't to deny that skills fade: Some important ones decline steeply, reliably, and almost universally (see Science of Aging Knowledge Environment, 2003/8/ns3). But researchers have found ways to improve older peoples' memory and mental abilities. Long-term studies of cognitive training regimens show that even prototypical age-sensitive tasks, such as quickly judging whether two complex figures are the same or different, improve with practice. The neuroanatomical news isn't so bleak, either: Recent studies show that the aging brain doesn't lose as many neurons as was once thought, and that adult brains continue to sprout new neurons (Science, 3 January, p. 32). Researchers have adopted a “much more hopeful, positive view than 5 or 10 years ago,” says Molly Wagster, program director for neuropsychology of aging research at the National Institute on Aging (NIA) in Bethesda, Maryland.


    Old people maintain peak performance in skills they practice regularly.


    One of the great comforts of age is that the old are generally happier than the young are. They're in better mental health, they navigate interpersonal relationships more adeptly, and they suffer fewer negative emotions. Much cognitive aging research is motivated by a desire to find ways to help old people overcome cognitive failings, but regarding overall well-being, “we may do very well to study older people to see how to help younger people,” says Carstensen.

    The mismeasure of the mind

    To isolate the effects of age, researchers try to make as clean a comparison as possible between subjects of different ages. Volunteers come to the same lab and take the same sets of tests, for instance. But the optimum testing conditions are not the same for all generations.

    Most labs gather data in the afternoon, a convenient time for researchers and undergrad volunteers. But about 75% of the old are morning people; that's when they perform best. This is not true of college-age controls, who are generally sharper in the afternoon. Lynn Hasher of the University of Toronto and her colleagues started testing 20-somethings at 4 or 5 p.m. and 60- or 70-somethings at 8 or 9 a.m. Age differences in performance on basic memory tests, such as recognizing sentences from a story or memorizing a list of words, were cut in half.

    Participating in a memory experiment can be more intimidating for old than young subjects. For example, young subjects learn a list of words better if researchers warn them in advance, rather than if they spring an unannounced recall test at the end of a session. For old people, the reverse is true, Hasher has found.

    Researchers have a theory about this, explains psychologist Thomas Hess of North Carolina State University in Raleigh. Older people are acutely aware of age-related declines in memory. Explicitly describing something as a memory test heightens older subjects' anxiety and makes them anticipate that they'll perform poorly. Given standard testing conditions, which usually involve explicitly asking volunteers to participate in a study of memory and aging, “we may bias ourselves toward finding greater negative effects,” says Hess.

    To look for bias, Hess and his colleagues tinkered with older subjects' expectations about their memory performance. Volunteers read one of two mock newspaper articles at the beginning of a testing session. One described the declines in memory that accompany aging, whereas another article emphasized research on the preservation of memory skills over the lifetime. As the team reported in January in the Journal of Gerontology, those who read a pessimistic account of memory and aging remembered 20% to 30% fewer words than did people who learned about the joys of aging. (If your memory feels particularly sharp later today, thank Science.)

    Social Solomons

    With experience, most people become wise to the ways of the world. Older people are particularly well attuned to judging character, for instance. Given a list of behaviors of some fictional person, older people overlook distracting but relatively unimportant actions and focus on those that are more diagnostic, Hess has found. Old subjects are more likely than young ones to accurately label a stranger as dishonest or intelligent, for instance. “Older people have become experts in the social domain,” he says.

    A keen awareness of character can even compensate for pervasive memory disruptions, Hasher and her team reported in the March 2002 issue of Psychological Science. It can help connect words to their source, for example. Researchers call the ability to remember where you learned something “source memory.” Failures of source memory, which increase dramatically with age, can be quite awkward; for instance, an elderly person may forget who told a joke and retell it to the source.

    To probe this frailty, testers often ask people to listen to a series of statements spoken by either a male or female voice. At the end, subjects read the statements and say which voice spoke them. Hasher and her colleagues added a twist: Before playing the tape, researchers told some subjects that one voice belonged to a saintly person who never told a lie, and that the other speaker was a dishonest cad. As in past experiments, old people had trouble remembering which voice spoke a given line. But those who were clued in to the trustworthiness of the speakers readily judged whether a given statement was likely to be true or false, suggesting that old subjects can remember information about the source of a memory when it's important to them.

    Old people engage the world in a distinctive way, and memory tests are notoriously susceptible to the level of engagement. To demonstrate how the same information can make a big impression or fade from memory, Hess's team asked young and old subjects to listen to a drawn-out description that was identified as either someone's experiences in a first job or their experiences while searching for a retirement home. Young people remembered details from both scenarios well; the job description slipped from older subjects' minds, but their memories for the housing search were keen.

    The old view.

    Researchers now question the assumption that most cognitive abilities are doomed to decay with age; language skills yield high scores even into one's 70s.


    Hasher points to a theme unifying such studies: Age differences are robust when researchers “ask for a piece of information that just doesn't matter much. But if you ask for information that's important, old people do every bit as well as young adults.” As Hess puts it, “young people have [mental] resources to burn. As people get older, they get more selective in how they use their resources.”

    Automatic drive

    A few classic tests of cognitive abilities show improvements over a lifetime. In a massive study of people of many different ages, psychologist Denise Park of the University of Illinois, Urbana-Champaign, and colleagues showed that, as expected, many skills tank with age. These include basic skills such as short-term memory and speeded processing, which are often tested, respectively, by having someone recite back a list of numbers or make a snap decision about whether two columns of letters match. In contrast, verbal abilities continue to shine even in 80-year-olds, the researchers reported last year in Psychology and Aging (Science, 21 June 2002, p. 2131).

    Tests of vocabulary and the ability to pick synonyms and antonyms are fairly straightforward ways to measure what Park calls knowledge about the world. People build such knowledge over a lifetime and develop many areas of expertise that help them compensate for age-related declines.

    Park and others distinguish between highly practiced skills that become automatic—such as using words correctly—and those that require new learning, such as picking up a new language. Activities that require a lot of effort on the part of a young person may be fairly automatic in someone who's been around a while and cultivated them. For instance, Park explains, a senior researcher has plenty of experience in giving a talk. She knows her way around a slide projector, is accustomed to looking out at a sea of drowsy faces, and may have presented some of the same slides in the past. A junior scientist may fumble with the minutiae and end up giving a worse talk.

    When people continue to practice something, such as playing chess or the piano, they can compensate for age-related declines such as shorter attention spans or hesitant fingers. But with only so many practice hours in a day, says psychologist Paul Baltes of the Max Planck Institute for Human Development in Berlin, “the domains in which older people do well will become fewer and fewer.” As a result, healthy old people choose the skills that are important to them, continue to stay active in those milieux, and thus maintain a high level of functioning.

    One domain that is relatively well preserved in many old people is social wisdom, which Baltes's team measures as an ability to find sensible solutions to life problems. For instance, people are asked how they might advise a 15-year-old girl who wants to run away and get married. “Wisdom is the prototype of positive aging,” says Baltes. Many older people “invest themselves in social and emotional competence.”

    Looking on the bright side

    Social wisdom isn't limited to performance on lab tests; Carstensen and others have found that older people handle interpersonal conflicts more effectively than do younger people. In this way, as in many others, the old seem particularly adept at maintaining mental health—or so they say about themselves. The reasons are somewhat disputed. Carstensen has evidence that older people, conscious of their limited life expectancies, concentrate on making the here and now more meaningful and rewarding. In a somewhat less flattering light, Baltes suggests that many older people compare themselves to those less fortunate in order to maintain satisfaction with their lives.

    The study of emotional health and aging is fairly new. Carstensen suggests that's because “researchers absolutely knew, they were confident, that the story would be the same [one] that is told about physical aging”—namely, that emotional health would ebb with advancing years. But in fact, “emotional experience and regulation improve with age, despite the real onslaught of losses that aging does entail.”

    Many differences in how old and young people learn and remember things, Carstensen says, are a result of how they see the world. Young people, she says, see the future as open-ended and favor learning new things. Over the years, attention shifts to the present: “People live in the moment, not preparing for the future. … They deepen emotional relationships and tend to savor life.”

    For instance, given an advertisement for a camera with the slogan “Capture the unexplored world” or “Capture those special moments,” young subjects are more drawn to the expand-your-horizons theme. Old subjects prefer the second appeal—to family and positive emotions—and remember such slogans more easily, Carstensen's team reports in an upcoming issue of the Journal of Personality and Social Psychology.

    Audience approval.

    Elderly subjects are more swayed by appeals to positive emotions (top) than to adventure.


    Old people also respond differently to positive and negative stimuli. Studies typically show that people are more likely to remember the negative. But the subjects in such studies are typically college students. Older subjects, Carstensen found, show the opposite bias. They're more likely to remember quickly flashed pictures showing positive scenes, such as a child's birthday party, than negative scenes, such as a burn victim.

    To see how young and old people process such images, Carstensen and John Gabrieli's team at Stanford flashed positive and negative scenes while scanning brain activity. Young subjects' brains, as in past experiments, showed more activity in response to negative images, but old subjects' brains lit up more brightly in response to the positive ones. The differences were clearest in the amygdala, which processes emotions, Carstensen reported at a November 2002 meeting at the New York Academy of Sciences. The study reinforces Carstensen's contention that older people have a different view of the world. “Absolutely, at the initial stages of processing,” older people record information differently than do the young.

    But this bias could have negative side effects, Carstensen says. Old people may overlook unpleasant but critical facts affecting their lives, such as deficiencies in health insurance plans. Positivity bias even colors autobiographical memory. Carstensen and Quinn Kennedy of the Veterans Administration hospital in Palo Alto, California, recently gave nuns a lengthy questionnaire identical to one they'd completed in 1987. When asked to fill out the form based on how they'd felt years earlier, the nuns remembered greater physical, mental, and emotional health than they had actually enjoyed.

    Baltes finds a slightly different bias in older people's thinking. Many of them maintain a sense of well-being, he says, by seeking out examples of people who are even worse off than they are. “Someone who has a heart attack and survives compares himself to someone who died, and [he then] feels better,” says Baltes. “In my view, that's one of the primary reasons the old don't do badly” in terms of life satisfaction.

    Baltes, a director of a longitudinal aging study in Berlin, cautions that much of the successful aging revealed by recent studies is based on the performance of what he calls the “young old”—those in their 60s and 70s. Once people hit the “old old” years of 80 and above, he finds, they're much less likely to be able to compensate for the ravages of age.

    At the same time, the nature of youth and age has changed over the past few generations. The Seattle Longitudinal Study, inaugurated in 1956, tests continuing and newly recruited participants on a variety of cognitive measures at 7-year intervals. Today's elders are unlikely to have graduated from high school, says Sherry Willis of Pennsylvania State University, University Park, but half of the study's younger cohort, as in the population in general, has at least some post-high school education. Even so, no generation can beat those born in the 1920s for adding and subtracting two-digit numbers, Willis says. In the past, education was more rote, she says, with less emphasis on thinking symbolically or discovering underlying algorithms. Comparing tests taken at the same chronological age, she finds that successive generations are better at problem solving and other age-related skills.

    Some of later generations' edge can be attributed to improved physical health, NIA's Wagster points out, stemming from better detection, treatment, and prevention of disease. Type II diabetes and cardiovascular disease, for instance, can interfere with mental activity.

    The question of how to slow age-related declines is “ripe for the picking,” says Wagster. Some studies suggest that remaining mentally and socially active protects against Alzheimer's disease; others show that physical exercise can improve cognitive abilities. Training older people on strategies to improve memory, problem solving, and processing speed does improve their performance, Willis and colleagues reported in November 2002 in the Journal of the American Medical Association. But training in one domain didn't transfer to the others. Willis foresees a mental antiaging regimen akin to what fitness buffs adhere to: “We may need to do cross-training in mental exercises to totally tone you up.”


    Biotech Thinking Comes to Academic Medical Centers

    1. Joe Alper*
    1. Joe Alper is a writer in Louisville, Colorado.

    A novel enterprise at Harvard Medical School could provide a model for how to translate academic research into new drugs that industry is unlikely to pursue

    With several million dollars in the bank, the Laboratory for Drug Discovery in Neurodegeneration (LDDN) is like many other biotech start-ups that hope to turn a few novel biological insights into new drugs. Located in Cambridge's University Park, across the street from the Massachusetts Institute of Technology and among dozens of biopharmaceutical companies big and small, LDDN is going at this task with a roomful of robots, a freezer with a 70,000-compound chemical library, and about two dozen hard-working chemists, molecular biologists, and biochemists. Although LDDN is not quite 2 years old, its scientists already have promising leads on potential drugs for multiple sclerosis and Parkinson's disease. Buoyed by this success, LDDN is expanding into Alzheimer's disease, Huntington's, and amyotrophic lateral sclerosis.

    But LDDN differs from other biotechs in one key way: Although its success is tied to developing commercially successful drugs, it is not tied to ever making a profit on those drugs—just enough money to keep the operation running.

    In fact, LDDN isn't a company at all, but rather a “not-for-profit biotech” operating under the Harvard Medical School umbrella. With a staff that's half permanent, experienced researchers and half postdocs from Harvard-affiliated institutions in the Boston area, LDDN is merging a strong research and teaching mission with a single-minded focus on developing drugs. Launched in mid-2001 with part of a $37.5 million gift from an anonymous donor to further research on neurodegenerative diseases, LDDN hopes to serve as a model for how academia can better turn its basic research findings into drugs to treat the many diseases that fall off the pharmaceutical industry's radar.

    “The pharmaceutical industry is so risk-averse these days that it's shying away from developing drugs that aren't likely to be blockbusters”—drugs that can net at least a billion dollars a year—says Jill Heemskerk of the National Institute of Neurological Disorders and Stroke (NINDS) in Bethesda, Maryland. In particular, industry is scaling back efforts in such small market areas as drugs for neurodegenerative conditions and most cancers, adds chemist Ross Stein, LDDN's director and a former department head at Merck.

    Academic medical centers hope to fill this void, partly to serve patients but also to reap some economic rewards. At the same time, they are under pressure to take a bigger role in “translational” work that moves research from the bench to the clinic. “Academic medical centers in the U.S. are searching for a new mission in the 21st century. Getting more involved in the early stages of drug discovery and development could be an important role for these institutions to play,” says Julian Simon, a molecular pharmacologist at the Fred Hutchinson Cancer Research Center in Seattle, Washington. Indeed, Simon is a member of a national task force that is studying how academic medical centers can get more involved in cancer drug discovery. At NINDS, Heemskerk is heading a new program with a similar goal for neurological diseases.

    Bring on the risk

    Risk is the birthright of start-up biotech companies, and Stein and LDDN chair Peter Lansbury, a chemist at Brigham and Women's Hospital, relish it. LDDN can hazard failure, explains Lansbury, because “our only mandate is to address the needs of patients and not to make a profit for shareholders.”

    Risky business.

    Ross Stein (left) and Peter Lansbury think their “not-for-profit biotech” can develop drugs that big pharma won't.


    LDDN has another advantage over its commercial counterparts: To identify promising targets, it can draw on the community of Harvard Medical School postdocs. Part of the larger Harvard Center for Neurodegeneration and Repair, the lab offers a sabbatical program each year that gives a dozen or more postdocs between 6 and 18 months to see if they can parlay their research into potential drug candidates for treating neurodegenerative disease. To be eligible, a postdoc must have a clearly defined target—an enzyme or receptor, for example—and an animal model in which to assay the activity of this target. Working side by side with LDDN scientists, the postdocs will then attempt to validate targets, identify molecules that interact with them, refine these first-stage drug candidates through chemical tinkering, and take them through all the stages of preclinical testing necessary to begin human clinical trials. “We want to hand potential drugs to the pharmaceutical industry on a silver platter,” says Stein.

    To do that, Stein and Lansbury created LDDN along the lines of many a biotech company, complete with a full-time staff paid industry-scale wages. A biology group, headed by 15-year industrial veteran Li-An Yeh, focuses on assay development and screening. A medicinal chemistry group, run by Greg Cuny, who has 10 years of pharmaceutical company experience, handles “optimization,” the chemical tinkering that turns a lead compound into a drug candidate. A 100,000-compound-strong chemical library will serve as the starting point for drug-screening activity.

    Finding and refining

    Bioorganic chemist Yichin Liu is one of 12 postdocs currently working at LDDN. Liu, who comes from Lansbury's lab, is testing an inhibitor for an enzyme that her earlier work identified as playing a potential role in neuronal cell death and perhaps Parkinson's disease. Her project illustrates the standard operating procedure at LDDN. Armed with a basic bioassay to measure the activity of this particular enzyme, which is involved in processing cellular proteins for destruction, Liu worked with the lab's biology group to turn the bench-scale assay into one capable of running on one of LDDN's high-throughput screening robots. Some 42,000 compounds later—the size of the library at the time—the screen identified one family of molecules that inhibited the enzyme moderately and another that activated it.

    The medicinal chemistry group then took over. LDDN's chemists have so far generated several compounds that are about 1000-fold more potent at inhibiting the enzyme than were the initial hits. Liu is now measuring the activity of these compounds in a variety of biological models and plans to begin toxicology and bioavailability studies shortly. “At [LDDN], we get to see if the biology we've developed in our home labs has merit for drug development, something that you don't get to do normally in an academic lab,” says Liu. Back at Lansbury's lab at Brigham and Women's Hospital, Liu and others will also begin using these high-potency inhibitors as probes to better understand the enzyme's roles in neuronal cell health and degeneration.

    Because the initial link between Liu's enzyme and Parkinson's disease was iffy at best, it's unlikely that any company, big or small, would have been interested in studying the protein and its ramifications for disease. “This is exactly what we created the lab for,” says Stein. Indeed, the prospect of just this sort of result, where taking a flyer pays off, got the project going. Stein persuaded Harvard Medical School officials to expand the original vision of a fee-for-service, high-throughput screening lab to make room for new research. Many universities are setting up such screening labs to help faculty members develop chemical probes for the biological molecules they discover. But as Stein explains, “Screening isn't the bottleneck for academic labs. It's screening tied to medicinal chemistry, so that you can take hits from an assay and elaborate them in the med chem lab, then screen again, and refine again.”

    Defining success

    Stein says the lab has already had triumphs: It has run screens for eight targets, finding potential drug candidates for two, both of which have moved on to medicinal chemistry. But true success, he says, will come when a pharmaceutical partner starts clinical trials with one of the drug candidates discovered at the lab. Heemskerk, for one, thinks LDDN has a good chance of meeting that goal. “If any model is going to move ideas from the bench to the clinic, it's likely to be the Harvard lab,” she says.

    Postdoc power.

    At LDDN, ideas come from postdocs such as Melissa Nicholson (top) and Yichin Liu, who work with the lab's biologists and chemists to try to turn targets into potential drugs.


    The medical school is already sizing up new opportunities. Edward Harlow, who heads its molecular pharmacology department, has taken the lead in an effort to create an LDDN look-alike that would focus on cancer drug development. Harvard is currently raising money for this new entity. And at LDDN, Stein and Lansbury hope to open its doors to postdocs nationwide.

    But tricky issues surrounding long-term funding and intellectual-property rights remain to be resolved. Although profit isn't a motive at LDDN, the lab still needs to raise funds to continue operating; expanding would take more. The lab has applied for grants under the new NINDS program that Heemskerk runs. And patient groups—already the biggest source of outside dollars—are eager to fund research that targets neurodegenerative diseases, says Lansbury. He and Stein hope that such groups will help support the expensive toxicological and animal studies needed to gain regulatory approval for human clinical trials.

    Future funding could come, too, from licensing deals and royalty streams, assuming that some of the lab's projects are successful enough to interest pharmaceutical companies. Such deals, however, will depend on working out intellectual-property rights among the many parties. “We have to consider the postdoc, the postdoc's home lab and institution, our lab, and the Brigham and Women's Hospital, which is our home institution,” explains Lansbury. “It's complicated, but we're making progress.”

    More difficult to resolve will be the issue of how broad a patent portfolio to develop for each drug candidate. Typically, when a pharmaceutical company discovers a new biologically active molecule, its medicinal chemistry group will synthesize—and patent—a slew of chemically related compounds to prevent competitors from easily developing similar drugs. Lansbury says LDDN does not plan to take such a time-consuming and costly approach, but he acknowledges that some potential partners might balk at making a deal without this kind of patent coverage.

    Deals with big pharma may be years away, but as far as LDDN's postdocs are concerned, the lab has given their research a tremendous boost. Melissa Nicholson, a postdoc in immunologist Kai Wucherpfennig's lab at the Dana-Farber Cancer Institute in Boston, says that in less than a year there, she has identified possible inhibitors of HLA-DR2, a major histocompatibility complex class II molecule thought to be involved in multiple sclerosis. She's recently received an LDDN fellowship for another year. She adds that her experience has opened her eyes to the world of industrial, goal-oriented science: “I'd never even considered a career in a pharmaceutical company, but I'm finding that I like bringing my basic research to bear on the very practical problem of discovering a drug.”

    Whether LDDN will prove to be a model for other universities is an open question. “Harvard's situation is unique in that the medical school has a tremendous concentration of both intellectual talent and financial resources that not many institutions can match,” says Fred Hutchinson's Simon. That may be, but if LDDN succeeds at its primary mission of turning basic research into drugs for underserved patient groups, other academic medical schools are sure to follow suit in some manner. Copycats, after all, are a way of life in the pharmaceutical business.


    False Memories, True Pain

    1. Mary Beckman*
    1. Mary Beckman is a writer in southeast Idaho.

    DENVER, COLORADO—More than 5000 attendees—many of whom extended their stays when flights to a snowbound East Coast were canceled—met here 13 to 18 February to hear highlights of recent research from a wide range of fields. For more coverage, see last week's issue (p. 1177) and ScienceNOW.

    Even impossible memories can be fabricated from suggestions, researchers reported at the AAAS annual meeting last week. And such memories can create physiological responses that are indistinguishable from those elicited by remembering real trauma.

    Many people think their memories of dramatic events, such as where they were when they heard that President Kennedy was shot, are very reliable. But that doesn't appear to be true. To demonstrate the power of suggestion over such memories, Elizabeth Loftus of the University of California, Irvine, and colleagues implanted a memory into people who had witnessed a bombing in Russia. They interviewed volunteers twice, 2.5 years and 3 years after the bombing. During the second interview, the team posed the suggestive question: “When you were taking part in our study, you mentioned a wounded animal. Could you tell us about it?” Almost 13% of the people recalled, incorrectly, that they had seen an injured pet.


    People who think they were abducted by space aliens tremble when reminded of the trauma.


    Critics have argued that such false-memory experiments might call up real experiences—perhaps some subjects did see bleeding animals. So Loftus implanted a clearly impossible memory: a person in a Bugs Bunny outfit shaking hands and hugging children at Disneyland. “Bugs is a Warner Brothers character. He wouldn't be allowed on Disney premises,” Loftus says. Her team recruited volunteers who had been to Disneyland earlier in their lives. They were shown an advertisement for Disneyland with pictures of Bugs and text describing a trip to Disneyland that included meeting the wascally wabbit. Weeks later, 36% of the volunteers who had seen the ads vividly recalled that they had seen Bugs Bunny in real life: They shook his hand or even hugged him, they reported.

    “Her work points out to people that, in terms of our own subjective experiences, what we think is crystal-clear imagery could be inaccurate at the very deepest level,” says psychologist Michelle Leichtman of the University of New Hampshire, Durham. A study of people who claim to have been abducted by space aliens helped psychologist Richard McNally of Harvard University determine how deep imagined events can go. His team recorded 10 volunteers' abduction stories. Volunteers listened to audiotape clips of their stories while the researchers measured heart rate, sweating, and facial muscle tension. All stress responses were elevated, to the point that they mirrored those of people remembering Vietnam combat events or childhood sexual abuse. More than half of the alien abductees exhibited some symptoms of posttraumatic stress disorder.

    The response to trauma “is driven by emotional beliefs, whether accurate or not,” McNally reported. “If you sincerely think you were being abducted by aliens, you were.” The result “is troubling,” says Leichtman. “It underscores the similarities between true and false memories at an even more profound level” than researchers generally think.


    Bitter News for Tender Tongues

    1. Siri Carpenter*
    1. Siri Carpenter is a writer near Madison, Wisconsin.


    Good taste is always good, but supertaste is another story. Men with an especially keen sense of taste are at heightened risk for obesity, cancer, and other serious health problems, according to findings presented at the meeting 14 February. This is the first indication that individual variations in taste perception can affect diet enough to compromise health.

    The new results provide “fascinating evidence that people really are living in different taste worlds,” comments Cynthia Beall, an anthropologist at Case Western Reserve University in Cleveland, Ohio.

    About 25% of the population are “supertasters,” who are three times more sensitive to bitterness and other taste sensations than are the least sensitive tasters; most people fall between the two extremes. These differences are genetically determined and stem from the number of fungiform papillae, structures on the tongue that contain taste buds and are surrounded by pain- and touch-sensitive fibers.

    Some researchers have wondered whether supertasters' aversion to bitter, but healthful, compounds in vegetables might predispose them to cancer and other illnesses. In a study of about 200 men, Yale University psychophysicist Linda Bartoshuk and colleague Marc Basson, a gastroenterologist at Wayne State University in Detroit, Michigan, found that among those age 65 and older, greater sensitivity to bitterness was associated with more colorectal polyps, precursors to colon cancer. Men with polyps also ate fewer vegetables and were more overweight—both risk factors for colon cancer—than men without polyps.

    In another study of almost 4000 participants, Bartoshuk and colleagues found that men—especially supertasters—with a history of ear infections were more overweight than were men without such a history. Repeated infections can damage a nerve that normally keeps taste and fat sensation in check; the damage leads to a greater ability to detect fat. Although the data are preliminary, Bartoshuk's group speculates that this greater sensitivity to fat may lead supertaster men, already predisposed to like fat, to eat even more of it.


    Nuclear-Powered Bugs

    1. Helen Fields*
    1. Helen Fields is a writer in Santa Cruz, California.


    Microbes at the bottom of a South African gold mine apparently use the byproducts of radioactive decay to survive, researchers reported at the meeting 14 February.

    The site is not the most accessible: To reach it, geomicrobiologist T. C. Onstott of Princeton University and his colleagues take an elevator 3.5 kilometers below the Witwatersrand Basin. There, they've found microbes living in water trapped in cracks in the hot rock. The organisms feed on hydrogen gas dissolved in the water, combining it with carbon gases and producing methane.

    In principle, the hydrogen might have come from other microbes, from water interacting with certain minerals, or from radioactive decay. But Onstott found that the hydrogen concentrations were too high to have come from microbes, and the necessary minerals aren't in the mines—which leaves radioactivity.

    Hot meal.

    Radioactive decay provides a source of nourishment for microbes in a South African mine.


    When uranium atoms deep underground decay, they send off alpha particles—two protons and two neutrons—that bump into other molecules. When they hit water, they create hydrogen gas, hydrogen peroxide, and oxygen. Other evidence supports the radioactive origin of the microbes' meals: The concentration of helium, which forms from the alpha particles that uranium sends off, indicated that there was sufficient radioactive decay to produce the hydrogen gas in the bugs' water, Onstott reported.

    The research demonstrates a previously unknown means by which microbes may be able to survive, says microbial ecologist Rick Colwell of the Idaho National Engineering and Environmental Laboratory in Idaho Falls. “This is an environment that may in some ways be part of a bright spot” for life deep underground—generally not a friendly place for living things.


    Behind the Wheel of an Expanding Axon

    1. Dan Ferber


    As the embryonic brain wires itself up, nascent neural projections are pulled to make connections with other neurons by structures called growth cones. Researchers know a lot about the chemical signals that lure growth cones. Now a team has uncovered the growth cones' steering mechanism.

    Growth cones send out dozens of thin, threadlike feelers to help sense the world around them. Each feeler is supported by fibers called actin filaments; other fibers, called microtubules, run on a treadmill of actin filaments that reel them back toward the cell body as fast as they can grow. As a result, growth cones go nowhere in particular. But when a feeler senses gluelike chemicals on the surface of another neuron, the treadmill slows, microtubules grow, and the feeler solidifies into an axon.

    Exploring the connections.

    Neuron growth-cone projections solidify into axons once they meet their match.


    To see how the feelers steer in response to the come-hither chemicals, Paul Forscher of Yale University and his team blocked young neurons from building new microtubules, which caused the existing microtubules to sit still on the actin treadmill and be carried back to the center of the cell. That shut down an enzyme called Src that relays signals inside the cell, and that, in turn, stopped the axon from steering where it needed to go, the team reported at the meeting 14 February. The researchers also found that microtubules tugged on a rubber band-like arc of protein at the base of the feeler as they grew outward. This suggests that the physical tension on this arc is what triggers the signals that steer the growth cone, Forscher says.

    The team's work “is the first time we're getting a handle on how signals outside the growth cone control the cytoskeleton,” says neurobiologist Shelley Halpain of the Scripps Research Institute in La Jolla, California.


    Record Donation to AAAS for Innovation


    AAAS (publisher of Science) received the largest gift in its 155-year history last week: $5.25 million from science-policy authority William T. Golden. The money will be dedicated to new programs not currently funded by AAAS's $80 million annual budget.

    A longtime science enthusiast, Golden, 93, has been a securities analyst, inventor, corporate official, and science-policy adviser to the government. He earned a ham-radio license at the age of 13 in 1922 and “enjoyed all things mechanical, electrical, and chemical.” During World War II, he invented a new firing device for antiaircraft machine guns. And as a government adviser in the 1950s, he persuaded President Harry Truman to create the post of science adviser to the president—now a key position. Golden is chair emeritus of the American Museum of Natural History in New York City, and he served as treasurer of AAAS for 30 years until his retirement in 1999.

    “I thought, well, I don't need this money,” Golden says. So he donated it to spark AAAS to think creatively about new programs “for the advancement of learning and the improvement of humanity.”

    “This is a serious chunk of money,” says Alan Leshner, CEO of the organization, adding that AAAS plans to hold a competition among staff members and others to decide over the “next few months” how to spend it. “He wants to see innovation soon,” Leshner says of Golden.