News this Week

Science  12 Nov 2004:
Vol. 306, Issue 5699, pp. 1110

    Bush Victory Leaves Scars--and Concerns About Funding

    1. Jeffrey Mervis*
    1. With reporting by Jocelyn Kaiser, Andrew Lawler, and David Malakoff.

    U.S. presidential science adviser John Marburger has some sharp words for researchers who publicly opposed President George W. Bush's reelection: Wrong message. Wrong audience. Wrong candidate.

    Fresh from the election triumph by his boss and the Republican Party, Marburger warned last week in an interview with Science that criticism of the Administration's science policies during the campaign may be undermining public support for science. Offering a vigorous defense of the Administration's record, Marburger blamed critics for “looking at how the sausage is made” rather than at the product itself, which he characterized as a record windfall for science. Such partisan attacks, he suggested, may make it harder to prevent science from losing ground in the next 4 years given the demands of the war in Iraq, national security, and economic recovery.

    Marburger's remarks came just 1 day after Bush described how he planned to “spend the political capital” from a 51% to 48% victory over Democratic Senator John Kerry and the increased Republican majority in both houses of Congress to reform Social Security, rewrite the tax code, and achieve other priorities. Science lobbyists are already worried about what will happen when Congress returns next week to finish the 2005 budget for the fiscal year that began 1 October. Their level of anxiety rises when they speculate about possible flat funding for key science agencies in the president's 2006 budget request this winter. And they may have to court new chairs of legislative panels that set policy and control budgets after a major reshuffling next year.

    “If we're not careful, the scientific community can become estranged from the rest of society.”

    —John Marburger, White House Science Adviser


    “Rightly or not, I think the science community is now perceived by this White House as the enemy, and that will make it harder to open doors,” says physicist Michael Lubell, who handles government affairs for the American Physical Society. “It's one more factor in an increasingly complex situation,” says David Moore of the Association of American Medical Colleges, who worries that fallout from the recent campaign could determine whether the Bush Administration “reaches out and engages [the science community] or goes in its own direction.”

    If Marburger's analysis is correct, it's not the Administration but its scientific critics who have gone their own way, losing touch with society's concerns in the process. “Science needs patrons, and our patron is society,” said the 63-year-old applied physicist, a former university president and head of Brookhaven National Laboratory. “But if we're not careful, the scientific community can become estranged from the rest of society and what it cares about.”

    Marburger said his remarks were directed at the 48 Nobel laureates who publicly endorsed Kerry last summer and a group—Scientists and Engineers for Change—that spent $100,000 to stage about 30 events on university campuses around the nation at which researchers criticized Bush's policies. The get-out-the-vote effort came on the heels of a fierce fight between the White House and some scientists, led by the Boston-based Union of Concerned Scientists (UCS), over allegations that the Administration has manipulated or suppressed science advice to advance its political agenda (Science, 9 April, p. 184). “I don't think that it was good for science to have done that,” he says. “It was clear from the beginning because of the sweeping nature of the charges that the list of concerns were coming from the Democrats.”

    That's not true, says UCS chair Kurt Gottfried. “We're used to having our advice ignored or our recommendations rejected by both parties,” says Gottfried, a physicist emeritus at Cornell University. “But we felt strongly that the quality of the scientific information coming from this Administration was being compromised by the way the process was being managed. And that's why we spoke up.” Adds ecologist Jane Lubchenco of Oregon State University in Corvallis, “I can't speak for others, but I can assure you that my own motivation was not political.”

    Representative Sherwood Boehlert (R-NY), chair of the House Science Committee, thinks that everybody needs to take a deep breath. “Shame on both sides,” says the self-proclaimed science booster, who this fall underwent triple bypass surgery but still managed to be reelected comfortably to a 12th term. “The rhetoric got a little bit excessive. I hope the Administration will demonstrate a greater degree of interest in the opinions of the scientific community. And I hope that scientists will realize that what they have been saying [about the Bush Administration] hasn't helped the profession.”

    Bush whacks.

    Supporters cheer the president's reelection over John Kerry.


    For most rank-and-file scientists, the acid test for whichever party is in power is the flow of federal dollars into research. Not surprisingly, there is sharp disagreement between the president's supporters and his critics about how well science has fared in his first term even under that seemingly objective measure.

    Throughout the campaign and again last week, Marburger touted a 44% increase over 4 years in the government's overall research and development budget, from the $95 billion the Bush Administration inherited in 2001 to its 2005 request for $132 billion. Although defense-related research has led the way with a 62% hike, the National Institutes of Health and the National Science Foundation (NSF) have chalked up gains of 42% and 30%, respectively. “You really have to work at it to make a counterargument that science has not fared well in this Administration,” says Marburger.

    Critics see the numbers differently, however. In particular, they cite the government's failure even to begin the process of doubling NSF's budget, a promise written into a nonbinding 2001 law, and the fact that Congress has typically bulked up the Administration's initial request for NSF each year. They note that NIH's budget rose only 3% this year after a 5-year doubling that spanned the Clinton and Bush Administrations. And they say that most of the added defense spending goes to new weapons systems and fighting terrorism, not basic research.

    Marburger and the Administration's critics may disagree over the record of the past 4 years, but both sides accept that the next few years will be tough for science budgets. Science's share of discretionary spending (the portion of the $2 trillion federal budget left after the mandatory payments and debt service that consume most of it), at almost 14%, stands at a 40-year peak, Marburger notes, and sustaining such a “local maximum” is unlikely.

    His co-chair of the President's Council of Advisors on Science and Technology, Silicon Valley financial guru Floyd Kvamme, goes one step further. He argues that science doesn't need a larger slice of the discretionary pie. “We're close to the limit of the amount of spending that the research enterprise can effectively absorb,” says Kvamme, partner emeritus of the venture capital firm Kleiner, Perkins, Caufield, and Byers. “Of course, there are lots of other sectors demanding a share of that money. But I think we've reached a ceiling. It's unrealistic to talk about science taking up 15% or 20% of the discretionary budget.”

    More space.

    Bush hopes Congress will fund his plans to explore the moon and Mars, announced earlier this year.


    The notion of a ceiling doesn't sit well with research advocates such as Nils Hasselmo, president of the 62-member Association of American Universities, with science lobbyists, or with Boehlert. “I think there's room for growth,” says Boehlert, who helped shepherd the NSF reauthorization bill through Congress. Still, Boehlert says he expects “to be among those yelling” that the president's 2006 budget request for NSF and other science agencies, due out in February, is inadequate.

    Next week Congress returns for a lame-duck session to complete work on the 2005 budget and, perhaps, conduct other business. But the real action won't begin until the more heavily Republican 109th Congress convenes in January. In the Senate, the GOP picked up four seats and holds a de facto margin of 55 to 45. (Senator James Jeffords of Vermont is an independent but usually sides with the Democrats.) In the 435-member House of Representatives, the GOP could have a 31-seat advantage after runoffs next month in Louisiana.

    The new Senate is more likely to endorse the president's massive energy bill and overhaul the Endangered Species Act, but legislators may be more skeptical of his expansive—and expensive—moon/Mars exploration program, particularly at a time when private spaceships are capturing the headlines. The climate for science-related issues will also depend, in part, on who fills several key committee chairs. For example, Senator Ted Stevens (R-AK), who has shown little interest in regulating greenhouse gas emissions, is expected to become chair of the Commerce, Science, and Transportation Committee. He would replace Senator John McCain (R-AZ), who used the slot to push for climate change legislation and criticize the Administration's climate policies. Boehlert seems assured of remaining head of the equivalent House panel.

    There will certainly be departures within the executive branch. The first science-related post to be vacated is at the Environmental Protection Agency, where research chief Paul Gilman announced that next month he will become head of a soon-to-be-announced research consortium of universities. NASA Administrator Sean O'Keefe is thought by many to be headed to another federal post, possibly even before next summer's planned shuttle launch, the first since the Columbia accident. And Health and Human Services Secretary Tommy Thompson, long rumored to be out the door after the election, said last week that “the president and I haven't had a chance to talk, and it's clearly up to the president.” If he hits the road, one frontrunner for the job is Mark McClellan, a physician and economist who headed the Food and Drug Administration before moving to head Medicare. Lobbyists see him as more attuned to the interests of researchers than the former Wisconsin governor.

    Despite the harsh language of the past year, Marburger says that “nothing has changed” in the way he will operate: “I'm going to continue to try to make sure that scientific issues are addressed.” Harold Varmus, head of the Memorial Sloan-Kettering Cancer Center in New York City and one of the Nobelists who endorsed Kerry, expects to “still have conversations with people in government who hold opposing views.” And NASA climate scientist James Hansen, who gave a widely publicized speech criticizing Bush just before Election Day, says “everybody is telling me to watch my back. … But I spoke from the heart, and these are honorable people.”


    California's Proposition 71 Launches Stem Cell Gold Rush

    1. Constance Holden

    California is poised to leap ahead of the federal government as a backer of stem cell research after voters last week approved a 10-year, $3 billion plan to invest in the field. But the state is likely to proceed down a familiar path: Supporters of Proposition 71, which passed by a 59% to 41% margin, say the new California Institute for Regenerative Medicine will be modeled after the National Institutes of Health (NIH) in allocating its $295 million annual budget.

    The state bond initiative, which will support work involving nuclear transfer (so-called research cloning), was backed by a staggering array of scientists and high-profile groups and received a last-minute endorsement from Republican Governor Arnold Schwarzenegger. Jubilant supporters are now moving on to the next phase, beginning with the selection within 40 days of a 29-member Independent Citizen's Oversight Committee. The governor and three top state officials will each appoint five members, and five University of California campuses will also have seats at the table. The likely chair is Robert Klein, the real estate magnate who led the campaign. In January the board will set up working groups on research funding, facilities, and standards, with the last being the first order of business, says campaign spokesperson Fiona Hutton. The first awards are to be made within 60 days of issuing interim standards.

    Organizers promise that the new institute, at a site yet to be selected and with a permanent staff of 50, will be a first-class operation both ethically and scientifically. “The burden is upon us to prove that we are above reproach,” says stem cell researcher Evan Snyder of the Burnham Institute in La Jolla. Stanford University stem cell researcher Irving Weissman, another leader in the campaign, expects universities to send either their presidents or medical school deans to represent them on the oversight committee, which he hopes will require reviewers from outside the state who are experts in the field.

    Close to home.

    Hollywood producer Doug Wick and family celebrate passage of California's Proposition 71, which may one day help his daughter Tessa (far right) and others with diabetes.


    California researchers will be able to put the voters' largess to good use, the initiative's backers assert. “Our goal has always been to mimic the NIH structure as much as possible,” says Snyder. “We all come from the NIH tradition. … We're not amateurs. We really know how science should be funded and conducted and administered.”

    Snyder says training will be a priority, and 10% of the budget will go to build research facilities and buy equipment. As for the rest, Weissman expects the bulk of the research funding to support investigator-initiated basic research rather than any top-down priorities set by the oversight committee.

    One important early decision will be the selection of a full-time director. It “could easily be someone who's already an NIH administrator, with California roots,” says Snyder. One person who fits that description is James Battey, who coordinates NIH's $214-million-a-year investment in human stem cell research, some $25 million of which goes to embryonic stem cells. Battey wouldn't comment, and an NIH spokesperson says NIH has nothing to say because Proposition 71 is “a state matter.” But earlier this year Battey told The New Yorker that the measure “could have a really transforming effect on stem-cell research … [and] would certainly make California an extremely attractive place to conduct it.”

    Despite the victory, supporters remain concerned about a bill sponsored by U.S. Senator Sam Brownback (R-KS) that would outlaw all forms of cloning. The new Senate includes four members whose records suggest they are likely supporters of the ban, which would leave Brownback only about five votes short of the 60 needed to move forward. At the same time, supporters of an opposing bill that would outlaw reproductive cloning but permit cloning for research remain hopeful that they, too, will prevail.


    Decline in New Foreign Grad Students Slows

    1. Yudhijit Bhattacharjee

    The number of international students beginning graduate studies at U.S. universities has declined for the third year in a row. But the 6% drop is the smallest in 3 years, an improvement that some attribute in part to faster handling of visa applications.

    The news, in a survey released last week by the Council of Graduate Schools (CGS), comes as a relief to higher education organizations, which had braced for the worst earlier this year after a 28% drop in international graduate applications and an 18% drop in offers of admission. Enrollments, which represent the final step in that progression, were down 10% in the fall of 2003 following an 8% drop the year before. The decline appeared in the first academic year after the 11 September terrorist attacks and reversed several years of growth in the number of international students.

    Universities have stepped up their efforts to assist foreign students, says CGS president Debra Stewart, “by streamlining their admissions processes, enhancing their use of technology, and forming international partnerships.” The council says those measures contributed to a rise this year in the percentage of admitted international students who ended up enrolling.

    “I am pretty sure that we have gotten over a hump in terms of visa delays,” says Sherif Barsoum, associate director of the Office of International Education at Ohio State University in Columbus, who says he has received “not one e-mail or phone call this year complaining about a visa.” Another survey released this week by five groups, including CGS and NAFSA: Association of International Educators, reported no change in undergraduate enrollments but declines in graduate enrollments at two-thirds of major research institutions.

    “The good news is that the administration has become aware of the seriousness of the problem and has begun to take steps to address some of the obstacles that are discouraging or preventing legitimate students and scholars from coming to the United States,” says NAFSA's Marlene M. Johnson. “The bad news is that, despite some positive signs, the overall numbers are still discouraging.”

    University administrators say their schools still need to combat the perception that U.S. campuses are unfriendly toward international students. Toward that goal, some universities are reimbursing students for the $100 fee the government charges to implement the Student and Exchange Visitor Information System, which tracks foreign students once they arrive. “It's not a lot of money, and it sends out a welcoming message,” says Patricia Parker, assistant director of admissions at Iowa State University in Ames, which saw a 25% drop in first-time international graduate enrollment this fall.

    Overall graduate enrollment is down 1%, according to the CGS survey, and 2% fewer domestic students are entering graduate school. The life sciences and engineering show the steepest declines in first-time international enrollment within the sciences, whereas the physical sciences are enjoying a 6% rise in first-time international students (see graph).

    Showing up.

    Enrollments are the last step in the process of attending graduate school, and trends vary by field.


    “We made offers to more international students this year than usual, anticipating that some of those who might accept would have trouble getting visas,” says Allen Goldman, chair of the physics department at the University of Minnesota, Twin Cities. “That didn't materialize.” The result, says Goldman, is a larger entering class—35 rather than 25 students—that is also more international.


    NOAA to Retool Research Programs

    1. David Malakoff

    The National Oceanic and Atmospheric Administration (NOAA) has embraced suggestions to consolidate some laboratories and make its funding practices easier to understand. But officials have rejected a proposal from an outside advisory panel for a new science czar.

    “This looks like a very good start,” said Science Advisory Board chair Leonard Pietrafesa, of North Carolina State University in Raleigh, reacting last week to NOAA's plan to shake up its $350 million research program, which includes everything from space-based climate and weather studies to fisheries research and deep-sea exploration. The outside panel, led by climate scientist Berrien Moore of the University of New Hampshire, Durham, suggested in August that the agency consolidate half a dozen laboratories in Boulder, Colorado, and revamp a convoluted external grants program. The panel also called for the agency to develop 5- and 20-year science plans and to put the agency's research programs under the control of a new senior administrator and an allied advisory board (Science, 11 June, p. 1579).

    On 3 November, NOAA officials told the board that they have nixed the last two ideas because of congressional opposition to any more bureaucratic layers. Instead, NOAA Deputy Administrator James Mahoney says his position has been “restructured” to increase his oversight of research. Officials said they expect other changes will take place over the next 18 months, including clarifying both the amount of money available and the application process for extramural researchers and creating a Web-based grants management system. “I don't think anyone would give NOAA an ‘A’ for our involvement with outside” researchers, Mahoney said. He also promised to increase the number of administrators overseeing key science programs, saying that although NOAA has “an abundance of capable researchers,” the administrative corps “is very thin indeed.”

    Perhaps the biggest question mark is the fate of NOAA's six Colorado laboratories. Congressional critics have argued for lumping the labs together into fewer and less expensive units, but researchers worry that the move could hurt science programs. Mahoney says a task force could issue a consolidation plan as early as this fall, followed by another group looking at NOAA's ecological research programs. Any proposed changes, however, must survive vetting from the White House Office of Management and Budget and win the support of Congress.


    Mixed Week for Open Access in the U.K.

    1. Daniel Clery

    CAMBRIDGE, U.K.—Supporters of “open access” scientific publishing—in which authors pay the cost of publication and accepted papers are freely available online—have received a public setback and a private boost in the United Kingdom in the past few days.

    The British government, saying it is “not obvious … that the ‘author pays’ business model would give better value for money than the current one,” rejected recommendations from the House of Commons Science and Technology Committee to fund some costs associated with open-access publishing. The committee promptly accused the government of buckling under pressure from scientific publishers. On the other hand, the Wellcome Trust—the largest funder of basic biomedical research in the United Kingdom—threw its considerable weight behind open-access publishing. It announced that it will require researchers it funds to deposit papers in a public archive “within 6 months of publication,” and it is discussing the creation of a European version of PubMed Central, the open archive of published papers run by the U.S. National Library of Medicine.

    The government's statements came in a detailed response to a report issued this summer by the House of Commons committee (Science, 23 July, p. 458). The committee concluded that open-access publishing is an “attractive” idea but called for further study because of the possible impact on learned societies, which rely financially on journal subscriptions. The panel also was concerned that the pharmaceutical industry, which subscribes to many journals but contributes few papers, would get a free ride. In the meantime, the committee recommended that the research councils create a fund to which authors could apply for the costs of publishing in open-access journals. It also called for U.K. universities to set up online repositories for their researchers' preprints and published papers, which would be posted at some agreed point after publication in a journal, and called for the government to set up a body to coordinate these archives.

    Open up.

    Parliamentary committee chair Ian Gibson (left) and Wellcome Trust chief Mark Walport back public access.


    The true costs of open-access publishing are still not clear, the government responded, noting that “before fully supporting any new business model, the Government will need to be convinced that this model is better and cheaper.” It declined for now to require that government-funded researchers deposit their published papers in open repositories or to establish a fund to pay authors' publication fees.

    The House of Commons committee published the government's response on 8 November, along with its own commentary* accusing the Department of Trade and Industry (DTI) of neutralizing dissenting voices within the government. “DTI is apparently more interested in kowtowing to the powerful publishing lobby than it is in looking after the best interests of British science,” says committee chair Ian Gibson. The government's response is “a defence of the status quo,” adds Jan Velterop, publisher of the author-pays journal BioMed Central.

    While the government was urging caution on open-access publishing, the Wellcome Trust was stepping up its support. The trust will now fund “reasonable” costs for its grantees to publish in author-pays online journals; in addition, it will require grantees to deposit published papers in a public archive. Trust director Mark Walport estimates that publication charges will only amount to 1% of the trust's research costs. The goal is “to achieve maximum value from our research through maximum distribution,” says Walport.

    Gibson says the committee will continue the fight. Librarians are “gung-ho” about public access, he says, and he hopes that the research councils will soon come out in favor too. In the new year, Gibson says, there will be a debate in the House of Commons: “We're going to argue this with them.”


    Skeptics Question Whether Flores Hominid Is a New Species

    1. Michael Balter

    When a research team announced last month that it had found a new species of 18,000-year-old tiny human in a cave on the Indonesian island of Flores, it seemed almost too amazing to be true (Science, 29 October, p. 789). Now a small but vocal group of scientists argues that the skeleton dubbed Homo floresiensis is actually a modern human afflicted with microcephaly, a deformity characterized by a very small brain and head. Meanwhile, an Indonesian scientist who also challenges the skeleton's status has removed the skull to his own lab for study. But members of the original team of Australian and Indonesian scientists staunchly defend their analysis, and outside experts familiar with the discovery are unmoved by the critique.

    The main challenge comes from paleopathologist Maciej Henneberg of the University of Adelaide in Australia and anthropologist Alan Thorne of the Australian National University in Canberra. Neither has seen the specimen itself, and as Science went to press, they had yet to publish their criticisms in a peer-reviewed journal. But Henneberg published a letter in the 31 October Adelaide Sunday Mail arguing that the skull of the Flores hominid is very similar to a 4000-year-old microcephalic modern human skull found on the island of Crete. And at a press conference on 5 November, Indonesian paleoanthropologist Teuku Jacob of Gadjah Mada University in Yogyakarta claimed that the specimen was a diminutive modern human. Jacob, once described as the “king of paleoanthropology” in Indonesia (Science, 6 March 1998, p. 1482), has had the skull transported to his own lab from its original depository at the Center for Archaeology in Jakarta, according to center archaeologist Radien Soejono, who is a member of the original discovery team.

    Surprising skull.

    A few researchers say the Flores skull may be a deformed modern human.


    In its original paper, the team considered and rejected several possible deformities, including a condition called primordial microcephalic dwarfism (Nature, 28 October, p. 1055). But Henneberg claims that the authors failed to consider a related condition called secondary microcephaly. “They jumped the gun,” he told Science. Henneberg, who with Thorne favors a multiregional model of human origins that some say is at odds with the finding of a distinct but recent human species on Flores, concludes that the skeleton is “a simple Homo sapiens with a pathological growth condition.” (Multiregionalism holds that modern humans evolved after 2 million years of interbreeding among worldwide populations; the evolution of a distinct species would require a long period without interbreeding).

    But archaeologist Michael Morwood of the University of New England in Armidale, Australia, a leader of the team that discovered the skeleton, insists that the skeleton is not a pathological case. “We now have the remains of at least seven individuals,” he says. “All are tiny, and all can be referred to as Homo floresiensis.”

    The team is backed by several outside researchers. Anthropologist Leslie Aiello of University College London says the skeleton cannot be that of a modern human because the postcranial bones indicate a separate species. “The pelvis is virtually identical to that of an australopithecine,” much wider than the modern human pelvis, she says. And compared with modern humans, “the arms are long in relation to the legs.” Chris Stringer of the Natural History Museum in London sums up many researchers' opinions by saying, “This cannot be a peculiar modern human.”


    Spin Current Sighting Ends 35-Year Hunt

    1. Robert F. Service

    The electron's charge gets all the glory: It is, after all, responsible for the plethora of electronic gizmos that surround us. But the particle's magnetic behavior—a property known as spin—has also been tantalizing scientists for decades. Thirty-five years ago, for example, Russian theorists suggested that impurity atoms in a semiconductor might interact with electrons' spins to redirect currents flowing through it. A related effect, called the Hall effect—in which magnetic fields push electrons around by interacting with their charge—had been known for more than a century. But despite decades of work, the spin-based Hall effect had never been spotted—until now.

    In a paper published online this week by Science, researchers led by David Awschalom, a physicist at the University of California, Santa Barbara (UCSB), report the first experimental sighting of the spin Hall effect. “It is as beautiful as it is a breakthrough experiment,” says Gerrit Bauer, a theoretical physicist at the Delft University of Technology in the Netherlands. Daniel Loss, a theoretical physicist at the University of Basel, Switzerland, agrees. “The data is very clear,” he says. “It's very impressive.” The new scheme works in standard semiconductors widely used in industry today. That could be a major boon to the nascent field of spintronics, which promises to create a new class of high-speed, low-power electronic devices that manipulate the spin of electrons.

    An American physicist named Edwin Hall discovered the original Hall effect in 1879. The effect occurs when an electric current moves through a metal strip while a magnetic field is applied top down through the metal. The magnetic field interacts with the charge of the moving electrons, deflecting some to the left and some to the right sides of the strip. In 1971, Mikhail Dyakonov and Vladimir Perel of the Ioffe Physico-Technical Institute in Leningrad suggested that electrons' spins might trigger similar detours. These spins behave like tiny bar magnets that point up or down. Dyakonov and Perel proposed that electrical defects in semiconductors could create a localized electromagnetic field that would shunt spin-up and spin-down electrons to opposite sides of the semiconductor, a scheme that came to be known as the “extrinsic” spin Hall effect. Decades later, other theorists suggested that such deflection might also result from an “intrinsic” effect due to the strain between atoms in a semiconductor alloy. In recent years, theorists have clashed sharply over which effect would more likely be spotted.
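    The charge-based deflection in the ordinary Hall effect described above can be summarized by two standard textbook relations (they are general physics, not formulas from the Awschalom paper):

    ```latex
    % Lorentz force on a carrier of charge q moving with velocity \vec{v}
    % in an electric field \vec{E} and magnetic field \vec{B}:
    \vec{F} = q\,\bigl(\vec{E} + \vec{v} \times \vec{B}\bigr)
    % The \vec{v}\times\vec{B} term pushes carriers toward one edge of the
    % strip until the accumulated charge balances it, producing the Hall
    % voltage across a strip of thickness t with carrier density n:
    V_H = \frac{I B}{n q t}
    ```

    In the spin Hall effect no external magnetic field is applied; instead, spin-orbit coupling (arising from impurity scattering in the extrinsic case) plays the deflecting role, sending spin-up and spin-down carriers to opposite edges so that spin, rather than net charge, accumulates there.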


    Impurity atoms deflect electrons with different spins (red and blue) to opposite sides of a semiconductor chip.


    Awschalom says his team waded into this battle somewhat by accident. Earlier this year, the Santa Barbara researchers, who included graduate students Yuichiro Kato and Roberto Myers along with electrical engineer Art Gossard, discovered a scheme for electrically injecting spins into a semiconductor, another long-sought goal of spintronics (Science, 2 April, p. 42). They were tracking spins by a technique called scanning Kerr microscopy, in which researchers bounce polarized laser light off a semiconductor sample. If electrons on the surface atoms all have spins in one preferred orientation, the polarization of the ricocheting photons will rotate slightly.

    When the UCSB researchers used a Kerr microscope lens with 1-square-micrometer resolution, they saw clear bands of electrons with opposite spins huddled along the sides of the semiconductor chips. All it took to create the bands was to push an electric current through the semiconductor chip.

    The researchers first detected the bands in a semiconductor chip made from gallium arsenide (GaAs), similar to the chips in cell phones. Then, in hopes of resolving the battle over intrinsic and extrinsic effects, they looked for the effect in a semiconductor called indium gallium arsenide (InGaAs). Theorists had suggested that the atomic bonds in InGaAs had the right sort of strain to produce a sizable intrinsic spin Hall effect even if the impurities didn't play a role. But the researchers found that impurities could explain virtually all of the buildup of spins they saw. “It looks like [the extrinsic effect] is the major player” in the semiconductors studied so far, Loss says.

    Using standard techniques for tailoring the amount of impurity atoms and other “defects” in semiconductors, “it should be possible to engineer materials to increase the size of this effect,” Awschalom says. That in turn could point the way for spintronics researchers to develop an array of spin-manipulating devices to switch currents of particular spins on and off, as well as steer, filter, and amplify them. That might be enough to finally bring the electron's spin a little limelight of its own.


    Sperm-Targeting Vaccine Blocks Male Fertility in Monkeys

    1. Jennifer Couzin

    When the 1880s debut of vulcanized rubber and the 1930s advent of latex mark the latest advances, one quickly understands the sorry state of male contraceptives. Researchers in this beleaguered field have tried to supply men with other options but have had little success. Now, on page 1189 of this issue of Science, a team in the United States and India reports preliminary results of its new contraceptive vaccine for males. Although not without problems, the vaccine prevented pregnancy in female partners of the male monkeys receiving it.

    “There seems to be some real promise,” says Ronald Swerdloff, a reproductive endocrinologist at the University of California, Los Angeles. Still, “it's just early in the game,” with too few monkeys tested, to conclude whether the approach will pan out, he adds.

    Reproductive biologist Michael O'Rand of the University of North Carolina, Chapel Hill, crafted the vaccine several years after reporting his discovery of a novel male-only protein in 2001. The protein, called Eppin, has been found so far on the surface of sperm cells and elsewhere in the testis and the epididymis. Its function isn't clear. But drawing on the general strategy of immunocontraception, in which vaccines are designed to act as contraceptives, O'Rand reasoned that if a male harbored antibodies to this protein, his sperm might malfunction.

    O'Rand teamed up with colleagues at the Indian Institute of Science in Bangalore, which hosts a large primate research center. There, the group vaccinated six monkeys with human recombinant Eppin protein and administered a sham vaccine to six others. The team hypothesized that the monkeys needed high levels of antibodies to Eppin in their blood for the vaccine to work, especially because antibody levels drop in the epididymal fluid. So when only four of the six treated monkeys displayed antibody levels that O'Rand's team deemed sufficient, the other two animals were dropped from the study. The team brought in three additional monkeys, who also reached the desired antibody levels. It's not clear why some monkeys did not produce sufficient antibodies, a problem other immunocontraceptives have encountered.

    Sperm stopper.

    Male contraceptive vaccine works in monkeys.


    The monkeys received boosters of vaccine every 3 weeks. Because the vaccine didn't lower sperm count or alter sperm in an easily detectable way, scientists resorted to another method of testing its effectiveness: The immunized male monkeys spent several days each with three different females during the fertile peak of the females' menstrual cycle. The upshot: None of the seven vaccinated monkeys managed to impregnate a female. Four of the six control monkeys did.

    The contraceptive effect of the vaccine was intended to be reversible; once the booster shots were stopped, the researchers anticipated that antibodies to Eppin would decline and fertility would return. But only five of the seven vaccinated monkeys, some of whom received booster shots for nearly 2 years, recovered their fertility during the course of the study. “It's hard to say” what that means, says O'Rand. “Maybe they recovered 2 weeks after we quit” testing them. Although conceding that the vaccine has a long way to go, O'Rand believes the study offers “a proof of principle” for immunocontraception, which has been so relegated to the sidelines that the National Institutes of Health no longer funds research on it.

    Companies have also been hesitant. New Jersey-based Organon studied immunocontraception for females before backing out, says Willem de Laat, the company's medical director. Instead, Organon and another company, Schering in Berlin, are testing a combination of oral progestin and injected testosterone.

    O'Rand and his colleagues, heartened by what they consider a success, are now trying to understand how, exactly, their vaccine disrupts fertility. One possibility is that the technique leaves sperm sluggish. Whatever its mechanism, O'Rand and other contraceptive researchers hope the new vaccine will provide a shot in the arm for the field.


    Regulators Talk Up Plans for Drug Biomarkers ...

    1. Jennifer Couzin

    New methods of predicting clinical outcomes are getting serious consideration at the U.S. Food and Drug Administration (FDA). Last week the agency invited members of one of its advisory committees to a meeting in Washington, D.C., where attendees explored using biomarkers, which track everything from protein levels to bone density, to gauge a drug's effectiveness or measure the progression of a disease. FDA officials at the meeting said they're launching a multiyear effort to bring biomarkers into the mainstream of drug discovery.

    “There are some significant payoffs if this is successful,” such as speedier drug trials, says Lawrence Lesko, FDA's director of clinical pharmacology and biopharmaceutics. Still, Lesko has encountered hesitancy: “People are sensitive to past failures” of biomarker use, he says.

    The new effort began with former FDA commissioner Mark McClellan, who left the agency in February and has not yet been replaced. McClellan made biomarkers part of the agency's “Critical Path” initiative, a plan released in March to speed drugs to market.

    Currently, most drugs are approved based on so-called clinical endpoints, such as longer survival for cancer drugs or fewer fractures for an osteoporosis drug. Researchers have long believed that there are reliable surrogates that can be detected earlier for many clinical endpoints. For example, the time it takes a cancerous tumor to resume growing during or after a specific type of treatment may indicate how long a patient will survive. Biomarkers tied to a clinical outcome, like this one, could be used as surrogate endpoints in trials.

    FDA already approves some drugs based on surrogates, particularly for life-threatening diseases like AIDS (for which it has used surrogates since 1992) and cancer. But the history of biomarkers is marred by some high-profile disasters. One was the widespread use of two antiarrhythmic drugs, encainide and flecainide. These drugs were intended to reduce the likelihood of a second heart attack because uncontrolled arrhythmia was considered a predictor. When a clinical trial actually tested the drugs for this indication in the late 1980s, three times as many people died in the drug arm as in the placebo group, and the study was halted.


    FDA's Lawrence Lesko is pushing biomarkers forward.


    Advocates nevertheless argue that using biomarkers will do more good than harm by getting new medicines to patients quickly. “We're putting these molecules in these painfully slow, archaic programs” for testing, says Paul Watkins, a liver specialist at the University of North Carolina, Chapel Hill.

    Although FDA is enthusiastic, it remains vague about how it might change its methods. As a first step, says Lesko, it will set up an internal working group to discuss what qualifies a biomarker as a surrogate endpoint. It may also comb through archival data for promising biomarkers. “We have to get down to some more specifics … to make this proposal come to life,” Lesko admits. Working closely with FDA's acting commissioner for operations Janet Woodcock, he says he hopes to foster collaborations between industry, academia, and FDA to get ideas moving beyond the basic-research stage.

    But pushing biomarkers forward is not risk-free. “As we study biomarkers, we're going to develop evidence that impugns their use,” said John Wagner, senior director of clinical pharmacology at Merck Research Labs, at last week's meeting. And Watkins, who heads a new government-funded consortium on drug-induced liver toxicity, notes that biomarkers will gain acceptance only if they're used to flag issues of safety as well as efficacy. In May, the consortium began enrolling the first of dozens of patients who suffered severe liver effects from one of four drugs, along with matching controls. It aims to link molecular markers with susceptibility to this common side effect.

    “Everybody wants to sit down and use something that's predictive” of clinical outcome, says Charles Grudzinskas, a former drug industry executive and founder of NDA Partners LLC, a Washington, D.C., consulting firm. He adds that FDA seems ready to lead the way with an attitude that, “we're willing to go out on skinny branches.” But he and others are waiting to see whether FDA's next chief will throw the agency's prestige and funds behind this cause.


    ... And NCI Hears a Pitch for Biomarker Studies

    1. Jocelyn Kaiser

    Cancer researcher Lee Hartwell this week proposed a major new initiative to discover biomarkers for the early detection of cancer. In a white paper presented to the National Cancer Institute's (NCI's) Board of Scientific Advisors, Hartwell, of the Fred Hutchinson Cancer Research Center in Seattle, Washington, outlined how the project would scan thousands of blood samples from cancer patients for proteins and other biological molecules that can indicate incipient tumors.

    Hartwell said his plan would boost spending on biomarkers, now overshadowed in NCI's budget by new drug development and even prevention trials. Although only a handful of biomarkers are widely used, the sequencing of the human genome and the debut of new, automated mass spectrometry machines for detecting proteins leave the field ripe for new breakthroughs, Hartwell said: “We need to organize our efforts.” His proposed “Coordinated Clinical Proteomics and Biomarkers Discovery Initiative” would include centers for testing biomarker technologies, a repository of reagents, and a public proteomics software package.

    The initiative, for which Hartwell offered no price tag, would complement a technology plan outlined by the Broad Institute's Eric Lander in September (Science, 24 September, p. 1885). Both proposals were requested by NCI Director Andrew von Eschenbach as part of his plan to eliminate death and suffering from cancer by 2015.

    Board members peppered Hartwell with questions. Because the same protein can occur in many forms, for example, “you have to realize it's going to be much more complicated than looking for a single protein,” said Susan Horwitz of Albert Einstein College of Medicine in New York City. Richard Schilsky of the University of Chicago suggested that costs might approach those of drug testing if each new marker had to be validated clinically. Hartwell disagreed, saying the project would “piggyback” on other large studies such as the Women's Health Initiative by borrowing tissue samples. “I don't see the validation adding a great deal of expense,” he said.

    Several members also asked how the initiative would fit with existing NCI programs and the National Institutes of Health's broader Roadmap, which includes proteomics. Von Eschenbach responded that it would “dovetail” with them but did not specify how. Questions about one such detail—how NCI would pay for a new biomarker plan with an ever-tightening budget—may come up later this month at a meeting of the National Cancer Advisory Board.


    Seeking the Key to Music

    1. Michael Balter

    Why did the ability to carry a tune evolve? At an unusual, high-level meeting, researchers pondered whether music helped our ancestors survive and reproduce or whether it is merely a happy evolutionary accident

    READING, ENGLAND—On a recent fall evening, the lobby of the archaeology building at the University of Reading was the scene of a strange ritual. Twenty-five researchers danced in a circle while blowing on the ends of differing lengths of rubber tubing. Pedro Espi-Sanchis, a music educator based in South Africa, had cut the tubing such that the notes produced by the pieces spanned two full octaves. Espi-Sanchis encouraged everyone to toot to his or her own inspiration, but to try not to repeat what others were doing. After several minutes, to everyone's delighted surprise, the individual notes coalesced into a single pleasing melody to which the dancers swayed and dipped in rhythm.

    This spontaneous musical performance, a highlight of a recent workshop on the evolution of music and language,* illustrated one of the meeting's key themes: Music, like language, can be a form of communication and coordination among people. Moreover, music is an exquisitely powerful way of conveying emotion, a task at which language all too often falls short.

    Yet although few researchers question that human language arose by means of natural selection, presumably because more accurate communication helped early humans survive and reproduce, the evolutionary significance of music has remained open to debate. The meeting, organized by Reading archaeologist Steven Mithen and music educator Nicholas Bannan, was intended as a first step in setting a research agenda to explore the evolution of music.

    In 1997, cognitive scientist Steven Pinker, then of the Massachusetts Institute of Technology, threw down the gauntlet in his book How the Mind Works, when he suggested that music itself played no adaptive role in human evolution. Rather, Pinker argued, music was “auditory cheesecake,” a byproduct of natural selection that just happened to “tickle the sensitive spots” of other truly adaptive functions, such as the rhythmic bodily movements of walking and running, the natural cadences of speech, and the brain's ability to make sense of a cacophony of sounds. Music, Pinker maintained, is what the late paleontologist Stephen Jay Gould called a “spandrel,” after the highly decorative but nonfunctional spaces left by arches in Gothic buildings.

    But many researchers disagree, arguing that music clearly had an evolutionary role. They point to music's universality and the ability of very young infants to respond strongly to it as evidence that music itself is hardwired into our brains. “A predisposition to engage in musiclike activities seems to be part of our biological heritage,” says Ian Cross, a psychologist of music at Cambridge University. He and others point to the work of University of Montreal neuroscientist Isabelle Peretz, whose studies of musically challenged neurological patients, which suggest that distinct regions of the brain specialize in music processing, have made her a leading opponent of the Pinker viewpoint (Science, 1 June 2001, p. 1636). Indeed, Cambridge University anthropologist Robert Foley argues that the evidence is suggestive enough that “an adaptive model for music should be the default hypothesis.”

    Scientific bonding.

    Researchers at a meeting danced and played in step.


    All the same, many researchers agree that Pinker's argument represents the key challenge to be met: If music is the result of Darwinian natural selection, how did it evolve, and in what way did it make humans more fit? At the interdisciplinary meeting, many talks focused on music's ability to cement social bonds. Some researchers argued that the roots of music could perhaps be traced back to “performance spaces” created by earlier species of human. Others see music as a way of getting high with one's peers, again to lubricate human bonding. And new studies focus attention on mothers and infants, suggesting that music might have evolved as a way for parents to soothe babies while foraging for food.

    By the end of the meeting, says Peretz, “I felt a consensus around the idea that music is not only distinct from language but also has biological foundations.” Yet there was also broad agreement that Pinker's challenge had not been fully answered.

    Sociability versus sex

    Like language, most musical behavior leaves no trace in the archaeological record. The earliest undisputed instruments are flutes made from bird bones found at Geissenklösterle cave in Germany and Isturitz cave in France, created and played by modern humans a scant 32,000 years ago. But the first instruments were probably made of perishable materials such as bark or bamboo and are not preserved, says Bannan. And given the universality of music today, most researchers assume that its origins extend back much further, possibly even before modern humans arose some 150,000 years ago. “If there is a strong genetic basis to musicality, then for it to be universally present in the human population it must have been in place more than 150,000 years ago,” says Foley.

    In the workshop's opening talk, Foley pointed out that Charles Darwin himself was hard put to explain how music made humans better adapted to their environment. In the end, Darwin concluded that music was the result of “sexual selection,” the elaboration of traits—such as the peacock's tail—designed to attract a mate and thus ensure reproductive success. Just as some songbirds sing as part of the courtship process, Darwin proposed that humans evolved the ability to sing to each other to express emotions such as love and jealousy.

    That theory has some leading proponents today, including University of New Mexico evolutionary psychologist Geoffrey Miller, author of The Mating Mind. Miller notes that in some bird species, such as marsh warblers and nightingales, the male signals his supposed genetic fitness to the female by the sheer number of songs he can sing, with repertoires reaching more than 1000 numbers. He argues that music might have evolved as a way for humans to show off their reproductive fitness. But the sexual selection hypothesis continues to be a minority view among music evolution researchers. “If it was sexual selection, [music] would be a lot more restricted,” says Foley. “We would see it more in courtship and less in other activities. Musical ability and activity are too widespread.”

    Foley and others favor another hypothesis, which holds that in humans, music plays an important role in maintaining social cohesion—critical to mounting coordinated actions—which was essential for hominid survival. Experts in primate behavior have long assumed that cooperation among members of a group boosted the survival rates of early hominids and their offspring, thus selecting for genes that enhance social bonding. But direct evidence has been lacking—until last year, when anthropologist Joan Silk of the University of California, Los Angeles, and her co-workers published a study in Science. After 16 years of observing wild baboons, they demonstrated that infants of more sociable female baboons had a higher survival rate (Science, 14 November 2003, p. 1231).

    Foley points out that the apparent fitness benefit of social cohesion is also the current leading hypothesis for why language itself evolved. “So it makes sense to extend it to music and indeed most other activities,” he says. The evening performance led by Espi-Sanchis was a good example of music's “ability to be used in group bonding,” adds psychologist Helen Keenoo of the Open University in Milton Keynes, U.K. “Many people seemed to come away from this experience on an emotional high.”

    But others, including Pinker, say the social-cohesion hypothesis suffers from circular reasoning. Björn Merker, an expert in animal vocalizations at Uppsala University in Sweden who attended the meeting, says that the hypothesis “takes for granted that which it needs to prove, namely why music is needed for bonding and where it got its group-stimulating powers.” Merker prefers a hypothesis that “is driven exclusively by the individual advantage of sexual selection.” Pinker, who was not at the meeting and is now at Harvard University, adds that “universality and early development don't show that music is an adaptation. It just shows that music is innate. That's a necessary condition for something being an adaptation but not a sufficient one.”

    First flutes.

    These 32,000-year-old flutes are the oldest undisputed evidence of music.


    Music for the masses

    For social-cohesion theorists, the challenge is to explain why singing or dancing enhanced social bonding—and why that in turn fosters greater fitness and survival. Robin Dunbar, a psychologist at the University of Liverpool, U.K., has suggested that music might have put groups of hominids into a collective endorphin high, making them feel more positively disposed toward their fellow hominids—and thus more likely to cooperate and survive. Researchers have long known that listening to music can trigger the production of endorphins, natural opiates that are produced in response to pain or other stress. In a frequently cited 1980 study by Stanford University neuroscientist Avram Goldstein, volunteers who received injections of an endorphin-receptor blocker reported getting considerably less pleasure when they listened to normally moving musical pieces.

    Dunbar is well known for his “social brain” hypothesis of human evolution, which holds that larger hominid brain sizes and language both evolved as a response to increasing group sizes in our primate ancestors (Science, 14 November 2003, p. 1160). He argues that the endorphin release from music may enhance the subjective feeling of bonding, creating stronger social cohesion. He told the attendees of the meeting about a pilot study that he and his students recently carried out in English churches. In the study, which aimed to look at the effects of music in a social setting, the endorphin levels of churchgoers who attended Anglican services with and without singing were monitored by indirect methods that measured tolerance to pain. (Measuring endorphins directly requires an invasive lumbar puncture.) After services, parishioners who had sung were able to endure having a fully inflated blood pressure cuff on their arms for significantly longer than those who had not sung.

    Dunbar stressed that although his own study is very preliminary, the overall evidence suggests that group singing and dancing might have helped bridge what he calls the “endorphin gap” between the nonverbal grooming activities of our primate ancestors and the later development of language. A number of studies have shown that grooming, which is the social glue of monkeys and many other primates, raises endorphin levels. “Humans are good at finding things that trigger the sensations they like,” Dunbar says. And in a social context, he says, “endorphin surges create a very strong sense of bondedness and belonging that seems difficult to create any other way.”

    One way to support the social-cohesion hypothesis might be to find archaeological evidence of such group interactions in humans' evolutionary past, but such evidence has been hard to come by. In an imaginative talk, archaeologist Clive Gamble of the University of London proposed that group singing and dancing might be traceable back as far as half a million years ago, by seeking evidence for “performance spaces” where such activities might have taken place.

    He drew on a recent visit to a village of the Makuri people of northern Namibia, where he watched a performance in which women sat around a fire while men, wearing rattles on their legs and striking sticks, danced around them. The next morning, Gamble could see the circle in the sand made by the male dancers. He compared those circles to several circles, 8 meters in diameter and marked by anvils of bone and stone, unearthed at the 400,000-year-old hominid site of Bilzingsleben in Germany, which he suggested represented gathering and performance areas of these early humans. He also pointed to an unusual concentration of 321 hand axes, many of them unused and all located far from a butchering area, at the 500,000-year-old site of Boxgrove, in Sussex, U.K. Gamble suggested that this possibly symbolic deposit of hand axes may have represented a space where early humans gathered to sing and dance.

    Although Gamble's evidence is scant, “I am sure that the hominids at Boxgrove were communicating in a musical and dancelike fashion,” says Mithen, who feels that such speculations “give us a perceptive understanding of [early humans'] lifestyle.”

    Music and motherese

    If music did evolve to facilitate a sense of belonging among early hominids, it's possible that a very specific human relationship—that of mothers and infants—was involved, says University of Toronto psychologist Sandra Trehub. She suggested at the meeting that music was crucial to both bonding with and soothing babies, as well as allowing mothers to get on with other tasks that boosted survival.

    For years Trehub and her colleagues have studied how mothers talk and sing to their infants. Maternal speech has a number of features that can be considered musical, including higher pitch than normal speech—which is associated emotionally with happiness—and a slower tempo, which is associated with tenderness. Trehub and others have demonstrated that infants prefer maternal cooing to normal adult speech in studies that monitor “infant gaze,” or how long a baby spends looking in one direction, considered a measure of attention.

    Music to his ears.

    A mother's song captures her baby's attention.


    In a more recent study, in collaboration with Takayuki Nakata of the Nagasaki Junshin Catholic University in Japan, Trehub measured the responses of 6-month-old infants as they watched videos of their mothers. Infant gaze times were even longer during episodes of maternal singing than during normally melodic maternal speech. In another recent study, Trehub and Nakata asked volunteer mothers to talk to their infants for 2 minutes at a time. During one session, the mothers were allowed to touch their babies as much as they wanted; in a second session, they were told not to touch their babies. Trehub and Nakata found that the women markedly increased the pitch of their voices—that is, made them much more musical—when they could not touch their infants. The infants, for their part, responded to their mothers' efforts to compensate for the no-touch rule with even longer gaze times.

    Trehub and her co-workers did not try to measure endorphin levels in their infant subjects, but they did measure the cortisol levels in the saliva of babies before and after their mothers spoke or sang to them. Higher blood cortisol levels are a reliable indicator of higher arousal levels, and the hormone passes easily from the bloodstream to saliva. Mothers themselves took the saliva samples by gently swabbing their infants' mouths with a cotton roll. The results were striking: Maternal singing caused a marked decrease in cortisol levels that was maintained for at least 25 minutes after the singing stopped. Maternal speech, on the other hand, caused an initial drop in cortisol levels, which then quickly rebounded to normal. “The function of maternal singing seems to be to regulate the arousal level of the infant,” Trehub concluded.

    Of course, this is rather obvious to anyone who has ever sung a baby to sleep. But for Trehub, that's the whole point. “Every culture in the world has lullabies,” she told the meeting. “And they sound very similar across cultures. They are emotive: The pitch goes up and the tempo goes down.” The universality of lullabies, Trehub said, is strong evidence that they have an evolutionary origin. As for what their adaptive function might be, Trehub favors a speculative new idea, called the “putting down the baby hypothesis,” recently proposed by anthropologist Dean Falk of Florida State University in Tallahassee.

    Falk's hypothesis, in press at the journal Behavioral and Brain Sciences, is based on comparisons of the mother-infant interactions of chimpanzees and modern humans as well as data from fossils. She argues that as the brain size of early hominids increased—thus making it more difficult for infant heads to pass through the birth canal—natural selection favored females who gave birth to more immature infants. Unlike baby chimps, who can cling to their mothers at a very young age, human infants are too helpless to do so. The hominid female responded to this situation, Falk argues, by developing melodious vocalizations, or “motherese,” so that she could calm and reassure her baby, if not actually put it to sleep, while foraging for food. These vocalizations, Falk concludes, were the prelinguistic forerunner to true language. And although Falk's hypothesis is controversial—not everyone agrees that “motherese” is universal—Trehub says that it is consistent with the notion that maternal singing, and thus early forms of music, also had an adaptive function.

    Despite this range of suggestions for music's adaptive functions, Pinker, for one, says his challenge has not been met. “The idea that music evolved to soothe babies might explain why mothers sing to their babies,” he says, “but it doesn't explain why older children and adults listen to music.” But he adds that whether music was essential to the survival of modern humans has little bearing on its value to us today: “Some of the things that make life most worth living are not biological adaptations.”

    • * European Science Foundation Workshop on Music, Language, and Human Evolution, Reading, U.K., 28 September to 1 October 2004.


    Immunizing Kids Against Flu May Prevent Deaths Among the Elderly

    1. Jon Cohen

    Increasing evidence suggests that vaccinating schoolchildren can create a “herd immunity” that indirectly benefits the unvaccinated

    The flu vaccine shortage in the United States has had one clear benefit: It has forced a debate about the best vaccination strategy. And mounting data suggest that there are much more effective ways to combat the annual onslaught of this deadly disease than what the country does now.

    The Advisory Committee on Immunization Practices (ACIP) of the Centers for Disease Control and Prevention recommends the vaccine for healthy infants and people 50 or older, as well as people with chronic illnesses, the groups that suffer the most hospitalizations and deaths. But a study published online 2 November in Vaccine concludes that vaccinating school-age children could have a greater impact on slowing the spread of influenza virus and reducing disease.

    Vaccinating a high percentage of people in a community can decrease the spread of a pathogen and create a “herd protection” that extends to the unvaccinated. Just such a phenomenon has occurred in two neighboring Texas towns, Temple and Belton, report Pedro Piedra, W. Paul Glezen, and colleagues from Baylor College of Medicine in Houston. Since the 1998–99 flu season, the researchers have offered a new, nasally administered flu vaccine to everyone between 18 months and 18 years of age, reaching 20% to 25% of the 20,000 eligible children each year. The researchers tallied serious flu-related disease in all age groups. For comparison, they analyzed three other communities in which less than 1% of the 39,000 children received flu vaccine.

    In Temple and Belton, the researchers found that each year they vaccinated school-age children, serious flu cases in adults 35 and older were 8% to 18% lower than in the comparison communities. “This translates to a major reduction in illness,” says Piedra. “With the current policy, you only try to control mortality. If you want to control flu, our hypothesis is to focus on kids.” Earlier studies showed similar herd protection by vaccinating school-age children, but they did not persuade policymakers. U.S. flu-control campaigns “have never really taken this approach seriously,” says epidemiologist Arnold Monto of the University of Michigan, Ann Arbor.

    In 1968–69 Monto and colleagues vaccinated 85% of the schoolchildren in the small town of Tecumseh, Michigan. Virtually no adults received the vaccine. Even so, disease rates in all age groups in Tecumseh were roughly one-third of those in a largely unvaccinated nearby town. “There's a lot of benefit to be gained by targeting school-age kids,” says Monto, who chairs a subcommittee of ACIP that focuses on herd immunity. Monto adds that vaccinating the “frail” elderly may be less effective because they often do not develop a robust immune response.

    Still more evidence of a herd effect comes from Japan. As Thomas Reichert, Baylor's Glezen, and co-authors reported in the 22 March 2001 New England Journal of Medicine, Japan vaccinated 50% to 85% of schoolchildren during the mid-1970s and 1980s, but the elderly rarely received the vaccine. During that period, deaths from influenza and pneumonia—which mainly kill the elderly—dropped by at least 10,000 per year. Beginning in 1987, parents could exempt their children from the program, and deaths from those diseases began a steady climb.

    Biostatisticians Ira Longini and Elizabeth Halloran of Emory University in Atlanta, Georgia, have developed a mathematical model to find the optimal way to distribute flu vaccine. As they describe in a 2000 Vaccine paper, vaccinating just 30% of schoolchildren in a community reduces the likelihood of epidemic spread of flu from 90% to 65%. If half the children are vaccinated, the likelihood drops to 36%; if 70% are vaccinated, the probability of epidemic spread plummets to 4%. “Children are highly connected among themselves, and they're connected through adults to families and neighborhoods,” says Longini, noting that they're twice as likely to become infected as adults. “By vaccinating children, you're tearing the heart out of that web.”
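    Those diminishing probabilities can be sketched with the textbook SIR final-size relation, a deliberately bare-bones stand-in for Longini and Halloran's far more detailed model (the flu-like R0 of 2.5 and the coverage values here are illustrative assumptions, not figures from their paper):

```python
import math

def attack_rate(r0: float, coverage: float, iters: int = 1000) -> float:
    """Fraction of the whole population ultimately infected, from the
    classic SIR final-size relation f = s0 * (1 - exp(-r0 * f)),
    where s0 is the initially susceptible (unvaccinated) fraction."""
    s0 = 1.0 - coverage
    f = s0  # start the fixed-point iteration at its upper bound
    for _ in range(iters):
        f = s0 * (1.0 - math.exp(-r0 * f))
    return f

# Higher vaccination coverage sharply shrinks the epidemic, echoing the
# herd effect predicted for vaccinating well-connected schoolchildren.
for cov in (0.0, 0.3, 0.7):
    print(f"coverage {cov:.0%}: final attack rate {attack_rate(2.5, cov):.1%}")
```

    At 70% coverage the effective reproduction number drops below 1 and the outbreak essentially dies out, the same qualitative collapse in epidemic spread that the Emory model describes.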

    As was found in the Japanese study, Longini and Halloran's models suggest that such a strategy could greatly reduce deaths among the elderly. If, for example, coverage of schoolchildren increased from the current 5% to 20%, they predict it would prevent more deaths in the over-65 population than raising that group's own vaccination coverage from the current 68% to 90% (see graph).

    Kid you not.

    Vaccinating even 20% of school-age children may prevent elderly deaths from flu more effectively than increasing elderly vaccination rates.


    Given the evidence, why hasn't herd immunity to flu received more attention? “Flu until now wasn't really a sexy topic,” says Halloran. “And it's sort of a medical thing that you look at protecting people directly.”

    Although the results hold for both the injected vaccine and the recently licensed inhaled one, Monto suspects that the latter works better. The nasal flu vaccine relies on live but weakened virus, whereas the injected version contains killed influenza virus. In theory, the live vaccine can trigger a broader immune response and may better thwart viral transmission because it stimulates immunity at the site where the virus typically enters. For mass immunization campaigns, says Monto, the spray is also easier to deliver.

    The Texas herd-immunity study will continue for two more seasons, and MedImmune Inc. in Gaithersburg, Maryland, the maker of the nasal vaccine, has four of its own studies under way that will attempt to assess the indirect benefits of vaccinating schoolchildren.

    In another novel strategy, two reports published online 3 November by the New England Journal of Medicine suggest that the current vaccine supply can be “stretched” by injecting smaller doses under the skin, instead of intramuscularly. This approach offers “great promise,” wrote Anthony Fauci and the late John La Montagne of the National Institute of Allergy and Infectious Diseases in an accompanying editorial. “It's not going to be a practical solution to the shortage problem we have this year,” stresses Fauci, “but it could be in the future.”


    RNAi Shows Cracks in Its Armor

    1. Jennifer Couzin

    RNAi's tendency to influence genes and proteins that it's not designed to target is provoking questions and controversy, as scientists labor to solve the problem

    A promising new approach to manipulating genes is showing blemishes as it moves from its glamorous early days to a more nuanced adolescence. The technique, RNA interference (RNAi), shuts down genes; this braking effect helps reveal a gene's function and could potentially be used to treat a host of diseases. But a growing number of researchers are learning that RNAi, which was hailed for its laserlike specificity by scientists and the press (including Science, which anointed it 2002's Breakthrough of the Year), comes with some unintended baggage. In particular, it can affect genes and proteins it wasn't designed to target—a potential problem for both basic genetics studies and RNAi-based therapies, some of which are just beginning human testing.

    Even experts concerned about these so-called off-target effects hasten to point out that RNAi's future remains bright. But the issue is stirring controversy in the field. Biologists are struggling to determine—and agree upon—just how widespread off-target effects are, why they occur, and what can be done to avoid them. Some are feverishly working to circumvent the problem, with early hints of success.

    “We don't know all the rules” of the RNAi machinery, says Mark Kay, a pediatrician and geneticist at Stanford University, who's conducting RNAi animal studies to treat hepatitis B and C viruses. “My philosophy is that we move forward with caution, but we move forward.”

    In the late 1990s, scientists discovered the potency of small RNA molecules just 21 nucleotides or so in length—some lab-produced, others naturally occurring. Injecting these RNAs, often called small interfering RNAs (siRNAs), into worms and flies silenced only messenger RNA (mRNA) molecules containing a complementary sequence. That, in turn, blunted expression of the gene producing that messenger RNA. In these organisms, there was no sign that an mRNA with a slightly mismatched sequence—with, say, 17 compatible nucleotides out of 21 in the siRNA—could also be affected.

    But as scientists moved on to studies in mammals, the picture changed. One of the first to see irregularities was Peter Linsley, the executive director of cancer biology for Rosetta Inpharmatics in Seattle, Washington, a subsidiary of the drug giant Merck. “We thought it would be cool,” Linsley recalls, to use siRNAs to try to design more targeted drugs. The plan: Use siRNAs to knock down expression of a particular gene that an experimental compound is already designed to target. Then add that compound to the mix, and see if it disrupts other genes as well—something that might suggest it's not targeted enough for treating patients.

    But as it turned out, says Linsley, it wasn't the compounds that were poorly targeted. The siRNAs were turning down expression in multiple genes instead of just one. “The siRNAs were dirtier than our compounds,” says Linsley, whose team was taken aback. The pattern persisted, and the researchers finally concluded that siRNAs could “cross-react” with other genetic targets. After some struggle convincing reviewers that the paper was accurate, it appeared in Nature Biotechnology in June 2003.

    Eyeing RNAi's potential.

    With RNAi trials launching for macular degeneration (above), researchers are watching closely to see whether the technique has any unexpected effects on humans.


    RNAi enthusiasts responded skeptically. After all, they'd trusted for several years that the small RNA molecules they were crafting were undeniably specific. Gradually, prodded by Linsley's work and in some cases their own, that belief shifted. “We saw more and more unexplained phenomena,” says René Bernards, a cancer geneticist at the Netherlands Cancer Institute in Amsterdam. Phillip Zamore, a biochemist at the University of Massachusetts Medical School in Worcester, says his thinking evolved “when I couldn't find a way to disprove Peter Linsley.” Like many of his colleagues, Zamore now believes that RNAi's limitations should have been obvious and that to presume such specificity was “incredibly unreasonable.” Genetics, says Zamore, is rarely so neat.

    Why off-target effects occur remains a matter of debate. One possibility is that introducing foreign siRNAs into a cell's existing RNAi system—upon which it relies for a range of functions, from early development to protecting the genome's integrity—risks throwing a wrench into the machinery. Soon after scientists began experimentally adding siRNAs to mammalian cells, they learned that these cells naturally use hundreds of so-called microRNAs, which are similarly sized and help regulate how RNA molecules are translated into proteins. MicroRNAs are widely considered much less specific than siRNAs, frequently targeting sequences that only partly match their own.

    This has left scientists wondering whether mammalian cells, awash in microRNAs, are mistaking foreign siRNAs for more of the same, especially because both microRNAs and siRNAs need many of the same enzymes to function. An RNAi study last year showed that this mistaken identity could occur. “There's probably a fine balance between the microRNA pathway and what we're putting into cells of animals,” says John Rossi, a molecular biologist at the City of Hope Graduate School of Biological Sciences in Duarte, California.

    Weak sequence matching between siRNAs and genes has also been traced to a specific part of the siRNA. That bit, called the 5′ end, helps govern how an siRNA binds to its target. As Linsley found and others such as Zamore confirmed, if that particular piece, about seven nucleotides long, matches a sequence in another gene, there's a risk of the entire siRNA binding to that gene instead.

    Increasingly, biologists are turning up other seemingly esoteric details that may also determine whether an siRNA shuts down unintended genes. In the fall of 2003, a group led by Anastasia Khvorova at Dharmacon, a company in Lafayette, Colorado, and another led by Zamore, reported in Cell that siRNAs with certain sequences and structures unwind slightly differently—and the pattern in which they unwind can ultimately affect how good they are at targeting the right gene.

    A year ago Bryan Williams, a cancer biologist at the Cleveland Clinic in Ohio, identified another, more controversial kind of off-target effect. In fruit fly cells and human cancer cells, he found that siRNAs activated the interferon pathway, which is the body's first line of defense against viruses.

    How widespread the interferon response is remains uncertain. Although some unpublished reports of interferon response in animals exist, “by and large, people who've treated animals with siRNAs have not seen significant interferon induction,” says Phillip Sharp, a biologist at the Massachusetts Institute of Technology in Cambridge and co-founder of the company Alnylam, which next year hopes to begin testing RNAi in patients with the eye disease macular degeneration.

    On target?

    Hepatitis B is one of the diseases RNAi enthusiasts are working to disable.


    In animals generally, the impact of off-target effects isn't clear. Mice with liver disease treated with RNAi technology do suffer toxic effects, says Harvard's Judy Lieberman, but those are considered more a result of the way siRNAs are delivered—in this case, under extremely high pressure, to ensure that they infiltrate liver cells. Lieberman says she's seen no visible evidence of off-target effects in her mice, but she is planning to examine the animals more carefully. Says Linsley, “You can't conclude it's not there until you look.”

    Looking, though, can be trickier than it sounds. For the most part, scientists are relying on microarrays, which show gene-expression levels, to learn whether their siRNAs are hitting unintended genes; in general they're finding that a dozen genes may be affected by a single siRNA. (Linsley has recorded on average at least 40.) Still, it's difficult to gauge how big a problem that is. Mismatches provoke a less dramatic change in gene expression than complete matches. Most gene expression varies by less than twofold when the siRNA doesn't fully match—often not enough to have a substantial biological impact on how a cell, or an animal, actually functions.
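    As a toy illustration of how such microarray readouts are filtered (the gene names and treated-to-control expression ratios below are invented), a common practice is to flag only genes whose expression shifts at least twofold in either direction:

```python
import math

# Hypothetical treated/control expression ratios from a microarray screen;
# the gene names and values are invented for illustration.
ratios = {"intended_target": 0.12, "partial_match_a": 0.55, "partial_match_b": 0.85}

# Flag genes whose expression changes at least twofold in either direction
# (|log2 fold change| >= 1), a common microarray cutoff.
flagged = {gene for gene, r in ratios.items() if abs(math.log2(r)) >= 1.0}
print(flagged)  # only the intended target crosses the twofold threshold
```

    Under this cutoff the partial-match genes, whose expression shifts less than twofold, would not be flagged—consistent with the observation that most off-target changes are too small to register as clearly as the intended knockdown.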

    But using microarrays to look for off-target effects has one big drawback: They show only gene expression, not protein levels. If siRNAs are imitating microRNAs, they may not be degrading messenger RNAs at all but instead blocking their translation into protein. Microarrays thus might not detect changes in protein abundance. “What's really important is what's happening at the protein level, and we don't have a lot of data on that,” says Linsley.

    Researchers at Dharmacon and elsewhere are trying to see whether microarray results correlate with changes in protein levels. At a meeting last week in Titisee, Germany, Sharp presented preliminary data from his lab showing a 10-fold change in protein levels with only a twofold mRNA difference, the level commonly seen from an off-target effect. But doing the kind of broad protein screens that microarrays today accomplish for genes isn't yet possible. “You can spend the rest of your life trying to see all 10,000 proteins in the cell,” says Sharp, “and you'll never get an answer.”

    Scientists are quick to add a caveat: Even if off-target effects occur, they don't necessarily affect the phenotype, or how a cell or animal actually functions. If phenotype isn't altered, notes Zamore, the effects rarely make a difference.

    Cautious but upbeat.

    Stanford's Mark Kay hopes that off-target effects won't derail RNAi's extraordinary possibility.


    The significance of off-target effects also depends on how RNAi is used: to unearth the function of mystery genes or as a medical therapy. In the first case, scientists are getting around the problem by applying several different siRNAs, each of which corresponds to a different sequence in their gene of interest. That way, if one siRNA prompts an off-target effect that changes a cell's phenotype, it will be more apparent.

    When it comes to RNAi-based treatment, though, the potential challenges multiply. The first clinical trial of RNAi therapy—for use in macular degeneration—was launched last month by the Philadelphia company Acuity Pharmaceuticals. Because treatments can be restricted to the eye, the risk of off-target effects is of less concern.

    For other diseases, “it's unclear how much of an issue this is going to be,” says Stanford's Kay, whose RNAi work focuses on hepatitis. The disease is a popular choice for RNAi therapies because RNAi can disable the virus. Yet it's also difficult to target the liver without affecting other parts of the body. In Kay's view, RNAi therapies shouldn't be viewed differently from traditional drugs: “If you give somebody aspirin, they're going to have changes in gene expression in specific tissues.” He expects that RNAi clinical trials, like all others, will need to home in on the lowest effective dose and monitor patient safety carefully.

    To avoid any effects that may cause problems, researchers are chemically modifying siRNAs to try to stop them from glomming onto messenger RNAs they should ignore. Modifications can also make the key 5′ bit of siRNAs more sluggish in its binding, rendering mismatches less likely. Linsley and some Dharmacon colleagues have just submitted a paper on the subject. “We've made a few steps,” he says, declining to be more specific. But “I don't think we've completely solved it.” Although off-target effects may forever linger as a risk of RNAi, he and others say, they hope that they'll become less of a worry, and soon.


    Brain Cells May Pay the Price for a Bad Night's Sleep

    1. Greg Miller

    SAN DIEGO—From 23 to 27 October, this California coastal city hosted the annual Society for Neuroscience meeting, at which more than 30,000 researchers presented data on topics such as sleep problems, addictive anesthesia, and baby talk.

    It's enough to keep you up at night: Sleep apnea, a condition in which breathing irregularities occur during sleep, may kill neurons in brain regions crucial for learning and memory, according to research on rodents described at the meeting. The findings may provide a disturbing explanation for the cognitive deficits often seen in people with sleep apnea. And to make matters worse, new evidence suggests that adding an unhealthy diet to the mix greatly compounds the neural harm caused by disordered sleep.

    In the United States, sleep apnea affects at least 2% of children, 4% of middle-aged adults, and 10% of older adults, according to conservative estimates, and it is even more common in obese people. The cognitive problems that result, including hyperactivity, wandering attention, and learning deficits, were long thought to stem solely from the fatigue that follows a bad night's sleep, says Gordon Mitchell, a respiratory neurobiologist at the University of Wisconsin, Madison. “The concept that you're causing specific damage through cell death in particular areas of the brain is pretty new,” he says.

    Mitchell and others credit their colleague David Gozal with much of the work that has led to this unanticipated realization. Gozal, a pediatric researcher at the University of Louisville in Kentucky, has published a slew of papers in the last few years describing what happens in the brains of rodents exposed to brief periods of reduced oxygen similar to those experienced by a person with sleep apnea.

    You snooze, you lose.

    Rodent research suggests that sleep apnea can kill brain cells.


    It's not a pretty picture. Gozal's team has found that intermittent hypoxia kills rodent brain cells in the hippocampus, a key memory center, and interferes with a process called long-term potentiation, a strengthening of neural connections considered crucial for learning and memory. The same reduced level of oxygen, kept constant, has little or no effect.

    Gozal has also begun to elucidate some of the molecular events underlying the damage. Intermittent hypoxia stresses brain cells, causing them to produce molecules called oxygen free radicals that are notorious for wreaking havoc on cells and driving them to self-destruct. Inhibiting certain enzymes involved in the stress response can save neurons and prevent learning deficits in rodents subjected to hypoxic periods, Gozal's team has found. They also reported at the meeting, for the first time, that regular exercise—the rat equivalent of walking in the park for an hour a day, Gozal says—cancels the learning deficits caused by intermittent hypoxia.

    That's the good news. The bad news is that a diet high in fat and refined carbohydrates appears to magnify the deleterious effects of intermittent hypoxia. The diet alone caused a mild learning impairment when rats were tested in a water maze, and it reduced the level of the activated form of a protein called CREB in their hippocampi. CREB plays an important role in memory consolidation and neuron survival, so it is a good general marker of hippocampal health, Gozal says. Pairing the high-fat, refined-carbohydrate diet with intermittent hypoxia had a synergistic negative effect on learning and on activated CREB levels. “It's a major disaster for the brain,” Gozal says. If the finding holds true for humans, he adds, that would be especially troubling because sleep apnea and unhealthy diets are a common combo for many people with obesity.

    “It's an incredibly important observation,” says Sigrid Veasey, a neuroscientist at the University of Pennsylvania in Philadelphia. “The injuries can be compounded by what would be considered an unhealthy diet, but [is] probably a pretty standard diet for a lot of people in this country.”


    Anesthesia's Addiction Problem

    1. John Travis


    Warning: Anesthesiology may be hazardous to your health. According to Mark Gold, who has spent much of his career investigating the problem of physicians who abuse drugs, data from the state of Florida show that “every year since 1995, anesthesiologists were the number one [medical] specialty for substance abuse or dependence.” His group found, for example, that in 2003, anesthesiologists represented less than 6% of all physicians in the state but made up almost 25% of the physicians monitored for substance-abuse disorders.

    Although Gold, chief of the McKnight Brain Institute at the University of Florida (UF), isn't the first to conclude that anesthesiologists are especially susceptible to abusing drugs, particularly opiate-based compounds similar to those used in general anesthesia, he has a new explanation. The problem isn't simply easy access to drugs, he says. Instead, he and his colleagues propose that the physicians may become primed for drug abuse because they chronically inhale small amounts of anesthetics that sensitize the brain's reward pathways. Indeed, at the meeting, Gold's team reported finding traces of intravenously delivered anesthetics in the air of operating rooms.

    When early anesthesiologists depended on gases such as ether and chloroform, secondhand exposure was a serious problem. But better ventilation and increased use of intravenous drugs reduced such exposures. As for the issue of drug access, hospitals have gone to great pains to safeguard their medications. Even so, anesthesiologists continue to have much higher rates of opiate-related substance abuse than other physicians with similar access.

    From his review of records from Florida's Impaired Professional programs, Gold has found that anesthesiologists who abuse drugs tend to start much later in life than other addicts, who typically experiment with drugs during their youth. Anesthesiologists also tend to relapse unless they change professions, says Gold.

    Breathing problem.

    IV-delivered anesthetics may escape into the air and affect physicians.


    Having conducted research on whether secondhand smoke sensitizes brain reward pathways—children of smokers are much more likely to smoke—Gold wondered whether secondhand anesthesia might be at work. He teamed up with several UF anesthesiologists to answer that question. As a first step, they used a mass spectrometer to examine the air in operating rooms during cardiac bypass surgeries in which fentanyl, an opiate many times more potent than morphine, and a nonopiate anesthetic, propofol, were administered intravenously to patients. Both compounds were found in the operating room air and at higher concentrations in the space between the anesthesiologist and patient, Matt Warren, a graduate student with the Florida team, reported in San Diego.

    In another test, the researchers found that volunteers given fentanyl exhale it. In lengthy operations such as cardiac bypasses, anesthesiologists “could be breathing analgesics and anesthetics for 8 hours,” speculates Gold.

    The Florida team is now collecting blood samples from anesthesiologists during operations to see if fentanyl is present. They also intend to expose rodents to the same air concentrations of the anesthetics as found in the operating rooms and test whether those animals are more susceptible to drug addiction and develop changes in brain regions involved in the rewarding aspects of drugs.

    Those follow-up studies will be needed to convince anesthesiologists and others who are intrigued by Gold's hypothesis. “He's a guy who thinks outside the box,” says Robert L. DuPont, president of the Institute for Behavior and Health in Rockville, Maryland, and former director of the National Institute on Drug Abuse. “But it's hard for me to imagine that the doses people are getting this way are having any biological effect. I'm ready to be persuaded, but I'm skeptical.”


    Listen, Baby

    1. Greg Miller


    How quickly babies home in on the sounds of their native language during their first year may predict how quickly they learn new words, string together complex sentences, and acquire other language skills as toddlers. The new research, presented in San Diego, helps pin down a milestone in language development and may shed light on why the ability to pick up a new language wanes with age.

    When it comes to language, babies are “citizens of the world,” Patricia Kuhl of the University of Washington, Seattle, said in a lecture here. In the early 1990s, her team found that 6-month-old infants naturally possess a language skill far beyond the reach of adults: They can distinguish all the sounds of all the world's languages—about 600 consonants and 200 vowels. By the end of their first year, however, babies begin to specialize. As they become better at recognizing the basic elements, or phonemes, of their native language in all their acoustic variations—learning to lump /o/ as pronounced by mom together with /o/ as pronounced by grandpa and Bugs Bunny—they lose the ability to distinguish phonemes in other languages. By about 11 months, for example, certain vowels that sound distinct to Swedes start to sound the same for a baby born to an English-speaking family.

    Kuhl and others have argued that this change in speech perception is an essential step in language learning. They contend that babies need to be adept at identifying native phonemes, for example, before they can break down a stream of speech into individual words. That skill, in turn, is necessary for assigning words to objects, creating more complex sentences, and so on.

    Baby talk.

    An infant's level of speech discrimination predicts language skills.


    Recent work in Kuhl's lab bears this out. Her team has been using electroencephalogram (EEG) electrodes to monitor the brain activity of 7-month-old infants, who are just at the cusp of the change in phoneme perception. The babies listened to a recorded voice repeat a single phoneme several times before switching to another phoneme. If the baby caught the switch, a blip appeared in the EEG record. This event-related potential (ERP) is a standard indicator that the brain has picked up something new. The researchers tested the babies' ability to discriminate both native and non-native phonemes and then followed up with a battery of language tests at 14, 18, 24, and 30 months of age.

    The ERP recordings revealed that infants who at 7 months of age were good at native phoneme discrimination tended to be bad at non-native phoneme discrimination, and vice versa. This fits with the “neural commitment” theory proposed by Kuhl several years ago, says Mirella Dapretto of the University of California, Los Angeles. Kuhl's work suggests that “the more your brain gets committed to picking up what's relevant in the first language you're exposed to, the more you're tuning out distinctions that are relevant in other languages,” Dapretto says.

    The follow-up studies suggest that the brain's commitment to native speech sounds provides the foundation for later language learning. Although all the children in Kuhl's study tested in the normal range, the ones who did best at native phoneme discrimination at 7 months scored higher at later times on all language measures, including number of words produced, duration of utterances, and sentence complexity.

    “It's extremely interesting [that] you can look at infants and learn something really important about their future learning,” says April Benasich, a cognitive neuroscientist at Rutgers University in Newark, New Jersey. Benasich says the findings add to evidence, including work from her own lab, that it may be possible to screen young infants to identify those likely to need extra help with language learning.
