News this Week

Science  03 May 2002:
Vol. 296, Issue 5569, pp. 820

    Challenge to FDA's Authority May End Up Giving It More

    1. Eliot Marshall

    In a recent 4-week period, the U.S. government reversed course twice on whether drugmakers should be compelled to test their products on children. The policy went from “yes” to “no” and then back to “yes”—confusing researchers and companies alike. The muddle ended 19 April, when the Bush Administration came out in favor of retaining a 3-year-old rule that gives the Food and Drug Administration (FDA) power to demand that companies conduct targeted studies to learn about side effects and set proper doses for children.

    Clinicians and child-health advocates—who lobbied for this outcome—are upset about the flip-flop and want to ensure that it won't happen again. Several senators responded this week by proposing to give FDA permanent authority to order such clinical trials.

    The furor was sparked by a lawsuit filed in 2000 seeking to curtail FDA's power. The Competitive Enterprise Institute of Washington, D.C., and two other free-market advocacy groups challenged FDA's authority to carry out what the agency calls its “pediatric rule.” In force since 1999 but used sparingly, this rule enables FDA to ask for pediatric tests of any drug being developed for adults that might also be given to children. The aim is to look for unexpected effects and set proper doses.

    But the three groups viewed the pediatric rule as an economic burden and a restriction on the practice of medicine, claiming that the effect would be to “delay new drug approvals and to enlarge FDA's power beyond the limits set by Congress.” They sued to stop it. (FDA's new chief counsel, Daniel Troy, helped draft the suit when he was in the private sector, but he has recused himself from the matter at FDA.) In March, FDA informed the court that it would not fight the lawsuit; the agency said it would suspend the pediatric rule while it studies its impact. This prompted an uproar.

    FDA took “a massive step backwards,” says Mark Isaac of the Elizabeth Glaser Pediatric AIDS Foundation. “We were appalled” by FDA's failure to defend its authority. That feeling was widely shared. FDA's decision “surprised and dismayed” members of the American Academy of Pediatrics (AAP), says Richard Gorman of Ellicott City, Maryland, who chairs AAP's committee on drugs.

    At the heart of the dispute is whether incentives are enough to get companies to study pediatric effects, or whether mandatory authority is needed. FDA has had a program since 1997 that offers big rewards for doing pediatric trials. Companies can get a 6-month extension of an exclusive patent on a drug if they do research that defines doses for children. More than 50 drugs have been reexamined and 29 relabeled in this way. In one case, FDA learned that children receiving a pain and seizure medicine were being underdosed by 30%; another trial found that young children given an anesthetic had a higher than expected risk of seizures. The incentive program is so popular that Congress enacted a law last year extending it through 2007.

    Filling a need.

    After the Administration flip-flopped on whether FDA should have power to require pediatric drug trials, Congress is stepping in.


    But AAP and the child health groups argue that the pediatric rule is needed to fill gaps in the incentives program and overcome companies' unwillingness to include children in clinical trials of some drugs. Philip Pizzo, dean of medicine at Stanford University and an expert in pediatric AIDS and oncology, says that industry had “not made the codevelopment of drugs for children a priority” until FDA began nudging it. He thinks the pediatric rule is “enormously important.”

    And as Gorman points out, the incentives program covers only drugs. It leaves out vaccines and other nonpill biopharmaceuticals, a category that includes some of the most promising new therapies being produced by molecular biology. In addition, incentives may not work if a drug's use is being expanded to cover a new disease, Isaac says, because the company gets the patent bonus only once.

    Furthermore, several pediatric oncologists meeting at an advisory panel of the Institute of Medicine in Washington, D.C., last week said that companies usually refuse to allow new, experimental cancer drugs to be given to children. They consider it too risky. That leaves doctors with few options, said a frustrated Peter Adamson of the Children's Hospital of Philadelphia: “We continue shuffling existing therapies … like deck chairs on the Titanic.” AAP's Gorman adds: “The exciting thing about the pediatric rule is that for the first time, it puts children's needs at the table” when new drugs are being considered at FDA. He thinks this could radically change the way companies plan and develop new drugs.

    AAP, the Pediatric AIDS Foundation, and a dozen other organizations lobbied Congress and the president seeking to put FDA back on its original track. As criticism mounted, Secretary of Health and Human Services Tommy Thompson intervened. On 19 April he issued a statement saying, “We will enforce and improve the FDA's pediatric rule.” He also promised to increase the amount of aid to high-priority pediatric trials from $4 million to $7 million a year. The money will go to a network of academic labs supported by the National Institute of Child Health and Human Development.

    But the turnabout did not halt a political reaction. Two Democratic senators—Hillary Clinton (NY) and Christopher Dodd (CT)—joined Republican Mike DeWine (OH) to propose that FDA get full legal power to order pediatric trials, and on 29 April they introduced a bill to that effect. The attempt to curb FDA's authority may therefore have done just the opposite.


    Fossil Plant Hints How First Flowers Bloomed

    1. Erik Stokstad

    More than 100 million years ago, a riot of flowering plants burst upon the world. Where did they come from? That question, which Charles Darwin called an “abominable mystery,” has perplexed evolutionary biologists ever since. Now a remarkably well-preserved fossil from China promises to unveil the murky ancestry of this most diverse group of plants, in a surprising way. “This may be the most significant fossil flowering plant ever found,” says Peter Raven, director of the Missouri Botanical Garden in St. Louis.

    The 125-million-year-old plant—which a team of paleontologists led by Ge Sun of Jilin University in Changchun, China, and David Dilcher of the Florida Museum of Natural History describes on page 899—suggests that the forebears of flowering plants may have been aquatic, weedy herbs. Most paleobotanists have long believed that flowering plants, or angiosperms, arose instead from woody plants resembling the magnolia tree. That made sense, because the closest known relatives of angiosperms—the conifers and other so-called gymnosperms—are all woody. Indeed, the latest genetic studies suggest that the most primitive living angiosperm is Amborella, a woody shrub in New Caledonia.

    Enter Archaefructus sinensis, fresh from the lake deposits of Liaoning Province in northeastern China. A closely related species from Liaoning came to light in 1998 (Science, 27 November 1998, p. 1692), but like most plant fossils, it was fragmentary. Then, in summer 2000, Qiang Ji, now at the Geological Institute of the Chinese Academy of Geosciences, showed Dilcher a slab of rock from Liaoning that contained a much better specimen, one that preserved intact the entire plant from roots to flowers. “I had to sit down, I was so impressed,” Dilcher recalls.

    Like a rose.

    The 25-cm-high Archaefructus resembled modern flowering plants.


    The plant has clear flowerlike traits. The female reproductive structure, called the carpel, is closed with seeds inside. The male organs, known as anthers, resemble modern ones and lie below the female parts, a classic hallmark of flowers. But Archaefructus would raise a florist's eyebrows: It has no sepals or petals, and most strangely of all, its stamens come in pairs rather than singly.

    To find out where Archaefructus fits within the botanical family tree, co-author Kevin Nixon of Cornell University plugged 16 such traits into a computer programmed to calculate likely evolutionary relationships. The program compared the features with those of 173 living plants, whose own relationships were strengthened by 1600 molecular markers. Archaefructus came out as the sister group to all living angiosperms, even closer to the common ancestor than the woody Amborella.
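    The kind of computation behind such programs can be illustrated with a minimal sketch. The example below is hypothetical (it is not Nixon's actual software, data matrix, or method, which combined 16 morphological traits with molecular markers); it shows Fitch parsimony scoring, a standard way to compare candidate placements of a fossil on a tree by counting the minimum number of trait changes each placement requires:

```python
# Hypothetical sketch of Fitch parsimony scoring, the basic operation behind
# programs that place a taxon on a tree by minimizing trait changes.
# Trees are nested tuples of leaf names; `states` maps each leaf to a trait state.

def fitch_score(tree, states):
    """Return the minimum number of state changes implied by a rooted binary tree."""
    changes = 0

    def visit(node):
        nonlocal changes
        if isinstance(node, str):            # leaf: its state set is fixed
            return {states[node]}
        left, right = (visit(child) for child in node)
        common = left & right
        if common:                            # subtrees agree: no change needed here
            return common
        changes += 1                          # subtrees disagree: count one change
        return left | right

    visit(tree)
    return changes

# A binary trait where living taxa A and B have state 1, while a fossil and
# the outgroup O have state 0 (all names and states are invented).
states = {"A": 1, "B": 1, "Fossil": 0, "O": 0}
basal = ((("A", "B"), "Fossil"), "O")       # fossil as sister to the A+B clade
nested = ((("A", "Fossil"), "B"), "O")      # fossil nested inside the A+B clade
print(fitch_score(basal, states))           # -> 1: one change explains the data
print(fitch_score(nested, states))          # -> 2: this placement costs an extra change
```

Run over many traits at once, the placement with the lowest total cost wins; that is the sense in which Archaefructus "came out as" the sister group to living angiosperms.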

    If the team's analysis holds up, Archaefructus could have a lot to say about the earliest angiosperms. Its characteristics support the idea that early angiosperms were herbs. Herbs grow faster and reproduce younger than other seed plants do, and that could have given them an edge over slower growing competitors. Because every branch tip on Archaefructus ends in a flower, paleobotanist Bruce Tiffney of the University of California, Santa Barbara, infers that Archaefructus had a short, fast-growing life. “This is the best evidence so far” for herbaceous early angiosperms, he says.

    It may also have lived in water, Dilcher says. The presence of fish fossils in the same type of rock, the plant's delicate stems, and its bulbous structures that may have served as floats all hint that Archaefructus grew in lakes. Early herbs may have thrived in watery habitats, Dilcher speculates. There, free of competition from other seed plants, early flowering plants could have bloomed into new shapes.

    Dilcher and his colleagues also think that Archaefructus helps explain some of the steps in flower evolution. The paired stamens, Dilcher says, are consistent with the idea that angiosperms once bore their male and female reproductive organs on separate shoots. As these shoots evolved to be shorter, the sexual parts came into the close proximity now seen in modern flowers. “It's very tantalizing,” says Dennis Stevenson of the New York Botanical Garden.

    But although many other experts are equally smitten by Archaefructus, they say they won't be swept off their feet until they've had a closer look at the characters used to establish its evolutionary position. “A whole lot depends on whether [Archaefructus] is correctly positioned in the tree,” says Michael Donoghue of Yale University. If it is, then they may begin tossing roses.


    DOE Delays Hiring of Livermore Head

    1. Andrew Lawler,
    2. Charles Seife

    The scheduled appointment of a new director for Lawrence Livermore National Laboratory in California was delayed last week in the latest sign of tension between the lab and its two overseers, the University of California (UC) and the Department of Energy (DOE).

    DOE officials say they just wanted more information on the slate of candidates drawn up by UC, which runs the labs for DOE. The slate was to be presented on 26 April for action by the Board of Regents. The leading candidate is believed to be physicist Raymond Juzaitis, currently a senior administrator at Los Alamos National Laboratory. Sources say that the long-running rivalry between the two weapons labs may have played a role, along with the fact that Juzaitis once supervised Wen Ho Lee, the former computer scientist at Los Alamos who was caught up in allegations of spying but never charged with espionage.

    “DOE and [the National Nuclear Security Administration] both had some last-minute questions that we did not feel we could adequately answer in the time available,” says UC spokesperson Michael Reese. DOE spokesperson Jeanne Lopatto says that Energy Secretary Spencer Abraham had “asked for more information.” The planned announcement of the regents' decision, she adds, “was a bit premature.” In addition to Juzaitis, currently associate director for weapons physics at Los Alamos, the other candidates are believed to be Jeff Wadsworth, Livermore's deputy director for science and technology; Michael Anastasio, deputy director for strategic operations; and Steven Koonin, a nuclear physicist and provost at the California Institute of Technology in Pasadena. Juzaitis declined to comment on his candidacy.

    Typically, the university's regents rubber-stamp the president's choice for director, and DOE in turn approves the selection. But recent lab controversies have forced DOE to pay more attention. Livermore's eighth and current director, Bruce Tarter, announced his retirement in December amid problems at the National Ignition Facility that have tripled its estimated $1.2 billion construction cost.

    Choosing a Los Alamos manager would be “a strong rebuke to Livermore,” says Hugh Gusterson, a Massachusetts Institute of Technology anthropologist who has written extensively on both labs. “Livermore has a tradition of weak management oversight,” he adds, “while Los Alamos has always been thought to run a tighter ship.” It would also go against the lab's history of promoting from within. “[Juzaitis] is certainly a choice that would have left people here stunned and demoralized,” says one Livermore physicist.


    Stressed Mutant Mice Hit the Bottle

    1. Constance Holden

    Some people can be moderate drinkers for years, only to slip into alcohol abuse after a stressful life event. A new mouse model described on page 931 may help explain why. In the mice, which have been genetically altered to lack a key component of their stress response system, stress apparently acts as a catalyst that makes them—perhaps permanently—more prone to drink. “This paper nicely shows the relationship between genetics and environment,” says alcoholism researcher Todd Thiele of the University of North Carolina, Chapel Hill.

    The work comes from behavioral pharmacologists Inge Sillaber, Rainer Spanagel, and colleagues at the Max Planck Institute of Psychiatry in Munich, Germany. In previous experiments, Max Planck researchers found that mice lacking the gene encoding the receptor for corticotropin-releasing hormone (CRH) seemed to have a blunted stress response. For example, says Sillaber, the animals were less anxiety prone than normal mice, eagerly exploring well-lit boxes that nocturnal rodents normally avoid.

    Because stress is a cause of drinking in humans, and because stress-induced drinking has been shown to have a genetic component, Sillaber and her colleagues wanted to see how the loss of the CRH receptor affected the animals' drinking habits. The researchers stocked the cages of normal and altered mice with two bottles to choose from: one with pure water, the other containing 2% to 8% alcohol. Both types of mice proved to be moderate tipplers, choosing pure water most of the time.

    Stress response.

    Moderate drinkers increase intake after nasty experiences.


    But the two groups diverged after being put through some difficult experiences. In one test, a model of “social defeat,” a male mouse was put into the cage of a hostile stranger for a brief period 3 days in a row. When they were together, the resident mouse attacked the visitor; then they were separated by a wire mesh, preventing the visitor from being mauled but keeping him intimidated.

    None of the mice's drinking behavior changed during or immediately after the test, the researchers report. But alcohol consumption by the mutant mice began to rise a couple of weeks after the unfriendly cage visits, and a month afterward, their drinking had more than doubled, whereas that of the normal mice hadn't changed.

    Both groups of mice were then put through a second ordeal. For 3 days in a row, they had to spend 5 minutes in a container of water, unable to get out. The mutants' alcohol consumption rose even further. What's more, the authors report, the mutants were still drinking substantially more than the controls 6 months after their unpleasant involuntary swims. For mice, that qualifies as a permanent change in how they respond to alcohol.

    Exactly how loss of the CRH receptor alters the animals' drinking habits is not clear. The mutants don't appear to be any more shaken up by the stressful situations than are the normal mice. And because they don't start drinking more right away, they're not relying on alcohol to restore their courage. Jane Stewart of Concordia University in Montreal, Canada, who studies the involvement of CRH receptors in addictive behavior, explains that “the stress may activate pathways that have nothing directly to do with fear and anxiety but which alter the approach to alcohol itself.”

    The researchers are now doing association studies in humans in hopes of finding out more about such pathways. Specifically, Sillaber and Spanagel will look for variations in stress-related genes in alcoholics. Some alcoholics, they say, may have defects in their CRH receptors or other anomalies that disrupt the stress response system in a way similar to that seen in the mutant mice. This research, they say, may help pinpoint some of the genes that make an individual more likely to respond to the slings and arrows of outrageous fortune by turning to the bottle.


    U.K.'s Mass Appeal for Disease Insights

    1. Gretchen Vogel

    LONDON—Plans shifted into high gear this week for a huge repository of information on the genetics and lifestyle of the population of the United Kingdom. The $66 million BioBank UK hopes to collect data from half a million middle-aged Britons over the next decade. But a public battle is looming over how much access companies should have to the database.

    The project, first proposed more than 2 years ago, aims to use the trove of data on the British population's genetic makeup and way of life to flush out factors that influence common diseases such as cancer, diabetes, and heart disease (Science, 18 February 2000, p. 1184). On 29 April, the Medical Research Council, the Department of Health, and the Wellcome Trust, a mammoth biomedical charity, announced their financial backing for BioBank, which will collect blood samples and information on diet, smoking, and other lifestyle choices from 500,000 volunteers aged 45 through 69, then track their health for at least 10 years. Researchers will mine the database for disease-related patterns, such as genes that heighten vulnerability to the cancer-causing effects of smoking.

    Pay later?

    BioBank UK will probe the links between genes, lifestyle, and disease.


    The study is a logical follow-up to the Human Genome Project, says Wellcome Trust director Michael Dexter. “It is part of an overall strategy to really ensure that the [sequencing] research we've done does have health benefits,” he says. The human genome sequence, he says, will allow researchers to more quickly identify DNA variations in the U.K. population that correlate with disease. BioBank will stand out from a growing pack of genetic databases—including deCODE, which probes for disease genes in Iceland (Science, 1 January 1999, p. 13)—because it will collect detailed data on lifestyle choices and risk factors across several ethnic groups. The search for an executive director and a headquarters site will begin in the next few months.

    An oversight committee, to be established by BioBank's funders, will hammer out the rules for access to the data. These are expected to come under intense scrutiny. “A lot more work needs to be done on the relationship between BioBank and industry” to ensure that benefits flow back to the public, asserts David S. King, coordinator of Human Genetics Alert in London. The watchdog group is lobbying for a ban on patents based on genetic discoveries that come out of the database. The group is also pressing for BioBank to allow volunteers to opt out of research they may object to, such as studies on behavioral genetics. Dexter argues that industry researchers must be given access for the project to succeed. “At the end of the day,” he says, “they're the ones who develop the drugs.”

    BioBank has time to address such issues: Full-scale enrollment of volunteers is not likely to get under way until 2004, says a Wellcome Trust spokesperson. The real test will come then, when doctors start pitching the project in earnest to their patients. “It is an opportunity to get people on board for this kind of new biology,” says Dexter.


    Venter Is Back With Two New Institutes

    1. Eliot Marshall

    After 3 months of rare silence, genome scientist J. Craig Venter is back on the air. Venter, who abruptly resigned in January as president of Celera Genomics of Rockville, Maryland (Science, 25 January, p. 601), announced 30 April that he plans to establish two new institutes that will focus on ethics, clean energy, and the environment. Venter also made headlines last week by confirming a persistent rumor about Celera's research: “Three-fifths” of the human genome the company sequenced and published in 2001 is his own.

    Venter says he is establishing an outfit called the J. Craig Venter Science Foundation. It will be the financial and legal umbrella for three nonprofit organizations whose boards he will chair, all located in Rockville. One is already well established: The Institute for Genomic Research (TIGR), a sequencing and gene analysis operation presided over by Venter's wife, microbiologist Claire Fraser. TIGR's two new siblings will be a think tank called the TIGR Center for the Advancement of Genomics (TCAG) and a research institute called the Institute for Biological Energy Alternatives (IBEA). All three will share TIGR's current endowment, which is estimated to be worth about $140 million, according to Venter. The fund was established with stocks Venter received from Celera and from an earlier partnership with Human Genome Sciences of Rockville.

    The broadest of the new operations, TCAG, will enter a field already well populated with serious thinkers. TCAG will concern itself with “public policy and ethical issues related to the sequencing of the human genome,” says Venter. Initially it will take up four topics: risks of discrimination and a mistaken public emphasis on “genetic determinism”; fallacies about race; genetics and medicine; and stem cell biology. Venter says, for example, that congressional efforts to “criminalize” scientific research by banning some cloning and embryonic stem cell studies are “unprecedented” and deserve much wider comment. He plans to recruit a staff of 20 to 30 people to support up to 30 visiting faculty, who will come for periods of 3 to 12 months.

    Ethics and energy.

    Venter is moving into new research areas.


    TCAG's turf overlaps to some degree with that of another new center announced in April, the Genetics and Public Policy Center of Washington, D.C., backed by the Pew Charitable Trusts and Johns Hopkins University in Baltimore. The Hopkins center, headed by former National Human Genome Research Institute assistant director Kathy Hudson, will focus initially on reproductive genetics, as required by its 3-year, $9.9 million grant from Pew. Venter's comment: “The more voices, the better.”

    Bioethicist Thomas Murray, director of the Hastings Center in Garrison, New York, says the Hopkins center was carefully planned and “fills an important need.” Murray hasn't seen TCAG's agenda, but he offers Venter this advice: “Define your mission clearly” and guarantee the center its independence.

    Unlike TCAG, Venter's energy and environment shop, IBEA, may rely extensively on government support. Staff scientists will explore microbial genomics to look for solutions to environmental problems, for example, by degrading toxic chemicals and sequestering carbon dioxide from the atmosphere. They will also study clean energy products, such as hydrogen. This project, according to Venter, received encouragement from Ari Patrinos, head of biological and environmental research in the Department of Energy's (DOE's) science office. Indeed, Patrinos says, IBEA's agenda matches DOE's own research goals very closely: “If [Venter's] record is any indication, we expect big things from him again.”


    A Single Climate Mover for Antarctica

    1. Richard A. Kerr

    Weird things are afoot at the bottom of the globe. The Antarctic Peninsula's Larsen ice shelf has suffered a torrid 2.5°C warming during the past half-century (Science, 29 March, p. 2359). A Rhode Island-sized chunk of the ice shelf drifted away from the peninsula and broke up in recent months as glaciologists watched, some Antarctic glaciers are thinning, and sea ice is retreating—all as greenhouse warming would have it. Meanwhile, however, other glaciers are thickening. In places, sea ice is actually advancing, and most of Antarctica is not warming at all or is even cooling. What gives?

    Meteorologist David Thompson of Colorado State University in Fort Collins and atmospheric chemist Susan Solomon of the National Oceanic and Atmospheric Administration's (NOAA's) Aeronomy Laboratory in Boulder, Colorado, have an explanation. On page 895, they build a case that a climate master switch in the atmosphere over the high southern latitudes is driving the wacky climate shifts of Antarctica. And the hand on the switch, they suggest, may be our own. Humanmade chemicals drive the formation of the yearly Antarctic ozone hole, which, they argue, throws the climate switch—called the Antarctic Oscillation (AAO)—in the atmosphere below.

    The work is “the strongest evidence yet” that a shift in the AAO “could explain a number of different components of [Antarctic] climate trends,” says meteorologist David Karoly of Monash University in Clayton, Australia. The idea that Antarctic ozone loss is behind the AAO shift is getting a more cautious reception.

    Hot times.

    Warming (yellow) and winds (arrows) induced by the Antarctic Oscillation doomed part of the Larsen ice shelf.


    To link stratospheric ozone loss to climate change at the surface, Thompson and Solomon first turned to atmospheric observations from weather balloons routinely launched from seven sites around Antarctica. The instrumented balloons tracked the erratic atmospheric seesaw of the AAO, which raises atmospheric pressure alternately over the pole and in a ring passing over the Southern Ocean and the tip of South America. These pressure shifts alternately accelerate and slow the ring of westerly winds that encircle Antarctica, as Thompson and J. Michael Wallace of the University of Washington, Seattle, suggest happens in the Arctic (Science, 9 April 1999, p. 241). The AAO clearly swings erratically from one phase to the other week to week, month to month, and year to year, but the balloon data from 1969 to 1998 show that recently it has been spending more and more time in its positive, strong-wind phase, just as the Arctic Oscillation (AO) has.

    Having shown that the AAO high above the polar region has shifted, Thompson and Solomon demonstrated that the shift could explain most of the climate change at the surface. Comparing the pattern and amplitude of the AAO trend with those of the climate change, they found that the AAO's shifts in circulation—including winds and air rising over the continent—could account for 90% of the summertime cooling over Antarctica and about half of the summertime warming over the Antarctic Peninsula and the southern tip of South America. The rest of the peninsula's warming may be linked to changes as far away as the tropical Pacific.

    To trace the changes back to the stratosphere, Thompson and Solomon compared trends in stratospheric “climate” with the AAO trend. Researchers had already established that the springtime loss of ozone—which normally absorbs solar energy and warms the lower stratosphere—had cooled the lower stratosphere by 6°C each spring. That cooling, in turn, strengthens the stratospheric vortex of westerly winds, a stratospheric analog of the AAO's ring of westerlies in the lower atmosphere. Thompson and Solomon compared the timing of ozone-induced cooling and vortex intensification in the stratosphere with similar changes in the lower atmosphere and at the surface. The stratospheric shifts seemed to break through to the lower atmosphere at roughly the times of the year—late spring and early summer, and fall—when seasonal circulation changes temporarily break down the usual barrier between the wispy stratosphere and the dense lower atmosphere. That timing “seems pretty good evidence [that] ozone is important” in driving the AAO and thus climate change, says Thompson, “particularly during the late spring.”

    Pinning most of the contradictory Antarctic climate changes on a changing AAO “seems reasonable” to meteorologist Martin Hoerling of NOAA's Climate Diagnostics Center in Boulder. He and others are reluctant, however, to extend a linkage to the overlying stratosphere just yet. “You certainly can't rule out a role for ozone” in climate change, says meteorologist James Hurrell of the National Center for Atmospheric Research in Boulder. “But I think other things may be contributing.” He and Hoerling have shown that, in climate models, the recent warming of the tropical ocean drives the AO into its positive phase (Science, 27 April 2001, p. 660). Now the big riddle about the patchwork of Antarctic climate change seems to have shifted from “What is the culprit?” to “What could be pushing the AAO to such an extreme?”


    Pentagon Proposal Worries Researchers

    1. David Malakoff

    A proposal to impose new controls on U.S. scientists who do basic research for the military is drawing fire from universities, members of Congress, and even some top Pentagon research officials. The draft rules would require prior government review of publication and travel plans for researchers conducting nonclassified research deemed “critical” to national security. Critics say the new rules are largely redundant, and they warn that the added paperwork could scare away top scientists from working with the Department of Defense (DOD).

    The draft rules “are a valid effort to reassess security, but they don't appear to be very well thought out,” says Jacques Gansler, a former top Pentagon research administrator in the Clinton Administration and now head of the Center for Public Policy and Private Enterprise at the University of Maryland, College Park. In an internal analysis obtained by Science, Don DeYoung, executive assistant to the director of research at the U.S. Naval Research Laboratory in Washington, D.C., argues that the rules “can be expected to have a chilling effect” on defense research.

    The Pentagon will spend about $1.4 billion on basic research this year, with more than half going to universities for fundamental work in areas such as computer science, mathematics, and engineering. Although academic researchers have traditionally faced few restrictions, universities have reported sporadic Pentagon efforts to restrict the flow of unclassified information since the 11 September terrorist attacks (Science, 22 February, p. 1438).

    Going critical?

    Military-funded marine studies could be one field affected by new rules.


    Last week, those whispers took shape in the form of a leaked 120-page draft regulation entitled Mandatory Procedures for Research and Technology Protection Within the DOD. The internal document, dated 25 March and first reported last week by the Chronicle of Higher Education, describes a multilayered plan for protecting sensitive information. The first step would have Pentagon program managers decide if DOD-funded studies at universities, companies, or military laboratories involve “critical research technologies” or “critical program information.” If so, the institutions and researchers conducting the work would have to prepare detailed security plans, label documents as protected, obtain prior review of publication and travel plans, and decide whether to place restrictions on any foreign scientists involved in the project. The Pentagon would also create a centralized database to track the work it has funded.

    The plan is deeply flawed, says DeYoung, who responded to a memo from senior DOD officials asking for comment. In a brisk seven-page analysis, he argues that the draft rules overstate potential threats, ignore a 16-year-old presidential order against restrictions on military-funded basic research, and duplicate existing government efforts to protect critical technologies. He also argues that the rules will lead to a counterproductive, ever-expanding definition of critical research. “In a competitive budget environment,” he writes, “there will be a strong propensity for managers to designate their projects as critical.”

    Such fears are being echoed in Congress. “This could become another endless bureaucracy,” says one Senate aide. Adds Senator Jeff Bingaman (D-NM), who sits on the Armed Services Committee, “They are trying to wall off researchers.” Despite such concerns, however, lawmakers plan to wait for the Pentagon to come up with a final plan before reacting. “There isn't much appetite right now to micromanage [the military],” says a House aide. University and industry lobbyists are also keeping their powder dry in hopes that the Pentagon will modify its current proposal. DOD has been asked to extend the comment period, which was supposed to end this week.

    Gansler laments the fact that the proposal comes “just as world-class researchers and companies were showing a little greater interest in doing defense research.” He fears that any additional rules may cement the Pentagon's reputation as a funding source that's more trouble than it's worth.


    Europe Begins Work on Modest New Agency

    1. Richard Stone

    STOCKHOLM—You know scientists are desperate when they clamor for new bureaucratic paws on the R&D purse strings. But rampant dissatisfaction with Europe's basic research strategy—or lack thereof—has sparked calls for a new grantmaking body to fill the void. At a meeting here last week, the continent's top science managers started to flesh out a proposal for a European Research Council (ERC). It may not be what many scientists were hoping to see, but it does reflect budgetary constraints and the reality of the European Union's byzantine politics.

    The council's proponents invoke some disturbing numbers in arguing their case. European governments spend, on average, 2% of their budgets on R&D, compared with 4.2% in the United States, and the gap has widened significantly since 1995. “We have to do something, and we have to do it now,” says Dan Brändström, executive director of the Bank of Sweden Tercentenary Foundation and chair of a Swedish committee on the future of research in the European Union.

    Most research funding in Europe—roughly 96%—comes from national agencies. Nearly all the rest comes from a $4-billion-a-year pot known as the Framework program, administered by the E.U. But Framework targets mainly R&D that is likely to benefit industry in the near term, and industry currently favors hot fields such as genomics and nanotechnology.

    “Is agriculture the great future of Europe or is R&D?”—Michael Sohlman, the Nobel Foundation


    That has left many disciplines out in the cold, including some that are starved for support from the national agencies. Frank Gannon, executive director of the European Molecular Biology Organization, sees an “enormous increase” in the number of microbiologists leaving Europe for the United States. Other fields are faring even worse, he says: “There's a great danger that all research on plant biology will be snuffed out.”

    Such looming threats prompted a meeting last week at the Royal Swedish Academy of Sciences, where 60-odd participants bandied about a new watchword: reapportionment. The idea is to lobby E.U. ministers to endow ERC by taking a tithe from Framework and other programs; national research agencies may also be pressed to contribute. Even that amount may not be enough, say some observers: Claiming a tenth of the E.U.'s much larger agricultural subsidies, for example, would allow the E.U. to double its science budget, notes Michael Sohlman, executive director of the Nobel Foundation. “One has to present politicians with a choice,” he says. “Is agriculture the great future of Europe or is R&D?”

    The participants would prefer to see the council created outside of Framework, which is tainted by what one observer calls “a credibility problem.” “Scientists don't trust it,” he maintains. Some mandarins suggest that the new council's initial remit should be to fund projects that are too risky for most national agencies; a promising model might be the U.S. Defense Advanced Research Projects Agency's nonclassified portfolio. Such an approach might also help Europe retain young, innovative researchers who now tend to go elsewhere. “The problem is not money. It's people,” asserts Reinder van Duinen, ex-president of the Netherlands Organization for Scientific Research.

    The discussion here was intended to set the stage for a meeting in October in Copenhagen, where the parties hope to hammer out an ERC vision and timetable. Further delays will only widen the competitiveness gap between Europe and the United States, predicts Fotis Kafatos, director-general of the European Molecular Biology Laboratory. For Europe's scientific community, he says, “this is a moment of truth.”


    Public-Private Group Maps Out Initiatives

    1. Jocelyn Kaiser
    1. With reporting by Robert Service.

    A new group hoping to spur a global effort to determine the structure and function of all proteins made by the human body kicked into gear last week. The Human Proteome Organization (HUPO), an international alliance of industry, academic, and government members, laid out its first set of initiatives and has begun knocking on industry doors for funding.

    HUPO was formed about a year ago by a group of scientists who wanted to make sure that companies don't lock up basic proteomics data under trade secrecy (Science, 7 December, p. 2074). The founders also wanted to include more countries than participated in the Human Genome Project. After an initial meeting last fall, HUPO participants this week fleshed out five initial projects (see table). “We want to nail down specific initiatives” so companies will be interested in contributing funding, says HUPO president Sam Hanash, an oncologist at the University of Michigan, Ann Arbor.


    The list is a mix of technology, tools, and research. For example, HUPO's bioinformatics plan would develop community-wide standards for presenting mass spectrometry and protein-protein interaction data. Another initiative would create a collection of antibodies for the primary proteins made by the 30,000 or more human genes. HUPO also wants to identify thousands of new proteins present in small amounts in blood, which would be very valuable to companies developing diagnostic tests. All the data would be freely available through public databases.

    Pieces of these projects are already under way. Protein chemists in Germany this summer expect to submit a 40-million-euro grant request to the European Union for an antibody initiative, and companies have shown interest in matching the funds, says Wolfgang Mutter of the health-care company Roche. A plan by the Asian and Oceanian branch of HUPO to form a liver proteome consortium—part of HUPO's cell models initiative—could soon get a jump-start: Korea's multibillion-dollar, 10-year 21st Century Frontier Research Program is considering devoting some funds to it, says Young-Ki Paik of the Yonsei Proteome Research Center.

    HUPO still needs to raise a lot more money, however. “These are not small projects,” says Emanuel Petricoin III of the U.S. Food and Drug Administration. “The goal is to get buy-in” from companies and then matching government funds, he says. Some companies have already chipped in a few million dollars. They include Amersham Biosciences, which announced at the meeting that it would spend $500,000 on seminars. Amersham's Günter Thesseling says the fact that everybody will have access to the results of HUPO projects is a plus. The data are “a prerequisite that everybody should be able to use,” he says. Chris Spivey, who's working on business support for HUPO, expects much bigger commitments by HUPO's next meeting in Paris in November. “The sums of money are going to be substantial,” he predicts.

    While HUPO is forging ahead with its first projects, the U.S. National Institutes of Health (NIH) is still mapping out its own proteomics strategy. At a meeting* last week in Bethesda, Maryland, proteomics experts went back and forth over possible recommendations on the best way for NIH to encourage the field's development. Many, like Ruedi Aebersold of the Institute for Systems Biology in Seattle, voiced support for a handful of pilot-scale centers to identify proteins en masse from selected tissues or blood serum using mass spectrometers. But because current mass spectrometers have difficulty spotting small amounts of proteins in a sample and cannot detect many of the key regulatory modifications that occur to proteins after they are synthesized, other researchers were less enthusiastic about the value to basic researchers of such pilot studies. That left many looking to HUPO for the early action.

    • *Human Proteome Initiative Workshop, 29 April 2002, National Institutes of Health, Bethesda, Maryland, hosted by the National Cancer Institute and Food and Drug Administration.


    NSF Report Paints a Global Picture

    1. Jeffrey Mervis

    Think of it as everything you ever wanted to know about the global scientific community—but didn't even realize you could ask for.

    The 2002 edition of the National Science Foundation's (NSF's) biennial Science and Engineering Indicators, an 1100-page behemoth of a report that includes a CD-ROM, hit the streets this week. The new volume, the 15th in a series, provides a banquet of data for science aficionados worldwide. But it also remains true to its humble origins as a report to Congress on the state of U.S. science.

    Spending flows.

    U.S. affiliates of foreign companies performed $22 billion worth of research in 1998, almost 50% more R&D than did foreign affiliates of U.S.-owned companies. The two totals were practically even in 1994.


    Thus, not far from an analysis of how many foreign-born Ph.D.s trained in the United Kingdom return to their countries of origin is a summary of the billion-dollar habit of U.S. legislators to bring home the research bacon for their constituents, a widely reviled but extensively used practice known as earmarking. The National Science Board, a presidentially appointed oversight body and the official publisher of Indicators, also isn't above plugging its own work, in particular, three ongoing studies that examine workforce issues, infrastructure needs, and international collaborations.

    This year's Indicators offers fresh insights on several familiar topics. For example, it shows a shift in the worldwide flow of scientific talent and an increase in capacity within the developing world. China's domestic universities, having recently overtaken Japan, are now the world's fifth-leading producer of science and engineering doctorates; they are poised to surpass France and the United Kingdom for third place, behind the United States and Germany. “It's time to discard the idea that the United States is the vacuum cleaner for the world's students,” notes NSF's Jean Johnson, co-author of the chapter on higher education.

    Trend lines.

    The sharp rise in overall U.S. spending on research since 1980 is driven by a huge jump in the last 5 years in investment by industry (top). Asian production of science Ph.D.s, led by China, now tops Ph.D. production in the United States on an absolute basis. Both trail Europe (bottom). Foreign students are much more likely to head home after earning their Ph.D. in the United Kingdom than the United States (see table below).


    The report also examines a country's commitment to primary and secondary school science and mathematics education with a scatter plot of teacher salaries in relation to a country's overall wealth: In Korea, Switzerland, and Spain, teachers earn twice the per capita gross domestic product, whereas their counterparts in Norway and the Czech Republic earn below-GDP rates. U.S. teachers are paid just above the GDP rate.

    The report, in eight chapters, covers the life of a scientist from preschool to retirement, as well as the factors—from patenting to public understanding—that shape the scientific environment. It's available online at


    Tuning In the Radio Sky

    1. Robert Irion

    If astronomers and agencies from six continents can work together, the world's largest telescope may rise in the next decade—with a radical new design

    HAT CREEK RADIO OBSERVATORY, CALIFORNIA—From the controls of his Cessna, astronomer Jack Welch points to a clearing in Lassen National Forest. There, in this isolated spot in northeastern California, ten 6-meter telescopes stare at the sky—a modest array by the standards of radio astronomy. But by 2005, something grander will take its place: a cluster of 350 radio dishes, acting in unison to view the universe and watch for signs of life elsewhere.

    Two volcanoes, Lassen Peak and Mount Shasta, overlook the site of this project, called the Allen Telescope Array (ATA). That's fitting, for the array's concept—many receivers linked by cheap electronics—is rumbling through radio astronomy. By cutting the structural costs of huge dishes and instead combining signals from many small detectors, radio astronomers aim to explore the cosmos electronically at a nonastronomical price.

    “In a metaphorical sense, we're learning how to build telescopes out of computers, not metal,” says physicist Kent Cullers of the SETI [Search for Extraterrestrial Intelligence] Institute in Mountain View, California. “This is the future of radio astronomy,” agrees astronomer Leo Blitz of the University of California (UC), Berkeley, which is building ATA with the SETI Institute. “If you want to build a large telescope for a fixed amount of money, I can no longer think of any reason to build a single large dish.”

    Cullers and Blitz are confident that 12 to 15 years hence, this trend will produce a telescope of breathtaking scale: the Square Kilometer Array (SKA). As its name implies, this instrument would gather radio waves with a combined detecting area of a full square kilometer—making it 100 times more sensitive than any existing array. SKA would expose the now-invisible era when hydrogen first clumped together, tracing the “cosmic web” of dark matter that underlies all structure in the universe. Other studies unique to SKA's radio window include mapping magnetic fields in and among galaxies in exquisite detail, finding thousands of pulsars and using them to track gravitational ripples in space, and extending the search for intelligent life to tens of millions of stars.
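    Sensitivity scales roughly with collecting area (at fixed receiver performance), so the hundredfold figure can be sanity-checked against today's workhorse, the VLA, whose 27 dishes are each 25 meters across:

```latex
% VLA collecting area: 27 dishes, 25 m diameter each
A_{\mathrm{VLA}} = 27 \times \pi \left(\frac{25\ \mathrm{m}}{2}\right)^{2}
                 \approx 1.3 \times 10^{4}\ \mathrm{m}^{2}
% Ratio to SKA's full square kilometer
\frac{A_{\mathrm{SKA}}}{A_{\mathrm{VLA}}}
    = \frac{10^{6}\ \mathrm{m}^{2}}{1.3 \times 10^{4}\ \mathrm{m}^{2}}
    \approx 75
```

    A gain on the order of 100, as claimed.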

    Plans call for SKA to reap this scientific harvest by spreading its detectors across more than 1000 kilometers of land. But despite the economy of mass production, it may cost $1 billion to build. That's why a formal international consortium of radio astronomers is pushing SKA as a global project from the outset. Scientists on six continents have contributed ideas for its design and location, sparking a creative outburst that the field hasn't seen in 30 years. “Radio wave astronomy has been so productive, but there is still much to do,” says Peter Dewdney of Canada's National Research Council in Penticton, British Columbia. “We think SKA deserves a place among the world's great telescopes of the next decade.”

    Skating toward SKA

    When proponents make the case for including SKA in their future, they often point to radio astronomy's past. For example, three of the five Nobel Prizes in astronomy have rewarded work at radio wavelengths. Moreover, the instrument responsible for the second-highest annual rate of publications in astronomy—behind only the Hubble Space Telescope—is the Very Large Array (VLA), a network of 25-meter radio dishes in Socorro, New Mexico.

    That record hides a small surprise: VLA's 27 dishes run on 25-year-old technology. “The VLA was completed in 1980, but it hasn't been upgraded at all,” says Welch, who pilots a faded yellow Plymouth Valiant during the half-hour drive to Hat Creek Radio Observatory from the nearest airfield. A $60 million, decade-long project to improve VLA has now begun, he notes, “but it's clear that ultimately we will need a new general-purpose telescope with much more collecting area.”

    Such wave-gathering prowess would give SKA the sensitivity and breadth of the best optical telescopes, says Welch, who conceived the ATA project at UC Berkeley. Today, astronomers can combine signals from widely separated radio dishes to make sharp images, but the objects must be bright and can't cover much area on the sky. As astronomer Alyssa Goodman of Harvard University puts it: “Radio astronomy leads to Nobel Prizes but not pretty pictures. It would be nice if it led to both.”

    New-wave radio.

    The Allen Telescope Array, a planned network of 350 radio dishes in California, relies on low mass-production costs.


    Some efforts are already under way. At millimeter wavelengths—the high-frequency end of the radio spectrum—construction will soon begin on the long-awaited Atacama Large Millimeter Array (ALMA), a joint U.S.-European effort to build 64 12-meter dishes high in the Chilean desert. The U.S. National Radio Astronomy Observatory (NRAO) opened its 100-meter Green Bank Telescope in West Virginia last year, and astronomers at the 300-meter Arecibo Observatory in Puerto Rico and in Australia, France, India, Italy, and the United Kingdom have recently constructed or substantially upgraded their facilities.

    Still, many note that the average hair color at radio astronomy meetings is becoming grayer. “A huge number of us entered the field in the 1960s,” says astronomer Donald Backer of UC Berkeley, noting that Arecibo and Jodrell Bank Observatory near Manchester, U.K., were vital destinations. “That's not happening now. ATA and SKA are exciting, but they have yet to pull in students.”

    If SKA does not enjoy the same cachet among astronomers as major plans in the optical, infrared, x-ray, and other wavelengths, it may be because such a huge and radical concept still seems so foreign. “I was so ignorant of this subject that I thought the telescope was built square to make the Fourier transforms easier,” joked astrophysicist Roger Blandford of the California Institute of Technology in Pasadena. But after he looked into it, Blandford became a convert. “In a 1° field of view, you'll see 300,000 extragalactic sources,” he says. “It will be a splendid probe for cosmology, and it will discover completely new sources as well.”

    Venture capital

    Blandford's latter point is one that SKA's organizers would like to sell. “It's quite possible that the main thing this instrument will do is to show us something that no one expected,” says the SETI Institute's Jill Tarter, chair of the U.S. SKA consortium. But Tarter and her collaborators acknowledge that the lure of the unknown won't sway funding agencies. “In today's climate, you need a sharply focused scientific case,” Backer says. “We won't get $1 billion to build the next bigger thing just because we can.”

    Due for a tune-up.

    The productive but aging Very Large Array is getting a decade-long upgrade.


    So, SKA's partners focus their pitch on the web of primordial hydrogen, which Backer calls “as rich a prize as the cosmic microwave background itself.” Hydrogen suffused the dark era between the origin of the microwave background—when atoms first formed and light streamed freely into the cosmos—and the birth of the first stars and galaxies. A neutral hydrogen atom emits radiation at a wavelength of 21 centimeters when its lone electron flips its spin. As space expands, this weak signal from the early universe stretches into tenuous wisps of meters-long radio waves. They will penetrate everything, giving astronomers a clear view of mass concentrations in the infant universe.
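    To put a rough number on that stretching (the redshift value below is illustrative, not a figure from the article): wavelengths expand with the universe by a factor of 1 + z, so 21-centimeter radiation emitted before the first stars formed, at a redshift of roughly 10, arrives today as meters-long waves:

```latex
\lambda_{\mathrm{obs}} = (1+z)\,\lambda_{\mathrm{rest}}
                       = (1+10) \times 21\ \mathrm{cm}
                       \approx 2.3\ \mathrm{m}
```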

    “A square kilometer is not a random size,” says astronomer Harvey Butcher of the Netherlands Foundation for Research in Astronomy (ASTRON) in Dwingeloo. “If you put the Milky Way at the very beginning of galaxies, you need a square kilometer to be able to detect it, given the sensitivity of receivers today.”

    Various components of our galaxy's ancestors will pop into focus for SKA. Radio signatures of carbon monoxide and other molecules will trace the history of heavy elements in early galaxies. The whirlings of coherent microwave emissions from vast clouds of water vapor, called megamasers, promise to expose some of the most distant supermassive black holes at the cores of active galaxies. Closer to home, SKA will resolve the magnetic fields that lace through galaxies and the cradles of stars within them.

    Within the Milky Way, SKA should find at least 10,000 pulsars, the dense, spinning remnants of exploded stars. The telescope will track the relative rotation speeds of the fastest of these beacons with an accuracy of better than a millionth of a second. Albert Einstein's theorized gravitational waves—ripples in the fabric of spacetime caused by massive disturbances, such as coalescing black holes in the centers of distant galaxies—may flutter the apparent motions of the pulsars enough for SKA to detect.

    SKA's ability to focus on much larger patches of the sky than ALMA also will make it an ideal radio survey tool, says astronomer Jim Cordes of Cornell University in Ithaca, New York. Cordes is eager to observe what he calls the “transient radio sky”—bursts and new objects that may come and go in days or weeks. “We are very behind our colleagues at other wavelengths in exploring the transient sky,” Cordes says. For instance, SKA might see the afterglows of distant gamma ray bursts, only 1% as bright as any seen so far, or even bursts that appear only in radio waves.

    The wide field of view also will make SKA ideal to search for blips from other civilizations. Electronic processing will let Tarter and other SETI Institute astronomers monitor many stars in the same viewing area as another object being studied. Even if such signals are merely “leakage” of alien radio or TV equivalents, SKA could pick them up from deep within our Milky Way. That's a far cry from the slow pace of searching today, says Welch. “Even with Arecibo, we couldn't hear Howdy Doody beyond Alpha Centauri,” the closest star, he observes.

    Global by design

    Welch and Tarter, who are married, and their colleagues at the SETI Institute and UC Berkeley are building ATA to deepen their own research, but they clearly view it as the forerunner to SKA. At a projected $26 million, ATA will provide roughly the same collecting area as the single-dish Green Bank Telescope at one-third the cost. Its oddly shaped 20-foot aluminum dishes (the manufacturer disdains metric units) will be pressed out by a satellite-dish outfit in Idaho Falls, Idaho, over the next 2 years at the rate of one every other day. “We're light, we're agile, and we're quick,” says UC Berkeley's Blitz.
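    The collecting-area comparison can be roughly checked from the numbers quoted here, taking the 20-foot dishes as about 6.1 meters in diameter:

```latex
% ATA: 350 dishes, ~6.1 m diameter each
A_{\mathrm{ATA}} \approx 350 \times \pi \left(\frac{6.1\ \mathrm{m}}{2}\right)^{2}
                 \approx 1.0 \times 10^{4}\ \mathrm{m}^{2}
% Green Bank Telescope: a single 100 m dish
A_{\mathrm{GBT}} = \pi \left(\frac{100\ \mathrm{m}}{2}\right)^{2}
                 \approx 7.9 \times 10^{3}\ \mathrm{m}^{2}
```

    Comparable areas indeed, with the array's economy coming from mass-produced dishes rather than a single giant structure.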

    ATA also represents a sociological breakthrough for the field. Its funding is entirely private—nearly all of it from technologist Paul Allen, the co-founder of Microsoft and a big SETI Institute fan. “We run this like a skunk works,” says project scientist John Dreher of the SETI Institute. “There are minimal reviews and no ponderous government management structure. We just have to keep one panel happy.”

    Scaling ATA up to a SKA-sized network would take thousands of dishes, each one perhaps 12 meters across. Making that financially feasible will require further cost reduction by a factor of 3 or 4, Welch estimates. India also is pursuing a similar concept for SKA, so the two countries may exchange ideas about how to bridge that cost gap.

    The leading alternative to the many-dish idea, in the minds of most observers, would look like fields full of simple looped wire antennas or wires embedded in tiles. Spearheaded by ASTRON, this concept may arise within 5 years as a $75 million project called the Low-Frequency Array, or LOFAR. “The basic element of LOFAR is really cheap: It's just a long string of wire,” says physicist Joseph Lazio of the Naval Research Laboratory in Washington, D.C., a partner in LOFAR with the Massachusetts Institute of Technology. “It's essentially a big FM radio.”

    LOFAR would focus on wavelengths between 0.5 and 10 meters. About 10,000 wire detectors would spread out in a 400-kilometer-wide pattern in northern Europe, the southwestern United States, or western Australia—forming a possible precursor to SKA. Unlike ATA, LOFAR would “see” most of the sky at once; computer processing would let the researchers retrace where the signals came from.

    The challenge of SKA has spurred creative ideas from other countries as well. Astronomers in China envision 20 giant radio dishes, each 500 meters across, suspended within bowl-shaped depressions in limestone formations. Hydraulic supports beneath the panels would adjust them to the proper shape for focusing on a patch of sky. “We have identified a beautiful site in the Guizhou Province [in south-central China],” says astronomer Rendong Nan of the Beijing Astronomical Observatory (BAO). “It's better than Arecibo.” Nan and his colleagues have built prototype panels, and they await a decision next year on a $50 million construction proposal to the Chinese Academy of Sciences for one dish.

    Sky's eyes.

    Concepts for the Square Kilometer Array include proposals from (top to bottom) Australia, Canada, China, and the Netherlands.


    Two other proposals have delighted consortium members with their ingenuity. Astronomers in Canada devised plans for a series of 200-meter-wide reflecting panels, gently curved but mostly resting on the ground. An 18-meter-long aerostat far overhead would carry each telescope's detectors. By adjusting the aerostat's position over the panels with a series of taut tethers, astronomers would bring different parts of the radio sky into focus.

    Australia, meanwhile, has come up with something completely different: fields full of spherical “Luneburg lenses.” Each lens, perhaps 6 meters across, would contain a polymer foam that refracts radio waves to precise focus on the opposite side of the lens. Detectors ringing the lower halves of the lenses would collect the signals.

    As for where to put SKA, regardless of its design, Australian radio astronomer Ron Ekers has a simple answer. “If you're going to spend a billion dollars, you build it in the best place on Earth,” says Ekers, incoming president of the International Astronomical Union. Consensus is building toward western Australia, says Cornell astronomer Yervant Terzian, head of the SKA site-selection committee. At that site, radio interference is minimal and there's plenty of room to spread the array over 1000 kilometers or more. Brazil and Argentina have expressed interest, as has South Africa. Radio astronomers in the southwestern United States feel that the setting around VLA is a strong choice as well.

    Cash across borders

    The international SKA steering committee, also headed by Ekers, gathered “straw man” design proposals this week and will debate their merits at an August meeting in the Netherlands. The group then plans to choose a design—or a hybrid of two designs at different wavelengths—and one or two sites to evaluate in detail in 2005. Three years after that comes the big step: approaching agencies in all of the member governments for funding. If that succeeds, mass production would begin by 2010, with “first light” in 2015.

    To stick to that timetable—and to SKA's $1 billion cost cap—the project must resolve both technological and political imponderables. Will the cost of electronics and computer analysis of thousands of discrete signals keep dropping exponentially for another decade? If so, SKA participants say, their project will be affordable—assuming the money is there. “I'm not so worried about people being chauvinistic about their technologies,” says astronomer Douglas Bock, an Australian native now at UC Berkeley. “But I am worried about the politics of getting funding in an international situation. A lot of countries are very parochial about how they fund their science.” One or more countries in the consortium may be loath to invest outside their borders, some astronomers say privately.

    The rough fiscal menu calls for the United States and Europe each to finance one-third of SKA, with the remaining one-third from other countries. The initial commitment for major construction funding will be the toughest row to hoe, says Paul Vanden Bout, director of the National Radio Astronomy Observatory in Charlottesville, Virginia. He points to the U.S.-European ALMA project in Chile as an example. “We might have talked for a very long time indeed if the National Science Foundation [NSF] had not been willing to fund the millimeter array for design and development work, thus signaling that they were serious about contemplating this for real construction,” he says. “Unless one of the parties steps up and throws some real cash at SKA, the conversation about it could go on for a long time.”

    Ekers smiles gently when he hears such comments. “That's a typically U.S. view,” he says. “There are other models to follow. We have always looked at this decade as time for research and development and next decade as the funding one. We know that ALMA will take manpower and resources from the U.S. and Europe, and that gives us time to build prototypes.”

    Individual countries in the formal SKA consortium are each kicking in about $500,000 to $2 million per year for R&D within their borders. Apart from the private ATA, the U.S. is at the low end. NSF recently awarded the U.S. SKA consortium $1.5 million over 3 years through a grant to Cornell University, less than one-third of its request. Last year, a national panel of astronomers recommended that SKA receive $22 million in total funding for technology development in this decade. “A half-million dollars per year is what we can do for now,” says G. Wayne Van Citters, director of the Division of Astronomical Sciences at NSF. “We hope to ramp it up as the decade goes on.”

    Design ideas may differ, but SKA enthusiasts tend to agree on one thing: The flow of the money stream, whether trickle or torrent, simply will alter the year in which SKA first scans the heavens. “I view these developments as inevitable,” says ASTRON's Butcher. “If they don't happen in my generation, then my generation has failed.”


    Space Communication for the Video Age

    1. Robert Irion

    Radio astronomers aren't the only ones enamored of huge arrays of cheap receivers. The concept has also caught the eye of NASA engineers, who long to overhaul the agency's aging Deep Space Network (DSN).

    Consisting of three 70-meter dishes and several smaller antennas in Spain, Australia, and Goldstone, California, DSN is NASA's link to probes that explore the solar system. Since the mid-1970s, its capacity has gone up only modestly. This factor, along with the basic transmitters and computers on spacecraft, has stuck space exploration in snapshot mode in a video age.

    For a near-term boost, NASA has upgraded its Goldstone receiver to work at a frequency of 32 gigahertz rather than 8 gigahertz, says Barry Geldzahler, program executive for space operations at NASA headquarters in Washington, D.C. That conversion throughout the DSN system—to be completed by 2006—will quadruple the data rate from its current level of about 100 kilobits per second for a probe at the distance of Jupiter. Other plans include more efficient software on spacecraft and refurbishing the ground antennas. “The infrastructure has been allowed to go fallow,” Geldzahler acknowledges. “In our budget priorities, missions have come first.”

    Deep dish.

    Data rates have gone up slowly since 1976 (graph) for NASA's Deep Space Network.


    It's vital to hike DSN's capacity much further, says electrical engineer Sander Weinreb of NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California. With a data-transfer rate of 10 to 100 megabits per second, “the virtual exploration of planets could take place,” he says. “Instead of a rather coarse image on a newspaper page, we could have real-time television, even high-definition video.” Each mission could also send back far more data at other wavelengths for deeper analysis of atmospheres and surfaces. More collecting area on Earth would also mean smaller transmitters on spacecraft, reducing their weight and size and perhaps eliminating the need for orbiters to relay data from planetary landers.

    NASA is exploring two options to realize those gains in the next decade, Geldzahler says. One is to move communications to the high data rates of optical light, with lasers on spacecraft and 10-meter telescopes on the ground. But if telescope costs prove prohibitive, NASA may mimic the approach of the Allen Telescope Array. Weinreb and his colleagues at JPL will soon submit a proposal to NASA for a prototype DSN array of 100 12-meter radio antennas. Beyond that, Weinreb says, “DSN and the Square Kilometer Array [SKA] could go hand in hand. There is a lot of common technology.”

    Radio astronomers think the solution is clear. “I see the connection between DSN and SKA as completely obvious,” says Alyssa Goodman of Harvard University in Cambridge, Massachusetts.


    Seeking Peace in a Radio-Loud World

    1. Robert Irion

    Although radio astronomers adore the technology the communications industry has spawned, they detest some of its byproducts: blaring antennas, swarming satellites, and chatter-filled airwaves that threaten to wash out dim sources in the sky.

    For years, treaties have shielded key parts of the radio spectrum from commercial interference. For example, at the 2000 World Radiocommunication Conference in Istanbul, Turkey, regulators preserved big chunks of the spectrum at millimeter wavelengths to benefit the planned Atacama Large Millimeter Array in Chile and other high-frequency observatories. Some facilities—such as the 100-meter Green Bank Telescope in West Virginia—also sit within “radio preserves,” where terrestrial signals are curtailed.

    However, such measures aren't cure-alls. Commercial pressures squeeze broadcasters and satellite operators into frequencies next to those in which astronomers try to work, and their signals often bleed into the “protected” bands. Satellites, which don't turn off above radio preserves, will only become more numerous. And as astronomers build more-sensitive radio telescopes to peer into deep space, many observations will shift into commercial wavebands—thanks to the expansion of the universe, which stretches all emissions to longer wavelengths.
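The last point follows directly from the redshift relation: cosmological expansion stretches an emitted wavelength by a factor of (1 + z), so the observed frequency drops by the same factor. A minimal sketch of the arithmetic, using the protected 1420.4-megahertz neutral-hydrogen line and illustrative redshifts:

```python
# Redshift pushes spectral lines out of their protected bands.
# The neutral-hydrogen (HI) line rests at 1420.4 MHz, a frequency
# shielded for astronomy; light from a source at redshift z arrives
# at that frequency divided by (1 + z), which for distant galaxies
# lands well inside commercially allocated spectrum.

REST_FREQ_HI_MHZ = 1420.4

def observed_freq_mhz(rest_mhz, z):
    """Observed frequency of a line emitted at rest_mhz by a source at redshift z."""
    return rest_mhz / (1.0 + z)

for z in (0.5, 2.0, 6.0):
    f_obs = observed_freq_mhz(REST_FREQ_HI_MHZ, z)
    print(f"z = {z}: HI line observed near {f_obs:.0f} MHz")
```

At z of about 0.5, for example, the hydrogen line lands near 947 megahertz, in spectrum used by mobile services across much of the world.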

    Planners for the Square Kilometer Array (SKA) crave a true radio-quiet zone where satellites might not transmit, such as the sparsely populated Australian outback. The Global Science Forum of the Paris-based Organization for Economic Cooperation and Development is sponsoring a task force to study this notion—and to convince industry that tight control over signal leakage makes smart business sense. “The satellite community is certainly much more aware of the radio astronomy problem” than in the past, says Tomas Gergely, program manager for electromagnetic spectrum management at the National Science Foundation. “But to have total access to the spectrum at any one place on Earth is impossible in my view.”

    Null and void.

    Manipulating waves from the Allen Telescope Array will suppress radio interference in any pattern on the sky (blue).


    Fortunately, the arrays of smaller elements in most SKA designs offer a way to cope. By delaying some of the signals relative to others, astronomers can suppress radio waves in arbitrary patterns on the sky—just as a light beam passing through slits creates an interference pattern of bright fringes and dark spots. Astronomer Geoffrey Bower of the University of California (UC), Berkeley, has shown that a seven-antenna prototype of the Allen Telescope Array (ATA) can beat down signals from satellites by a factor of 1000. This technique, called interferometric nulling, will be even more powerful on the full 350-telescope array. “The radio sky over ATA will look like Swiss cheese,” says Jack Welch of UC Berkeley. “Each satellite will have a little nulled horizon around it. Otherwise, their emissions would go off-scale.”
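The idea behind interferometric nulling can be sketched in the simplest possible case, a two-antenna array; all numbers below are illustrative, not the ATA's actual configuration:

```python
import numpy as np

# Toy two-element interferometric nulling. A plane wave arriving
# from angle theta reaches the two antennas with a geometric delay,
# i.e., a phase offset. Choosing the second antenna's complex weight
# to cancel that offset for the satellite's direction forces the
# summed output to zero there (a "null"), while a source elsewhere
# on the sky is left largely intact.

C = 3.0e8          # speed of light, m/s
FREQ = 1.42e9      # observing frequency, Hz (illustrative)
LAM = C / FREQ     # wavelength, about 21 cm
BASELINE = 100.0   # antenna separation, m (illustrative)

def phase(theta):
    """Inter-antenna phase for a plane wave arriving at angle theta (radians)."""
    return 2 * np.pi * BASELINE * np.sin(theta) / LAM

def response(theta, w):
    """Magnitude of the weighted-sum array output for direction theta."""
    steering = np.array([1.0, np.exp(1j * phase(theta))])
    return abs(np.vdot(w, steering))  # vdot conjugates the weights

theta_sat = np.deg2rad(30.0)   # interfering satellite
theta_src = np.deg2rad(10.0)   # astronomical target

# Null the satellite: pick w so the conjugated weights cancel its steering vector.
w = np.array([1.0, -np.exp(1j * phase(theta_sat))])

print(response(theta_sat, w))  # ~0: satellite suppressed
print(response(theta_src, w))  # order 1: source survives
```

The same cancellation, applied across hundreds of antennas and steered continuously as satellites move, is what would carve the nulled "Swiss cheese" sky that Welch describes.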

    A combination of more efficient satellites and clever nulling just may make SKA feasible, says astronomer Michael Davis of the SETI Institute in Mountain View, California, former director of the Arecibo Observatory in Puerto Rico. “We are asking satellite engineers and operators to be technically innovative and creative,” Davis says. “We have no standing if we don't also do that ourselves.”


    Bringing a Long-Lost Library Back to Life

    1. Andrew Lawler

    A stone's throw from the ancient city of Nineveh, Iraq intends to erect a center for cuneiform research. Many scholars, however, are dubious about the scientific payoff of a tablet-based project in a digital age

    MOSUL, IRAQ—More than 2500 years after the fiery destruction of the world's first major library, Iraqi scholars are hoping to see a Mesopotamian phoenix rise from the ashes. Work is slated to start soon on a research center and museum at Mosul University devoted to the study of cuneiform, the wedge-shaped writing system used across Mesopotamia for 3 millennia. It's “a real Renaissance project,” beams Frederick Mario Fales, a specialist on Assyria at Italy's Padua University. But he and other experts worry that, in an age of digital libraries, the tablet-based project could become a scholarly white elephant.

    Modern-day Iraq is the heartland of the ancient Assyrian empire, whose King Assurbanipal created the world's most impressive repository of knowledge in its day. Although his empire soon collapsed, the flames that engulfed his capital city of Nineveh failed to destroy the library's clay tablets. In 1850, British archaeologists stumbled over the trove (including the famous Sumerian Great Flood story, The Epic of Gilgamesh) and carted it off to London. Iraqi officials hope their new center, dubbed the Saddam Institute after Iraq's president, will boost the country's reputation as a hub for research into the ancient texts that originated here.

    Along with housing casts of tablets found at the nearby ancient capital of Nineveh, the institute would link to various international databases, as well as archive archaeological reports and other materials. Iraq's minister of higher education and science, Humam Abdul Razzak, told Science that his government will spend “whatever it takes” to complete the complex within 5 years. He has grand aspirations for the project: “We hope it will be bigger in size and money than the Alexandria Library,” a $200 million complex just completed in the Egyptian coastal city of Alexandria.

    Everyone agrees that there is an urgent need to fire up interest in cuneiform tablets and their unparalleled insights into Mesopotamian life—particularly with new finds in the offing. A dam project south of Mosul threatens to inundate dozens of sites, including the original Assyrian capital of Assur; salvage projects in the next several years are expected to unearth numerous tablets (Science, 22 March, p. 2189). In addition, archaeologists may resume excavations in nearby Nineveh to search for undiscovered remains of the torched Assurbanipal library.

    Many scholars argue that the Saddam center could leave a lasting legacy if it were to encourage preservation and cataloging of the thousands of tablets languishing in the Iraq Museum in Baghdad, as well as prepare for an onslaught of new ones. Researchers also hope that the center will put texts in digital form. “Ultimately the most useful thing would be to digitize images of tablets, so they could be studied anywhere,” says John Curtis, head of the British Museum's ancient Near East section. But digitizing is expensive and technologically challenging, particularly when dealing with the smaller script used in Assyrian times. And Iraqi officials appear at present to be more concerned with architectural plans and obtaining casts of Assurbanipal tablets stored at the British Museum.

    Riding high.

    King Assurbanipal, shown here in a stone frieze, was patron of the world's first great library; his annals are described in this cuneiform column (top) found in his Nineveh palace.


    The apparently narrow focus worries some observers, who fear that the project might benefit Iraqi scholars and museumgoers but not the international community. Retrieving casts of the Assurbanipal texts would boost Iraqi pride, and politicians the world over prefer dedication ceremonies to funding preservation or conservation. Moreover, it's unclear whether the tablets in the Iraq Museum or even new finds would end up in the institute, given a strong rivalry between Baghdad and Mosul. The project's main value, laments one scientist, will be “political and diplomatic” rather than scholarly.

    At the core of the million or so cuneiform tablets recovered so far are the holdings of the Assurbanipal Library, founded some 400 years before its more famous cousin in Alexandria. Ruling at the height of the Assyrian empire, Assurbanipal (668-628 B.C.) sent his scribes to scour the Near East for religious, scientific, diplomatic, and literary works. They gathered more than 25,000 tablets: “hidden treasures of all the knowledge of the scribes,” the king wrote, that allowed him to “resolve the persistent problems of division and multiplication … and decipher the inscriptions written on stones at the time before the Flood.” Assurbanipal claimed to have been the first king to write in cuneiform, citing, with more than a touch of academic arrogance, his own “vast intelligence” and “penetrating acumen for the most recondite details.”

    Model institute?

    The Saddam Institute in Mosul would house a research center and museum.


    Sixteen years after the proud king's death, Babylonian and Mede armies overwhelmed the Assyrian Empire. When scholars recovered the tablets nearly 2500 years later, they found invaluable reference works, dictionaries, compendia of omens and rituals, and mathematical texts that opened a new window on Mesopotamian life. Archaeologists say large numbers of tablets likely remain untouched at Nineveh.

    Foreign scholars visiting Mosul University were recently shown models of the proposed institute and museum. The site is on the edge of campus, less than a kilometer from Nineveh's ancient walls. One floor of the institute will be devoted to publications on ancient Mesopotamia. A second will contain computer terminals with access to cuneiform databases—a potential boon to Iraqi researchers now mostly cut off from colleagues abroad. The grounds will include five houses for visiting scholars.

    Humam is eager to obtain resin casts made from the British Museum's Assurbanipal collection. Curtis, who met with Humam in Baghdad in March, says the museum is happy to make casts, although he notes that the Iraqis will have to provide funding for the expensive work. It takes one technician a whole day, on average, to produce a single cast. But although Assurbanipal casts may be fine to showcase in a museum, they would be of limited value to scholars, as the texts already have been studied intensively.

    The real need within Iraq, experts say, is for better care, conservation, and cataloging of current finds and preparation for a wave of incoming texts. A recent attempt at preserving important tablets discovered in the tombs of Assyrian queens in Nimrud, for example, went awry when the oven malfunctioned, apparently due to power outages, and turned the writing to dust. And even for preserved text, “they use some primitive methods for conserving tablets,” notes Robert Englund of the University of California, Los Angeles. He estimates that there are about 70,000 tablets cataloged in the Iraq Museum—and an equal number not yet tagged.

    But although Englund worries that the project may end up being little more than “a Saddam tourist center,” Iraqi officials imagine the institute serving a much loftier purpose, as a gathering place for scholars to explore all aspects of cuneiform studies. “We are talking about human heritage, not just Iraqi civilization,” Humam says. That notion will be put to the test this fall, when Mosul University holds an international conference to kick off the project. Iraqi officials are betting that, if they resurrect the Assurbanipal Library, the scholars will come.


    Humans' Head Start: New Views of Brain Evolution

    1. Ann Gibbons

    BUFFALO, NEW YORK—About 1200 researchers converged here for the 71st annual meeting of the American Association of Physical Anthropologists (10 to 14 April). Brain evolution was one of the hottest topics, with reports on the diet needed to support an expanding brain and a new tool's view of how the human brain took shape in evolution.

    Something Fishy About Brain Evolution

    Illustrations of human ancestors routinely show brawny hunters bringing home the wildebeest, butchering meat with stone tools, and scavenging carcasses on the savanna. But a more accurate image might be ancient fishermen—and fisherwomen—wading into placid lakes and quietly combing shorelines for fish, seabirds' eggs, mollusks, and other marine food.

    At a symposium on nutritional constraints on brain evolution, an unusual mix of anthropologists, neurochemists, nutritionists, and archaeologists debated the kind of diet that must have supported humans' dramatic brain expansion, focusing on how our ancestors consumed enough of the omega fatty acids essential for brain development. Although a few researchers suggested that the source was brain and other organ meat, most agreed that our ancestors must have relied on fish or shellfish. “A shore-based diet was essential for the evolution of human brains,” says nutritional scientist Stephen C. Cunnane of the University of Toronto.

    That's because humans, intelligent though we may be, are literally fatheads: About 60% of the brain's structural material is lipids, almost all of it in the form of two long-chain polyunsaturated fatty acids, docosahexaenoic acid (DHA) and arachidonic acid (AA), respectively known as omega-3 and omega-6 fatty acids. So when a fetus's brain is developing, a lack of DHA or AA is “catastrophic,” says Cunnane.

    These acids are vital to brain growth and function after birth, too. Infant humans and other mammals that lack these fatty acids show reduced cognitive ability and vision problems. (The retina has the highest concentration of DHA.) In adults, new data suggest that depletion of these acids may be linked to attention deficit disorders, dyslexia, senile dementia, schizophrenia, and other problems, according to a review by geochemist C. Leigh Broadhurst of the U.S. Department of Agriculture's Environmental Chemistry Laboratory and Michael Crawford of the University of North London in the April issue of Comparative Biochemistry and Physiology Part B.

    To catch a fish.

    Thousand-year-old stone fish traps and 90,000-year-old fishbones from Africa (bottom) show humans' long love affair with fishing.


    People must consume DHA and AA in their diets, because the body cannot synthesize these molecules fast enough from other fatty acids found in vegetables, nuts, flaxseed, and other sources. Although by far the best source of DHA is shellfish and fish, particularly cold-water fish such as bluefish and herring, these acids are also found in brain meat and in the liver of some animals, says physiologist Loren Cordain of Colorado State University in Fort Collins.

    But our ancestors couldn't support an expanding brain by eating brain alone: Crawford calculated that a 350-gram brain from a 1-ton rhinoceros would barely feed a party of hunters, much less those who needed it most: pregnant and nursing women and children. To have a reliable source of DHA, particularly to increase brain size rather than sustain it, Broadhurst says, “many generations of women had access to fish.” She adds that many archaeological sites are by lakes and rivers, so our ancestors must have taken advantage of these obvious resources.

    The hypothesis makes sense, says neurochemist Norman Salem Jr. of the National Institute on Alcohol Abuse and Alcoholism. “I would expect that those early brains as they expanded maintained the high DHA content we have today,” Salem says. “It seems reasonable to me that they evolved around water with marine sources available.”

    Indeed, for at least the past 100,000 years, the archaeological record of modern humans includes hundreds of middens—piles of shellfish shells and fish bones—and other signs of fishing. By 70,000 years ago at Blombos Cave in South Africa, and perhaps as early as 90,000 years ago at Katanda, Zaire, people carved bone points for fishing, says anthropologist Alison Brooks of George Washington University in Washington, D.C.

    But the brain underwent explosive growth long before this time, probably beginning about 2 million years ago in hominids who lived in Africa and Asia. Methods to reconstruct their diet by studying the ratios of isotopes of carbon and strontium in their teeth or bone have so far failed to discern whether they ate marine foods, says Julia Lee-Thorpe of the University of Cape Town in South Africa. And although some new methods measuring barium ratios hold promise, it might be difficult to find the right hominid remains to test: “For the past 2 million years, the ocean was 10 meters lower than today,” notes geologist Henry Schwarcz of McMaster University in Hamilton, Ontario. “Where were the fish-eating populations living? On the now-submerged coast.” Many hominids did live near Africa's abundant lakes, however, and their bones may eventually prove whether or not fish gave our ancestors food for thought.

    Hot Spots of Brain Evolution

    Humans may pride themselves on their big brains, but just which parts of the brain expanded during evolution has been fiercely debated. Now it seems that, compared with chimpanzees, humans may be literally more right-minded. A powerful new imaging technique presented at the meeting revealed bulges on the right-hand surface of human brains that are not seen in chimpanzees, suggesting that these areas expanded during our evolutionary history, perhaps to aid in processing the rhythms and tone of speech.

    “The surprise is the degree to which the right side expanded,” says speaker Dean Falk, an anthropologist at Florida State University, Tallahassee. “It's generally been thought that the left hemisphere was most important because it is known to be the language-bearing side of the brain. That's true, but we see more changes on the right.”

    Although the findings are preliminary, the new method, which uses magnetic resonance imaging (MRI) to study and compare the brains of living people and chimpanzees as well as ancient skulls, is already winning rave reviews. “I was totally blown away by the technique,” says Patrick Gannon, a comparative neurobiologist at Mount Sinai School of Medicine in New York City.

    The technical wizardry that impressed the audience was developed as part of an international effort to map the activity of living people's brains with functional MRI, which tracks oxygen use by tissues, says developer Karl Zilles, head of research groups at the Vogt Institute for Brain Research in Düsseldorf and at the Research Center Jülich, both in Germany. To compare scans of different people with different imaging methods, Zilles's team recently developed software that can shrink or blow up brain maps to a standard size without scaling problems.

    Brain spots.

    To match a human, a chimp's brain would have to expand in the right frontal lobe (shown in red and yellow; see arrow on bottom view) and shrink in a spot in the right temporal lobe (in blue, right side view).


    Zilles realized that he could adapt the method to compare the sides of the brain, and even the brains of different species—specifically, people, chimpanzees, and extinct hominids. So Zilles and Falk used MRI to make highly accurate “virtual endocasts”—three-dimensional computer images of the right and left sides of human brains. Using the software to compare the two sides, they found two well-known asymmetries that cause bulges at the surface of the frontal lobe behind the right eye and the left occipital lobe at the back of the brain. But they also found new asymmetries, including many areas of the right brain that were larger than their counterparts on the left, such as a semicircle of expansion from just behind the eye socket to the back of the brain.

    Next, they looked at how these areas changed during human evolution. They made virtual endocasts of 10 human brains. Then, because they couldn't use MRI on live chimpanzees, they submerged skulls from seven bonobo chimpanzees in water and used MRI to image the water inside the braincase, revealing the shape of the brain. Next they took the “average” cast of each species and used the software to “warp” and overlay the chimp endocast on the human one, showing the areas of difference. They also overlaid a scaled functional human brain map to show the functions of these regions. For example, they discovered a spot behind the right temple (see illustration; shown in blue), thought to be used to analyze sound, that is smaller in humans than in bonobos.

    All in all, the team found five hot spots where the shape of the human brain differed from that of chimps, and three were more dramatic on the right side. Falk then compared the human and chimp casts with those of 13 hominid skull casts in her collection, ranging from a 2.5-million-year-old australopithecine to more recent archaic Homo sapiens and Neandertals. She found marked changes beginning in australopithecines, whose frontal lobes began to expand above the nose. But this and other areas, such as the bottom of the lobe behind the temples, expanded even more in archaic H. sapiens and Neandertals. In fact, notes Falk, the newly located asymmetries between the left and right brain “are the exact areas that change dramatically in fossils.”

    The next step is to figure out what functions are carried out by the expanded brain areas—and whether they reflect deeper underlying structural changes rather than just rearrangements of the tissue next to the skull, says Daniel Buxhoeveden, a biological anthropologist at the Medical College of Georgia in Augusta.

    Falk thinks, for example, that the expansion of the semicircle on the right side may be important for understanding the prosodic features of speech, such as rhythm, tone, and emotional content. “It was surprising because most people, including me, are fixed on the idea that speech is dominant on the left side,” says Zilles. “Speech is something human, but many changes are on the right.”


    Breaching the Membrane

    1. Joe Alper*
    * Joe Alper is a writer in Louisville, Colorado.

    A better understanding of the structure and function of the cell membrane and its components is providing drug developers with new avenues for breaching the cell's outer defenses to deliver drugs or DNA

    The cell membrane has a tough job. A mere 7 to 10 nanometers thick, this jittery association of lipids and proteins has two seemingly incompatible responsibilities: to prevent the inside and outside of a cell from mixing and yet allow specific molecules to enter the cell and others to exit. Failure to carry out either mission means certain death for a cell. But success can also have its negative consequences when the cell membrane bars entry to a drug searching for its therapeutic target.

    “Getting a potential drug past the cell membrane to reach its target is a huge challenge and one that we often fail at,” says Gordon Amidon, a pharmaceutical chemist at the University of Michigan, Ann Arbor. “And even when we have succeeded, it's largely been because of serendipity, not because we actually understood how to do it.”

    That sorry state of affairs is changing, though, as researchers learn more about how the cell uses the membrane and its protein components to admit some molecules while excluding others. Already, one drug designed to take advantage of a specific transport mechanism has reached the market—the antiviral agent valganciclovir, used to treat a potentially blinding eye infection—and several biotechnology start-ups are developing drug-delivery technologies based on the new understanding of cross-membrane transport. “By taking advantage of the very mechanisms that cells use to take up or exclude certain molecules, we're making drug delivery a rational science,” says Ronald Barrett, co-founder and chief executive officer of XenoPort in Santa Clara, California.

    Bringing rationality to what has been a hit-and-miss proposition will have tremendous implications for drug development. For example, many of these membrane-bound molecular transporters are unique to particular organs, which could eventually lead to tissue-specific drug delivery, long a goal of medicinal chemists. And recent work in Amidon's laboratory shows that tumor cell lines contain a collection of transporters different from those of healthy cells, which raises the hope of targeting tumors through their transporters. “This field has made such rapid progress that we're only just starting to imagine all the ways we can use cell-specific transport mechanisms in medicine,” says environmental toxicologist Ned Ballatori of the University of Rochester School of Medicine in New York state, who has been studying how transporters help clear the body of toxic chemicals.

    Hijacking the transporters

    Plenty of places in the body are for all intents and purposes unreachable, at least as far as many potential drugs are concerned. The vast majority of drugs that are now injected, such as the antibiotic vancomycin, cannot break through the membrane of the cells that line the digestive system, which means they cannot be taken in pill form. Others never make it out of the bloodstream or past the tightly packed band of cells known as the blood-brain barrier. In each case, the largely impermeable cell membrane is the culprit. Those molecules that do cross the membrane, such as necessary nutrients and hormones, do so courtesy of transporters embedded in the membrane—molecular portals, if you will, that ferry in molecules meeting specific criteria for size, charge, or chemical composition.

    Hijacking a transporter seems a promising way to get a drug across the cell membrane, “but being able to do that assumes that you know a great deal about these carrier proteins, including their distribution on various organs and cell types and their specificity [for particular molecules],” notes Amidon. That's clearly not the case. But fortunately for drug developers, most individual transporters fall into distinct families that share many functional features. For example, two types of transporters, known as PEPT1 and PEPT2, are responsible for ferrying a broad range of nutrients across a variety of cell membranes. Serendipitously, a variety of drugs, including ACE inhibitors and β-lactam antibiotics, can use PEPT1, something that only became known over the past few years.

    Tunneling in.

    Rather than use an existing transporter, a group at Scripps is constructing artificial passageways through the cell membrane using self-assembling peptide nanotubes.


    With nary a single crystal structure available for any transporter, researchers have taken two approaches to characterizing these proteins. One line of attack, taken by biophysicist and pharmaceutical scientist Peter Swaan of Ohio State University, Columbus, and others, is to use site-directed mutagenesis, binding data, and computational techniques to create in silico models of a transporter. Swaan's group, for example, has modeled the binding site of the PEPT1 transporter, creating a tool that can predict those molecules that this particular transporter will accept as cargo to ferry across the membrane.

    More recently, Swaan's group has tackled a bile acid transporter, a high-throughput intestinal transporter capable of shuttling some 10,000 molecules a second across the membrane. Starting with the known structure of the distantly related membrane protein bacteriorhodopsin as a digital scaffold, the researchers created a structural model that identified five unique binding domains at which specific molecules bind depending on their chemical structure and polarity. Swaan says his group has successfully used this model to modify several molecules so they can slip into cells through the bile acid transporter.

    A second approach, taken by Amidon's group and others, has been to synthesize hundreds of molecules to determine the kinds of chemical groups that the PEPT1 transporter will carry across the membrane, using an in vitro model system. With this information in hand, the Michigan team created so-called prodrugs—a chemically modified form of drug that gets converted into the active form by enzymes present in a cell—of the antiviral agents acyclovir and AZT that increased the intestinal absorption of these drugs from three- to 10-fold. The prodrug consists of the parent antiviral coupled with a hydrolysable chemical link to the transportable group. The transporter recognizes its target group and drags it and the rest of the prodrug through the membrane. Ubiquitous enzymes inside the cell cleave the linkage, yielding the active drug. Chemists at drug company Roche used the same approach to make valganciclovir, a transportable prodrug of the antiviral agent ganciclovir that requires less than one-tenth as much active drug to achieve the same therapeutic effect.

    One of the holy grails of drug development is to create therapeutics that only act where they are needed in the body. XenoPort scientists believe they know how to find this grail—by mapping where one widespread class of transporters is distributed in the body. Such a map would enable medicinal chemists to select a particular transporter to target in order to deliver a potential drug to a specific tissue. So far, company scientists have studied over 200 transporters and shown that many are found in specific locations in the body. Using combinatorial chemistry techniques, the XenoPort team hopes eventually to determine what kind of molecules each of these transporters will accept as cargo. For now, though, the focus is on those found in the intestines, with the aim of improving the absorption of drugs as they pass through the intestinal tract. XenoPort researchers have, in fact, identified the chemical tags needed to gain passage through some of these intestinal transporters.

    Breaking and entering

    Rather than rely solely on existing transport systems, some investigators are choosing to make a new pathway through the cell membrane. M. Reza Ghadiri and his colleagues at the Scripps Research Institute in La Jolla, California, have created their own transporters using self-assembling peptide nanotubes that can insert themselves into the cell membrane. In the group's most recently published work, which appeared last year in Angewandte Chemie, a large ring of 10 amino acids—alternating D-leucine and L-tryptophan residues—selectively transported glutamic acid across an artificial cell membrane. The process was not very efficient, Ghadiri concedes, but since then, he says, his group has made significant progress in designing more efficient transporters for molecules substantially more complex than glutamic acid. “We feel we're very close to designing transporters specifically for drug delivery,” he explains.

    Cyrus Safinya, a condensed matter physicist at the University of California, Santa Barbara, thinks he can beat nature at its own game by understanding the chemical nature of the membrane itself. Working with colleagues Nelle Slack, Alison Lin, Ayesha Ahmad, Kai Ewert, and Heather Evans, Safinya has been studying the complexes that form between positively charged, or cationic, liposomes and DNA—a hot technology for delivering genes into cells for gene therapy. “Clinicians use these cationic liposomes in a trial-and-error manner with little idea of why one liposome is better than another at getting DNA across the membrane,” says Safinya.

    DNA delivery. Fine-tuning the physical properties of a spherical lipid-DNA package produces big changes in the rate at which it will fuse with the cell membrane and deliver its cargo.


    Disdaining blind luck, Safinya's group has worked out the molecular details of how a cell membrane and a liposome membrane will best fuse with one another. Using a variety of x-ray diffraction techniques, the group determined that DNA and cationic lipids form primarily a sandwich structure with DNA layered between the cationic lipids; on rare occasions they form an inverted hexagonal structure with DNA encapsulated in lipid tubules. Recent studies have begun to unravel the relation between these distinct nanostructured supramolecular assemblies and how effectively DNA gets transported, or transfected, into cells. “We found that by tuning key physical and chemical parameters of the lipid carrier, we are able to controllably vary and increase the transfection efficiency by a factor of 10,000 in an in vitro model,” says Safinya. In fact, Safinya's group can tune the chemical properties of a liposome to get a desired delivery rate into a cell for a particular gene or set of genes.

    Chemist Steven Regen of Lehigh University in Bethlehem, Pennsylvania, is taking a different stealth approach to ferrying drugs—and perhaps DNA—across the cell membrane. He has created a series of molecular umbrellas that can fold around a charged or polar drug and shield it from the cell membrane, allowing the drug to pass through the membrane. The ribs of the umbrella are made of rigid facial amphiphiles, molecules that have separate hydrophobic and hydrophilic faces: The hydrophobic face serves as a membrane-friendly surface, whereas the hydrophilic face provides a hospitable hiding place for the drug molecule.

    Regen's group couples two or more of these long molecules to the top of the umbrella shaft and the drug molecule to the bottom of the shaft. In the watery environment outside the cell, the umbrella is open, but as the construct makes contact with the cell membrane, the umbrella begins closing around the drug. The hydrophilic side of the ribs, facing in toward the handle, shields the hydrophilic drug molecule, whereas the hydrophobic side of the ribs slides easily into the cell membrane, where it can immerse itself in the membrane's water-free interior. As the lipids that make up the membrane rearrange themselves, they push the umbrella into the cell's aqueous interior, where it opens, making the handle available to enzymes that then release the drug molecule from the umbrella shaft.

    In work published in the Journal of the American Chemical Society last year, Regen's group showed that one such molecular umbrella was able to carry glutathione across a liposomal membrane and release it into the aqueous interior. The group has also shown that an umbrella containing cholic acid amphiphiles can ferry covalently attached nucleotide bases across a liposomal membrane, the first step toward delivering therapeutic DNA and RNA across the cell membrane. “In our test systems, these umbrellas seem quite versatile,” says Regen. “Now we have to put them into animals and see if they can deliver drugs.” That's going to be the bottom line for all of these approaches.
