News this Week

Science  28 May 1999:
Vol. 284, Issue 5419, pp. 1438


    Hubble Telescope Settles Cosmic Distance Debate--Or Does It?

    Ann Finkbeiner*
    *Ann Finkbeiner is a science writer in Baltimore.

    If—as astronomer Allan Sandage once said—all of cosmology is the search for two numbers, then the search might now be half over. A group of astronomers announced this week that they have finally nailed down the so-called Hubble constant, the rate at which the universe is currently expanding. Combined with the other object of cosmologists' search, the universe's density of matter and energy, the Hubble constant gives the age of the universe.

    The constant's exact value has been rising and falling for decades, depending on the observer and the method of observation. So in 1991, the Key Project of the Hubble Space Telescope (HST)—led by Wendy Freedman of the Carnegie Observatories in Pasadena, California, Robert Kennicutt of the University of Arizona, and Jeremy Mould of the Australian National University in Canberra—set out to survey cosmic distances with HST and find the value once and for all. Now they are ready to call it settled. At a press conference on 25 May, Freedman and her 26 colleagues announced their number: The universe is expanding, they say, at 70 kilometers per second for each megaparsec (3.26 million light-years) of distance.

    Not everybody is ready to call the search over, however. For a simple number, the Hubble constant is extraordinarily hard to pin down, requiring ingenious schemes to measure the exact distances to other galaxies. Sandage, also at the Carnegie Observatories, has spent a distinguished career looking for the value and is holding out for a slightly slower expansion rate. “The bottom line,” he says, “is that the problem with the Hubble constant is not solved.” But Freedman notes that the difference between his value and hers is now about 10%. Given the measurement's historical difficulties, she says, “boy, that's progress.”

    Astronomers have been trying to measure the Hubble constant since 1929, when Edwin Hubble—the constant and the telescope are his namesakes—found the first evidence that the universe is expanding. By comparing crude estimates of galaxies' distances with their velocities—easily measured from the “redshift” of their light—he found that galaxies act like tracer particles in a flow, and those farthest from us are moving away the fastest. Hubble later estimated that the universe expands at 558 kilometers per second per megaparsec. Unfortunately, that would make it younger than Earth's rocks. In recent decades better distance estimates have led to more plausible results, usually between 50 and 100.
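    The arithmetic behind that age problem is simple: at a constant expansion rate, the reciprocal of the Hubble constant is roughly the time since all the galaxies were together. A minimal sketch (the unit conversions are standard values; nothing here comes from the Key Project's analysis):

```python
# Hedged sketch: why a Hubble constant of 558 km/s/Mpc was a problem.
# The simplest age estimate is the "Hubble time" 1/H0: the time the
# galaxies would have needed to reach their present separations at
# constant speed.

KM_PER_MPC = 3.086e19        # kilometers in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in a billion years

def hubble_time_gyr(h0_km_s_mpc):
    """Hubble time 1/H0 in billions of years."""
    seconds = KM_PER_MPC / h0_km_s_mpc
    return seconds / SECONDS_PER_GYR

print(hubble_time_gyr(558))  # ~1.75 Gyr: far younger than Earth's oldest rocks
print(hubble_time_gyr(70))   # ~14 Gyr
```

    A real age estimate also depends on how the expansion rate has changed over time, which is where the density of matter and energy comes in.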

    In 1994, the Key Project team published a preliminary measurement of 80. The number implied a universe that was younger than its stars, and headlines at the time declared a cosmological crisis. But another team, led by Sandage with Gustav Tammann of the University of Basel in Switzerland, measured the constant at 55, implying a comfortably old universe and no age crisis.

    The Key Project team aimed to settle the differences by remeasuring the Hubble constant using various methods. Each method relies on objects with known true brightness, called standard candles; their apparent brightness as seen from Earth then indicates their distance. The standard candle with the smallest error is a precisely varying star called a Cepheid variable, which brightens and dims in periods that depend on its true brightness. HST can see Cepheid variables out to 25 megaparsecs—the distance of nearby galaxies. But at that distance, any measure of expansion is swamped by the gravitationally induced turmoil of our local cluster of galaxies. A believable Hubble constant has to be measured at distances closer to 100 megaparsecs, where the expansion is fast enough that local motions are mostly negligible.

    Other standard candles do reach those distances. Spiral galaxies, for example, rotate at a measurable rate, which depends on their mass and presumably their true brightness. In elliptical galaxies, the internal motions of stars serve as the same kind of proxy for brightness. These and other not-too-standard candles, however, have measuring errors of 20%. A more precise standard candle is a kind of supernova called a Type Ia, which explodes with predictable brightness, is visible to about 500 megaparsecs, and has errors of just under 10%. All of these techniques give only relative distances, however. To get a Hubble constant, says Freedman, “you want distance in meters. Cepheids give you that.”
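    The standard-candle logic can be sketched with the inverse-square law: a candle of known true luminosity that appears one-quarter as bright as an identical candle must be twice as far away. The numbers below are purely illustrative, not measurements from any of the teams:

```python
import math

# Hedged sketch of the standard-candle idea: if an object's true
# luminosity L is known, its apparent flux F observed from Earth gives
# its distance via the inverse-square law, F = L / (4*pi*d**2).

def distance_from_candle(luminosity_watts, flux_w_per_m2):
    """Distance in meters implied by a standard candle."""
    return math.sqrt(luminosity_watts / (4 * math.pi * flux_w_per_m2))

# A candle observed at one-quarter the flux of an identical candle
# is twice as far away:
d_near = distance_from_candle(1.0, 4e-12)
d_far = distance_from_candle(1.0, 1e-12)
print(d_far / d_near)  # 2.0
```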

    The Key Project took 18 galaxies whose relative distances had previously been calculated with the other standard candles, then observed 800 Cepheids in the same 18 galaxies. They calibrated the other standard candles against the Cepheids. As expected, given the uncertainties in the methods, different standard candles gave different Hubble constants: Ia supernovae gave 68, the internal motions of elliptical galaxies gave 78, and “everything else,” says Freedman, “is in between.” Combining all the candles “gets the systematic errors to cancel out,” she says, and gives an overall Hubble constant of 70. “The uncertainty is 8%—it's what we designed the Key Project to do.”

    Sandage isn't persuaded. “There's still a controversy,” he says, “and this isn't going to settle it.” The current dispute is over distances to the Ia supernovae, the best standard candles. For one thing, Ia's are rare: Only eight have occurred in galaxies close enough to have visible Cepheids. Since 1992, Sandage's team has used HST—although not as a part of the Key Project—to observe those Cepheids and has calibrated the eight Ia's accordingly. Their most recent Hubble constant from Ia's is 61.

    Freedman's team combined the same Cepheid observations with new data and analyzed them with the method used by the Sandage team and another, independent method—a procedure, says Abhijit Saha of the National Optical Astronomy Observatories, who is on both teams, that “reflects differences in philosophy.” As a result, Freedman says, her team found that “the Ia distances based on Cepheids are systematically closer by 8%,” leading to a somewhat higher Hubble constant.

    Whether the number is 70 or 61 or somewhere in between, it won't provoke another age crisis. Astronomers now believe that the universe's density of matter is low and its expansion is speeded up by an energy, called the cosmological constant, that pervades empty space. Both factors would push the age of a universe with a Hubble constant between 60 and 70 up to around 13 billion years (see p. 1503). The oldest stars are between 11 and 14 billion years old. Because of uncertainty in the stars' ages, says Freedman, “there's still some tension, but there's no crisis.”
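    How a low matter density and a cosmological constant stretch the age beyond the naive 1/H0 estimate can be sketched by numerically integrating the expansion history of a flat universe. The density values used below (matter fraction 0.3, cosmological-constant fraction 0.7) are illustrative assumptions, not figures from the article:

```python
# Hedged sketch: for a flat universe the age is
#   t0 = (1/H0) * integral_0^1 da / sqrt(Omega_m/a + Omega_L*a**2),
# where a is the cosmic scale factor. A matter-only universe gets
# 2/3 of the Hubble time; a cosmological constant pushes the age up.

KM_PER_MPC = 3.086e19
SECONDS_PER_GYR = 3.156e16

def age_gyr(h0, omega_m, omega_l, steps=100_000):
    """Age of a flat universe in Gyr, by midpoint-rule integration."""
    hubble_time = (KM_PER_MPC / h0) / SECONDS_PER_GYR
    total = 0.0
    da = 1.0 / steps
    for i in range(steps):
        a = (i + 0.5) * da
        total += da / (omega_m / a + omega_l * a * a) ** 0.5
    return hubble_time * total

print(round(age_gyr(70, 0.3, 0.7), 1))  # ~13.5 Gyr
print(round(age_gyr(70, 1.0, 0.0), 1))  # matter-only: ~9.3 Gyr
```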

    So we can go on with our lives, right? Not yet. The Key Project's results “are 90% of the answer,” says Robert Kirshner of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, “not the official end of the inquiry.” Michael Turner of the University of Chicago agrees: “We're not quite done with this story. That last 10% is very important.” Theorists' best model of the universe not only accommodates the mysterious energy of the cosmological constant, it actually requires it. “So we're on a roll,” says Turner, who is himself a theorist. “But it could be snatched away by a more accurate measure of the Hubble constant.” That's a matter of time: Satellites planned for the next decade should provide an accuracy of 1%.

    If the final and exact value for the Hubble constant is well below 60, a cosmological constant could make the universe implausibly old, and theorists' favored cosmic model would be in trouble. “If a more precise value for the Hubble constant favors a universe with no cosmological constant,” Turner says, “maybe we'll have another crisis—at least for the theorists.”


    Sequencers Endorse Plan for a Draft in 1 Year

    Eliot Marshall

    COLD SPRING HARBOR, NEW YORK—Meeting in a closed session here last week, leaders of a dozen scientific teams endorsed an international plan to complete a “working draft” of the human genome by the spring of 2000 and polish it into a “highly accurate” version by 2003. The decision was a vote of confidence for Francis Collins, director of the U.S. National Human Genome Research Institute, and for Michael Morgan of Britain's Wellcome Trust charity. As chief funders of the nonprofit human genome project, they have been pushing for several months for such a scheme. They say they're doing this to satisfy researchers who want sequence data as soon as possible. But there's another objective: to stay ahead of a commercial rival—Celera Genomics of Rockville, Maryland—which announced in 1998 that it intends to sequence the entire human genome by 2001 and patent many genes (Science, 15 May 1998, p. 994).

    When Collins and Morgan first floated the new plan in March, they embarked on a risky course: They essentially urged their own grantees to accept lower quality data—at least in the short term—to speed up production (Science, 19 March, p. 1822). Until then, the project aimed to produce a genome that is 99.99% complete, with most stretches of the genome sequenced 10 times over to reduce errors, by 2003. Now, they are asking grantees to produce a rough draft that will be at least 90% complete with fivefold redundancy. Some researchers were uneasy about this lowering of standards; had there been open dissent, the plan might have split the community.

    That didn't happen, although some European members of the group were unhappy with the way the plan came about. They felt left out when Collins and Morgan switched gears in March. Andre Rosenthal of the Institute of Molecular Biotechnology in Jena, Germany, and Jean Weissenbach of Genoscope in Evry, France, made their objections known. The Anglo-U.S. leaders were “arrogant,” Rosenthal says, to take this step without including everyone. And even U.S. scientists found the change tumultuous. It “made hamburger of all our plans,” acknowledges Elbert Branscomb, director of the Department of Energy's Joint Genome Institute in Walnut Creek, California.

    Over the past few weeks, Collins and Morgan have tried to mend fences. Most foreign groups are satisfied that they have been included now, according to Rosenthal, and Collins announced at the meeting here that the international teams have given their support. The Europeans and at least one Japanese group—a team led by Yoshiyuki Sakaki of Tokyo University—have signed up for the “working draft” concept and agreed, like other participants, to daily release of the DNA sequence they generate.

    Speaking as “operating manager and field marshal” of the U.S. and British sequencers, Collins said that the major centers' performance in 1998 indicated they had enough capacity to produce a fivefold-redundant working draft human genome by next year. He noted that about 10% of the human genome has now been sequenced in final form and 7% more in draft, and boasted that the collaboration has met all of its milestones, “without exception.” The project, Collins added, will be “more important than the splitting of the atom or going to the moon.”

    Collins, Richard Gibbs, director of the genome center at Baylor College of Medicine in Houston, Texas, and Marco Marra of Washington University in St. Louis described the logistics of the new strategy in some detail for an audience of several hundred scientists gathered here. The new plan will require tight coordination to sustain the rapid pace of sequencing, Collins explained. The five largest human genome centers, calling themselves the G-5, have agreed to use as their source material a clone repository at Washington University managed by John McPherson; it will also serve as a method of allocating the work.

    Teams have been invited to choose the chromosomes they prefer to analyze, but each choice includes performance goals. Gregory Schuler of the National Center for Biotechnology Information recorded an initial chromosome list last week (see table) and plans to track each center's progress. These assignments could change, though. Members of the G-5 confer by phone every week, and the full consortium will review progress every 3 months. If a member stumbles, assignments (and funding) may be reallocated.

    Genome scientists have never attempted a collaboration of this scale or rigor before, and it's not clear how well it will go. As Collins said, he and others are watching with “white knuckles.” Several problems still lurk at the edges. One open question is whether the new automated capillary electrophoresis sequencing machines that the centers are now installing will increase the rate of output, as the users are hoping. The MegaBACE capillary machines made by Molecular Dynamics performed reasonably well in tests at the Sanger Centre but did not get praise from others at last week's meeting. Nor did the new Perkin-Elmer 3700 capillary devices, which will form the core of Celera's sequencing operation. Three major labs (Massachusetts Institute of Technology, Washington University, and Sanger) reported that the new 3700 machines—although they demand less human tending—have proved not much more efficient than their predecessor, the 377, which they were meant to outperform dramatically. Even so, MIT has ordered 115 of the Perkin-Elmer machines and Washington University an initial batch of 27.

    Two more important issues also remain unresolved: how to measure the quality of a lab's output and how to get from the draft sequence to the fully finished version in 2003. Gibbs said that the G-5 teams have settled on a “provisional” quality index that uses software called “Phred” to count the number of acceptable bases per unit of DNA sequence produced. A final index will be established this summer. But the decision on how to finish the genome is “still in flux,” according to Gibbs. He said it may not make sense to try to fill all the gaps in the working draft by reanalyzing previously sequenced clones. It may be more efficient, Gibbs suggested, to start afresh with new clones. At this point, Gibbs said, “we're not really sure” what the best tactic will be.

    That's a puzzle the sequencers hope to solve over the next year—in their spare time.


    A New Look at the Martian Landscape

    Bernice Wuethrich*
    *Bernice Wuethrich is an exhibit writer at the Smithsonian's National Museum of Natural History.

    Mars is 100 million kilometers away, but in at least one respect, we now know it better than our own familiar Earth. On page 1495 of this issue, planetary scientists present a precise map of martian topography, accurate around the planet to within 13 meters of elevation; some parts of Earth are known only to 100 meters or more. “We now have a definitive picture of the shape of the whole planet,” says David Smith of the Goddard Space Flight Center in Greenbelt, Maryland, principal investigator of the instrument, called the Mars Orbiter Laser Altimeter (MOLA), that gathered the data from its perch aboard the Mars Global Surveyor spacecraft.

    Thanks to MOLA, a diverse array of martian features has now snapped into sharper focus, including the polar ice caps and the plateaus and lowlands that hint at the processes that shaped the planet. “MOLA's maps allow you to settle issues once and for all that have been contested in Mars geology for 25 years,” says Jeff Moore, a planetary geologist with the NASA Ames Research Center in Moffett Field, California. “We're seeing things that nobody had an inkling existed,” adds Bruce Jakosky, a geologist at the University of Colorado, Boulder. “In a sense we're seeing the planet for the first time.”

    The new map was made by bouncing laser light off the martian surface and using its roundtrip time to determine distance. The map reveals a dramatic landscape of higher highs and lower lows than previously appreciated, with a total range in elevation of 30 kilometers (km), compared to just 20 km for Earth. The data confirm that the southern hemisphere is higher than the northern hemisphere—6 km higher, to be exact. That means that on Mars, “downhill is north,” and if the planet had flowing water, the northern lowlands would drain a watershed comprising three-quarters of the planet, says co-author Sean Solomon of the Carnegie Institution of Washington, D.C.
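    The principle of the measurement is a stopwatch on light: half the round-trip time, multiplied by the speed of light, gives the range to the surface, and the spacecraft's known orbit then converts range to elevation. A minimal sketch with illustrative numbers, not actual MOLA timings:

```python
# Hedged sketch of laser altimetry: fire a pulse, time the round trip
# to the surface, and convert time to range with  range = c * dt / 2.

C_KM_S = 299_792.458  # speed of light, km/s

def range_km(round_trip_seconds):
    """One-way distance implied by a laser pulse's round-trip time."""
    return C_KM_S * round_trip_seconds / 2.0

# A pulse returning after ~2.67 milliseconds implies a ~400 km range:
print(round(range_km(2.668e-3)))  # ~400 km
```

    Elevation precision then depends on how finely the return time can be clocked and how well the orbit is known.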

    This MOLA-eye view of Mars may also help resolve the genesis of its split geologic personality. Planetary scientists have long realized that Mars is lopsided—thin-crusted, low, and smooth in the north, and thick-crusted, high, and crater-scarred in the south. Conflicting explanations for the mismatched hemispheres include a huge asteroid impact that blew apart and thinned the crust in the north, or internal processes, such as Earth-like plate tectonics or a huge plume of molten rock rising from the interior that melted northern crust.

    MOLA's data point in one direction. “We favor internal processes,” says Maria Zuber, a co-author of the topographic analysis and a geophysicist at the Massachusetts Institute of Technology in Cambridge. Although MOLA found no direct evidence of plate tectonics such as mountain belts or earthquake faults, several features suggest an unprecedented amount of past volcanic activity, signaling a hot interior. For example, the Tharsis rise, a 4000-km-across bulging plateau that straddles the equator, appears to consist of two volcanic domes rather than one. And Olympus Mons, the biggest volcano in the solar system, is not a part of Tharsis as scientists believed, but rises off its western edge. “This argues for a broader mantle heat source for Tharsis than was previously thought,” says Zuber. Added to the magnetic stripes recently spotted on Mars's surface (Science, 30 April, pp. 719, 790, and 794)—a possible sign of plate tectonics—the new evidence suggests that an internal, heat-driven process shaped Mars's spectacular topography, says Zuber.

    MOLA's data also tend to refute the idea of a northern impact. The maps show no sign of a giant northern crater, and the north-south boundary is too irregular to be a circular crater wall. Instead, MOLA's team concludes that the boundary is a mosaic of regional effects, shaped by such factors as erosion, volcanism, and debris flung up from a southern impact.

    Indeed, when it comes to impacts, “we've been looking in the wrong hemisphere,” says Zuber. MOLA has discovered that the south's Hellas basin, 9 km deep and 2300 km wide, is surrounded by a giant ring of topography 2 km high that stretches 4000 km from the basin's center. These highlands were likely raised by rock blasted out by the impact.

    Other researchers aren't ready to discard the idea that an impact gouged out the north. “It's too soon to jump on a bandwagon,” because either an internal mechanism or a megaimpact could produce planetary-scale changes in topography, says planetary geologist George McGill of the University of Massachusetts, Amherst. He adds that traces of even a massive impact could have been obliterated over billions of years.

    But McGill and others say they are impressed with the data, which reveal a host of other details, including the size of the polar ice caps. The northern ice cap turned out to be smaller than expected, but MOLA found that the southern polar cap is surprisingly large, because although the visible cap is small, the topography suggests vast layered deposits of ice and dust. Assuming both caps are chiefly water ice, the MOLA team estimated a maximum ice volume of 4.7 million km3—about one-third less than the previous best estimate—suggesting that much of Mars's water has either escaped to space or been sequestered underground.

    MOLA will continue to collect 900,000 elevation measurements daily for the next 2 years, and researchers are now signing up to use the data for questions ranging from the location of ancient water reservoirs to the best places to land spacecraft. “I just can't wait until people have the opportunity to use this map,” Zuber says.


    Britain Struggles to Turn Anti-GM Tide

    Helen Gavaghan*
    *Helen Gavaghan is a writer in Hebden Bridge, U.K.

    For the past year, debate has raged in the British media, on an almost daily basis, about whether genetically modified (GM) crops will harm the environment or if food made from them will harm the people who eat it. Clouding the issue were fears that U.K. regulations weren't adequate to protect the public, should those hazards be real. Now, in a move intended to restore public confidence in Britain's ability to regulate GM foods and crop planting, the government last week announced the creation of two new commissions to advise politicians on the long-term impact of genetic technologies on human health, agriculture, and the environment. To back up the pro-GM position it has maintained throughout the debate, the government at the same time released a report from its chief scientific adviser and chief medical officer, which examined the theoretical risks to public health from first principles and concluded that there was no “current evidence to suggest that the GM technologies used to produce food are inherently harmful.” They did call, however, for a public health surveillance network that will quickly flag any problems that may arise among people eating GM foods, such as allergic reactions.

    The tactics seem not to have worked, however. Newspapers reacted with headlines such as “GM measures scorned.” Environmental groups were similarly scathing. While welcoming the greater openness the new commissions would produce, Friends of the Earth called the report “miserably inadequate.” According to spokesperson Adrian Bebb, “We don't need another layer of committees. That will not solve anything.”

    Although a small number of GM food products have been available in shops for several years, the issue didn't explode into the public consciousness until last summer's reports of the now-discredited research suggesting that GM potatoes stunted growth and suppressed the immune system in rats (Science, 21 May, p. 1247). At the time, the inept handling by previous governments of the crisis surrounding the apparent spread of bovine spongiform encephalopathy, or “mad cow disease,” from infected animals to humans had already made the British public doubt the government's ability to protect consumers from potentially hazardous products.

    Keen not to see the British biotechnology industry undermined by the barrage of negative coverage, Prime Minister Tony Blair set up a ministerial committee on biotechnology policy headed by Jack Cunningham, minister for the Cabinet Office. The ministerial committee ordered a review of the country's regulatory framework in December, and last week's announcement was the outcome of that review. Addressing the House of Commons, Cunningham said the new commissions would strengthen the existing regulatory system.

    At the moment, any applications to plant experimental GM crops or sell GM foods are examined by the Advisory Committee on Releases to the Environment (ACRE) and the Advisory Committee on Novel Foods and Processes, which make their decisions on the basis of science only. Critics have long said that the case-by-case approach of these committees did not provide a strategic, long-term outlook for dealing with the issue of GM crops and food. The new commissions—to be called the Human Genetics Commission and the Agriculture and Environment Biotechnology Commission—are designed to plug that gap. The precise role of the commissions has not been revealed, but they will identify gaps in regulation and advise government on policy: the Human Genetics Commission focusing on the long-term implications of genetic technologies for human health; the agriculture commission on the impact of GM crops on farming and biodiversity. Government strategy on the introduction of GM foods will fall under the purview of the new Food Standards Agency, created last year.

    Jenny Maplestone, technical liaison officer of the British Plant Breeders Society—a trade association—welcomed the new commissions. “There is a huge amount of emotion and little fact,” she says. “The commissions can put the debate on a sound scientific footing.” Sandy Thomas, director of the Nuffield Council on Bioethics, agrees that the commissions may well act as a focus for debate, but she is not convinced they will restore public confidence. “These commissions need to be seen to be as independent as possible, but already there have been editorials saying that they are just another quango [a term suggesting a committee in the government's pocket],” she says. But John Berringer, dean of science at the University of Bristol and chair of ACRE, welcomes the agriculture commission, saying it should fill the gap between science and public policy. “Such a body has been needed for a long time.” But he adds, “it is not clear how it will work.”

    What's more, even as government ministers were preparing to release their recommendations, the British Medical Association (BMA) published its own decidedly anti-GM report. The BMA, concerned about health issues such as allergenicity, called for a moratorium on planting GM crops until there is a scientific consensus on the long-term effects of GM products. In its report, the BMA also said that if GM foodstuffs, such as soya, are sold to the public, they should be separated from non-GM foods and clearly labeled.

    John Durant, professor of the public understanding of science at Imperial College London, believes that giving consumers the choice of whether to eat GM foods is one of the keys to quelling the debate. “While GM foods were discrete and labeled, such as the tomato paste made from GM tomatoes, there was little problem,” he notes. It is also crucial for the biotech industry to produce a product in which consumers can see some benefit. The current generation of GM products offers no obvious benefit to the consumer, he says, “but if consumers could buy low-fat crisps, say, made from potatoes genetically engineered to absorb less fat, you'd start to get a real test of what the consumer thinks of genetically modified products.”


    New Memory Cell Could Boost Computer Speeds

    Robert F. Service

    In its relentless pursuit of faster machines with more memory, the computer industry has found ways to squeeze ever more transistors and capacitors onto a silicon chip, like carving up a big building into smaller and smaller apartments. But apartments—and capacitors—can become only so cramped. By 2005, companies expect to have reached the size limit for capacitors, the memory storage cells vital to the “working” memory used to store data temporarily as a computer runs programs. Now, however, scientists present a bold new strategy that may break the size barrier: reinventing the capacitor.

    Packing more punch.

    A new design for a working memory chip (right), featuring transistors posing as capacitors, could potentially shrink memory chips and make computers boot up and run faster.


    In the 13 May issue of Electronics Letters, researchers from Cambridge University and the Japanese electronics giant Hitachi describe a new chip architecture that does away with traditional capacitors, slashing the real estate of each memory cell by more than half. The capacitor's job is taken over by a novel type of transistor, recast as a data storage bin. The new design should prove easy to integrate with number-crunching processor chips and should retain working memory even when a computer is off—advantages lacking in the current chip architecture, called dynamic random access memory (DRAM). Such chips could allow computer users to begin work instantly after turning on a machine, rather than waiting for it to call up information from the magnetic hard disk.

    The new approach is “excellent work,” says Stephen Chou, an electrical engineer at Princeton University in New Jersey. Hitachi is so enamored with the early results that it has already begun pushing the experimental design into commercial development.

    Although upstart architectures have tried to unseat DRAMs before, drawbacks—such as being bulky or slow—have curtailed their takeover prospects. Starting from scratch, the Cambridge-Hitachi team, a collaboration underwritten by the company, sought to figure out how to duplicate the ability of DRAMs to store data as 1's and 0's, but in less space. In standard DRAM chips, capacitors are coupled with metal oxide semiconductor field-effect transistors, or MOSFETs, which act like doorways that open when writing and reading data. The open-sesame moment happens when a voltage is applied to a gate electrode, which increases electrical conductivity between two other electrodes, the “source” and the “drain.” In a DRAM, the capacitor is wired to the drain: When data are written, electrons stream from source to drain, onto the capacitor. When data are read, electrons flow the reverse route, back to the source. A state-of-the-art DRAM has 256 million capacitor-MOSFET pairs that are constantly shuffling electrons during calculations.

    Transistors can shuttle single electrons, so their size presents no obstacle to shrinking a chip. To tackle the real problem—space-hogging capacitors—the researchers had to devise a novel way to store charge. What they came up with would make the International House of Pancakes proud: a stack of four silicon pads. The top and bottom pads, doped with phosphorus to conduct like a metal, are the source and drain. The undoped pads in the middle act as a channel for electrons. To further coax the transistor to act like a capacitor, the channel contains insulating layers of silicon nitride between each of the pancakes in the stack, to prevent current from slowly leaking to the drain, as happens in conventional transistors. Surrounding the stack is a gate electrode; the entire array is positioned atop a MOSFET that detects charge in the bottom pancake, or storage bin.

    In their new setup, the researchers write data by applying a voltage to the gate. The current rearranges electrons in the undoped pads, effectively increasing the channel's positive charge. This, in turn, draws electrons from the source through the stack to the drain. “The drain gets charged up,” says Cambridge team leader Haroon Ahmed. “That's the memory node.” Charge pooling in the drain tickles the MOSFET, but not enough to trigger the gate to open.

    To read the data, a second, smaller voltage is applied to the gate. If the drain is empty (the off, or 0, state), the voltage blip has no effect. But if the drain is charged (the on, or 1, state), the voltage gives a big enough nudge to overcome the MOSFET's gate threshold, triggering a flood of electrons to cascade from the MOSFET's source to drain.
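    The write/read logic described above can be caricatured in a few lines of code: a large gate voltage deposits charge on the memory node, and a smaller read voltage trips the underlying MOSFET only if that charge is already there. The thresholds below are arbitrary toy values, not device parameters from the Cambridge-Hitachi design:

```python
# Hedged toy model of the read/write scheme described in the article:
# a write voltage charges the "memory node" (the stack's drain); a
# smaller read voltage only overcomes the MOSFET's threshold when
# stored charge is already pooled there.

WRITE_V, READ_V, MOSFET_THRESHOLD = 5.0, 1.0, 1.5

class MemoryCell:
    def __init__(self):
        self.stored_charge = 0.0  # charge pooled in the drain

    def apply_gate(self, volts):
        if volts >= WRITE_V:      # write: charge the memory node
            self.stored_charge = 1.0
            return None
        # read: stored charge plus the read nudge must clear the threshold
        return (self.stored_charge + volts) >= MOSFET_THRESHOLD

cell = MemoryCell()
print(cell.apply_gate(READ_V))  # False: empty drain reads as 0
cell.apply_gate(WRITE_V)        # write a 1
print(cell.apply_gate(READ_V))  # True: charged drain reads as 1
```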

    The new setup can read and write data in billionths of a second, as fast as DRAM—one of the traditional architecture's greatest strengths. It also could eliminate some of DRAM's shortcomings. For one, DRAM capacitors are wired to metal contacts that siphon off charge, even when the MOSFET doorway is closed. Thus when a computer is on, DRAM capacitors must be recharged continually, and when it is off, all their stored data are lost. The new memory cell, in theory, can hang onto charge for 10 years or more, allowing it to retain memory with the power off, says Hitachi team member David Williams. Also unlike DRAMs, he says, the new technology contains components of a similar size to those on logic chips, the computer's brains, meaning that it should be possible to better integrate memory and logic chips and boost processing speeds.

    For now, Williams says, there appear to be no showstoppers in scaling up for commercial use. If all goes well, he says, the new chips could be on the market for personal computers within a few years.


    Imaging Living Cells The Friendly Way

    1. Meher Antia*
    1. Meher Antia is a writer in Vancouver.

    Researchers have found a new way to produce images of living cells' interiors without disturbing their biochemistry. The technique, described in the 17 May issue of Physical Review Letters, uses lasers to excite certain chemical bonds within the cells to emit light. Experts say it may be a useful addition to existing imaging techniques.

    When biologists want to study microscopic structures within cells, they often flood them with fluorescent dyes that bind only to certain molecules. Next, they shine laser light on the sample, which makes the dyes light up. But this technique has its drawbacks: The dyes sometimes interfere with the cell's biochemistry—some are even toxic—and after a while, their fluorescence wears out. Harvard University physical chemist Sunney Xie and his colleagues, working at the Pacific Northwest National Laboratory in Richland, Washington, wondered if there was a less invasive way of producing similar images.

    They used a technique called coherent anti-Stokes Raman scattering (CARS), first used for imaging in the early 1980s by researchers at the Naval Research Laboratory in Washington, D.C. In CARS, two laser beams are sent into a cell, at frequencies that differ by exactly the frequency at which a particular chemical bond in the cell vibrates. The photons from the two different lasers “mix,” exciting the bond to vibrate and emit an optical signal of its own, at a frequency different from the lasers. Because the lasers can be focused to intersect in only a small volume of the cell, the technique can create a point-by-point chemical map of the cell—at least in principle.
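The frequency bookkeeping behind CARS can be sketched with simple arithmetic: the pump and Stokes beams are tuned so their difference matches a bond's vibrational frequency, and the emitted anti-Stokes signal appears at twice the pump frequency minus the Stokes frequency, blue-shifted from both lasers. The 800 nm pump and the C-H stretch value below are illustrative assumptions, not the parameters of the experiment described here.

```python
# Sketch of CARS frequency mixing, in wavenumbers (cm^-1).
# Assumed values for illustration: an 800 nm pump and a C-H stretch
# vibration near 2845 cm^-1.

def wavenumber(nm):
    """Convert a wavelength in nanometers to a wavenumber in cm^-1."""
    return 1e7 / nm

ch_stretch = 2845.0                  # bond vibration frequency, cm^-1
pump = wavenumber(800.0)             # 12,500 cm^-1
stokes = pump - ch_stretch           # tuned so the difference hits the bond
anti_stokes = 2 * pump - stokes      # the CARS signal, = pump + ch_stretch

print(round(1e7 / stokes, 1))        # Stokes wavelength, nm
print(round(1e7 / anti_stokes, 1))   # anti-Stokes signal wavelength, nm
```

Because the signal emerges at a shorter wavelength than either laser, it can be filtered cleanly from the excitation light.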

    Early experiments had produced poor-quality images, but team members Gary Holtom of Pacific Northwest and Andreas Zumbusch of the University of Munich used improved lasers to deliver ultrashort near-infrared pulses, tuned to the frequency of hydrogen-carbon bonds. They scanned the two lasers across samples of living cells. Some cell structures, like mitochondria and cell membranes, are rich in hydrogen-carbon bonds and respond to the laser; thus, they stand out in the resulting image. By tuning the two laser beams to other chemical bonds, like nitrogen-hydrogen, the team believes it can also image the cellular distribution of molecules such as proteins.

    Physicist Stefan Hell of the Max Planck Institute for Biophysical Chemistry in Göttingen, Germany, sees CARS as a complement to fluorescence imaging techniques. “The images … are very appealing,” he says. But one drawback, says physicist Watt Webb of Cornell University in Ithaca, New York, is that it can take many minutes to produce an image with CARS. That will make it difficult to image rapid changes within cells, says Webb.


    NIH Proposes Rules for Materials Exchange

    1. Martin Enserink

    Over the last 2 decades, scientists have witnessed the gradual erosion of a cornerstone of scientific progress: the free exchange of research materials such as reagents, antibodies, genes, cells, and animals. The principle that such tools should be shared still stands, but the invasion of commerce in biomedical research has meant that lawyers may haggle for months about conditions before a single test tube is shipped. Last week, the National Institutes of Health (NIH) proposed a new code of conduct aimed at curbing this legal wrangling and accelerating scientific discovery. The document, drawn up by NIH's office of technology transfer, has been put on the Web for comment.

    The initiative comes at a time when contracts governing the exchange of research tools, so-called Materials Transfer Agreements (MTAs), are causing increasing frustration (Science, 10 October 1997, p. 212). Such contracts often contain far-reaching clauses to maximize profit and prevent proprietary materials from being used by competitors. Scientists often resent the legalese, and university licensing officers, meanwhile, are faced with the “terrible, terrible burden” of doing all the paperwork, says Louis Berneman, director of the technology transfer office at the University of Pennsylvania, Philadelphia. His office alone handles some 400 to 500 MTAs yearly.

    NIH's housecleaning is targeted specifically at MTAs that might delay or prevent publication of research, or that seek so-called “reach-through rights”—a property claim on discoveries that arise from the use of shared materials. “Researchers are desperate to have the latest materials and often are willing to sign anything, promise anything,” says Berneman. But NIH does not want grantees to give away tax-funded work.

    NIH's proposed principles favor traditional academic values. Researchers are expected not to sign anything that unduly limits academic freedom or publication. Withholding of data is “unacceptable,” as are reach-through rights. Conversely, NIH-funded research should be widely distributed, preferably on a nonexclusive basis.

    The initiative is not the first attempt to clear out this legal thicket. Only 4 years ago, NIH led a group of institutes that drew up a simplified Uniform Biological Materials Transfer Agreement (UBMTA), reminiscent of the deeds that real estate agents use for selling homes. But even though 137 universities and institutes have now signed on to this idea, UBMTA is seldom used in transactions. This time, NIH hopes to have more of an effect. “We're very serious about enforcing these guidelines,” says Maria Freire, director of the agency's office of technology transfer. NIH may consider making them conditions of future grant awards.

    Some welcome NIH's interference. “This is a very good example of the government exercising some kind of moral authority in balancing contrasting needs,” says Terry Feuerborn, director of the University of California's office of technology transfer. But others disagree. Thomas Mays, a patent attorney at the Morrison and Foerster law firm in Washington, D.C., says universities and researchers should be able to decide for themselves whether to go along with restrictive contracts. “NIH appears to be attempting to go back decades, to a point where materials were freely available,” he says. Yet Mays predicts that universities and small biotech companies, many of them dependent on NIH money, are unlikely to criticize the plan too harshly. “NIH is the 800-pound gorilla,” he says. “No party really wants to go up against it.”


    Ideology Rules Debate Over Teacher Training

    1. Jeffrey Mervis

    There's a saying that all politics is local. But unfortunately for David Bauman and other science educators, educational policy may be an exception to that rule as Congress and the Clinton Administration engage in a fierce ideological battle that threatens a $335 million program to train math and science teachers.

    Bauman runs the nonprofit Capital Area Institute for Math and Science (CAIMS), which serves 23 school districts in central Pennsylvania. The institute's efforts to train science and math teachers are supported in part with funds from the federal Eisenhower Professional Development program, part of a massive law concerning elementary and secondary schools that expires in September. The program, which has existed since 1985 under various names, will give out at least $250 million this year for math and science teacher training. Most goes to local districts through a formula tipped toward the poorest schools and students, with the rest awarded competitively to universities and nonprofits for workshops, conferences, and other activities.

    Almost everyone agrees that the money has helped polish teachers' skills, but the program's status is now in limbo as legislators prepare to reauthorize all components of the 1994 law. “There is tremendous pressure to blend Eisenhower into other programs,” Gordon Ambach, head of the Council for Chief State School Officers, testified at a 28 April hearing before the House Science Committee. “If something isn't done, the money for math and science will disappear.”

    Last week the Education Department released a plan that would retain the goals of the Eisenhower program under a new name as part of a broader effort aimed at raising student performance. The Republican leadership has not yet completed work on its bill, although it is expected to eliminate any special earmark for math and science teacher training as part of a wholesale effort to cut the strings on most federal educational funds. The ensuing debate may continue into next year. Last month both sides claimed a small victory when the president signed a Republican-sponsored bill, called Ed-Flex, that points in that direction. It expands a 12-state pilot project that will permit the use of some previously earmarked federal funds, including Eisenhower, for other purposes. But Administration officials say they welcome language that imposes some accountability in return for the granting of such waivers.

    One of the key players in the upcoming reauthorization is Representative Bill Goodling (R-PA), chair of the House Committee on Education and the Workforce, whose district includes many of the schools served by the Capital institute. On 5 May Bauman appeared before Goodling's committee to tout the achievements of the institute, which is providing exactly the type of training—125 hours over 3 years, with an emphasis on content rather than pedagogy and links to actual classroom lessons—that an independent evaluation of the program has found most effective at boosting teachers' skills. Bauman also emphasized the importance of Eisenhower funds in attracting other contributions and allowing schools to hire substitutes.

    Normally, such an appearance would be an opportunity for a legislator to praise a constituent and signal support for the program. But Goodling and his fellow Republicans in Congress don't like the fact that most of the Eisenhower program funds must be spent on professional development in math and science. (The Administration's bill would raise that floor from $250 million to $300 million.) They argue that local officials are in a better position than the federal government to set spending priorities. “The Administration wants to impose Washington solutions to local problems,” says Goodling. “Republicans and others who value flexibility and local initiatives have a better approach.”

    That view came through clearly at the hearing. “He didn't ask any questions,” Bauman recalls. “He spent most of his time espousing the good points of Ed-Flex.” Bauman had a similar experience with his own legislator, Representative Joe Pitts (R-PA), who authored a bill passed last year by the House and reintroduced this session that would fold 31 federal programs, including Eisenhower, into one block grant to local districts. “Pitts told me that they are working from a philosophical stance, even though they agree that Eisenhower works,” says Bauman. “It's hard to argue with a statement like that.”

    Proponents of the Eisenhower program, including many Democrats, worry that if local and state officials call the shots, math and science may take a backseat to everything from reading to renovating old buildings. “I speak for state education officials, and we have no fear of direct federal involvement in this area,” Ambach said in his testimony. Such waivers would be particularly devastating in science, adds Gerry Wheeler, head of the National Science Teachers Association (NSTA), who notes that Eisenhower is the sole federal source of professional development for the estimated 1.4 million elementary and secondary teachers who instruct students in science. “The federal government spends only 7 cents of the U.S. education dollar, and Eisenhower is peanuts within that total,” says Wheeler, who says much more money is needed to tackle a problem high on the list of every critique of U.S. education. “We're running as hard as we can just to stay in place.”

    As the debate unfolds, one Republican legislator active in science policy issues is scrambling to define a middle ground. Representative Vern Ehlers (R-MI) broke with his colleagues during last month's House vote on Ed-Flex, leading a lengthy discussion that ended with approval of an amendment intended to fence off Eisenhower funding through administrative reviews. “The original bill … would have allowed Eisenhower funds to be used for other purposes,” he later explained. “But local school boards don't respect math and science, and we needed to step in. … Now that we've made that point, I don't expect the issue to come up again.”

    But others expect to hear a lot more about it in the months ahead. “We're still trying to figure out the battle lines,” says a lobbyist for one professional society. “NSTA wants to defend Eisenhower, while the Republicans want to roll it into a bigger program. So does the Administration, although they want to maintain the emphasis on math and science. And what will Ehlers do?”

    Bauman doesn't pretend to know the outcome, either. But he's pretty sure about one thing. “There's no way we can be as effective if you take the same amount of money and spread it around to all subjects and other needs,” he asserts. “A sustained effort would no longer be possible.”


    Which Way to the Big Bang?

    1. James Glanz

    The idea that the universe as we know it was born in a split second of exponential growth is cosmological gospel. But no one can agree on a single version of the theory called inflation.

    CHICAGO—Every story has a beginning, and according to standard cosmology, the story of our universe began with a bang. In this story, an epic that resonates through decades of scientific studies, all the stars, galaxies, planets, and shadowy gases of the heavens can be explained if the universe consisted of a superhot, superdense, outrushing ball of matter and light just a fraction of a second after the moment of creation itself. That very tale, however, raises the question of what happened in the mysterious instant just before the conflagration began, some 13 billion years ago. If there was a big bang, what set it off?

    In the years since Alan Guth wrote the words SPECTACULAR REALIZATION in his notebook in December 1979, cosmologists have come to think that the big bang might be explained by an idea they regard as a thing of beauty. Called inflation, the mechanism Guth proposed for igniting cosmic expansion in all its glory might have operated for as little as 10⁻³⁵ seconds. Yet it could have whipped up all the matter and energy in the universe and laid the seeds for galaxies and galaxy clusters in that brief sliver of time, while the universe blossomed exponentially from as small as 10⁻²⁴ centimeters across to perhaps the size of a pumpkin. Ever since then the universe, expanding at a more leisurely pace, has been living off the legacy of this episode. “Inflation,” says Michael Turner of the University of Chicago, “is the most important idea in cosmology since that of the big bang itself.”
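The scale of that growth spurt is easy to check: the number of e-foldings of expansion is the natural logarithm of the ratio of final to initial size. Taking roughly 10 centimeters as the "pumpkin" (this sketch's assumption) gives:

```python
# Rough arithmetic behind inflation's growth factor: e-foldings
# N = ln(final size / initial size). The 10 cm pumpkin is an
# illustrative assumption, not a figure from the article.

import math

initial_cm = 1e-24   # initial patch, as small as 10^-24 cm across
final_cm = 10.0      # assumed pumpkin-sized endpoint

n_efolds = math.log(final_cm / initial_cm)
print(round(n_efolds, 1))  # close to 58 e-foldings, in ~10^-35 seconds
```

Nearly 60 doublings-by-a-factor-of-e in a vanishingly small fraction of a second is what makes inflation's stretching so effective at smoothing and flattening space.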

    The theory relies on the notion that the vacuum of space is not empty. According to particle physics, space can be suffused with the energy of so-called scalar fields, which control the symmetries and asymmetries of material properties, such as particle masses, and can dissolve into a storm of particles when disturbed. A special kind of scalar field, which owes its existence to a mysterious particle called the inflaton (pronounced IN-flah-tahn), would have sparked the big bang. If just “once in all of eternity,” says Guth, who is at the Massachusetts Institute of Technology, the inflaton in a tiny patch of space found itself in an unusual, energetic state—analogous to a ball pushed far up a hill—the patch would have behaved as if gravity were thrown crazily into reverse, expanding exponentially.

    The concept of inflation is so venerated that Guth's notebook page, carefully dated “Dec 7, 1979,” sits behind glass in a gallery beneath the Adler Planetarium here. And so far, the idea meshes with the broad outlines of what astronomers have learned about the cosmos. Even so, there is no proof that inflation is correct; and, to add to the uncertainty, distinct versions of the theory have proliferated, as physicists grapple with the problem of finding an inflaton that could have produced the universe but is also compatible with known laws of physics. For some cosmologists, that cacophony means that the choiring angels of creation have not been heard quite yet. As Lawrence Krauss of Case Western Reserve University in Cleveland puts it, “Inflation is a beautiful idea in search of a model.”

    Hybrid inflation.

    A recent version of the theory posits a 3D energy landscape. The universe inflated as it moved down a gently sloping energy ridge; then it plummeted from the ridge, shutting off inflation and igniting the big bang.

    Or, rather, a single believable model. The theory now comes in varieties called old, new, chaotic, hybrid, and open inflation, with numerous subdivisions like supersymmetric, supernatural, and hyperextended inflation, each a vision of just how the inflaton might have touched off the birth of the universe we see today. In fact, there now exist so many approaches, with such a wide range of predictions, that a few cosmologists have suggested inflation could never be disproved by observation—a prospect Andrew Liddle of Imperial College in London calls “a bit scary.”

    Such concerns have prompted the search for alternative theories of creation, whose predictions could be compared mano a mano with those of inflation (see sidebar, p. 1450). But other cosmologists think an avalanche of new data on the cosmic microwave background radiation (CMBR)—a sort of afterglow of the big bang—is about to put the inflationary framework to its toughest test yet. Each model has distinctive predictions about the ripples of higher and lower density that inflation would have imprinted on the young cosmos. Those ripples should still be visible in the CMBR, and new instruments that will fly aboard balloons and satellites over the next few years will measure them. The new measurements “will consign to the rubbish bin of history most of the proposed models of inflation,” predict David Lyth of Lancaster University in the United Kingdom and Antonio Riotto at CERN in Geneva in a forthcoming review article.

    Some cosmologists are looking back even further, to ask what set the stage for inflation itself. No one knows whether that question even makes sense, although in some theories the beginning of the beginning can be calculated and might even have subtle observational consequences in the CMBR. “It's written up there in the sky,” says Chicago's Rocky Kolb, “and we have to figure out how to read it.”

    Genesis of an idea

    Alexei Starobinsky at the L. D. Landau Institute of Theoretical Physics in Moscow worked out a somewhat complicated version of inflation in 1979. But it was Guth's version in the same year that jolted the scientific community, because he pointed out that it might solve several cosmological puzzles. For one thing, the universe is remarkably “flat”—light travels great distances through space without revealing any curvature—implying that it contains a certain critical density of matter and energy. That flatness had looked uncomfortably like a lucky coincidence; inflation made it into a natural consequence of early cosmic history.

    Guth's original idea, now called old inflation, invoked the concept of phase transitions, like water changing into ice. The phase transitions Guth was interested in, however, involved the forces of nature. Three of them—the strong, weak, and electromagnetic forces—should merge into just one at extremely high temperatures, according to certain untested theories of particle physics, called grand unified theories (GUTs).

    Those temperatures, corresponding to energies of 10¹⁶ billion electron volts, could never be reached in terrestrial particle accelerators. But an infinitesimal mote of space-time at the very start of the big bang could provide them. That superhot mote (however it came about) could have cooled like a spark in the wind, but Guth further supposed that the high-temperature symmetries persisted for a brief instant after the temperature had dropped below the GUT scale.

    Chaotic inflation.

    In this version of the idea, cosmic expansion generates a drag (parachute) that slows the universe's plunge from the heights of an energy landscape, prolonging inflation.

    Such “supercooling” is familiar from ordinary phase transitions. Water, for example, can remain liquid even though it is cooled below 0° Celsius—until it suddenly freezes, losing its amorphousness and taking on the jagged asymmetries of crystalline ice. According to particle physics, supercooling would leave space itself hovering in a state of unnaturally high energy, a condition called a “false vacuum.” This energy can be thought of as residing in a scalar field filling that speck of space. When Guth plugged the false vacuum into Einstein's equations of relativity, he found that it acted like gravity in reverse.

    “I discovered that it would affect the expansion in a tremendous way,” says Guth. “It would cause a gravitational repulsive effect that would drive the universe into a period of exponential expansion.” Once the asymmetries finally froze in and the forces took on separate identities, the vacuum would plunk into its “true” state, liberating the energy as an exploding ball of particles and radiation, like the latent heat given off when a supercooled liquid freezes. Even better, the exponential stretching would create a geometrically “flat” space.
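In modern notation, the repulsive effect Guth found can be sketched with the Friedmann equation (a sketch only; the article itself gives no equations): a false vacuum has a constant energy density, so the expansion rate stays constant and the scale factor grows exponentially.

```latex
H^2 \;=\; \left(\frac{\dot a}{a}\right)^2 \;=\; \frac{8\pi G}{3}\,\rho_{\text{vac}} \;=\; \text{constant}
\quad\Longrightarrow\quad
a(t) \;\propto\; e^{Ht}, \qquad H = \sqrt{\tfrac{8\pi G}{3}\,\rho_{\text{vac}}}
```

Ordinary matter dilutes as space expands, slowing the expansion; a false vacuum does not dilute, so the expansion feeds on itself until the vacuum decays and its energy is released as particles and radiation.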

    Incredibly, creation seemed to be calculable. But there was a problem with old inflation, as Guth and Erick Weinberg of Columbia University realized immediately. Bubbles of true vacuum would form at various times in various places in the false vacuum and would have great difficulty merging, because the space between them would still be inflating. “You'd always end up with a cold, Swiss cheese-like universe,” says Paul Steinhardt of Princeton University, “with true vacuum contained only in growing bubbles … and nothing there that looked like the universe in which we live.”

    That mix of ethereal success and final disaster made Guth's talks both exciting and “profoundly depressing,” Steinhardt recalls. Much of the depression lifted when he and Andreas Albrecht of the University of California, Davis, and independently, Andrei Linde, now at Stanford University, invented new inflation. The idea was to make the transition from false to true vacuum very gradually, so that instead of the quick jump—which left plenty of false vacuum but not much of the ordinary space in which we live—an entire universe would have time to grow inside each bubble and there would be no need for many of them to merge. (Separate bubbles would still be separated by huge tracts of false vacuum and hence unobservable to each other.)

    The change in the scalar field's energy at each point in the false vacuum of supercooling can be envisioned as a ball rolling down a hill; old inflation was equivalent to a hill with a dimple at the top, where the supercooled universe would get stuck before suddenly tunneling, via quantum-mechanical processes, down to the true vacuum. New inflation simply posited a hill with a nearly flat top, where the ball would slowly roll out of the false vacuum as the universe inflated, followed by a drop-off as it suddenly crystallized into true vacuum.

    By eliminating the bubbles from Guth's original scenario, this new version allowed inflation to explain a second cosmological puzzle: the remarkable uniformity of today's universe. By stretching a tiny region into the entire universe, inflation could explain how regions so far apart that they could have had no communication since the big bang could appear so similar.

    But turning this beautiful idea into a full-fledged theory was proving difficult. Although the shape of the hill, called the potential-energy curve, should ultimately be derived from particle physics theory, that was easier said than done. In the end, cosmologists decided not to worry for the time being about the physics that drove inflation. They simply attributed the potential energy curve to a still-to-be-discovered particle, the inflaton. As Albrecht put it, inventing the inflaton was a way of saying, “Look, what's the point of putting in the whole baggage of the GUT when we really don't have a clue of physics at those energy scales?”

    New inflation.

    A gradual episode of inflation (gentle slope) before the plunge that shuts it off allows a complete universe to form in a single inflating patch.

    After all, the idea was getting a boost from observation. Inflation should have left ripples of higher and lower density on the early universe, because quantum uncertainty means that the ball would roll at slightly different rates in different places. The ripples could serve as the seeds around which gravity eventually gathers the colossal walls, filaments, and clusters of galaxies seen in the sky today. And in 1992, the Cosmic Background Explorer (COBE) satellite mapped the CMBR—which records the state of the universe 300,000 years after the big bang—and detected huge, weak ripples crisscrossing the early universe.

    Just as expected if the ripples in the universe really did start out as quantum fluctuations in the microworld, the COBE observations suggested that the intensity, or “power,” in the ripples was nearly the same at all wavelengths. “Inflation played this trick,” says Linde. “It took something that was very, very quantum and enormously stretched it and made large, macroscopic objects.”

    Many kinds of beginning

    With this boost, inflation theorists began enthusiastically spinning out their own versions of creation. Linde, for example, showed that a plateau-shaped potential was not necessary after all; a simple parabolic slope or many others would do, as long as they were wide and shallow enough to let the ball start at a high energy—at the GUT scale or beyond. The ball still drops slowly because of a frictionlike term that can be traced to the rate of cosmic expansion. “It's, if you want, God's gift to us,” says Linde of the friction term. He named the theory chaotic inflation, imagining a primordially random cosmos in which patches that happened to be high on the potential would inflate, while others would not. Just one favorable beginning could lead to a universe.

    Eventually it turned out that inflation didn't have to make a flat universe; it could also create “open” universes—those with a spatial curvature corresponding to the low cosmic matter densities that some observations were pointing to. Although regarded by many cosmologists as less than elegant, the theories contrive to have the universe fall from the heights of the inflaton potential, ending the inflationary epoch, before space has been completely flattened.

    The floodgates opened even wider with what came to be called hybrid inflation. Those models allowed two coupled inflaton fields to form a sort of three-dimensional landscape, so that the ball could roll slowly along a ridge, but then shut inflation off by plummeting sideways. Some cosmologists say hybrid inflation gives them more freedom to reestablish contact with particle theories—especially supersymmetry, a speculative theory that predicts the existence of new particles. The idea, says Princeton University's Lisa Randall, is to find “natural” fields—“not some artificially designed potential that has whatever features you want.”

    For all the inventiveness of inflationary modelers over the past 2 decades, says Turner of the University of Chicago, no version stands out as more compelling than the rest. “It's like 4-day-old babies in the maternity ward,” he says. “Most of the theories are only attractive to the person who proposed them.” Fortunately, improving observations should supersede the theoretical beauty contest. Among the most powerful discriminators will be follow-ons to COBE, measuring ripples in the CMBR to high precision. These projects include the California Institute of Technology-led Boomerang experiment, which flew a balloon-borne detector around Antarctica in December, yielding data that are now being analyzed; NASA's Microwave Anisotropy Probe (MAP) satellite, scheduled for launch late next year; and the European Space Agency's Planck satellite, now planned for 2007.

    Such measurements should provide a test of one prediction that most variants of inflation still make, namely, that the universe is geometrically flat. In the first years after the big bang, oscillations would have resonated through the expanding ball of ionized gas as gravity tried to compress some regions, generating acoustic waves. “It's quite like a musical instrument,” explains Wayne Hu, a cosmologist at the Institute for Advanced Study in Princeton, New Jersey. “You see one fundamental frequency and then overtones.” After 300,000 years, the universe cooled enough to let free electrons combine with nuclei, making the gas transparent and releasing the radiation we now see as the CMBR.

    The radiation emitted then should contain an imprint of the density peaks and valleys of the resonances. The wavelength of the main resonance—called the first Doppler peak of the CMBR—represents the distance sound waves could travel in 300,000 years. And because the speed of sound and the distance the CMBR traveled to us are both roughly known, the wavelength of the first peak is essentially a measuring stick laid out on the sky at a known distance. From its apparent size as seen from Earth, cosmologists can calculate the properties of the lens through which we are viewing it—the geometry of space.

    If space is flat, the peak should have an apparent scale of about 1 degree on the sky—twice the width of the full moon. COBE could not map the CMBR in fine enough detail to see this peak or the smaller harmonic peaks. But data from more recent mapping efforts are already suggesting a peak at about 1 degree.

    The kind of perturbations that COBE saw, which were created by quantum fluctuations during inflation itself, should provide more stringent tests. Inflation predicts that the intensity of the quantum fluctuations should have been nearly the same at all scales—but not exactly. If the cosmic ball was speeding up or slowing down at the end of the slow-roll epoch of inflation, the fluctuations should depart just a bit from “scale invariance.” Planck or MAP might measure the deviations, which would say something about the shape of the inflaton potential—and therefore about which inflation models might be right.

    Other cosmologists, such as Marc Kamionkowski of Columbia University, are focusing on another kind of signature inflation could have left in the CMBR. Quantum processes during inflation could have left ripples in the curvature of space itself—gravity waves, which might still be crisscrossing the universe. Much as ripples on the surface of a swimming pool seem to distort the bottom, these waves would leave their mark on the polarization of the microwaves. “If you detect the gravitational background, it would be a smoking gun signature of inflation,” says Kamionkowski.

    Begin the begin

    The possibility that observations could vouchsafe a glimpse of the universe's first 10⁻³⁵ second has cosmologists pondering the ultimate question of how and why inflation might have started. To paraphrase a line from the folk tune Alice's Restaurant, “What put that ball at the top of that hill?”

    According to one line of reasoning based on separate work by Alex Vilenkin of Tufts University and Linde, the ultimate beginning could be forever hidden from our view. They noted that virtually any type of inflation is “eternal”: The initial patch of false vacuum will not make just one inflating universe; it will break up and generate infinitely many. As bubble universes are generated ad infinitum, the false vacuum becomes so stretched and smoothed that no trace of its beginning remains. For practical purposes, says Vilenkin, “you would not have to think of the beginning of the universe.”

    That hasn't stopped him from coming up with a vision of the beginning. He and Linde have tried to calculate the chances that the first inflating patch could tunnel into existence, quantum-mechanically, from absolute nothingness. They took the point of view, as St. Augustine did in his Confessions, that neither time nor space could have existed before the instant of creation. Assuming various inflaton potentials—a concept outside Augustine's canon—Vilenkin and Linde found that the most likely universes to burrow into existence would start out high on the potential, setting the scene for a cosmos like our own.

    According to another trio of cosmologists, however, questions about the moment of creation have no meaning. The mathematical formalism developed by Stephen Hawking and Neil Turok of Cambridge University and James Hartle of the University of California, Santa Barbara, makes it impossible to extrapolate the universe back to a singularity—a discrete starting point for time. “It's not creation from nothing,” says Turok, “which I think is a complete contradiction.”

    Most of the universes that emerge from their scheme contain too little matter and are much too open to be in accord with current data. Turok is searching for new inflatons that might do better, but meanwhile he nurtures a hope that the universe will turn out to be open, even if only slightly, for that would mean that all traces of the beginning had not been completely stretched away to flatness and overlaid with the later quantum fluctuations. Although perfect flatness would be a victory for mainline inflation, Turok says, it “would just say that what came before doesn't matter. It would, in a way, end the story.”

    The answer to how far that story can be traced back will be discovered—as Augustine said, for his own reasons—in the testimony of the heavens.


    Wanted: New Creation Stories

    1. James Glanz

    CHICAGO—“I am going to speak of a very radical idea,” said Andreas Albrecht, a cosmologist at the University of California, Davis, during a recent symposium here. Albrecht's modest proposal was a way to produce a universe that looks like the one we inhabit but never underwent a growth spurt at the beginning of its history. Most cosmologists think the remarkable homogeneity and geometric flatness of the large-scale universe are the result of inflation, which would have stretched a tiny mote of space and time into the entire cosmos (see main text). Albrecht's alternative suggestion is that cosmic smoothness and flatness might be the product of a speed of light, c, that was much faster immediately after the big bang.

    Even Albrecht concedes that speeding up light—whose constancy forms the bedrock of Einstein's theory of relativity—is a tall order. But he is one of a tiny number of researchers—another being Gabriele Veneziano of CERN, the European particle physics lab in Geneva—who feel it's important to give inflation some competition, so that it doesn't become the reigning theory of the big bang by default.

    Albrecht's approach, which he is working on with Joao Magueijo of Imperial College, London, simply assumes that c was tremendously larger at some time during the universe's first fraction of a second. In principle, that would allow distantly separated regions to exchange energy and come to the same temperature in a universe that is assumed to be expanding already. The mechanism would be particularly effective at creating a flat and homogeneous universe, because the equations that show how matter responds to a changing c also create and destroy energy, relentlessly dumping it into thermal dips while taking it away from peaks.

    “You actually produce a universe that is very, very smooth,” says Albrecht—too smooth, in fact, for matter ever to have coagulated into galaxies and clusters of galaxies. Albrecht has to invoke some other mechanism for producing the seeds of structure—for example, so-called “defects,” hypothetical seams in space-time.

    Veneziano wouldn't eliminate the growth spurt but would give it a radically different driving force. His scenario is based on string theory, a mathematical framework many physicists hope will provide a unified picture of all the forces of nature, including gravity. In his “pre-big bang” scenario, there's no need to invoke inflation theory's energy-rich “false vacuum” to kick off the exponential expansion. Instead, the gravitational collapse of a collection of matter or gravity waves in some preexisting universe gives birth to a new, expanding universe like our own. “In this picture it becomes not a crunch but a bang,” says Veneziano.

    In Veneziano's scenario, our universe and a collapsing precursor are connected by a string “duality,” a mathematical correspondence between two very different theories that can be linked via a simple change of variables. The duality invoked by Veneziano connects a collapsing universe with ordinary gravity into an expanding one with powerfully repulsive gravity. What flips the switch for the duality to take effect still isn't clear, but there is no question about the main observational signature of the theory: a universe awash in gravity waves, which might be measured by the gravity wave detectors now being built for other purposes.

    Don't bet on it, though. Most researchers don't think the alternatives are much of a threat to inflation yet. And Albrecht himself admits that the effort to come up with an alternative has mainly left him “impressed with what a powerful idea inflation is.”


    Community Divides Over Push for Bigger Budget

    1. David Malakoff

    A plan to double federal civilian research spending over a decade is surprisingly contentious because it could cramp biomedicine's push for even faster growth

    The idea of doubling federal spending on civilian research might seem irresistible to cash-hungry scientists. But a plan to do just that, scheduled to go before the U.S. Senate next month, has been greeted with indifference by some biomedical lobbyists and university officials, and outright opposition from the chair of the House science committee. The split illuminates how, when it comes to lobbying for science, the pressures of the federal budget process can drive political wedges between traditional allies in the scientific community.

    The doubling measure,* proposed by Senators Bill Frist (R-TN) and Jay Rockefeller (D-WV), is aimed at increasing the budgets of 14 nonbiomedical research agencies, from NASA to the Smithsonian, from $34 billion last year to $68 billion by 2010. That's a considerably longer doubling timetable than the 5-year plan that biomedical science supporters are pushing for the National Institutes of Health (NIH). Last year, for instance, biomedical groups helped that agency win a 15%, $2 billion increase, on the way to what they hope will be a $27.3 billion NIH budget by 2003.
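    The gap between the two timetables is easy to put in numbers: doubling a budget in n years requires a compound annual growth rate of 2^(1/n) − 1. A quick sketch—the timetables come from the bill and the NIH campaign; the arithmetic is our own illustration:

    ```python
    def annual_growth_for_doubling(years: float) -> float:
        """Compound annual growth rate needed to double a budget in `years` years."""
        return 2 ** (1 / years) - 1

    # Frist-Rockefeller: an 11-year doubling, $34 billion to $68 billion by 2010.
    print(f"Frist-Rockefeller: {annual_growth_for_doubling(11):.1%} per year")

    # The biomedical community's goal: double the NIH budget in 5 years.
    print(f"NIH 5-year plan:   {annual_growth_for_doubling(5):.1%} per year")
    ```

    The 5-year plan demands yearly increases more than twice as steep, which helps explain why NIH's boosters see the slower, broader bill as a potential drag on their own campaign.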

    Washington lobbyists who have organized an active “doubling group” to promote Frist-Rockefeller say the bill's simple message has provided an important rallying point for the nonbiomedical science community and helped convince researchers around the country to become politically active. But critics say its status as an authorization bill is a serious flaw. Although authorizing bills can suggest how the government should use funds, only the 13 annual appropriating bills actually give agencies the OK to spend the money. “This is not a bill that pays the bills,” says one observer, who like many in the scientific community was unwilling to speak on the record for this story. He fears the effort could divert attention from more important spending battles, noting that “no postdoc was ever hired on an authorization.”

    Such tensions first surfaced in earnest last year, when Frist, Rockefeller, and other sponsors began talking up a modified version of doubling measures introduced in the past. Although physicists, materials scientists, and others lobbied successfully for it in the Senate (the House never voted on the bill), support from the university and biomedical communities was lukewarm at best.

    That trend continued into this year, when the bill was reintroduced in the Senate. The presidents of several prominent research institutions endorsed the measure, with Chuck Vest of the Massachusetts Institute of Technology in Cambridge calling it a needed step. But many others have chosen to invest their political capital in efforts that promise a quicker payoff, such as more government scholarships, support for teaching hospitals, or a new building. Observers say such choices are understandable. University officials “get advisories from their Washington representatives saying, ‘Save your ammunition for things that matter,’” says lobbyist Mike Lubell of the American Physical Society, a leader of the doubling push. In addition, some university presidents are reluctant to fuel campus tensions by backing the sciences in the absence of a similar doubling effort for the humanities. Still, academia's uneven support has been publicly bemoaned by Rockefeller, a former college president himself, who warns administrators against being too shortsighted.

    A bigger problem for the bill has been the biomedical community's ambivalence. Although the leaders of some biomedical-lobbying heavyweights—such as the 60,000-member Federation of American Societies for Experimental Biology (FASEB)—have spoken out strongly for spending more on nonbiomedical science, they have yet to exercise their substantial lobbying muscle in any real way on behalf of Frist-Rockefeller. The reason, say insiders, is that lobbyists worry that endorsing the bill's call for an 11-year doubling could undermine their own effort to double NIH's budget in just 5 years. Embracing both timetables “would confuse the message—an absolute lobbying no-no,” says a congressional aide.

    In a bid to win over the biomedical community, Frist-Rockefeller supporters earlier this month added a complex provision that allows NIH to grow at its own pace (see graph). Supporters figure that they will need all the help they can get in the House, where they face serious opposition from Science Committee chair James Sensenbrenner (R-WI), who has called the measure “a feel-good” effort that would undermine the panel's credibility with appropriators. Sensenbrenner's distaste has prompted backers to consider ways of sidestepping his committee, which normally would take up the measure. One option is to begin with the powerful Commerce Committee, headed by the friendlier Representative Thomas Bliley (R-VA), in an effort to generate a political groundswell. But some Capitol Hill veterans caution that an end run could backfire. “If there is one fellow I wouldn't want to [tee] off, it's Sensenbrenner,” says one.

    Such criticism, however, hasn't fazed doubling proponents, who are hoping for an overwhelming victory next month in the Senate. “You have to start someplace,” says Kathleen Kingscott, head of the Washington-based Coalition for Technology Partnerships. She and other proponents argue that authorizing bills can be used to convince appropriators to put money behind an idea. “They help put you in a stronger negotiating position,” says Kevin Casey, government relations head at Harvard University in Cambridge, Massachusetts.

    Frist-Rockefeller also “has become a very important organizing tool” for the community, Kingscott says. It wasn't too many years ago, she notes, that the group's biweekly strategy sessions drew fewer than a dozen science politicos. “Now it's become hard to find a room large enough to hold us,” says Betsy Houston of the Washington-based Federation of Materials Scientists, about meetings that regularly draw 30 or more people. The meetings have also become a staging ground for other campaigns, such as the ongoing effort to fight off proposed rules that would require scientists to turn over raw data to anyone who makes a Freedom of Information Act request (Science, 12 February, p. 914). Indeed, participants are making plans to continue meeting even after their work on Frist-Rockefeller is done.

    Similarly, an annual science lobbying blitz sponsored by the doubling partners has proven to be increasingly popular. Late last month, for instance, more than 200 researchers from academia and industry—many of them political neophytes—came to Washington to urge lawmakers to support more federally funded research, including passage of Frist-Rockefeller. The staff meetings and briefings were “an eye-opener” for researchers who had no idea how to approach lawmakers with their concerns, says geologist Gail Ashley of Rutgers University in New Brunswick, New Jersey, who represented the 16,000-member Geological Society of America.

    The show of force demonstrated that “science and technology has an active political constituency,” says Kingscott. The event, now in its fourth year, has also had an effect on congressional staff, who actually write most legislation, says another lobbyist. “Two years ago, if you mentioned R&D, you could just see the eyes glaze over,” he says. “Not anymore. Now they are interested.”

    Whether friend or foe of Frist-Rockefeller, science lobbyists are hoping that interest in the bill will carry over to what promises to be an especially nasty fight over federal spending. Last week, House and Senate appropriating committees received sobering news about their allocations for the fiscal year 2000 budget, which begins on 1 October. Confirming a long-rumored strategy, Republican leaders gave the smaller committees—such as the one covering the Post Office—enough funds to get their work done quickly while leaving several major spending committees, including the one handling NIH, some $8 billion to $10 billion short of what the Administration has requested.

    Although the allocations were made ostensibly to satisfy mandated budget caps, few observers expect the committees to impose such cuts. Instead, they say the allocations are designed to cause a budgetary “train wreck” that will force the White House and Congress to jointly take the politically unpopular step of removing the spending caps and dipping into a mounting budget surplus. A similar scenario last year produced NIH's mammoth windfall, and some science lobbyists are hoping that history will be repeated. This time, however, whether or not Frist-Rockefeller becomes law, nonbiomedical scientists are planning to be reading from the same page as their biomedical allies as they lobby for more federal research dollars.

    • *S.296, “The Federal Research Investment Act”


    Scientific Cross-Claims Fly in Continuing Beef War

    1. Michael Balter

    The European Union cites what it claims are new safety concerns in its long-running battle with the United States over hormone-treated beef

    “In time of war, the first casualty is truth,” declared American radio commentator Boake Carter back in the 1930s. In the ongoing trade war between the European Union (EU) and the United States over the safety of dosing cattle with sex hormones to make them grow faster and leaner, scientific truth may not be a casualty, but it is at least a rapidly moving target. The latest salvo comes from the European Commission, the EU's executive arm, which late last month issued a 139-page report raising what it claims are new concerns about the safety of hormone residues in beef.

    Based on the work of a nine-member panel of European and U.S.-based endocrinologists, toxicologists, and other scientists, the report argues, among other things, that the residues might have cancer-causing potential. It also suggests that young children might be more sensitive to low levels of the hormones than previously thought, especially to their effects on growth and sexual development. These conclusions are themselves coming under fire, however. “The EU report is alarmist, uncritical, and selective” in its marshaling of evidence, says Melvin Grumbach, a pediatric endocrinologist at the University of California, San Francisco.

    The trans-Atlantic dispute began in 1989 when the EU banned all imports of hormone-treated beef. American farmers regard the growth-promoting hormones as essential for keeping their industry profitable, and U.S. officials insist that the practice poses no health concerns for the consumer. But to the EU, even small amounts of hormone residues in beef, liver, and other food organs represent an unacceptable health risk—hence the ban.

    The United States and Canada filed a complaint in 1996 with the Geneva-based World Trade Organization (WTO). They contended that the EU ban is based more on a desire to protect European farmers from American imports than on scientifically valid evidence of health risks. The WTO ruled against the ban in 1997, and a WTO appeal body upheld that ruling in January 1998, asserting that although some theoretical health concerns might exist, the EU had not proven its case.

    The U.S. position was further bolstered in February of this year by a report from a different group, the Joint FAO/WHO Expert Committee on Food Additives (JECFA), organized by the World Health Organization (WHO) and the United Nations' Food and Agriculture Organization (FAO). JECFA, which includes scientists from Europe and Australia as well as from the United States, reviewed the evidence for some of the hormones used in cattle and concluded that the levels of residues normally found in beef are safe.

    So far, however, the EU, braced by its latest report, is hanging tough. Earlier this month, the deadline for compliance with the WTO ruling came and went. As a result, the United States and Canada are now drawing up plans to retaliate by slapping stiff tariffs on imports of European products. But the EU is hoping that the new analysis will eventually help it either to convince the United States and Canada to compromise, or the WTO to reopen the case, or both.

    The controversy concerns the use of six hormones currently approved for use in U.S. cattle: the naturally occurring sex hormones estradiol, progesterone, and testosterone, and their synthetic mimics, zeranol, melengestrol acetate, and trenbolone acetate. The hormones, which are usually administered via ear implants, cause rapid weight gain that brings cattle to market sooner and results in more tender and flavorful cuts of beef—prime reasons why some 90% of American cattle intended for slaughter are implanted.

    The EU report focuses much of its attention on evidence that estradiol, and possibly some of its breakdown products, can cause cancer in humans. Indeed, recent epidemiological studies indicate that estrogens—the class of sex hormones to which estradiol belongs—increase the risk of cancer of the breast and uterine lining in postmenopausal women receiving hormone replacement therapy. Most experts assume that the carcinogenic effect of estrogens is due to their ability to induce rapid cell proliferation in estrogen-sensitive tissues such as breast and uterus. Because these so-called hormonal effects require that the estrogens bind to, and activate, specific receptors in the tissues, the hormones are assumed to have no effects below a minimum, or “threshold,” level required to produce that activation.

    But the EU working group concluded that estradiol and its metabolites, as well as some of the other hormones used in cattle, may also cause cancer through “genotoxic” effects in which they damage the genetic material directly. To support this hypothesis, the panel cited a study in hamsters treated with the synthetic estrogen zeranol, in which liver tumors appeared at lower doses than would be predicted if the compound was acting through hormonal mechanisms alone. It also pointed to other findings indicating that some estradiol metabolites bind to DNA, possibly causing mutations. “If you assume no threshold, you should continually be taking steps to get down to lower levels, because no level is safe,” says James Bridges, a toxicologist at the University of Surrey in Guildford, United Kingdom, and a member of the working group. “The jury is still out on these hormones.”

    But some scientists told Science they do not find these arguments persuasive. Alan Boobis, a toxicologist at the Imperial College School of Medicine in London who participated in the JECFA meeting, says that although “estradiol does have some genotoxic potential … we found no convincing evidence that the tumors produced in humans” were the result of direct gene damage. Toxicologist Stephen Sundlof, director of the U.S. Food and Drug Administration's (FDA's) Center for Veterinary Medicine, agrees. “Genotoxicity is not borne out by the human epidemiology,” he says. “The increased incidence of cancer was only in hormonally sensitive tissues” such as breast and uterus.

    Moreover, some researchers argue that the levels of hormone residues in beef are so low compared to normal concentrations in the human body that they pose no danger to most sectors of the population. “At certain times of the month, women are just bathed in estradiol,” says John Herrman, a WHO toxicologist and JECFA member. And Gary Smith, a meat biochemist at Colorado State University in Fort Collins, says that many other foods have higher levels of various estrogenic substances than beef. “The estrogen activity in peas, butter, ice cream, wheat germ, and soybean oil can be thousands of times that of beef from cattle implanted with estrogen,” he says.

    But the EU working group concluded that even low hormone residue levels in beef could still be a problem for young children who have not yet reached puberty. The report cites work first reported in 1994 by pediatric endocrinologist Karen Klein, who was then at the National Institutes of Health (NIH) in Bethesda, Maryland, and her colleagues. Estradiol levels in prepubertal boys and girls are often at or near the detection limit of conventional assays, making them difficult to quantify. But the NIH team reassessed the hormone levels in children with an ultrasensitive new assay for estradiol that uses a strain of yeast genetically engineered to detect even very small amounts of the compound, thus allowing more accurate measurements. The researchers found that estradiol levels were much lower than previously thought, particularly in prepubertal boys.

    Although FDA regulations make no distinction between children and adults, they are based on risk assessments that allow a maximum residue intake of 1% of the hormone production level of the most susceptible population subgroups. If the natural levels have been overestimated in young children, the EU report argues, those risk assessments could be invalid. “It appears that children might actually be exposed to higher [comparative] levels” than those assumed in FDA's calculations, says pediatric endocrinologist Niels Skakkebaek of the University of Copenhagen's Rigshospitalet medical center, who reviews the evidence for this view in the June issue of the European Journal of Endocrinology. But Sundlof counters that even if children are exposed to higher relative concentrations than previously thought, “the safety factors we have built in are so great, and there are so many other sources of exposure to estrogens, that the additional amount [consumed from beef] would still be very small.”
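    The report's numerical argument can be made concrete. If the acceptable residue intake is capped at 1% of a person's own daily hormone production, then revising that production estimate downward shrinks the cap, so a fixed residue dose looms larger against it. A sketch with purely hypothetical round numbers (the real figures are not given here):

    ```python
    def residue_fraction_of_cap(residue_intake_ng: float,
                                daily_production_ng: float,
                                cap_fraction: float = 0.01) -> float:
        """Residue intake as a fraction of the allowed cap, where the cap is
        cap_fraction (here 1%) of the person's endogenous daily production."""
        cap = cap_fraction * daily_production_ng
        return residue_intake_ng / cap

    # Hypothetical illustrative numbers only:
    residue = 5.0            # ng of estradiol ingested from beef per day
    old_production = 6000.0  # ng/day, a child's output under the older assays
    new_production = 600.0   # ng/day, if the ultrasensitive assay reads 10x lower

    print(residue_fraction_of_cap(residue, old_production))  # comfortably under the cap
    print(residue_fraction_of_cap(residue, new_production))  # approaching the cap
    ```

    The point is relative, not absolute: the same dinner of beef represents a tenfold larger fraction of the allowable exposure if children's natural estradiol output was overestimated tenfold—which is why the assay result matters to the risk assessment even though the residue levels themselves are unchanged.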

    Although the scientific community may be divided over the safety of hormone implants, the EU has yet another piece of ammunition in its battle to maintain the ban: According to an as yet unpublished draft report, a spot check last year of 258 meat samples from the Hormone Free Cattle program, which is run jointly by the beef industry and the U.S. Department of Agriculture (USDA), indicated that 12% of the samples had detectable levels of hormones—even though they had been certified to be from cattle raised without hormones and thus eligible for import into the EU. European officials cite this as evidence that use of the substances is poorly regulated and that consumers might be exposed to higher than allowed concentrations if the ban were lifted.

    These revelations are embarrassing for U.S. officials. “If there are deficiencies in our system, we want to work with the EU to correct them,” says Tim Galvin, administrator of the USDA's Foreign Agricultural Service. And EU trade officials are currently negotiating with their American and Canadian counterparts to find a compromise, which might include paying compensation to North American farmers for the revenue they are losing by not being able to sell their beef to Europe. But unless such a compromise can be reached, it would appear that all the scientific arguments in the world might not be enough to end the beef war between the two continents.
