News this Week

Science  10 Sep 1999:
Vol. 285, Issue 5434, pp. 1646

    After Bosons, Physicists Tame the Rest of the Particle Kingdom

    David Voss

    In 1995, physicists captured the world's imagination by creating a Bose-Einstein condensate (BEC)—a new kind of matter in which the atoms are chilled to such a low temperature that they become locked into a single quantum state, forming a kind of superatom (Science, 14 July 1995, pp. 152, 182, and 198). That achievement, by physicists at JILA, a government/university lab in Boulder, Colorado, was only half the story of quantum condensates, however. All the particles of nature fall into one of two categories, bosons and fermions, and BECs could only be made with bosons. Now another team at JILA has gone further by coaxing a vapor of reluctant fermions into a low-temperature quantum state of its own.

    On page 1703, physicist Deborah Jin and graduate student Brian DeMarco report how they enlisted some subtle lab tricks to trap and cool atoms of potassium-40 until the fundamental quantum properties of the fermions took over and the cold vapor became totally dominated by their wavelike nature. “The experiment is ingenious, and the challenge of cooling a gas of fermions is considerable,” says Dan Kleppner of the Massachusetts Institute of Technology, whom many consider the godfather of quantum condensation studies. And the creation of the cold fermion gas, which displays energy levels like those of electrons around an atom, “is a very significant development,” he says. With luck, it will be the first step toward an even stranger state of matter—a collection of fermions so cold that they seek each other out as partners, forming pairs that resemble the pairs of particles that are key to superconductors and superfluids.

    Bosons and fermions are distinguished by their “spin”—a purely quantum mechanical property that has magnitude and direction, making them act like tiny bar magnets. Bosons, which include photons and the W and Z particles of high-energy physics, all have zero or integer spin, while fermions, such as electrons, quarks, neutrons, and protons, have half-integer spin. Composite particles, such as atoms, also fall into one of the camps, depending on how the spins of their constituent particles add up.

    This seemingly minuscule distinction has profound consequences. The most significant effect is seen when two identical particles, both in the same quantum state, come together. Identical bosons are neighborly types, happy to share the same location. But fermions are irascible—two identical fermions cannot stand to be in the same place at the same time. This phenomenon is known as the exclusion principle, and it dramatically affects how particles behave. Because electrons are fermions, they can't all occupy the lowest energy level available to them in an atom. Instead they must stack up, like painters on a ladder. The different arrays of electrons on the energy ladders of different atoms give rise to the variety of chemical elements we see in nature.
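    The "painters on a ladder" picture can be made concrete with a toy calculation (an illustration of the exclusion principle, not a model from the paper): put N particles on an evenly spaced ladder of energy levels, with the spacing taken as one arbitrary unit.

```python
# Toy sketch of the exclusion principle (assumed: evenly spaced levels,
# spacing = 1 energy unit, lowest rung at energy 0, one spin state).

def boson_energy(n):
    # Identical bosons can all share the lowest rung, so the total
    # ground-state energy is zero.
    return 0

def fermion_energy(n):
    # Identical fermions must stack up one per rung: rungs 0, 1, ..., n-1.
    return sum(range(n))

for n in (1, 5, 10):
    print(n, boson_energy(n), fermion_energy(n))
```

The fermion total grows as n(n-1)/2, which is why even a cold fermion gas retains more energy than a classical gas at the same temperature, the signature Jin and DeMarco looked for.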

    Bosons, on the other hand, are happy to coexist in the same quantum state. All they need is a very low temperature so that thermal agitation does not bounce them out again. The cooling technique originally used at JILA relied on collisions that send more energy off with one particle than another. The more energetic particles are selectively spirited away, leaving a colder gas behind—a high-tech version of letting a cup of coffee cool by evaporation. “Evaporative cooling has been the workhorse in Bose condensation,” says Randy Hulet of Rice University in Houston. But the technique won't work for fermions because they are excluded from being in the same place at the same time, so they can never collide in this way.

    To get around this problem, Jin placed a gas of potassium atoms—fermions because the spins of their protons, neutrons, and electrons add up to a half-integer sum—in a magnetic field. Such a field splits the energy levels available to the atoms into many more levels, each a different quantum state. She then used lasers and radio waves to excite her potassium vapor so that half of the atoms were in one of these states and half were in another. Although atoms in one state cannot collide with each other, they can collide with the atoms in the other state because they are no longer identical and no longer forbidden from colliding by the exclusion principle. In effect, one gas cools the other and vice versa. “It's just a beautiful scheme,” says Kleppner.

    When Jin and DeMarco had succeeded in cooling the gas down to about 300 nanokelvin (0.3 millionth of a degree above absolute zero), they saw the critical signs of the atoms locking into their places on the lowest rungs of the Fermi energy ladder. One sign was that the energy of the potassium vapor was very slightly higher than expected for a classical gas at the same temperature, because the exclusion principle forces fermionic atoms to occupy higher states on the energy ladder—they cannot all drop down into the lowest level. The researchers measured the energy by taking snapshots and gauging how far the atoms moved over time, then compared the results with theoretical predictions to confirm the existence of the quantum fermion vapor. The fermion gas with its lack of atom collisions could have useful spin-offs, says Hulet, in the form of atomic clocks with much higher precision than those in use today. Atomic clocks rely on light emission that inevitably gets slightly scrambled by atom collisions.

    Now Jin's group and others are looking toward a still more exciting future prospect: coaxing atoms into pairs. In a superconductor, electrons bounce off the material's crystal lattice in a way that causes them to attract each other slightly and form associations called Cooper pairs, which allow the electrons to surf through the solid with no resistance. Something similar happens in liquid helium-3, where the atoms—also fermions—pair up to create what's known as a superfluid. Physicists hope that if a fermionic atomic vapor can be cooled to still lower temperatures, the atoms will pair up to form a kind of atomic superconductor. But it won't be easy, Jin says, given the challenge of cooling fermions. “The possibility of getting pairs would be quite fabulous,” adds Kleppner, “but it is not something you can do immediately.”


    DOE Slams Livermore for Hiding NIF Problems

    David Malakoff

    Halfway through its construction, the world's largest laser faces management turmoil and technical problems. Department of Energy (DOE) Secretary Bill Richardson last week ordered a major shake-up at the National Ignition Facility (NIF), a $1.2 billion device to simulate nuclear explosions and probe the practicality of fusion energy. Richardson said he was “gravely disappointed” to learn that officials at Lawrence Livermore National Laboratory in California, which manages the project, had failed to inform him of impending cost overruns and delays. The criticism, accompanied by a financial penalty assessed on the University of California, which runs the lab, comes on the heels of the sudden resignation of NIF's chief after it was revealed that he had improperly claimed to hold a doctoral degree.

    The tardy warning of NIF's woes, described in an internal report submitted shortly before Richardson made his 3 September announcement, “deeply disturbed” him, he said. NIF officials had assured him as recently as June that the project was “on cost and on schedule,” he noted: “Clearly, we have had a major management surprise in our quest for a quantum-leap program for laser physics.”

    DOE has spent nearly $800 million on the stadium-sized NIF complex, which was originally due to be finished in 2003. Its 192 laser beams are supposed to ignite a tiny capsule of deuterium-tritium fuel in experiments designed to replicate the reactions that occur in exploding nuclear weapons. While many arms control experts say NIF is needed to ensure the safety and reliability of the U.S. nuclear stockpile now that the government has stopped underground tests, critics have challenged its feasibility and DOE's cost estimates (Science, 18 July 1997, p. 304).

    Eleven scientific and management reviews over the last half-decade have concluded that the project is on solid technical and financial footing. In late March, for instance, a consulting firm carrying out a congressionally mandated review found “no major areas of concern” and concluded that NIF was “well-planned, documented, and managed.” But last week, lab officials held a special meeting to look into problems that had been rumored for months.

    “Denial of these kinds of problems is unacceptable,” Richardson said, noting that he had asked Livermore officials to “take action against any personnel who kept these issues from the [DOE].” His six-point reform plan also stripped the lab of major construction responsibilities, ordering that “major assembly and integration” be “contracted out to the best in industry.” In addition, Richardson will withhold “at least” $2 million of a $5.6 million management payment to the University of California, which manages Livermore.

    Richardson plans to name an independent panel to get NIF “back on track.” Although he said its problems are primarily managerial, “not technological—the underlying science of the NIF remains sound”—Livermore sources have identified at least one technical glitch. They say that dust particles in the building holding the lasers, which include hundreds of specialized lenses and windows, could undermine scientific measurements. “There has been a realization that they may have to make [the building] cleaner,” says one academic familiar with the situation. “The intensity of the light is so strong that even specks of dust can burn up and damage the optics by etching or pitting them.” The problem poses an unwelcome choice for NIF planners, he says: Spend more to make the building cleaner, or accept a device that may operate less efficiently and require expensive maintenance later.

    How much it will cost to solve this and other problems remains unclear. Although some observers say the overrun could be $300 million, DOE sources suggest it will be less. Whatever the cost, Richardson said DOE will not ask Congress for additional funding but instead will divert money from existing DOE and Livermore budgets. Although that approach will be unpopular with researchers whose programs are affected, it should help mute criticism in Congress, which has so far supported DOE's $254 million NIF request for next year.

    Still, lawmakers are unlikely to let these events go unnoticed. At a minimum, says one House aide, the overruns could prompt an audit by the General Accounting Office, Congress's investigative arm. Other staffers are pushing Livermore officials to explain what happened to NIF chief Michael Campbell, who stepped down on 25 August after an anonymous whistleblower informed Livermore brass that Campbell had never finished his Ph.D. dissertation despite claiming to hold a doctoral degree from Princeton University. Indeed, says one aide, Richardson's displeasure may be just the first of a series of new problems facing NIF.


    Burnt by the Sun Down Under

    Kathryn S. Brown*
    *Kathryn S. Brown is a writer in Columbia, Missouri.

    When it's winter in the north and summer in the south, many cold-weary tourists from Europe and North America flock to New Zealand for its wild backcountry and radiant sunshine. They may be getting more than they bargained for.

    On page 1709, scientists at the National Institute of Water and Atmospheric Research (NIWA) in Lauder, New Zealand, report that over the past 10 years peak levels of skin-frying and DNA-damaging ultraviolet (UV) rays have gradually been increasing in New Zealand, just as concentrations of protective stratospheric ozone have decreased. By the summer of 1998–99, peak sunburning UV levels were about 12% higher than they were during similar periods earlier this decade. Experts say that the NIWA study provides the strongest evidence yet that a degraded stratospheric ozone layer causes more hazardous conditions for life on the planet's surface. “They have done about as careful a study as you can do,” says atmospheric physicist Paul Newman of Goddard Space Flight Center in Greenbelt, Maryland.

    Atmospheric scientists first detected the notorious “ozone hole” over the South Pole 14 years ago, the apparent result of chemical reactions caused by chlorofluorocarbons and other pollutants in the stratosphere. Ever since, their calculations have predicted that loss of stratospheric ozone—which acts like a protective sheath around the planet, absorbing much of the harmful UV-B radiation (290 to 315 nanometers)—would let through more of the rays. And not just in the sparsely populated polar regions: Researchers soon began to realize that stratospheric ozone was also thinning above populous midlatitude regions such as northern Europe, Canada, New Zealand, and Australia.

    But nailing the expected relationship between ozone loss and increased UV-B radiation has proven to be anything but simple, says atmospheric physicist William Randel of the National Center for Atmospheric Research in Boulder, Colorado (see Randel's Review, p. 1689). Efforts to find a definitive link have been complicated by the fact that transient environmental features—such as clouds, snow cover, volcanic ash, or pollution—can filter or reflect UV-B. In 1993, for example, James Kerr and Thomas McElroy of Canada's Atmospheric Environment Service reported that winter levels of UV-B radiation reaching Toronto had risen more than 5% a year over the previous 4 years, a rate in step with declining peak ozone levels. But that study came under fire for being too short to detect a trend.

    Now, NIWA atmospheric scientists Richard McKenzie, Brian Connor, and Greg Bodeker have come up with data that appear to clinch the connection between ozone and UV-B in the midlatitudes. They began their study in 1989, positioning their spectroradiometers and other equipment on the ground at Lauder, a rural region on New Zealand's South Island that enjoys unpolluted, cloudless days much of the year. In measurements taken each year since, the team has found that peak summertime UV-B levels have climbed steadily and now stand at least 12% above what they were at the beginning of the study. That agrees remarkably well with the roughly 15% increase the researchers had predicted based on the known decline in stratospheric ozone levels measured since 1978 in Lauder. Meanwhile, the longer wavelength UV-A radiation (315 to 400 nanometers), which is unimpeded by ozone, remained relatively constant.
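    The link between an ozone decline and a UV-B increase is commonly expressed as a power law governed by a "radiation amplification factor" (RAF). The sketch below is a back-of-envelope check, not the NIWA calculation: the RAF value (~1.2, a typical figure for sunburning UV) and the ozone-decline percentage are assumptions, since the article does not quote them.

```python
# Hedged back-of-envelope: surface UV-B scales with column ozone roughly as
#   UV_new / UV_old = (O3_new / O3_old) ** (-RAF)
# For erythemal (sunburning) UV, RAF is typically ~1.1-1.25 (assumed here).

def uv_increase(ozone_decline_pct, raf=1.2):
    """Percent UV-B increase for a given percent decline in column ozone."""
    ratio = 1.0 - ozone_decline_pct / 100.0
    return (ratio ** (-raf) - 1.0) * 100.0

# An assumed ~11% ozone decline with RAF ~1.2 yields a predicted UV-B
# increase of about 15%, the same ballpark as the figures in the article.
print(round(uv_increase(11.0), 1))
```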

    According to meteorologist Jim Miller at the National Oceanic and Atmospheric Administration's National Centers for Environmental Prediction in Camp Springs, Maryland, New Zealand's peak UV-B levels, which are about 20% higher than those that bathe Toronto, could put inhabitants at increased risk of skin cancer, cataracts, and perhaps immune problems. What's more, elevated UV-B levels may perturb marine ecology, killing important algae and bacteria, says Ottawa University ecologist David Lean. Despite the increases, McKenzie notes that UV levels in New Zealand are still lower than levels in unpolluted, low-latitude regions of Australia, Africa, and South America.

    Researchers should have plenty of time to study possible effects in New Zealand and elsewhere. The 1987 Montreal Protocol and its amendments, which restrict the use of ozone-destroying chemicals, have stemmed the flood of damaging pollutants reaching the stratosphere. But it will take decades for the ozone layer to recover, says McKenzie, because chlorine and bromine compounds can hang around in the atmosphere for years. “The problem isn't going to go away until the middle of the next century, at the earliest,” he says.


    Research Lab to Surrender Chimps

    Elizabeth Norton Lasley*
    *Elizabeth Norton Lasley is a free-lance writer based in Woodbury, Connecticut.

    In a move that animal rights activists claim as a victory, the Coulston Foundation, the largest primate research facility in the United States, agreed last week to surrender up to 300 chimpanzees—half its current chimp population—by January 2002. The agreement was reached with the U.S. Department of Agriculture (USDA) in response to charges the agency had brought against the lab in 1998 and 1999, ranging from storing the chimps' food improperly to performing unsafe veterinary and surgical procedures that led to the deaths of several animals.

    As part of a consent decision signed on 24 August, the Alamogordo, New Mexico-based foundation will also allow a USDA-approved external review team to examine its operations and records and has agreed to implement that team's recommendations. Furthermore, the foundation has agreed not to breed or buy any new chimps, to employ an adequate veterinary staff, and to handle the animals in a way that does not cause them “behavioral stress, physical harm, and unnecessary discomfort.”

    The Coulston Foundation is a private breeding and research facility supported by the National Institutes of Health (NIH); it conducts research on AIDS, toxicology, spinal cord injury, and vaccine development. The lab also houses chimps left over from the U.S. Air Force's space program and uses them in research (Science, 22 May 1998, p. 1186).

    According to USDA spokesperson Jim Rogers, the lab paid a $40,000 penalty in 1996 to settle an investigation into the deaths of seven animals. But in 1997, after two more chimps had died, the USDA started a new investigation, leading to a formal complaint in 1998. This complaint was amended this year to include the deaths of three more chimps. By signing the consent decision, the foundation has ended USDA's investigation without admitting the charges. But animal rights activists, who have attacked the foundation for alleged mistreatment of its animals, are claiming vindication. Eric Kleinman, a spokesperson for In Defense of Animals (IDA) in Mill Valley, California, says that the Coulston Foundation lacks the staff and the resources to look after its 600 chimpanzees.

    “The Coulston lab has a history of problems in this area,” adds Roger Fouts, co-director of the Chimpanzee and Human Communication Institute at Central Washington University in Ellensburg. Fouts is also on the board of directors of the Center for Captive Chimpanzee Care, a sanctuary for retired research chimps that has filed a suit protesting the Air Force's decision to hand over its chimps to Coulston.

    Frederick Coulston, the foundation's president, declines to discuss the USDA's charges, but says implementing the agreement “will result in a better foundation.” Coulston also says the lab's research is continuing. It is a contractor for an NIH-sponsored study of benign hypertrophy of the prostate—a condition that causes urinary difficulty in older males—in some 100 chimps. According to the consent decision, the USDA may reduce the number of animals to be transferred out of the facility “based on changes in research needs and funding.”

    The consent decision does not specify where the chimps should go. The lab has already started giving away some chimps to an animal sanctuary, Coulston says. But he notes that the destinations have to be chosen carefully, because sanctuaries are not bound by the Animal Welfare Act, and they are not subject to government oversight. Fouts is also concerned about the chimps' fate. “There are only a few groups that can take in chimps, and none with 300 vacancies,” he says.

    “You can't give them to just anyone,” Coulston agrees.


    Array Plans Blocked by Indian Ritual Site

    Mark Muro*
    *Mark Muro writes from Tucson, Arizona.

    TUCSON, ARIZONA—Two cosmologies have collided on telescope-dense Mount Hopkins south of here—and the loser for now appears to be the Smithsonian Institution's plan to build the largest array of ground-based gamma ray telescopes in the world. On 31 August the U.S. Forest Service rejected the Smithsonian's request to build a $16.6 million telescope array on national forest land near the base of the mountain, citing the proximity of a Native American sweat lodge. “Those folks let us know they did not think the telescopes were compatible, and we made a tough call,” says John McGee, supervisor of the Coronado National Forest.

    Gamma rays emanate from the most powerful and mysterious phenomena in the universe—quasars, supernovae, and the black hole-powered infernos called blazars. Even though they are blocked by the atmosphere, they can be studied from the ground using a technique that scientists from the Smithsonian's Whipple Observatory pioneered at Mount Hopkins in 1968. Gamma ray photons slamming into the atmosphere create a cascade of charged particles, which emit a faint glow of light, known as Cerenkov radiation, that carries clues about the energy and direction of the original gamma ray photon. Whipple scientists have been observing the Cerenkov glow with a single 10-meter optical reflector. They had hoped to maintain their world leadership with a seven-reflector array of 10-meter optical dishes, called VERITAS, for Very Energetic Radiation Imaging Telescope Array System, funded by the Smithsonian, the Department of Energy, and the National Science Foundation.

    Their preferred site, a secluded 4-hectare parcel not far from the observatory's existing base camp, offered excellent shielding from light pollution from the valley below and already has roads and power service, significantly lowering costs. However, it lies less than 1000 meters from a small earth-and-log sweat lodge operated by a Tucson-based group of American Indians called To All Our Relations. The Indians, with the support of at least four Arizona tribal governments, say the array would ruin the lodge's sanctity and disrupt the Indians' twice-monthly traditional steam ceremonies and cleansing rites, in violation of the American Indian Religious Freedom Act of 1978.

    More pointedly, Cayce Boone, the 46-year-old Navajo who founded the lodge and obtained a Forest Service permit for it 9 years ago, declared recently that “gamma ray activity and our spiritual practices are not compatible.” He cited a 1996 executive order requiring federal agencies to “avoid adversely affecting the physical integrity of Indian sacred sites.” The Forest Service appears to have deferred to Boone's concerns in rebuffing the Smithsonian. “There were other factors, such as an emphasis on grazing and wildlife habitat in that area, but the sweat lodge was a significant factor,” McGee said.

    The decision has left the Indian group jubilant and the scientists struggling to find an alternative site. “We feel great: This sets a precedent that you can't just roll over Indian people with these projects,” declares Boone, a technician for a Tucson cable television network. By contrast, Trevor Weekes, principal investigator for the Whipple project, frets that the ruling could cause his group to be eclipsed by at least three other gamma ray observatories from Japan and Germany that are already approved and under construction (Science, 30 April, p. 734). “We're very disappointed, because we've been leading this field and now we're on hold while our rivals move ahead,” Weekes says.

    There is a chance, however, that astronomers may be able to proceed with VERITAS—possibly even at Mount Hopkins. Weekes says he hopes to discuss with the Forest Service the feasibility of two alternative sites in the vicinity of the present Whipple base camp. Although both sites suffer from rougher ground and greater exposure to transient light, they retain most of the cost savings of the original plan. Another possible site in Mexico would be considerably less accessible, he notes.

    Even better, both of the Arizona alternatives lie more than a mile from the sweat lodge. Smithsonian officials hope that distance will allow Trevor Weekes's high-energy view of the universe to coexist with Cayce Boone's more traditional one.


    A New Way to Combat Therapy Side Effects

    Dan Ferber*
    *Dan Ferber is a writer in Urbana, Illinois.

    For decades, physicians have been treating cancer with chemotherapy and radiation, and for decades, the side effects have been brutal. Because the treatments damage healthy tissues even as they kill tumor cells, patients develop anemia, infections, vomiting, diarrhea, and other problems. These side effects can be so severe that they prevent patients from receiving effective treatment. Now, by capitalizing on their knowledge of a powerful tumor suppressor gene, researchers may have found a better way to ease side effects in some patients.

    On page 1733, a team led by Andrei Gudkov of the University of Illinois, Chicago, reports that it has identified a novel compound that protects mice against side effects induced by radiation and allows them to withstand what would otherwise be lethal radiation doses. Other known compounds that help protect healthy tissue from cancer therapies have only limited effects, for example, helping restore the bone marrow's ability to make red blood cells. But because of its unusual mechanism of action, the new compound, a small organic chemical called pifithrin-α (PFTα), may protect all vulnerable tissues.

    Defense line. By interfering with p53's ability to induce apoptosis and growth arrest, PFTα may protect cells against cancer therapy side effects.


    PFTα works by blocking a protein called p53. When cells are poisoned by chemotherapeutic drugs or barraged by radiation, p53 spurs them either to commit suicide or to go into growth arrest. People whose tumors contain an active p53 gene won't be eligible for the drug, because it could help their tumors fight the therapy, too. But in about 50% of all human cancers, the p53 gene is inactivated, and PFTα could help people with such tumors endure higher, possibly life-saving doses of radiation or chemotherapeutic drugs. It's a “beautiful” paper, says molecular biologist Scott Lowe of Cold Spring Harbor Laboratory in New York. “A lot of people are going to say, ‘Gee, why didn't I think of that?’”

    To come up with their drug, Gudkov's team reversed conventional thinking about the p53 gene. Loss or inactivation of the gene is thought to be one of the genetic changes leading to cancer, presumably because it contributes to loss of growth control in tumors. That's led researchers to try to restore p53 to the tumors lacking a functional copy. But earlier results had shown that the protein also mediates the side effects of cancer therapy. For example, the healthy tissues of p53-deficient mice suffered less damage from gamma irradiation than the healthy tissues of normal mice.

    That meant that blocking p53 could potentially prevent side effects—but only if it could be done without triggering the formation of additional tumors. Many researchers doubted that was possible, but Gudkov says he gambled that a drug that blocks p53 “temporarily and reversibly” would do the trick. But first the researchers had to find such a compound. None was available, he says, “because nobody was ever interested in suppressing p53.”

    Because p53 turns on cell-suicide genes, the team devised a cultured cell system they could use to screen rapidly for compounds that block this activation. Several dozen of the 10,000 synthetic chemicals so tested had the desired effect, and about one-fifth of these were not toxic to cultured cells. One member of this group—PFTα—looked particularly promising. It blocked apoptosis triggered by radiation as well as by four chemotherapeutic drugs, and it also inhibited growth arrest induced by radiation. But it had no effect on the responses of p53-deficient cells—an indication that it works as postulated. “That satisfied us a lot, because it was what we expected,” Gudkov says.

    The Gudkov team went on to test PFTα in mice barraged with a near-lethal dose of gamma radiation. “Amazingly,” he says, “a single injection rescued [normal] mice completely” from a radiation dose that usually kills 60% of the animals, while having no effect on p53-deficient animals. What's more, the treated mice have survived more than 8 months—about half the normal mouse life-span—and none have developed any tumors.

    The group has begun testing PFTα to see if it also protects mice from chemotherapeutic drugs, and Gudkov says the “preliminary data are promising.” Moreover, other potential p53 inhibitors are in the pipeline. Still, before any of them can be used in the clinic, more long-term animal studies are needed to make sure that the drugs don't induce tumor formation or have other dangerous side effects, warns medical oncologist Ronald Bukowski, director of experimental therapeutics at the Cleveland Clinic.

    But if the new compounds pan out in humans, it would be great news for cancer patients. “What it means is that there may be … a selective way to decrease side effects and give optimal doses of treatment,” Bukowski says. “That's what we're all looking for.”


    Hubble Snaps Some Moving MACHOs

    Govert Schilling*
    *Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    The name may suggest swagger, but MACHOs have been frustratingly reclusive. For the past decade, teams of astronomers across the globe have been scouring the halo of old stars that surrounds our galaxy for MACHOs (massive compact halo objects)—hypothetical objects that might make up the invisible “dark matter” pulling on the galaxy's visible stars. The results have been tantalizing but ambiguous. Now a team of astronomers led by Rodrigo Ibata of the European Southern Observatory (ESO) in Munich, Germany, believes it has captured five candidate MACHOs on camera.

    By comparing two Hubble Space Telescope images that probe the distant universe, Ibata's team found the five objects moving very slightly in the foreground. The team believes the objects may be old white dwarf stars—dim, burned-out stars—in the halo. If the numbers can be extrapolated to the entire halo, “it would be a fair statement to say that at least part of the dark matter mystery has been solved,” says team member Harvey Richer of the University of British Columbia in Vancouver, Canada. But other astronomers caution that vast numbers of white dwarfs would create their own problems for theorists. And Ken Freeman of Mount Stromlo Observatory in Canberra, Australia, cautions: “It is not clear at this stage exactly what the MACHO objects are, so I am not sure if this is the first time that MACHOs have been imaged directly.”

    Thus far, the standard technique for MACHO searching has been gravitational microlensing. Astronomers monitor stars in the Large Magellanic Cloud, a companion galaxy to the Milky Way, watching for the flicker that might indicate that the gravity of an unseen object passing between a star and Earth has slightly focused the star's light. These efforts have detected almost 20 candidate MACHOs, and many astronomers believe MACHOs may account for a sizable chunk of the galaxy's dark matter. But others say the lensing objects could lie outside our galaxy, in the Large Magellanic Cloud itself (Science, 17 July 1998, p. 332). And no one knew what MACHOs were—extremely dim stars, stray giant planets, or even small black holes.

    Ibata, Richer, and Douglas Scott, also at the University of British Columbia, decided to see if they could spot MACHOs directly in a Hubble image called Deep Field North. This image, an exposure made over several days in December 1995, shows the faintest objects ever recorded by astronomers, including galaxies in the far reaches of the universe. A single image could not reveal whether any of the objects were MACHOs, but in a second image, any object orbiting in the halo of the galaxy would probably betray itself by moving across the sky.

    In 1997, the group learned that Ron Gilliland of the Space Telescope Science Institute in Baltimore, Maryland, planned a second deep image of the same spot to search for extremely distant supernova explosions. Gilliland agreed to share data from the image, made in December 1997. “I did find two supernovae,” he says, “but in the end, the search for moving objects turned out to be the more important project.” In a paper to appear later this year in Astrophysical Journal Letters, Ibata and his colleagues list five faint, bluish objects that changed position between December 1995 and December 1997. Two of them display a substantial proper motion (about 1/20th of an arc second in 2 years), while the other three are “right on the detection limit,” says Richer.
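The reported motion can be turned into a rough velocity check. The sketch below is a back-of-the-envelope calculation, not from the article: it assumes a distance of 2,000 light-years (the team says only "a few thousand") and uses the standard conversion factor of 4.74 km/s per arcsecond-per-year at one parsec.

```python
# Illustrative back-of-the-envelope check (assumed distance, not from the article):
# convert the reported proper motion into a transverse velocity.

LY_PER_PC = 3.2616          # light-years per parsec
KMS_PER_ARCSEC_PC = 4.74    # km/s for a proper motion of 1 arcsec/yr at 1 pc

distance_ly = 2000.0                    # assumption: "a few thousand light-years"
distance_pc = distance_ly / LY_PER_PC   # ~613 pc

mu = (1.0 / 20.0) / 2.0                 # 1/20th of an arcsecond over 2 years, in arcsec/yr

v_kms = KMS_PER_ARCSEC_PC * mu * distance_pc
print(f"transverse velocity \u2248 {v_kms:.0f} km/s")
```

The result, roughly 70 km/s, is consistent with the speeds expected of objects orbiting in the galactic halo, which is why a measurable drift between the two exposures is plausible for halo white dwarfs but not for distant galaxies.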

    He and his colleagues suspect the objects are old, dim white dwarfs at a few thousand light-years' distance. Although it is hard to extrapolate accurately from such a small population, if this number of white dwarfs were scaled up to the whole of the galaxy, the total would be on the order of a few trillion—in good agreement with the microlensing studies. Assuming that the moving objects are white dwarfs, “the whole picture is self-consistent,” says Richer.

    ESO's Peter Quinn notes, however, that swarms of white dwarfs surrounding galaxies like our own would create astrophysical problems. If they formed in the usual way, from sunlike stars that grew old, shed their atmospheres, and then cooled, the process would have enriched the universe with far larger amounts of elements heavier than hydrogen and helium than are seen. Ibata and his colleagues hope to confirm their proper motion measurements once Hubble has taken a third Deep Field North image next December. Other researchers have spotted what they think may be white dwarfs in a southern Deep Field image, and Ibata's team hopes to make a second image of the same spot to see if they are halo objects.

    Definitive confirmation may come from wide-angle surveys with large ground-based telescopes, says Gilliland. “If much closer members of this population are found, they could be studied spectroscopically to determine their true nature,” he says. Richer and his colleagues have their fingers crossed: “If the ground-based surveys don't find them, our scenario is not correct,” he says.


    View From the Top of a Biomedical Empire

    1. Eliot Marshall

    As politicians square off for another fight over social spending, Harold Varmus discusses how NIH has achieved exceptional growth recently, and how its organization could be improved in the future

    Six years ago, Harold Varmus arrived in Bethesda, Maryland, a very green new director of the National Institutes of Health (NIH). Although he had achieved distinction as a basic scientist—sharing a Nobel Prize in 1989 for research on genes that cause cancer—his administrative skills were largely untested. He had never run an organization bigger than his 20-person lab at the University of California, San Francisco. The challenges would be immense even for a seasoned administrator. Bureaucratic and fraught with internecine rivalries, NIH is the world's largest basic research center. In 1993, when Varmus took over, it employed more than 17,000 people and boasted a budget of $11 billion—and it was clearly in distress.

    The NIH research hospital, the Clinical Center, was falling apart. The building had long been slated for replacement, but Congress was wary of releasing the hundreds of millions of dollars the new construction would cost. Hampered by a tight budget, NIH was pinching its funds to support extramural grants. At the same time, many of NIH's best and brightest intramural scientists were fleeing the Bethesda campus for jobs in academe and industry. Varmus himself and two colleagues wrote in 1993 that they were concerned about “outmoded procedures” at NIH and threats to the “long-term viability” of U.S. biomedical research (Science, 22 January 1993, p. 444). Given the many problems, observers wondered whether it made sense to tap Varmus, a devotee of “pure research” and a novice in politics, to lead NIH in this difficult period.

    Today, the record suggests that such worries were exaggerated. NIH's reconstructed Clinical Center is on course for completion in 2002. NIH is enjoying in 1999 not only the largest federal appropriation ever, but also the largest 1-year increase—a boost of $2 billion, for a current budget of $15.6 billion. The overall “success rate” for extramural grants—the percentage of approved investigator-initiated applications that get funded—is as high as it has ever been, heading toward 33% this year, up from 23.6% in 1993. Many small but destructive “brush fires,” as Varmus called them in an interview with Science in 1993—controversies over employment discrimination, sexual harassment, scientific misconduct—seem to have fizzled out. New leaders have taken over key posts, like the long-vacant directorship of the National Institute of Mental Health, now filled by former Harvard professor Steven Hyman. Whether all this makes Varmus the Clinton Administration's “most effective backstairs politician,” as a New Yorker article called him in June, or the lucky heir to favorable politics is debatable. But the record suggests that at least part of the credit is his.

    At a meeting last month, Science invited Varmus to reflect on what he sees as his major accomplishments at NIH, his disappointments, and some ideas about NIH's future. A condensed version of his conversation with Science's news and editorial staff, edited for clarity and concision, follows.

    Reviewing NIH's budget, Varmus argues that his decision to seek modest growth in the mid-1990s established the agency's credibility. This, he says, paid off handsomely in larger appropriations when the U.S. budget deficit began to fade in 1999. He's also proud of his record in recruiting new scientists to NIH, including 12 institute directors. His concerns include the fragmentation of NIH's administrative structure, now divided into 25 major institutes and offices. He also regrets that the NIH director does not have adequate authority to launch major scientific initiatives on his own. And he wishes his efforts to rationalize the scientific jurisdictions of the review panels that rank grant applications for NIH had moved along more rapidly.

    The only subject Varmus placed off limits was his own future. He declined to talk about the rumor that he is being considered for the presidency of the Memorial Sloan-Kettering Cancer Center in New York City. When asked about his plans, Varmus would only say, as he has many times before, that his tenure is determined by the president. However, he has also said in the past that he considers a term of “about 6 years” about right for an NIH director.

    Q: NIH got a 2% increase in the president's 2000 budget. Are you expecting more?

    A: I think it's not unreasonable to envision an increase of approximately 10%. But at this point, it is very dangerous to make that kind of prediction. … It's obvious that the committees that are responsible for paying us are short of cash. … We don't want to see social programs and educational programs that are supported by those committees suffer just to pay us. … I think the mood of the country is to support science, and when push comes to shove, there will be some kind of omnibus, emergency appropriation that allows NIH to be reasonably supported, and, hopefully, other science as well.

    Q: Is there an upper limit on NIH's budget in your mind?

    A: If we turn our attention to neglected areas [undersupported grants, infrastructure, instrumentation] and exploit new opportunities … we would accommodate very well a doubling over a period of about 5 years. … What concerns me more is the structure of NIH. NIH has prospered in part because of the strong advocacy we've enjoyed from members of Congress and the public. … On the other hand, this has been responsible for making an institution that has become much more complex over the years. … It makes me think that at some point there will have to be a moment [when everyone says]: Whoa, let's have a look at how NIH is organized. Let's try to simplify that organization, make it more rational, more manageable.

    Q: What would the ideal NIH look like?

    A: The proposal I threw out for the sake of argument [at Jackson Laboratory in Bar Harbor, Maine] was one in which there were roughly six institutes, of roughly comparable size [embracing internal medicine, cancer, environment and infectious diseases, human development and aging, brain and behavior, and an NIH central]. The central component would be the home of the NIH director and would include the clinical center and the review groups. It would also undertake special initiatives that do not become part of the commitment base but are expected to come and go. And it would provide also a little more scientific authority for the director. One of the things we have to be concerned about in the future is what kind of authority and leadership the NIH director can command. So much goes on in categorical institutes that there is a danger of having … very little for the NIH director to do but maintain conformance with broad policy issues. …

    Q: How would that structure improve things?

    A: It would allow more flexibility in budget formulation. Priority setting wouldn't be quite as focused on what percentage increase each institute gets each year. Movement between fields would occur more effectively.

    Q: Is the NIH director's authority more limited than you expected?

    A: It's mixed. … There is inevitably a kind of tension, because the institutes have independent authorities and independent budgets. I certainly have had the sense that it is difficult to follow through on scientific initiatives that I try to get under way myself. A good example might be malaria, where I've tried to push funding and develop international consortia. … It is a slightly frustrating experience. Without a laboratory of my own I think I would feel pretty starved for scientific conversation.

    Q: What accomplishments have given you the most satisfaction?

    A: One is surviving a period in which the budget prospects looked very bleak. There was much amusement about my early speeches in which I emphasized living within a “steady state.” But I do think that was a useful position to take. It was realistic. I think it gave NIH credibility. We were willing to say: Look, the budget situation is tough; if you can give us some reassurance of stability, that we're not going to get cut, we will try to live within reasonable means. I think that was a smart approach. With that reasonably high level of credibility within Congress and the Administration, we prospered when the economic status of the country improved.

    Second, I would point to recruitments. I think I've been incredibly successful, and lucky, in not just attracting some good talent, but imbuing the scientific community with a sense that public service is something that people should look forward to in their career. A good scientist should say at some point, there is an opportunity here to pay back the federal government. And you can have a tenure in public office that is enjoyable, not inconsistent with thinking about science, and not even inconsistent with practicing some science. … All those things have affected my ability to recruit institute directors, scientific directors, lab chiefs. … We have had some colossal successes. … The spirit in Bethesda is quite different.

    We've also had success in trying to cope with the erosion of support for clinical research. One of the biggest issues that faced me when I came to NIH was the dilapidated condition of the clinical center, a sense that clinical research had had its day. … Well, as you can see, we are building a wonderful new building. Clinical research has been given important lifts from new training that affects the extramural community and from a number of programs we've started intramurally.

    Q: Is it healthy for a person to be director of an institute for 20 years?

    A: Not necessarily. [But] it may not be wrong. One of the things that I've done that may not have gotten as much attention as it should is that I've instituted 5-year reviews. I have a group of five or six people who go out and solicit opinions from a very broad swath of people who are affected by the institute and then I and the chair of the committee get back to the institute director the opinions that have been collected. Those opinions have made a difference. … My personal feeling is that institute directorships should not be considered lifetime entitlements. The most healthy situation would be for people to come and do those jobs for 5 or 10 or 12 years. Less than 5 years is probably too short a time to have an imprint. As in any way of life, change is usually a good thing.

    Q: How substantial are the changes you've made in peer review?

    A: We have made some big improvements. Looking back on what I thought was important 5 or 6 years ago, there's no doubt that we've streamlined peer review in at least two important ways. First, by doing triage. On the whole, in 98% of the cases, it's been a very successful enterprise. Second, by implementing a modular grant formula so that you don't have to be so precise about your budget. You can submit your budget in $25,000 increments. That's clearly the wave of the future. It's essentially going to be implemented across the board. We've had some improvement in setting criteria for review, especially in putting innovation—novelty—into the review process. … We've made a limited effort to incorporate the public into the review process. I started with a degree of skepticism. But in certain areas—for example, clinical trials—it's quite appropriate to have people who represent health care consumers. …

    Q: What about unfinished work?

    A: There are a couple of areas where I think things have happened more slowly. Reorganizing the peer review study sections. That's now coming along. [National Academy of Sciences president] Bruce Alberts and his colleagues are doing the “boundaries report” [remapping the scientific jurisdiction of panels] and have published a policy forum in Science (Science, 30 July, p. 666). I think we're on the right track. … The other thing that's been disappointing to me is electronic review of grant applications. I thought we could get electronic submission and review into practice much more quickly than we have.

    Q: The Administration's computing initiative has fallen flat with Republicans on the Hill. Are you concerned that the NIH computing initiative could have similar problems?

    A: No … I think we are prepared to make the case, and would be listened to by our appropriators, that computer science is undersupported, and that the flood of new genomic data and from imaging and other new fields of biological science demand personnel and software and hardware that we don't have. The competition from industry for talent and the undervaluing of computer scientists by our grantee institutions are problems that we need to rectify. … This requires at least bidisciplinary training in computer science and biology. … I'm reluctant to say, here is a request for $100 million. What we are saying is that [the computing initiative] is a consortium of many institutes—at least 10, maybe 15 new centers—to train and support computer scientists who know biology. It's a little bit here, a little bit there. There isn't going to be one package that can be cut.

    Q: The Defense Advanced Research Projects Agency (DARPA) gives program chiefs great latitude to create new initiatives. People suggest that NIH and the National Science Foundation should adopt a DARPA-like philosophy. What do you think?

    A: I've been advocating that myself the last few years. … The initiatives we're thinking about are much more expensive than the traditional grant and would be undertaken by teams. One novel effort is being made by the National Institute of General Medical Sciences where [director] Marvin Cassman has come up with “glue grants” in which investigators who work on a single problem—in one case G proteins—form a complex network and ask for money to support the whole endeavor. This group then identifies common needs and asks for money to help support building databases, getting certain structures done.

    Q: NIH plans to distribute scientific articles through PubMed Central. Suppose publishers went to Congress and complained that NIH is trampling on private enterprise: How would you explain the policy?

    A: We support the research; we want the research findings to be available to our communities in the easiest, most searchable way. If technology has given us the tools to promote the dissemination of the information, I think we should use them. … We have an opportunity here to put anybody who has a desktop computer in contact any time day or night with the current scientific literature. That seems to me a very important public service.

    Q: Early this year, NIH enunciated a controversial policy of supporting the research use but not the derivation of human embryonic stem cells. If you could do it over, would you handle that policy any differently?

    A: No … I solicited a legal opinion [from the Department of Health and Human Services] on what we could do legally. It is not my view necessarily of what should happen. … I think we should be supporting research with embryonic stem cells. I also think we should be supporting research on the derivation of stem cells from spare embryos. It may not be politically appropriate to do that at this point. But that is my view, and it is a consistent view. … I think any ethical evaluation has to take into account the consequences of not doing research that would benefit living people who have serious diseases now or in the future. I take that responsibility very seriously.

    Q: How useful is the Council of Public Representatives (COPR, a consumer advisory group created this year at Congress's urging)?

    A: That's hard to evaluate, because it has only met once. … When we advertised the positions on the council, we got hundreds of applications from interesting and energetic people. Rather than say you're in, you're out, I said you're all in. Just 20 of you are on the council and the rest of you are COPR associates. And I would like to build on that cadre. … They are our advocates out there in the community, and sometimes our critics. That's all right. I would like to expand the COPR associates to thousands, hundreds of thousands. Why not? It's a way of building an advocacy community that's really paying attention to the details.

    Q: On AIDS research: The field seems to have plateaued. Is it being funded well enough?

    A: I would say it's not a static situation. We are increasing our investment in vaccine development tremendously. And that also involves a very serious investment in immunology. I think we have here a novel problem in immune response, but one that will be applicable to vaccines against tuberculosis, malaria, hepatitis C, other organisms that coexist with a host even though there is a partial immune response. Secondly, we are making a greater effort in behavioral research in AIDS. Right now the emphasis has got to be on prevention. … At the same time we are recognizing that there are serious deficiencies in even the quite good drugs that have been developed. There is much more interest in drug design than there was a few years ago, in understanding the nature of viral resistance to drugs and how drug combinations work together. It is a changing field; it's not static at all … although the AIDS budget is no longer rising faster than the overall NIH budget.

    Q: How long will you stay in this job?

    A: I'm a presidential appointee. …

    Q: Any regrets about what you were not able to do because you were in Washington?

    A: My [scientific] productivity probably could have been greater. Actually I've had a very good lab experience. … I would rather have a little more time to read scientific literature.

    Q: Has it affected your physical training?

    A: Washington has been great for my physical training, because I live 12 miles from NIH. I ride [my bike] to work almost every day. I've also taken up sculling. The ideal day begins for me with a ride to the boathouse, about 4 miles, then I row for 45 minutes, then I ride my bike out to [NIH], then ride home after work through Rock Creek Park. That's a beautiful day.


    Joint Projects Allow a Peek Into an Impoverished System

    1. Michael Baker*
    1. Michael Baker writes from Seoul.

    A new wave of collaborations could give North Korean scientists a chance to end their historic isolation and to bolster a civilian research sector long starved for resources

    SEOUL, SOUTH KOREA—In a laboratory somewhere in North Korea, a prototype high-tech machine is under construction. The lab is plagued with power cuts and all the equipment is outdated, but the researchers forge ahead, assembling the frame with welders and metal files instead of precision-controlled machine tools. As they put the prototype through its paces, the engineers monitor such variables as pressure, voltage, and temperature—in other countries, a simple task for a desktop computer. Here it takes three workers: One watches the needle of an analog meter, another holds a stopwatch, and a third records the results on a pad of paper.

    That scene, described by recent visitors to North Korea, who withheld details about the project to protect team members from possible recrimination, seems bleak. Indeed, North Korea is one of the most difficult places in the world to do science. Like many communist countries, it has a large pool of scientists and technicians. But the end of the Cold War snapped years of active ties with East Bloc countries and triggered a decade-long recession that has been worsened by a 3-year-old famine. The combination has brought this country of 22 million near collapse. While defense research is well funded, the rest of the scientific community goes without equipment and basic necessities. Although most still report to work each day, says one foreign scientist who requested anonymity, “they have lost conviction and will.”

    The country's dire straits haven't reduced respect for scholarship. Proclamations by “Dear Leader” Kim Jong Il encourage the young to pursue science, and a calligraphy brush accompanies a hammer and sickle in the country's communist emblem. A constitutional revision in 1998 noted the importance of research to national development, and high-ranking scientists can still expect to earn a decent living, with a house and car.

    But most North Korean scientists have little opportunity to earn the respect of their peers abroad. “Twenty years ago, North Korean scientists and professors were excellent because they studied in the USSR, Hungary, East Germany, and Romania,” says a foreign scientist familiar with North Korea. Those countries also top the list of scientific collaborators, according to a database of articles in international journals maintained by the Institute for Scientific Information in Philadelphia, although the numbers are small—36 papers with North Korean authors since 1981, and only 16 this decade, with none in the last 18 months. Now only the children of diplomats have the privilege of studying abroad. College graduates are educated at the level of Chinese high school students, he says, and the equipment in many universities is 30 or more years old. In addition, for the past 4 years scientists and other white-collar workers have been required to spend 2 months a year doing physical labor in the countryside.

    Lacking Internet access, most journals, and links to colleagues around the world, North Korean scientists often tackle questions that have been solved elsewhere years earlier. “They end up doing everything from scratch … trying to reinvent well-understood technologies,” says another foreign scientist.

    Although it would take a major political upheaval to fully open up the country to outsiders, there are some channels of communication. Sympathizers in Japan and elsewhere occasionally send information, and the Pyongyang Informatics Center in the nation's capital even has a branch in Singapore that supplies journals from the Institute of Electrical and Electronics Engineers. Some North Koreans are still allowed to travel, and a few experimental joint projects are under way, particularly with the country's southern neighbor. They involve areas closely linked to the country's survival—notably agriculture and energy—where the official animosity toward the South takes a back seat to economic necessity.

    Kim Soon-Kwon, an agricultural scientist from Kyongpook National University in Taegu, South Korea, has teamed up with the North Korean Academy of Agricultural Sciences to set up 17 research stations across North Korea to breed varieties of high-yield corn and disseminate improved farming methods that might eventually help to end the famine. Kim provides the stations with everything from pollination bags to computers using funds raised by his International Corn Foundation, which this year expects to donate $1 million. He says he's impressed with the quality of the younger scientists and rewards their hard work with bicycles.

    The results are already noticeable. In the past 18 months, various teams have introduced intercropping techniques to improve soil fertility, taken antierosion measures, applied thousands of tons of imported fertilizer, and grown high-yield seeds bred in South Korea for similar ecological niches. “The sustainability of North Korean food production has really been improving, especially in the uplands,” says Kim.

    Such positive results are the key to continued collaboration, he adds. Pyongyang is unlikely to back a project “without a special target,” he says. Kim's success should also help an effort begun in June to improve potato production led by Chung Hyuk, a South Korean potato specialist at the Korean Research Institute of Bioscience and Biotechnology in Taejon.

    The next North-South project may involve a computer scientist who says that North Korean programmers represent a large untapped pool of talent. They have produced impressive programs for medical diagnosis, accounting, and “edu-tainment,” says Park Chan Mo, a professor at Pohang University of Science and Technology. Park, who says most programmers have only recently learned C++ after working for years in BASIC, COBOL, and Fortran, wants to set up training courses to convert North Korean math teachers into computer science teachers and to develop the North's software industry. The idea is to make use of the two countries' common language and the huge supply of cheap, well-trained labor.

    U.S. environmental activists have also begun building ties. A Berkeley, California-based nonprofit called the Nautilus Institute believes that one major cause of the famine is an energy shortage triggered when China and Russia began cutting back on fuel subsidies. Nautilus, which tracks Asian security issues, first made contact with North Korean leaders in 1991, and last fall, after 4 years of up-and-down negotiations with its Korean counterpart, a team installed seven advanced wind generators in a small village. In addition to their goal of supplying power for a clinic, irrigation pumps, and other vital functions, project leaders hope their North Korean counterparts, who are maintaining the generators, will adapt and disseminate the new technology throughout the country. “The engineers and technicians we met were well-educated, nimble-minded, and eager learners,” wrote Jim Williams in the May/June issue of The Bulletin of the Atomic Scientists.

    The most active area of science and technology in North Korea—military research—is also the most secretive. The country has developed and tested long-range missiles that could reach Japan, and it may be working on one that could threaten parts of the United States. It is also believed to have shared weapons technology with other countries. In June, Indian customs officials intercepted a North Korean ship carrying crates of blueprints, parts, and tools for making Scud missiles that was presumably headed to Pakistan. That country's Ghauri missile is thought to be an exact copy of North Korea's Rodong missile, which is based on the Russian Scud.

    Ironically, the best hope for the country's neglected civilian sector may be outside pressure to curb its military activities, combined with its desperate food situation. In April North Korea allowed U.S. inspectors to visit an underground site suspected of being used to make nuclear bombs in exchange for technical help in growing potatoes. Such assistance could pave the way for stronger scientific ties with the outside world for this closed and secretive country.


    The First Step to Heaven

    1. James Glanz

    Every effort to survey cosmic distances relies on a common yardstick, found in our own neighborhood. But astronomers can't agree on its length

    In the old Led Zeppelin song, there's a lady who is buying a stairway to heaven. Astronomers who measure distances across the glittering heavens must wish it were that easy to get what they're looking for. They have a staircase of sorts: Distance indicators that work within our galaxy and its immediate environs measure the first step, which sets the size of every subsequent step out into the cosmos. But while astronomers have gotten better at counting the distant steps, and thus getting relative distances in the distant universe, they have been unable to beg, borrow, or steal a final answer for the size of the very first step.

    The almost absurd tininess of that step, cosmically speaking, is galling all by itself: The distance is the short hop between us and the Large Magellanic Cloud (LMC). One of two dwarf galaxies hovering on the outskirts of the Milky Way, it's plainly visible as a twinkling wisp in the sky of the Southern Hemisphere. In the favored units of astronomy, the LMC is somewhere between 40 kiloparsecs (or 40 kpc, about 130,000 light-years) and 60 kpc away. But because the LMC is where astronomers prepare for their next step out into the cosmos by calibrating a set of cosmic surveyors' beacons known as Cepheid variable stars, that 40% uncertainty propagates outward as far as the cosmic distance ladder can reach.
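The arithmetic behind that figure is simple, and a small illustrative sketch (the 20 Mpc target galaxy is an assumed example, not from the article) shows why the uncertainty matters so much. Cepheid-based distances scale linearly with whatever LMC distance is adopted, so the fractional spread in the first step carries unchanged to every rung above it.

```python
# Illustrative arithmetic (assumed example galaxy): how the 40-60 kpc range
# for the LMC translates into the quoted ~40% uncertainty, and how that
# spread propagates to a galaxy calibrated via LMC Cepheids.

d_low, d_high = 40.0, 60.0          # kpc, the range of published LMC distances
d_mid = (d_low + d_high) / 2.0      # 50 kpc midpoint
spread = (d_high - d_low) / d_mid   # 0.4, i.e. the "40% uncertainty"

# A galaxy placed at a nominal 20 Mpc using the midpoint calibration
# could really lie anywhere in this interval:
nominal_mpc = 20.0
lo = nominal_mpc * d_low / d_mid    # 16 Mpc
hi = nominal_mpc * d_high / d_mid   # 24 Mpc
print(f"spread = {spread:.0%}; 20 Mpc galaxy could be {lo:.0f}-{hi:.0f} Mpc away")
```

Because the Hubble constant is inversely proportional to these distances, the same 40% spread feeds directly into estimates of the universe's expansion rate and age.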

    Because cosmic distances are critical to calculating the age and expansion rate of the entire universe, “the consequences of that uncertainty are enormous,” says Barry Madore of the California Institute of Technology in Pasadena, a member of the Hubble Key Project, which aims to measure the cosmic expansion rate, or Hubble constant. Madore and his colleagues can now find relative distances far beyond our galaxy with such precision that their latest Hubble constant results claim a formal uncertainty of only ±10%, for a 20% spread (Science, 28 May, p. 1438). But the doubts about that first step in the ladder mean that the central value could change by more than that. As Alistair Walker, an astronomer at the Cerro Tololo Inter-American Observatory in La Serena, Chile, writes in a forthcoming review, “If we cannot agree upon the distance to two galaxies that are only a few tens of kpc from us, how can we be sure of the distances to more remote galaxies?”

    Measuring the measures. Different indicators give a wide range of distances to the Large Magellanic Cloud:

    1. RR Lyraes (Layden et al.; Udalski et al.; Popowski and Gould)

    2. RR Lyraes (Reid; Gratton et al.)

    3. Red clump stars (Udalski; Stanek et al.)

    4. Red clump stars (Girardi et al.)

    5. Cepheids (Madore and Freedman; Walker)

    6. Cepheids (Hubble Key Project)

    7. Cepheids (Feast and Catchpole)

    8. Cepheids (Luri et al.)

    9. Supernova 1987A (Gould and Uza)—upper limit

    10. Supernova 1987A (Sonneborn, Kirshner et al.)

    11. Supernova 1987A (Lundqvist and Fransson)

    12. Eclipsing binary HV2274 (Guinan et al.)

    13. Accretion disk in NGC 4258 (Herrnstein et al.; Maoz et al.)—implied distance

    14. Supernova in M81 (Bartel et al.; Freedman et al.)—implied distance

    The lack of a resolution “is not for lack of trying,” says Wendy Freedman, an astronomer at the Carnegie Observatories in Pasadena, California, and a leader of the Key Project. At least 10 different indicators have been applied to the problem. They include “standard candle” stars thought to have a known intrinsic brightness, so that their faintness as seen from Earth gives a distance measure, as well as clever geometric techniques that derive distance from the apparent size of objects with known dimensions—in effect, cosmic yardsticks. The results are all over the map, a situation that only worsened 2 years ago, when results came in from a European spacecraft called Hipparcos. Designed to settle the issue of galactic distances, Hipparcos instead “produced a lot of confusion,” says Princeton University's Bohdan Paczyński.

    A cloudy view

    For most of this century, the two dwarf galaxies named for the Portuguese navigator Ferdinand Magellan, one of the first Europeans to see them, have played a crucial role in astronomy. Around 1910, while studying photographic plates of the Small Magellanic Cloud, Harvard's Henrietta Leavitt discovered the period-luminosity relation for Cepheids: Those that flicker more slowly are brighter. Since then, Cepheids have become the linchpin of cosmologists' efforts to survey the universe. By using the period of a Cepheid as a proxy for its actual brightness, then measuring its apparent brightness, astronomers can infer how far away it is. They can thus arrive at a distance to the galaxy containing the Cepheid, which allows them to calibrate standard candles that can be seen at even greater distances, such as the exploding stars called supernovae.
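The standard-candle logic rests on the distance modulus, m − M = 5 log10(d / 10 pc): the period-luminosity relation supplies a Cepheid's absolute magnitude M, photometry supplies its apparent magnitude m, and the gap between them fixes the distance. A minimal sketch, with made-up magnitudes rather than any actual calibration:

```python
def distance_pc(apparent_mag, absolute_mag):
    """Invert the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag) / 5 + 1)

# Hypothetical Cepheid: the pulsation period supplies M via the
# period-luminosity relation; the telescope supplies m.
# Both numbers below are illustrative, not real data.
M = -4.0    # absolute magnitude from the period
m = 14.5    # apparent magnitude as observed
d = distance_pc(m, M)
print(d / 1000)    # about 50.1 (kpc)
```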

    But Cepheids themselves have to be calibrated. Astronomers have to determine the absolute brightness of Cepheids with different periods—the so-called zero point of the Cepheid distance scale.

    They've traditionally done so by measuring the distance to a handful of Cepheids in our own galaxy. Few Cepheids are close enough for observers to apply the only truly direct technique for determining astronomical distances, called parallax—measuring how much the stars seem to move back and forth in the sky as Earth orbits the sun. So observers made parallax measurements of nearby “main-sequence” stars, garden-variety stars like our sun, which have a characteristic relationship between color and brightness. Using stars at various points of the main sequence as rough standard candles, they estimated the distance to star clusters within our galaxy that also contain Cepheids. This rough galactic calibration could then be applied to the numerous, easily observed Cepheids in the LMC to find its distance and determine the zero point for the entire range of Cepheids.
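The parallax rule itself is one line: a star whose apparent position shifts by p arcseconds as Earth orbits the sun lies 1/p parsecs away. A quick sketch with an illustrative value shows why so few Cepheids are within reach:

```python
def parallax_distance_pc(parallax_arcsec):
    """Distance in parsecs from a parallax angle in arcseconds."""
    return 1.0 / parallax_arcsec

# A star 100 pc away shows a parallax of only 0.01 arcsec --
# a tiny wobble, which is why the direct method runs out so quickly.
print(parallax_distance_pc(0.01))   # 100.0 pc
```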

    But astronomers never placed great confidence in this multistage process. “The galactic calibration is very uncertain,” says Andrzej Udalski of the Warsaw University Observatory, in part because obscuring dust in the plane of the galaxy confuses matters. And applying that calibration to Cepheids in the Magellanic Clouds is still more uncertain. Stars in the Milky Way tend to have a greater complement of “metals,” or elements heavier than helium, than those in the Magellanic Clouds, and some theorists think metals can change the brightness of a Cepheid that has a specific period.

    But with the launch of the Hubble Space Telescope in 1990 and other gains in instrumentation and theory, measures of relative distances into the far reaches of the cosmos—based on the relative brightness of supernovae, whole galaxies, and other objects—got better and better. The importance of Cepheids for calibrating these indicators grew. So in 1992, Madore and Freedman wrote a paper that took into account all existing evidence and argued that the zero point should be fixed by setting the LMC distance at 50.1 kpc. “We just decided, we'll move the battlefield away from the whole galactic situation, where things are uncertain,” says Madore. “Rather than trying to solve the problem, we just deferred it and said … ‘This is our zero point; if that changes, then everything goes with it.’”

    But challenges to that value began mounting in 1997, when Michael Feast of the University of Cape Town in South Africa and Robin Catchpole of the Royal Greenwich Observatory in Cambridge, U.K., announced results from the Hipparcos satellite, designed for high-precision cosmic surveying. Hipparcos could measure parallaxes to far more Cepheids in our galaxy than ever before—more than 200 of them. The measurements, said Feast and Catchpole, pushed the most likely LMC distance to about 55 kpc (Science, 21 February 1997, p. 1064).

    The result has failed to gain much traction among astronomers, however. “The solution is completely dominated” by the three to five Cepheids that are closest to the sun, says Floor van Leeuwen of Cambridge University's Institute of Astronomy. “The question then remains: How representative can this sample be for the entire population?” Feast defends his findings, saying there is no reason to think those Cepheids are atypical: “The Hipparcos results place the Cepheid scale on a much firmer basis than hitherto,” he says.

    But other distance indicators aren't falling into line. Another set of pulsating stars, called RR Lyraes, suggests either that the LMC distance should be revised down to about 44 kpc or up to 53.7 kpc, depending on how the RR Lyraes are calibrated. “We should probably call this the cosmic minefield rather than the cosmic distance scale,” says University of Pennsylvania's Neill Reid, who studies these stars.

    To add to the confusion, a short LMC distance is supported by yet another galactic distance indicator calibrated by Hipparcos: so-called red clump stars. These old, giant stars, from 2 billion to 10 billion years in age, fired by a core of slightly less than half a solar mass of helium, “are very consistent in brightness,” making them good standard candles, says Krzysztof Stanek of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Massachusetts. Hipparcos measured parallaxes to better than 10% accuracy for over 1000 red clump stars within a few hundred light-years of Earth. Because Udalski and his colleagues had observed tens of thousands of red clumps in the LMC, the Hipparcos calibration led directly to a distance. The result: 43.3 kpc. “That method is intriguing,” says Andrew Gould of Ohio State University in Columbus, although he adds, “It's a young method, and you have to worry that the bugs aren't worked out yet.”

    Geometric certainty?

    Faced with all these conflicting results, some astronomers have looked for certainty from geometric methods—akin to using the apparent size of the Sears Tower to measure your distance from downtown Chicago. Although most agree that geometry is “the wave of the future,” says Paczyński, its apparent simplicity can be hazardous. Astronomy has very few Sears Towers, whose actual size can be looked up or measured directly. Instead, the size of something seen in a telescope often has to be deduced from secondary observations.

    Take the ring of gas around the supernova that exploded in the LMC over 10 years ago, called 1987A. In principle, the true ring size should have popped out when astronomers measured the time delay between the burgeoning flash of the supernova and the later reflection off the ring. The speed of light times the delay would give the size. Then, Hubble or ground-based measurements of the ring's angular size would yield the LMC distance. “That's pretty comforting,” says Gould.

    But it's not that easy. “The first real snag comes from the fact that light is not reflected—it's fluorescing,” says Gould, and fluorescence takes time to develop, adding a lag that has to be calculated. Worse, the ring is not face-on but tilted, meaning that the secondary bursts of light are further spread out in time because of the varying distances they have to travel from the ring to Earth. Although initial estimates were all over the map, several recent calculations, including one by Gould, come up with similar distances to the LMC, about 47 kpc.
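Setting aside the fluorescence lag and tilt just described, the face-on version of the argument is pure geometry: the light-travel delay fixes the ring's physical radius, and dividing by the measured angular radius gives the distance. A sketch with illustrative numbers of roughly the right order, not the published analysis:

```python
C_KM_S = 299_792.458           # speed of light, km/s
KM_PER_PC = 3.0857e13          # kilometers per parsec
ARCSEC_PER_RAD = 206_265.0

def ring_distance_kpc(delay_days, angular_radius_arcsec):
    """Distance from a light-echo delay and the ring's angular size,
    assuming a face-on circular ring (the real ring is tilted)."""
    radius_km = C_KM_S * delay_days * 86_400       # physical radius
    radius_pc = radius_km / KM_PER_PC
    angle_rad = angular_radius_arcsec / ARCSEC_PER_RAD
    return radius_pc / angle_rad / 1000            # -> kpc

# A ~240-day delay and a ~0.86-arcsec ring land at LMC scale.
print(ring_distance_kpc(240, 0.86))   # ~48 kpc
```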

    Perhaps the most promising geometric method relies on eclipsing binaries in the LMC—mutually orbiting stars that regularly pass in front of each other as seen from Earth. In essence, this method uses geometry to turn the stars into standard candles. The duration and frequency of the eclipses combined with the speed at which the stars are orbiting each other (which can be found from the Doppler shifts of their light) reveals the stellar sizes. By combining that information with the temperatures of the stars—indicated by their colors—astrophysicists can calculate their absolute brightness, which can be compared with the observed brightness to get distance.
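Under the simplest possible assumptions—a circular orbit and blackbody stars—that chain of reasoning can be sketched end to end. Every number below is illustrative (a round-trip check on a fictitious star), not Guinan's actual data:

```python
import math

SIGMA = 5.670374419e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
KM_PER_PC = 3.0857e13         # kilometers per parsec

def binary_distance_pc(eclipse_duration_s, relative_speed_km_s,
                       temperature_k, observed_flux_w_m2):
    """Eclipsing-binary sketch: eclipse timing plus Doppler speed
    give the stellar size; color gives the temperature; blackbody
    luminosity versus observed flux gives the distance."""
    # Distance crossed during the eclipse ~ the star's diameter
    diameter_km = relative_speed_km_s * eclipse_duration_s
    radius_m = diameter_km / 2 * 1000
    # Absolute brightness from the Stefan-Boltzmann law
    luminosity = 4 * math.pi * radius_m**2 * SIGMA * temperature_k**4
    # Inverse-square law: flux = L / (4 pi d^2)
    d_m = math.sqrt(luminosity / (4 * math.pi * observed_flux_w_m2))
    return d_m / 1000 / KM_PER_PC

# Round trip: a hot star of known size placed at 45,700 pc should
# be recovered from the flux it would produce there.
R_m, T = 6.96e9, 20_000.0                       # 10 solar radii, K
L = 4 * math.pi * R_m**2 * SIGMA * T**4
d_true_m = 45_700 * KM_PER_PC * 1000
flux = L / (4 * math.pi * d_true_m**2)
speed = 100.0                                   # km/s, relative
duration = 2 * R_m / 1000 / speed               # s, to cross one diameter
print(binary_distance_pc(duration, speed, T, flux))   # ~45,700 pc
```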

    The procedure has been refined over the last couple of years, says Edward Guinan of Villanova University in Pennsylvania, who led the first detailed observations of a binary in the LMC, and he thinks it is now on solid footing. “Our determination from the first target does favor the ‘short’ distance,” he says—45.7 kpc. With a dozen new targets scheduled for observation, Guinan thinks eclipsing binaries could eventually nail down the distance to within a percentage point or so.

    Many astronomers agree that eclipsing binaries are the best hope for pinning down the LMC distance once and for all, simply because times and angles are easier to measure than absolute luminosities. But there is also great promise in planned interferometers—essentially arrays of telescopes operating in synchrony—that would fly in space and be able to measure parallaxes much more precisely than Hipparcos did. For example, NASA's Space Interferometry Mission, tentatively scheduled to fly by 2005, might be able to pin down parallax distances to the LMC or more distant galaxies, bypassing all the indirect methods and fixing the distances in one fell swoop.

    Until then, the Hubble constant will bounce up whenever the estimated LMC distance shrinks, and vice versa. The upward Hubble bounce causes the most consternation among astronomers. That's because a fast expansion rate indicates a universe that is young, conceivably younger than its oldest stars. Recent evidence of a bizarre energy in empty space, which may be speeding up the expansion of an older universe (Science, 18 December 1998, p. 2156), could ease the so-called age crisis, however. “Quite often, the reaction I have heard to our low distance to the LMC is, ‘It can't be right, because the age of the universe would be too low,’ or something in a similar spirit,” says CfA's Stanek. But he thinks that reasoning is flawed. “Astronomers should measure the distances as best they can,” says Stanek, “and let the cosmological consequences fall where they may.”


    Bypassing the Magellanic Cloud

    1. James Glanz

    Traditionally, the first stop in surveying cosmic distances is the Large Magellanic Cloud (LMC), a satellite galaxy of the Milky Way studded with flickering stars called Cepheids. By measuring the distance to the LMC, astronomers can determine how the flicker period of Cepheids relates to their brightness, turning them into standard candles that can reveal more distant objects. But as the tug-of-war over the distance to the LMC continues (see main text), a few astronomers have boldly tried to leapfrog it by using more remote galaxies.

    Bypassing the LMC requires the help of clever geometric techniques for directly measuring distances to other galaxies, such as the one Jimmy Herrnstein of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Massachusetts, described at a recent American Astronomical Society press conference. Herrnstein announced that his team had finally found the “golden rod” that would settle the cosmic distance debate once and for all. It's an accretion disk, a whirling platter of gas and dust thought to be circling a black hole, at the center of a galaxy known as NGC 4258.

    The team, which included CfA's James Moran and Lincoln Greenhill, clocked how fast bright clumps of radio-emitting gas in the disk, called masers, appeared to creep across the line of sight, like traffic on a distant highway. They also measured the actual speed of masers at the edges of the disk, where they are moving toward or away from Earth, creating a Doppler shift in their radiation. Comparing the angular speed of the masers to their true speed gives the distance to the galaxy, because nearby objects move across the line of sight more quickly than distant objects going at the same speed. The result, 7.2 megaparsecs, is the most precise distance ever measured to a remote galaxy, Herrnstein claimed.
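The comparison reduces to a single division: if the masers' true orbital speed v (from the Doppler shift) and their angular drift rate across the sky are both known, the distance is D = v / ω. A sketch with illustrative inputs of roughly the right order for NGC 4258, not the team's published figures:

```python
MAS_PER_RAD = 206_265.0 * 1000      # milliarcseconds per radian
KM_PER_MPC = 3.0857e19              # kilometers per megaparsec
SECONDS_PER_YEAR = 3.156e7

def maser_distance_mpc(true_speed_km_s, proper_motion_mas_yr):
    """Distance from comparing a maser clump's true orbital speed
    with its angular drift across the line of sight: D = v / omega."""
    omega_rad_s = proper_motion_mas_yr / MAS_PER_RAD / SECONDS_PER_YEAR
    return true_speed_km_s / omega_rad_s / KM_PER_MPC

# ~1000 km/s orbital speed and ~0.03 mas/yr of drift land near the
# quoted 7.2 Mpc (inputs illustrative).
print(maser_distance_mpc(1000, 0.0296))   # ~7.1 (Mpc)
```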

    And it could have implications for other cosmic distances. A team led by Eyal Maoz of the University of California, Berkeley, which has studied Cepheids in the same galaxy, puts NGC 4258's distance at 8.1 megaparsecs, based on the LMC calibration, with wide error bars. Herrnstein's measurement, if correct, would force a substantial downward revision of that distance, and hence of the entire Cepheid distance scale.

    But other astronomers aren't ready to trim their cosmic yardsticks yet. The CfA result, they note, is built on a pyramid of debatable assumptions, among them that the masers orbit the black hole at a steady pace like planets around the sun. That argument, Moran later conceded, “is a little hard to make,” although he strongly supports the conclusions. “It's a method that holds enormous potential and promise,” agrees Wendy Freedman of the Carnegie Observatories in Pasadena, California. “But we just can't tell with a sample of one.”

    Freedman and others who support the existing Cepheid distance scale point out that another geometric distance to a remote galaxy—a technique that compares the true and angular speeds of radio emissions from the expanding shock front of a supernova that exploded in 1993—closely agrees with the existing scale. Norbert Bartel of York University in Canada, who led the radio measurements, says the discrepancy between his result and Herrnstein's is no surprise, given all the uncertainties. “I look at it from the relaxed point of view,” he says. For now, it appears that there's no sure way to bypass the LMC.