News this Week

Science  29 Oct 2004:
Vol. 306, Issue 5697, pp. 788

    Accident Shuts Down SLAC, Spurs Probe of Safety Rules

    Adrian Cho

    Officials at the Stanford Linear Accelerator Center (SLAC) have shut down the laboratory's particle accelerators indefinitely after an accident that left one worker seriously injured. The accident has triggered an investigation by the U.S. Department of Energy (DOE) into electrical safety at the facility in Menlo Park, California. Repercussions from the accident are being felt by DOE's nine other science laboratories.

    “The consequences for [SLAC] are severe, and the accelerators will be shut down for a while,” says Milton Johnson, chief operating officer for DOE's Office of Science in Washington, D.C. Johnson has instructed the directors of DOE's other science laboratories to review their procedures for working with electrical equipment while it is powered, a practice known as “hot work” that is discouraged and requires permits. Elsewhere in the DOE lab network, Argonne National Laboratory in Illinois has tightened its hot work regulations, and Brookhaven National Laboratory (BNL) in Upton, New York, is planning a weeklong safety review for all employees.

    On 11 October, an electrical discharge struck a SLAC technician while he replaced a circuit breaker near a 480-volt power panel. Electricity from the panel set the worker's clothes on fire, and he suffered second- and third-degree burns to his torso, arms, and thighs. At press time, the worker, a contractor, was listed in serious condition at a burn center in San Jose, says SLAC spokesperson Neil Calder.

    The shutdown leaves SLAC researchers with little to do and plenty of time to worry, says Michael Kelsey, an experimental physicist working with SLAC's BaBar particle detector. “Part of the problem is the uncertainty,” Kelsey says. As SLAC's machines sit dormant, BaBar experimenters are losing ground to the Belle experiment at Japan's KEK laboratory in Tsukuba, which is up and running.

    The accident occurred in the “klystron gallery,” a 3-kilometer-long building stretching above SLAC's subterranean linear accelerator. The klystrons generate radio waves that propel particles through the accelerator, which feeds SLAC's PEP-II particle collider. The linear accelerator was running at the time of the accident and was shut down immediately. A week later, laboratory director Jonathan Dorfan suspended all work. The lab's 1500 employees were told to read the laboratory's safety manual and write safety protocols for their individual tasks, including desk work and car trips across the campus. Calder says people are returning to work as they complete their safety reviews.

    Dangerous territory.

    A SLAC technician suffered severe burns earlier this month while working in the lab's klystron gallery.


    In the meantime, DOE investigators are trying to find out what happened and whether the technician violated SLAC's hot work rules and other safety protocols. Each DOE science lab formulates its own safety regulations tailored to its mission and facilities.

    Three months before the accident, a laboratory safety team had questioned SLAC's hot work practices. The 23 July report by an electrical safety review team found that from 25 February to 25 May the laboratory issued 31 hot work permits at various voltages. Eight of the permits were for diagnosing problems with equipment that could be tested only while powered. None of the other 23 permits were justified, the SLAC review team found. The team also noted that contractors were not required to complete the laboratory's electrical safety training.

    Safety officers at other physics laboratories say they rarely grant permits for hot work. This year, for example, Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, has issued three permits for hot work at 480 volts or higher, says Gerald Brown, associate director for operations support. BNL has not issued a permit for hot work on a 480-volt source “in more than 10 years,” says James Tarpinian, assistant laboratory director for environment, safety, health, and quality.

    The accident is the latest in a string of safety incidents at SLAC. In January 2003, a researcher fell from a ladder and suffered a serious head injury. On 22 September, a large chunk of concrete fell from a crane; no one was injured in the incident. SLAC's accident and injury rate, which for several years was below the average for DOE science labs, jumped this year to 2 per 100 workers, up from 1.6 last year and 1.7 in 2002, according to DOE. That uptick in accidents makes SLAC a cause for particular concern, says DOE's Johnson. The average for DOE science labs is 1.6 per 100 workers so far this year. It was 2 last year and 2.7 in 2002.

    All of DOE's science labs follow a system that makes managers directly responsible for safety all the way down the work line. But such systems only work if employees continually place safety before all else, say safety officials at other laboratories. “Complacency sets in, and when that happens people take shortcuts. That's human nature,” says Fermilab's Brown. “Management has to stay on top of them.”

    Investigators hope to complete their work at SLAC early next month before returning to Washington, D.C., to write a report. DOE officials and SLAC administration will then work together to implement the report's recommendations, Calder says. Only after those changes have been made will SLAC's accelerators power up again.


    New Species of Small Human Found in Indonesia

    Ann Gibbons

    Archaeologists have made the startling discovery of a lost world of small archaic humans, who hunted dwarf elephants and Komodo dragons on an Indonesian island as recently as 18,000 years ago. The researchers uncovered the skull and skeleton of an adult human female with a brain the size of a small chimpanzee's. This diminutive new species lived on the tropical island of Flores at the same time that modern humans inhabited nearby islands and were circling the globe. “It is literally jaw-dropping,” says anatomist Bernard Wood of George Washington University, who was not involved with the discovery.

    Wood and other paleoanthropologists say that the Flores skeleton ranks as one of the most outstanding discoveries of the past 50 years and shows that until recently, modern humans were not alone on the planet. “My [first] reaction was that this is a hoax,” says Harvard University paleoanthropologist Daniel Lieberman. “But after I read the paper, I realized it is a normal skull that happens to be very small. There is no apparent evidence for pathology. It is wonderful.”

    In a report in this week's issue of Nature, paleoanthropologist Peter Brown and archaeologist Michael Morwood of the University of New England in Armidale, Australia, and their colleagues at the Indonesian Centre for Archaeology in Jakarta, Indonesia, describe the remains of an adult skull and partial skeleton found last year in the Liang Bua Cave on Flores. This cave woman and the isolated bones of several other individuals are so unlike modern humans—the partial skeleton stood only 1 meter tall—that the researchers baptized them as a new species, Homo floresiensis. In a second paper, the authors describe the Lilliputian world where these hobbit-sized people lived perhaps 95,000 to 13,000 years ago. The team found stone tools and the bones of Komodo dragons and dwarf animals, such as a new species of the extinct elephant Stegodon.

    The team, led by Morwood, discovered the fossil in September 2003. They were excavating a new site on the same island where they had previously found 840,000-year-old stone tools; these were probably made by Homo erectus, which evolved in Africa about 2 million years ago and spread throughout Asia (Science, 13 March 1998, p. 1635). Morwood followed a trail of more recent stone tools into a cave, where he also found a human premolar and Stegodon remains. Excavating 6 meters down, the team found the skeleton.

    Shrunken head.

    A new 18,000-year-old human species (right-hand skull) is much smaller than its putative ancestor, Homo erectus (left), and modern humans.


    It was a surprise from the start. A suite of methods, including radiocarbon dates of charcoal in the soil, date the skeleton to a mere 18,000 years ago, yet the bones are clearly not those of a modern human, says Brown. The skull looks most like H. erectus, with a protruding brow, a slight keel on the top of its head, and no chin. Interestingly, the skull closely resembles the oldest H. erectus specimens in Africa rather than more recent, bigheaded specimens from the nearby island of Java, says Brown. “There is very little in this description to distinguish it from H. erectus except for size,” notes H. erectus expert G. Philip Rightmire of the State University of New York, Binghamton.

    But the size is a showstopper. The skull packed a tiny brain of only 380 cm3, the size of a grapefruit—and half the size of the brain of H. erectus on Java. The skeleton stands at about the same height as the famous australopithecine nicknamed Lucy. It shares some pelvic and thigh traits with her species, but that may be the result of petite size rather than close kinship.

    The researchers considered whether the Flores female was tiny because she was deformed in some way. They conclude that she was not suffering from disease, nor was she a pygmy, dwarf, or midget, whose brains are proportionally large for their bodies. And the bones of other individuals are also tiny. An initial scaling analysis indicates that the proportions most resemble those of a shrunken H. erectus.

    The leading hypothesis for H. floresiensis's origins is that it was descended from H. erectus, says Brown. He theorizes that during thousands of years of isolation on the islands, the lineage shrank in a dwarfing process that has been observed in other island mammals. Eventually, these isolated little people evolved into a new species of human. “This shows that humans are not special cases: The evolutionary processes that shape life on Earth operate in the same way on humans,” says paleoanthropologist Russell Ciochon of the University of Iowa in Iowa City.

    The skeleton also confirms that until recently, the human family tree was bushy. Between about 30,000 and 50,000 years ago, for example, there is now evidence for modern humans, Neandertals, Homo floresiensis, and perhaps H. erectus—and three of the four species were in Southeast Asia.

    Morwood thinks that the Flores people died out about 12,000 years ago, when the stone tools and elephants disappear suddenly from the record, perhaps as the result of a catastrophic volcano. Modern humans were on the island soon after, bringing deer, macaques, and pigs, says Morwood. Soon the moderns were the only survivors of a time when there were three or four types of humans on the planet. “I think there will be more surprises,” says Rightmire. “We better get used to the idea that we haven't accounted for all the little turns and twists in human evolution.”


    A Wily Recruiter in the Battle Against Toxic β Amyloid Aggregation

    Ingrid Wickelgren

    In Alzheimer's disease (AD), large, abnormal clumps of a peptide called β amyloid surround and clog the insides of neurons. These clumps are suspect because they kill cultured neurons, and several human mutations associated with early-onset AD are linked to problematic β amyloid. Hoping to retard the disease, researchers have tried using drugs to block such clumping, but with little success until recently.

    The problem: Lilliputian drug molecules are no match for relatively massive amyloid peptides. Using them as blockers is like trying to prevent strips of Velcro from adhering by inserting grains of salt between them. But now Stanford researchers report a new blocking strategy that seems to work. On page 865, molecular biologist Isabella Graef, chemist Jason Gestwicki, and biologist Gerald Crabtree describe the synthesis of an ingenious drug that recruits a gargantuan cellular protein to insert itself between two amyloid peptides, preventing the formation of large, toxic β-amyloid clumps.

    “It's very clever,” says molecular biologist Roger Briesewitz of Ohio State University in Columbus. “By binding a small drug to an endogenous protein, the small drug becomes a large drug that can push away the protein that wants to bind to the drug target.”

    The method has not yet been tested in animals, and because the current form doesn't cross the blood-brain barrier, it has no clinical use in AD. But if the Stanford team's trick can be parlayed into therapy, it could lead to novel treatments for a variety of disorders—including perhaps other neurodegenerative ailments such as Parkinson's disease—in which protein-protein interactions are thought to play a key role. “We think the idea of fighting protein bulk with protein bulk is going to be general,” says Gestwicki. “It's like fighting fire with fire.”

    Bully tactics.

    A new strategy to prevent clumping of β amyloid (micrograph) combines Congo red (red structure) with another molecule (blue) to maneuver a large cellular protein called FKBP in between amyloid peptides.


    The approach has a precedent in nature. For millions of years, soil bacteria have made chemicals that cripple enzymes in bacterial foes by first binding to a giant cellular protein, which then walls off the enzyme from its usual substrate. A prime example is the immunosuppressant FK506. It inhibits the enzyme calcineurin by first recruiting a bulky protein chaperone—a protein that helps other proteins fold—called the FK506 binding protein (FKBP).

    Graef was thinking about FK506's mechanism while reading an article about misfolded proteins in spring 2003. She immediately thought: “Why haven't we tried this as a way to block protein aggregation?” She thought β amyloid would be a good test protein because it has been so well studied.

    Back in the lab, Graef recruited Gestwicki, who chemically tethered a synthetic ligand for FKBP to Congo red, a dye that sticks to β amyloid but doesn't block clumping except at high concentrations. The resulting small molecule could grab FKBP on one end and β amyloid at the other and thus usher the bulky chaperone in between two amyloid peptides. Gestwicki made several versions of the drug, varying the length and flexibility of the section that linked Congo red to the FKBP ligand.

    When added to tubes of β amyloid along with FKBP, Gestwicki's compounds either greatly delayed or completely prevented large clumps of β amyloid from forming, as detected by a fluorescent dye that binds to protein aggregates. The best compound blocked β-amyloid aggregation at concentrations 20-fold lower than any compound previously developed, Gestwicki says, a critical feature for a potential therapeutic. Without FKBP, however, the Stanford drug held no advantage, showing that the chaperone is critical to its modus operandi.

    Under an electron microscope, Gestwicki saw that the β-amyloid aggregates that formed in the presence of his drug were much smaller than those in brains with AD, suggesting that the drug traps the aggregates in an intermediate state. But the researchers still didn't know whether that state was less toxic to cells.

    To find out, Graef exposed cultured rat neurons to β amyloid with and without the new drug and FKBP. As expected, β amyloid alone killed the cells. But the drug, along with FKBP, prevented much of the cell death, indicating that the smaller bundles are indeed more benign.

    Whether this protection can be extended to animals, let alone humans, remains to be seen. “They've played a creative chemical trick that clearly could be practical,” says Peter Lansbury, a chemist at Harvard Medical School in Boston, “but the path from this to an Alzheimer's drug is going to be extremely difficult.” One huge problem, Lansbury says, will be finding an alternative to Congo red—which doesn't enter cells or cross into the brain—that targets β amyloid.

    The strategy might be easier to employ in other diseases, Lansbury suggests, in which the protein targets are more rigid and stable than β amyloid, which has a floppy, disordered structure. Some oncogenes, for example, work as dimers, so blocking the dimer from forming might lead to a cancer therapy. Viruses and bacteria also enter cells through protein-protein interactions. Says Briesewitz: “If we could use small molecules to disrupt protein-protein interactions, we could target many more biological processes to fight disease.”


    Prozac Treatment of Newborn Mice Raises Anxiety

    Constance Holden

    The U.S. Food and Drug Administration this month ordered drugmakers to put strong new labels on serotonin-based antidepressants, warning that they may raise the risk of suicidal behavior in children. Now a study by researchers at Columbia University indicates that fluoxetine (the generic name for Prozac), paradoxically, seems to raise anxiety levels in newborn mice.

    The study, published on page 879 of this issue, “suggests that fluoxetine and probably other SSRIs [selective serotonin reuptake inhibitors] may have additional unexpected problems,” says Miklos Toth, a pharmacology professor at Cornell University's Weill Medical College in New York City. Some scientists caution, however, that the mice in this study were at a much younger developmental age than children likely to be treated for depression.

    Fluoxetine is the oldest of the SSRIs and the only one approved for pediatric use. It operates primarily on the serotonin transporter (5-HTT), which is responsible for helping neurons vacuum back up excess serotonin that they have released. By blocking the transporter, the drug enables serotonin to linger in synapses, making more available to be taken up by target receptors.

    Previous animal research had shown that in early life serotonin acts as a growth factor in the brain, modulating nerve cell growth, differentiation, and migration. Interfering with this function can have behavioral consequences. Mice that have had their serotonin transporters genetically knocked out—and thus reuptake disrupted—exhibit increased depression- and anxiety-related behaviors.

    The Columbia researchers, led by psychobiologist Mark S. Ansorge, sought to determine whether fluoxetine would have the same effect as knocking out the two copies of the transporter gene. They bred sets of mice with one, two, or no functioning copies of the 5-HTT gene. Then they randomly gave either saline injections or fluoxetine—at doses equivalent to therapeutic ones for humans—to newborn mice between 4 and 21 days old in each group. Nine weeks after the last injection, mice were given tests that revealed their emotional states.

    Chemical imbalance.

    Mice treated with Prozac as newborns showed reduced exploratory behavior when tested on an elevated maze.


    As expected, the drug had no effect on the mice lacking any 5-HTT; they already exhibited anxiety. But the two other groups started acting like the 5-HTT-deficient group when they were treated with fluoxetine. In comparison to the saline-treated pups, they showed reduced exploratory behavior in a maze test. They also took longer to start eating when placed in a novel setting and were slower to try to escape a part of the cage that gave them mild foot shocks. All these behaviors are regarded as signs of anxiety and depression in animals.

    The authors conclude that disruption of 5-HTT early in brain development affects the development of brain circuits that deal with stress response. Co-author René Hen explains that when serotonin reuptake is blocked, the increased levels in the synapse lead to “abnormal activation [of] a bunch of receptors” during a critical phase of development. “Overstimulation could result in abnormal development” in areas of the limbic system, he says.

    The scientists believe that their work could help explain a noteworthy finding announced last year from a longitudinal study of New Zealanders (Science, 18 July 2003, p. 386): that people with a polymorphism that reduced their 5-HTT activity were more likely than others to become depressed in response to stressful experiences.

    Another implication, of course, is for those exposed to SSRIs at a tender age. The authors say the period of brain development studied in the mice corresponds roughly to the last trimester of pregnancy through age 8 in humans. So, they conclude, “the use of SSRI medications in pregnant mothers and young children may pose unsuspected risks of emotional disorders later in life.”

    Toth notes that in contrast to humans, a partial deficit (having one defective 5-HTT allele) is not enough to adversely affect mice's behavior. So “it is possible that humans are more sensitive than rodents to the adverse effect of fluoxetine.” But he agrees with Harvard child psychiatrist Timothy Wilens, who says that the “very early exposure calls into question the generalizability [of these results] to children.” Columbia psychiatrist John Mann, who was not associated with this study, adds: “This has nothing to do with the issue of SSRIs in kids because they get the SSRI well after the equivalent period in this study.”

    Mann says, however, that “this is an important study” because it shows that even transient loss of transporter function during a critical period in brain development may lead to depression in adulthood.


    Fundamental Constants Appear Constant—At Least Recently

    Charles Seife

    If the times are a-changing, they're not a-changing so fast—at least according to the latest argument in a debate about whether the fundamental constants of nature have varied over time. In 2001, astronomers found controversial evidence that the fine-structure constant, a number related to the speed of light, had been smaller in the past. Now, a group of German physicists concerned with precision timing measurements has presented new evidence—based on the spectra of ytterbium atoms—that the fine-structure constant isn't changing very much at all.

    “I think it's a beautiful result,” says James Bergquist, a physicist at the National Institute of Standards and Technology in Boulder, Colorado.

    The fine-structure constant is an amalgam of other physical constants (including the speed of light) that gives physicists a shorthand way to describe the strength of the electromagnetic force. In 2001, a team of Australian and American scientists caused an uproar by arguing that the light from distant quasars indicated that the fine-structure constant was smaller billions of years ago than it is today (Science, 24 August 2001, p. 1410).

    Since then, scientists have presented evidence both for and against a changing fine-structure constant. One of the authors of the 2001 study, Victor Flambaum of the University of New South Wales in Sydney, Australia, says his group's latest observations point to a four-standard-deviation departure from the present-day value of the fine-structure constant. However, he concedes, new results from the Very Large Telescope (VLT) in Chile “are consistent with zero variation.” Another line of evidence, based on radioactive decay in an ancient natural nuclear reactor in Gabon, has indicated that the fine-structure constant has either stayed static or decreased over billions of years—rather than increasing, as the quasar study implied.

    Club for growth?

    Ancient quasars gave controversial hints that the fine-structure constant had increased over billions of years.


    Ekkehard Peik of the national Physical-Technical Institute in Braunschweig, Germany, and his colleagues have now stepped into the fray. In the 22 October Physical Review Letters, Peik's team describes an experiment that used extremely sensitive atomic-clock measurements to gauge how the fine-structure constant changes over time.

    The researchers trapped and cooled a single ytterbium ion and zapped it with a laser to excite its electrons over and over. By using a cesium atomic clock, they measured precisely which frequency of light induces a particular excitation in the ion. The frequencies of electron excitations depend on the strength of the electromagnetic interaction between atomic nuclei and electrons—and the fine-structure constant. By comparing the ytterbium transition frequency with other atomic transition frequencies, the team calculated the fine-structure constant to 15 significant figures. Then they did the same thing nearly 3 years later. To the limits of their experiment's precision, the team saw no change in the fine-structure constant over that time.

    Because the quasar study uses low precision to look back over huge time spans, and the atomic-ion study uses very high precision to look back over short time spans, “the sets of experiments are comparable in sensitivity,” says Peik. If the fine-structure constant has been changing steadily over time, then the ytterbium measurements should have spotted the change—unless the quasar study picked up a change that was much more rapid in the past than it is today.

    The new measurement backs those arguing for a changing fine-structure constant into a tighter corner. For there to be a changing constant, either the VLT quasar study was wrong, or the fine-structure constant changed at different rates in different parts of the sky, or the rate of change was faster in the past. Although the case isn't closed, theorists will have to split hairs ever more finely to salvage the idea of an inconstant constant.


    Broad-Novartis Venture Promises a No-Strings, Public Gene Database

    Andrew Lawler

    Collaborations between university researchers and the pharmaceutical industry are no rarity these days. But Novartis's biomedical research arm and the new Broad Institute across the street in Cambridge, Massachusetts, plan an unusually open partnership. This week they intend to announce an effort to understand the genetic basis of adult-onset diabetes and release validated data publicly rather than keep it proprietary.

    Companies typically demand that data created in cooperative ventures—which can be mined for new discoveries—be kept safe from competitors' prying eyes. But Novartis is betting that the benefits of openness will outweigh those of secrecy, and the company intends to put the genetic variation data it collects on a public Web site. “I'm doing this to make a statement in the world of medical science that the patient should come first,” says Mark Fishman, president of Novartis's biomedical efforts and a former Harvard Medical School professor. “You gain much more by being open.” While the team will forswear filing patents on the database, it will allow others to patent a new therapy or diagnostic test based on the shared information.

    More than 170 million people suffer from adult-onset diabetes, a figure that is expected to nearly double within the next 2 decades. The disease is “one of the most pressing public health problems in the industrialized world,” says David Altschuler, a Broad researcher and the project's principal investigator. The deal would funnel $4.5 million in Novartis funding to the effort over 3 years, with Broad contributing its vast array of genomic equipment as well as the expertise of its 149 Ph.D. scientists. Leif Groop and his colleagues at Lund University in Sweden, who have collected thousands of DNA samples from diabetes patients, will also participate in the venture.

    Going public.

    The Broad Institute's David Altschuler (left) and Novartis's Mark Fishman team up on a $4.5 million project.


    The initial goal will be to gather data on genetic variants associated with adult-onset diabetes. Once researchers are confident of the quality, and after they have removed details that could be used to identify individual patients, both genetic and clinical information on gene associations will be posted on the Web. Raw data cannot be released publicly because patients were not asked to give consent for this, adds Altschuler.

    The idea of public release of data produced with industry funding excites many in the research community. “This fits nicely with a growing and laudable trend for public accessibility of research data,” says Francis Collins, director of the National Institutes of Health's National Human Genome Research Institute in Bethesda, Maryland. Eric Campbell, a health policy researcher at Harvard Medical School in Boston who has studied industry-academic partnerships, adds that “clearly everyone could benefit by making data public.” Openness is the best spur to scientific advances, he notes. “And an arrangement which fosters sharing of data and reduces potential redundancy is good.”

    Fishman acknowledges that he struggled to convince Novartis's board that the approach made sense, and he adds that public data release was not a condition set by Broad but a mutual decision. “This is a very remarkable step for both parties,” he says. Novartis relocated its research effort to Cambridge 2 years ago, he adds, to take advantage of institutes such as Broad. That proximity gives him confidence that he won't be giving away the store to the competition, Fishman says. Broad researchers will have to sign an agreement prohibiting them from discussing information they learn about other Novartis projects during the course of their work.

    A steering committee with Broad, Novartis, and outside members will set the research direction for the effort, and Altschuler pledges that the first data will be made public in 2005. “There will be no restrictions or delays on publication,” he adds. “No matter how effective we can be” at making use of the data, says Altschuler, “we can't be as effective as the rest of the world.”


    Worm's Light-Sensing Proteins Suggest Eye's Single Origin

    Elizabeth Pennisi

    Despite incredible variation in size and shape, eyes come in just two basic models. The vertebrates' photoreceptor cells, typified by rods and cones, are quite distinct from the invertebrates'. And although both use light-sensing pigments called opsins, the opsins are quite different in their amino acid makeup.

    For years biologists have argued about how these varied components came to be. Some insist that eyes evolved only once, despite this modern difference. Others have argued that optical structures evolved at least once in invertebrates and again in vertebrates.

    New data showing unexpected similarities between photoreceptors of a marine worm and humans add a new twist to this debate. Detlev Arendt and Joachim Wittbrodt, developmental biologists at the European Molecular Biology Laboratory (EMBL) in Heidelberg, Germany, and their colleagues have found that in addition to its regular opsin pigment, the worm contains another one almost identical to the human's. Their finding suggests that even the earliest animals had the makings of both vertebrate and invertebrate visual systems, and that some of the photoreceptor cells in the invertebrate brain were transformed over a series of steps into vertebrate eyes. Although some researchers are skeptical, others think the data are sound. Arendt and Wittbrodt “make a convincing argument,” says Russell Fernald, a neurobiologist at Stanford University in California.

    Eye opener.

    Not only does the ragworm have the eyes of an invertebrate, it's also got a brain with the photoreceptor cells of a human.


    Arendt and Wittbrodt jumped into the fray over eye evolution after Arendt noticed some odd cells in the brains of ragworms, a relic marine annelid species that's been relatively unchanged for the past 500 million years. “We were surprised,” Arendt recalls, as these cells looked very much like rods and cones. Such a vertebrate photoreceptor cell has been found in only a few invertebrates—scallops, for example, which have both. And those observations were based primarily on morphology, says Alain Ghysen of the University of Montpellier, France.

    In the new work, Arendt and his colleagues went beyond morphology and began to look for genes and proteins that might confirm whether vertebrate and invertebrate structures are shared and work similarly in both groups. Collaborator Kristin Tessmar-Raible of Philipps University in Marburg, Germany, first searched for the invertebrate opsin gene in the ragworm brain photoreceptor. After finding none, she combed the genome for other opsin genes. She identified a new one that was expressed only in the brain photoreceptor and had a different sequence from that of the eye opsin. By assessing DNA differences among opsins from various species, the team determined that this second opsin was more like a vertebrate opsin than an invertebrate one.

    They also looked in the ragworm genome for retina homeobox proteins, which are key to building the nascent retina in vertebrates. The retina lines the back of the eyeball and consists of multiple layers, including a layer of photoreceptor cells. These homeobox proteins exist in all vertebrate photoreceptor cells, but they had not previously been found in invertebrates. The homeobox proteins are present in the photoreceptor in the ragworm's brain but—as expected—not in its eye, Arendt and his colleagues report on page 869.

    These findings complement the discovery 2 years ago that retinal ganglion cells in humans, which form the optic nerves that connect the eye to the brain, were quite similar in appearance to the invertebrate photoreceptor. “Even in the human eye, there are two types of photoreceptors that are surviving together,” says Claude Desplan, a developmental biologist at New York University.

    These findings drive home the antiquity of invertebrate and vertebrate photoreceptors and opsins. “Not only the morphology but also the molecular biology of the two types of receptors was already set in our common ancestor,” Ghysen explains. Arendt, Tessmar-Raible, and Wittbrodt propose that both optical systems existed in this extinct common ancestor, called Urbilateria. One likely sensed the light needed to set up a circadian rhythm, and the other might have been a primitive prelude to the eye.

    They go further to suggest that the two types likely arose in a predecessor of Urbilateria. In that organism, they speculate, the gene for one opsin and the genes to build the one type of photoreceptor cell were duplicated. The extra set of genes might have evolved into a different visual system: “We think both photoreceptor cells track back to one cell type,” Wittbrodt says.

    Although researchers such as Walter Gehring of the University of Basel in Switzerland find the work “perfectly compatible” with the idea of a single evolutionary origin of the eye, Peter Hegemann of the University of Regensburg, Germany, still wants more data. No matter what, says Fernald, “what this study shows is that evolution stories are subtle and complex.”


    Only the Details Are Devilish for New Funding Agency

    1. Martin Enserink

    PARIS—Who said the wheels of European policy grind slowly? Barely 2 years after researchers first dreamed up a brand-new funding agency called the European Research Council (ERC), it seems all but unstoppable. Indeed, many scientists and administrators are so confident that politicians will seal the deal in 2005 that they began filling in the details at a meeting here last week—such as how the ERC should organize peer review, whether it should fund big instruments like particle smashers, and even whether it should hop on the open-access publication bandwagon. “We will get an ERC,” says former Portuguese science minister Jose Mariano Gago. “What we are discussing now is the day after.”

    The meeting, hosted by UNESCO and attended by some 150 people from across the continent, showed widespread agreement about the basic principles of the ERC. The council should be independently run by scientists at arm's length from Brussels and fund science-driven projects from both the natural sciences and humanities, speaker after speaker said. It should go easy on the paperwork and ditch “juste retour,” the entrenched E.U. principle that every country gets back roughly what it puts in. Instead, it should reward excellence only and let the chips fall where they may.

    Still, there are many issues to sort out—including some that can make or break the venture. One key worry is that, even if its annual budget reaches the €1 billion or €2 billion currently being considered, the rejection rate on grants is likely to be very high, which could demoralize researchers and make the best look elsewhere. Meeting participants discussed—and rejected—several possible ways to temper the expected deluge. Setting quotas for applications by country would undercut the very goal of the project, for instance, whereas requiring letters of reference or a list of previous high-impact papers could discourage young talent.

    How to create a governance structure that is truly independent of bureaucrats in Brussels—unlike the E.U.'s current research funding system—yet accountable and somehow geographically balanced is another unresolved key issue. Several participants made impassioned pleas to involve non-E.U. members such as Russia or Ukraine, but how exactly they could fit in remained unclear.

    The time for decisions is near. Europe's science ministers will discuss the ERC during a meeting of the so-called Competitiveness Council in the Netherlands next month; if they support it, it's up to the European Commission to hammer out the details next year in its proposal for Framework Programme 7, the E.U.-wide research funding program of which the ERC will likely be a part. How much money the ERC can disburse will be determined after Europe's finance ministers discuss their countries' future contributions to the E.U., also next year. Indeed, convincing politicians of the need for a well-funded ERC is now more important than discussing the nitty-gritty of its operations, cautions Pieter Drenth, president of the European Federation of National Academies of Arts and Humanities.

    Whatever the outcome, the debate has already produced one interesting side effect, Gago notes: Dozens of science organizations have gotten involved in European science policy for the first time. To wit, more than 50 of them joined the Initiative for Science in Europe (ISE), a new group that published a ringing endorsement of the ERC in Science (6 August, p. 776). The high level of interest should help make the project a success, says Gago, who serves as acting chair of ISE: “The ERC will not be alone. It will be accountable to all of us.”


    United Nations Tackles Cloning Question--Again

    1. Gretchen Vogel

    The United Nations attempted this week to break a 3-year deadlock on an international convention regulating human cloning. But after 2 days of public debate and months of behind-the-scenes lobbying, a broad consensus is apparently elusive. As Science went to press, sponsors of two competing proposals were discussing a compromise between countries that want to ban all forms of human cloning and those that support research on nuclear transfer techniques that could produce human embryonic stem cell lines useful for studying or possibly treating disease.

    If that effort fails, the U.N.'s Sixth Committee, which handles legal issues, could vote as early as 29 October on a Costa Rican proposal, strongly backed by the United States, which would empower a committee to draft an international agreement banning all forms of human somatic cell nuclear transfer. Observers say the measure could win a majority of votes in the committee, especially if enough undecided countries abstain. However, that committee's vote is just the first step, and several key countries, including the United Kingdom, which allows human cloning research, said they would simply ignore any resulting treaty. “We would not participate in the negotiation of such a convention, and we would not sign up to it,” said U.K. Permanent Representative Emyr Jones Parry during the first day of public discussion on 21 October.

    Strong words.

    George W. Bush encouraged the U.N. General Assembly to ban all forms of human cloning.


    A competing proposal by Belgium, co-sponsored by the U.K. and more than 20 other countries, would draft a convention banning so-called reproductive cloning, in which a cloned embryo would be implanted into a woman's uterus and allowed to develop to term. The proposal would let countries draft their own regulations governing nuclear transfer research.

    This is the third time the U.N. has tackled the thorny issue. A resolution sponsored by France and Germany passed the General Assembly with wide support in 2001, creating a committee that was to draft an international ban on reproductive cloning. That effort was sidetracked, however, by the United States and other countries that argued that any treaty should ban all human cloning research as well. The research is immoral, they said, because it would create human embryos and then destroy them to derive stem cell lines. They also argued that success in research cloning would make it easier for rogue scientists to clone a baby.

    Costa Rica and Belgium introduced their resolutions last year, but before either one could come to a vote in the Sixth Committee, Iran sponsored a motion to postpone discussion for a year. Muslim countries have been largely undecided on cloning, and many say they want more time to determine whether such research is consistent with Islamic teaching.

    Many developing countries support the Costa Rican proposal, perhaps in part because it “strongly encourages” countries “to direct funds that might have been used for human cloning technologies to pressing global issues in developing countries, such as famine, desertification, infant mortality, and diseases,” including HIV/AIDS. The United States has also been lobbying hard for the measure, and President George W. Bush called for a broad ban in his September speech to the General Assembly.

    Costa Rican Ambassador to the U.N. Bruno Stagno told Science that the fact that several countries would refuse to sign an eventual treaty would not deter him. “There are very important treaties or conventions that do not enjoy universal support,” he says. “The International Criminal Court is up and running, despite the fact that some countries have not joined in.”

    Even if the Costa Rican proposal passes the committee this week, it would not likely take effect for years. The measure would next need approval from the General Assembly, which would not discuss the matter before late November, after the U.S. presidential election. A committee to draft the convention would then begin work in 2005.


    Gambling With Our Votes?

    1. Charles Seife

    On the eve of the U.S. elections, many experts warn that it will take a major overhaul to make reliable, secure electronic ballots more than a virtual reality

    Ancient Athenians voted their fellow citizens into exile by inscribing names on pieces of pottery. When Americans head to the polls next week, tens of millions of them will vote in much the same way: by making ticks or writing names on slips of paper. As many as 30% of ballots, however, will be cast electronically, on touch-screen or push-button computerized tabulators built by vendors such as Diebold, Sequoia, ES&S, and a handful of others.

    Many computer scientists and voting experts fear that those machines are putting the election at risk. In rushing to scrap the butterfly ballots and hanging chads that shadowed the 2000 election, the experts warn, municipalities have embraced machines that are badly designed, trouble-prone, and insecure. “Something's fundamentally wrong. The problems are worse than before,” says Peter Neumann, a computer scientist at the Stanford Research Institute in Menlo Park, California. “It throws everything in doubt.” As the campaigns gear up for post-Election Day confusion, some experts say electronic voting machines need a radical rethink—and warn that some common-sense solutions to the problems may not be solutions at all.

    Electronic balloting is a very simple concept: Push a button or press a region on a screen, and the machine records your vote. When the polling place closes, election workers either transmit the votes or hand memory devices over to officials.

    Not only does electronic voting eliminate the need to store and transport secure paper ballots, but in theory it can also tally votes much more accurately, says Massachusetts Institute of Technology (MIT) computer scientist Stephen Ansolabehere. Studies using multiple counts show that hand counts of paper ballots are inherently error prone, Ansolabehere says. “The standard discrepancy between the first and second counts is of the order of 2%,” he says. “With an optical scanner, the discrepancy is smaller, on the order of half a percent. An electronic machine, assuming that the programming is done correctly, will have virtually no discrepancy.”

    In practice, though, electronic machines have been riddled with problems. “[Electronic] machines come up missing dozens, hundreds of votes,” says Rebecca Mercuri, a voting expert with Philadelphia, Pennsylvania-based computer security firm Notable Software. For example, the Washington Post reported in August that its own audit of returns in the 2000 presidential election indicated that electronic machines failed to record nearly 700 votes in New Mexico, a state Al Gore won by only 366 votes. Last year, during an election in Boone County, Indiana, electronic machines initially registered 144,000 votes in a county with about 19,000 registered voters. This August, when electronic-voting-machine vendor Sequoia demonstrated its latest system to California senate staffers, the machine recorded votes cast with English-language ballots but ignored Spanish-language ballots.

    Those foul-ups were apparently due to sloppy programming and can be fixed or worked around relatively easily. But experts say they point to a more fundamental issue: the sheer difficulty of making an electronic voting system at once secret, verifiable, and secure. The need for secrecy rules out the receipts that guarantee security and confidence in e-commerce. “You have to give evidence to the voter that his vote is counted, but you can't give him enough evidence to prove who he's voted for,” says Ronald Rivest, a computer scientist at MIT. Old-style paper ballots and other physical means of voting do that through a rigid procedure for setting up a polling place, casting votes, and then counting them all with some degree of public oversight. “You have controls in place. There are people sitting around staring at the boxes all day,” Mercuri says.


    With electronic devices, however, there's nothing to stare at. The problems start in the voting booth. It's not easy to prove that when you push the button for candidate A, the machine will actually record a vote for candidate A—or for any candidate at all.

    What's more, many experts claim that an electronic device is vulnerable to vote tampering in a way that ballot boxes and lever-operated voting machines are not. “If you're going to tamper with them, you have to do it the old-fashioned way: one vote at a time,” says Mercuri. With electronic machines, however, a rogue programmer who slipped an unnoticed trapdoor into the software or exploited a flaw in the code for the operating system could potentially change the outcomes on many machines at once.

    These problems are not insurmountable, though; computer scientists have long been making secure and reliable devices for critical systems, from wartime communications to airliner controls. “Electronics can be made to work very well,” says Ansolabehere. Neumann agrees. “We know how to do this stuff,” he says. “The research community can do a great deal.”

    Researchers have tackled the problems on two fronts: ensuring that each vote is recorded as intended and that the recorded votes are counted properly. Mercuri argues that each electronic machine should print out a paper chit that details all of your votes—a chit that you can examine for accuracy before dumping it in an old-fashioned ballot box. If votes are lost or misrecorded, the paper trail should reveal the problem. David Bear, a spokesperson for Diebold, notes that many machines already keep internal paper records of votes cast and that it “wouldn't take much modification to give a paper receipt to the voter.” But he says the additional costs and difficulties of maintaining printers and of keeping the receipts would negate one of the major advantages of electronic voting.

    In any case, Bear says, a paper trail is no panacea. Suppose there's a mismatch between the paper count and the electronic one. “It raises the interesting question of what the official vote is. Do you use the paper record or the electronic one?”

    And voter-verified ballots don't ensure that the votes are counted properly. “[Voter-verified paper trails] are easy to understand, but the sense of security they give is a little deceptive,” says David Chaum, an independent cryptographer and electronic-commerce pioneer who is based in Los Angeles, California. “It can verify that a vote is recorded in the booth as intended, but there's no assurance that votes so recorded are tallied.”


    Chaum and other researchers have come up with cryptography-based voting schemes that allow a voter to see both that the machine has recorded the vote as cast and, later, that the vote was tallied properly. In Chaum's rather intricate scheme, the voting machine splits the image of the electronically marked ballot into two pieces and prints them out on paper. Each piece alone looks like gobbledygook, but when superposed using a projector, they form an image of the ballot the voter is casting. The machine also stores a multiply encrypted version of the whole ballot on each ticket as a small set of symbols that can be read only after multiple rounds of decryption with a mathematical key. The machine destroys one of the pieces of paper and keeps the other after the machine has made a digital image of it. The image goes to election officials, who decrypt the ballots in steps, shuffling them between each step to ensure that nobody knows which ballot came from which voter.

    After the election, officials post digital images that each voter can compare with his or her paper receipt to check that the vote was recorded. Officials also post copies of all the fully decrypted ballots and selected pictures of ballot images before and after one layer of encryption so that anybody can check that all the votes were counted and that the decryption was properly carried out. Any meddling with the system, such as digitally altering or failing to decrypt a voter's ballot, would tend to show up as a failed check at some level of the process. Indeed, all the checks ensure that election officials have a vanishingly small probability of spoofing the system and preventing votes from being counted or inserting their own fake ballots into the mix. “This solves the problem totally,” says Chaum. On the other hand, the scheme is so complicated that few voters would bother to verify their votes; it also requires new equipment.
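    Chaum's actual protocol involves layered encryption and ballot mixing, but the split-image step rests on a simple secret-sharing idea: two shares that individually look like random noise yet reveal the ballot when combined. A minimal sketch in Python, using XOR on a bit pattern as a stand-in for the optical superposition (the function names are illustrative, not from Chaum's system):

```python
import secrets

def split_ballot(bits):
    """Split a ballot bit pattern into two shares; each share alone is uniform noise."""
    share1 = [secrets.randbelow(2) for _ in bits]
    share2 = [a ^ b for a, b in zip(bits, share1)]
    return share1, share2

def superpose(share1, share2):
    """Recombine the two shares (XOR here standing in for optical superposition)."""
    return [a ^ b for a, b in zip(share1, share2)]

ballot = [1, 0, 1, 1, 0, 0, 1, 0]   # a hypothetical marked ballot
s1, s2 = split_ballot(ballot)
assert superpose(s1, s2) == ballot   # together, the shares reveal the ballot
```

    Because either share by itself is statistically independent of the ballot, the voter can keep one piece as a receipt without compromising ballot secrecy.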

    Neumann agrees that Chaum's scheme and similar systems have merit, but he says that any cryptographic scheme is “compromisable” if it's not implemented correctly. And, so far, electronic-voting-machine vendors don't instill Neumann with confidence. “These systems are not well designed,” he says.

    Nevertheless, the experts agree, electronic voting is here to stay, and improved machines will come along—although not by 2 November. “The security problems will eventually be resolved,” says Ansolabehere. Until then, he says, optical-scan systems, which use infrared beams to read paper ballots, offer a “good intermediate” between the ease of electronic tabulation and the security of traditional ballots.

    Of course, better voting alone won't ensure trouble-free elections. Problems with absentee and provisional ballots, voter registration and intimidation, and many other weak spots in the democratic process will remain. “Voting is a very interesting and complicated system, and you need to get all the details right,” Rivest says. On the eve of what could be the most contentious election in decades, the details seem guaranteed to harbor devils galore.


    How Strategists Design the Perfect Candidate

    1. Mark Buchanan*
    1. Mark Buchanan is a writer in Normandy, France.

    As campaigners increasingly apply science to snare the most voters, presidential races will get closer and closer

    This year the two major parties in the U.S. presidential race have spent nearly $500 million wooing voters. Yet for all their talk of polls, strategies, and spin control, many political scientists acknowledge that the “science” in their discipline often resembles a black art. What really sways the electorate? A candidate's record? Grandiose promises? Plain-folks likability? Fear? “The surprising reality,” says political scientist Larry Bartels of Princeton University, “is that we still understand relatively little about how presidential campaigns affect the vote.”

    Political analysts are on the case, though, tackling age-old problems with brute-force number crunching and even mathematics imported from theoretical physics. In the future, some researchers believe, elections will grow ever tighter as campaign strategists on both sides master the same scientific approach. If they are right, then—as in Florida in 2000—the detailed mechanics of voting will be increasingly pivotal in determining election outcomes (see p. 798), and exotic-sounding reforms may become mainstream. Meanwhile, researchers have reached a few tentative but salient conclusions.

    Myopia reigns. Some political scientists still cling to the idea that voters punish bad government, but that's not necessarily so. Earlier this year, for example, Bartels and Princeton colleague Christopher Achen examined American presidential elections over the past 50 years. They found that the average economic growth during an incumbent's time in office has no effect on his or her chance of reelection. “Voters tend to forget all about most previous experience with incumbents,” they concluded, “and vote solely on how they feel about the most recent months.” Overall competence counts for little; last-minute promises and efforts to manipulate public perceptions really determine whether a leader stays or goes.

    Pandering pays. In studying how campaigns craft their candidate's image and positions to appeal to a diverse public, simple mathematical models can yield intriguing insights. Political scientists often represent voters as abstract points in a “policy space.” The axes of the space might represent policies on economic issues, for example, or international affairs but might also include nonpolicy factors such as personality, religious views, and so on. Given the preferred positions of several voters (X, Y, and Z in the diagram), a politician can optimize his or her standing by trying to occupy a point equidistant from all three. Bill Clinton was a master at this sort of “triangulation.”
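    The geometry behind this "triangulation" is elementary: the one point equidistant from three non-collinear voter positions in a two-dimensional policy space is the circumcenter of the triangle they form. A toy sketch (the coordinates are hypothetical, and real campaign models optimize vote share rather than pure distance):

```python
def equidistant_point(x, y, z):
    """Circumcenter of the triangle xyz: the unique point equidistant
    from all three vertices (assumes the points are not collinear)."""
    (ax, ay), (bx, by), (cx, cy) = x, y, z
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

# Voters X, Y, Z at hypothetical positions in policy space.
print(equidistant_point((0, 0), (2, 0), (0, 2)))  # (1.0, 1.0)
```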

    Precision vagueness.

    If voters X, Y, and Z have different preferences, then a candidate should abandon a fixed, central position (the blue dot) and instead “waffle” more or less (colored contours) over an extended region of policy space to appeal to the largest number.


    But if that's the case, political scientists reasoned several decades ago, different parties scrambling to target the median voter should wind up with identical policies. Why don't they? Many academic political scientists say it's because parties care not only about winning elections but also about gaining the voters' approval of specific policies they prefer. “By moving toward the median voter's preferences, they increase their likelihood of winning,” says political scientist Oleg Smirnov of the University of Oregon, Eugene. But by moving away from the median voter toward their own preferred policy, they increase their chance of actually putting that policy in place if elected.

    Waffling works, too. At the University of California, San Diego, physicist David Meyer thinks the real story is more paradoxical. It seems obvious that if most voters prefer policy A to policy B, and B to another policy C, then they will also prefer A to C. As it turns out, however, that may not be true for a group of voters even if individuals have such “consistent” preferences themselves. The Marquis de Condorcet, an 18th-century French philosopher and mathematician, showed that preference “cycles” are quite possible. Consider, for example, three voters who, respectively, rank three policy alternatives in the following orders: A > B > C, C > A > B, and B > C > A. It is easy to see that two out of three will prefer A over B, B over C, and yet also C over A.
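    The three-voter cycle is easy to verify mechanically; a short Python check of the pairwise majorities, using the ballots given above:

```python
# Each tuple ranks the three policies from most to least preferred.
voters = [("A", "B", "C"), ("C", "A", "B"), ("B", "C", "A")]

def majority_prefers(x, y, ballots):
    """True if a strict majority of ballots ranks x above y."""
    return sum(b.index(x) < b.index(y) for b in ballots) > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    assert majority_prefers(x, y, voters)  # every pairwise majority holds: a cycle
```

    Two of the three voters rank A over B, two rank B over C, and two rank C over A, so the group's collective preference chases its own tail even though each individual ballot is perfectly consistent.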

    If cycles exist among the voting public, Meyer has found, then a candidate's best bet is not to occupy one point in policy space but to spread out over a region and, roughly speaking, be as inconsistent as the voters. Politicians seem to have figured that out by experience, Meyer says: “This is why it's so hard for us as voters to discern exactly for what they stand.”

    Meyer also speculates that increasingly sophisticated campaign science may make for closer races in the future. Bartels says that makes sense. “This seems especially plausible in presidential election campaigns,” he says, “since both major parties typically draw from roughly similar pools of candidates, spend roughly similar amounts of money, and adopt roughly similar campaign strategies.” Recent history seems to fit the pattern: The 2000 vote hung on results in just a few districts, and polls show that this year the candidates are again locked in a virtual dead heat.

    Closer elections could put the shortcomings of the current system under greater scrutiny. For example, voting experts agree that one serious problem with the U.S. presidential vote is that unpopular third-party candidates can potentially swing an election. That happened in 2000 when Ralph Nader took more votes from Gore than from Bush, and it could happen again this year. A “good” voting system would not let the voters' views on an “irrelevant” candidate influence the choice between relevant candidates.

    One way to avoid third-party-candidate interference would be to use “Condorcet's method,” under which voters would state their preferences between every pair of candidates (Bush-Kerry, Bush-Nader, and Kerry-Nader). The winner would be the candidate who wins both of his head-to-head competitions. This method might still yield a circular deadlock, but Eric Maskin of the Institute for Advanced Study in Princeton, New Jersey, and other economists argue that it would be better on the whole than the current system. Another of many alternatives is so-called approval voting, in which voters say which of the candidates they “approve” (more than one being allowed), and the candidate with the most approvals would win.
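    Both schemes are straightforward to tally. A sketch with an invented nine-voter electorate in which the plurality winner and the pairwise winner differ, showing how Condorcet's method neutralizes a spoiler (the ballots are hypothetical, not polling data):

```python
from collections import Counter

def condorcet_winner(candidates, ballots):
    """Candidate who beats every rival in a head-to-head majority,
    or None if a cycle or tie leaves no such candidate."""
    def beats(x, y):
        return sum(b.index(x) < b.index(y) for b in ballots) > len(ballots) / 2
    for c in candidates:
        if all(beats(c, rival) for rival in candidates if rival != c):
            return c
    return None

def approval_winner(approval_ballots):
    """Approval voting: each ballot is a set of approved candidates."""
    return Counter(c for ballot in approval_ballots for c in ballot).most_common(1)[0][0]

# Plurality elects Bush (4 first choices), but the Nader voters prefer
# Kerry, so Kerry wins both of his head-to-head contests.
ballots = ([("Bush", "Kerry", "Nader")] * 4
           + [("Kerry", "Nader", "Bush")] * 3
           + [("Nader", "Kerry", "Bush")] * 2)
plurality = Counter(b[0] for b in ballots).most_common(1)[0][0]   # "Bush"
pairwise = condorcet_winner(["Bush", "Kerry", "Nader"], ballots)  # "Kerry"
```

    Here Kerry beats Bush 5-4 and Nader 7-2 head to head, so the presence of the third-party candidate no longer decides the outcome.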

    Increasingly close elections might drive Americans to change the electoral college—the body of “electors” citizens select to choose a president on their behalf, in lieu of voting directly. “I don't see much chance of replacing the electoral college system with popular voting, as that would require a constitutional amendment,” Maskin says. But individual states “might move to something like Condorcet,” he suggests, “especially if third-party candidates like Ralph Nader continue to influence the outcomes in states like Florida and New Hampshire.”


    Measuring the Significance of a Scientist's Touch

    1. David Malakoff

    The observer effect is well known in many fields. But for plant scientists its existence, much less its magnitude, is a subject of debate

    Talk about a touchy subject. Three years ago, plant ecologists were nervously discussing new findings that showed that the mere act of touching a plant during field studies could significantly alter its growth rate and vulnerability to insects. If widespread, that “observer effect”—documented in a 2001 Ecology paper by James Cahill of the University of Alberta, Canada, and colleagues—threatened to undermine decades of painstaking study.

    This month, however, many plant ecologists are breathing more easily. In a pair of dueling papers published in the current issue of Ecology, teams led by Cahill and Svata Louda of the University of Nebraska, Lincoln, review a harvest of findings that show a real but extremely subtle observer effect. The teams still differ on whether biologists need to alter traditional, measurement-intensive methods to avoid problems. But findings that once “threatened to pull the rug out from under an entire field” now appear to require few radical changes in field studies, says Louda.

    Plant ecologists aren't the first scientists to ponder how their presence can influence studies. Physicists have long struggled with the “uncertainty principle” first posited by Germany's Werner Heisenberg in 1927, which states that the act of measuring one property of a subatomic particle, such as its position, can change another, such as its momentum. Field researchers observing birds and other animals have for centuries taken pains to avoid influencing their subjects—with mixed success. In the last decade, scientists have reported marring studies by unintentionally leading scent-following predators to bird nests, killing whole colonies of insects through overhandling, and even clipping too many toes off frogs targeted for mark-and-recapture studies.

    Although plant ecologists have long known that plants can respond to touch, insect bites, and even windy breezes by retooling leaf chemistry and stem architecture, they had reported relatively few observer effect problems in field studies. But when Cahill was completing his doctorate at the University of Pennsylvania in Philadelphia in the late 1990s, he noticed that insects were attacking his marked wild plants with unusual ferocity. “I began to wonder if the herbivory increased because the plants were being manipulated,” he recalls.

    To find out, Cahill and biologists Jeffrey Castelli and Brenda Casper marked 605 plants from six species growing in a dozen plots in an old hayfield. Setting aside half as controls, they visited the rest weekly, mimicking measurement-taking by stroking plants once from base to tip. After 8 weeks, one species of the stroked plants showed greater insect damage, whereas two others appeared to benefit. The three other species showed no significant differences between the stroked and untouched plants.

    Touch me not?

    Handling plants during field studies, such as this one in Pennsylvania, could skew findings.


    The team concluded that “the long-standing assumption that field researchers are ‘benign observers’ is fundamentally flawed.” Even subtle observer effects could have a major influence on studies, they argued, adding that it would be very difficult for researchers to predict how plants would respond to handling. To prevent problems, they urged researchers to set up more untouched control sites and to minimize measurements.

    The Herbivory Uncertainty Principle?

    Some researchers soon replicated the effect—which Cahill's team dubbed “The Herbivory Uncertainty Principle.” Alberta's David Hik, for instance, recorded more insect attacks on handled plants in alpine meadow and grassland study sites in western Canada, but not in a woodland site.

    Other researchers, however, couldn't replicate such results. Studying about 1400 individuals from 13 species in a Minnesota grassland, a team led by Stefan Schnitzer of the University of Wisconsin, Milwaukee, found “very little evidence to support” the herbivory principle, they reported in a 2002 Ecology Letters paper. “We question whether this phenomenon should be considered a ‘principle’ of plant ecology.” A nine-member team led by Kate Bradley, a doctoral student at Nebraska, reached a similar conclusion last year in a study published in Ecology. The group, which included Louda, found no significant visitation effect on 14 species located on grasslands in Minnesota, Nebraska, and South Carolina. Other variables, such as site characteristics or insect populations, were more important, they concluded.

    But Bradley's statistical methods were flawed, Cahill argues in the current issue of Ecology, and her data, when reanalyzed, “actually support our finding that visitation effects are real, although often subtle.” To address the issue, he advises his graduate students to think about establishing greater controls and to “minimize measurements.”

    Bradley and Louda fire back in the same issue. Cahill's “incorrect” reanalysis still shows visitation effects to be “uncommon and small,” they write. More than 80% of the species and more than 95% of sites studied so far show no herbivory effect, they argue. “Adding control plots probably isn't worth the investment,” says Louda. “I wouldn't tell my students to do it.”

    The latest exchange may leave young field scientists uncertain how to proceed. But both Cahill and Louda say that the debate has been useful. “Everybody knew [the observer effect] was an issue but pretty much ignored it,” says Cahill. “Now we're openly talking about it.”

    The exchange is also spurring new research into how, exactly, plants respond to handling. Ecologist Richard Niesenbaum of Muhlenberg College in Allentown, Pennsylvania, for instance, is documenting how visitation influences leaf chemistry and growth. He says “the mechanisms are so far pretty poorly understood.”


    China Takes Bold Steps Into Antarctic's Forbidding Interior

    1. Yang Jianxiang*
    1. Yang Jianxiang writes for China Features in Beijing.
    2. With reporting by Jeffrey Mervis.

    Increased spending and ambitious plans for exploration aim to strengthen China's foothold in the polar regions

    BEIJING—Covered by the East Antarctic Ice Sheet, the Gamburtsevs are probably the least explored, and most poorly understood, mountain range in the world. It's not for lack of scientific interest: According to climate models, the 600-km-long Gamburtsev range is the likely birthplace of the ice sheet that formed some 30 million years ago, and the mountains hold important clues about geological and climatic forces in the region that have shaped global change over the eons. But their inaccessibility—thanks to a 1000-meter-thick blanket of ice combined with the harshest weather on the planet—has allowed them to retain vital geological secrets such as their age, composition, and topography. Their very existence—a 3600-meter-high formation far from the edge of any tectonic plate—is a major mystery.

    This week a team of Chinese polar scientists set out on a journey to lift that veil of secrecy. It's the first step in a plan to build a permanent station at Dome A, the highest, driest, and coldest spot on the continent. Once the station is completed, in 2008, the scientists plan to outfit it with instruments that can make use of those inhospitable conditions to gaze into the distant universe, monitor the polar upper atmosphere, extract ice cores, and drill through the ice sheet to the underlying bedrock.

    Their 1200-kilometer trek inland from China's Zhongshan station on the continent's east coast is part of the country's broader commitment to polar research that includes a new Arctic station, an expansion of its two existing Antarctic stations, an upgraded research ship, new support facilities, and ultimately a new home for its premier Polar Research Institute of China (PRIC) in Shanghai. Chinese officials hope that an extra $64 million over the next 3 years, doubling the current annual polar science budget of roughly $20 million, will also lift the country into the major leagues of polar research in time for the high-profile International Polar Year (IPY) in 2007 (Science, 5 March, p. 1458).

    News of these ambitious plans, which has dribbled out over the past several months, has generated a buzz in the polar research community. “It's like manna from heaven,” says Christopher Rapley, director of the British Antarctic Survey and chair of the IPY planning committee, who learned about the details earlier this month at an international conference in Beijing marking the 20th anniversary of China's first scientific foray into the polar regions. “It's the biggest financial commitment to IPY to date from a government, and I hope that it will stimulate other countries to do the same.”

    Crunch time.

    Zhang Zhanhai, director of the Polar Research Institute of China, with the ice-capable Xue Long during an Arctic expedition.


    Rightful interests

    Like most government-supported research in China, the value of polar research is seen in geopolitical as well as scientific terms. This summer, for example, President Hu Jintao hailed the opening of China's first permanent Arctic research station in Svalbard, Norway, by exclaiming that it “would open important windows to scientific exchange with other countries” as well as help discover “natural secrets that will benefit both current and future generations.” The country's 5-year plan for polar research justifies the station, called Yellow River, as a way “to enhance China's influence on issues concerning Arctic research and protect its rightful interests.”

    Those interests are quite broad. At the Beijing conference, PRIC Director Zhang Zhanhai described three overarching research themes that will drive China's efforts for the rest of the decade. One involves the Antarctic continent, including the construction of a Dome A station and installation of environmental monitoring systems throughout the vicinity, drilling into the Gamburtsevs, and exploring the variability of the coupled (air-sea-ice) climate system along the Amery Ice Shelf and throughout the Southern Ocean. A second theme focuses on exploring various upper atmospheric phenomena, with scientists combining observations taken at the Zhongshan, Dome A, and Yellow River stations. The third examines the factors contributing to rapid climate change in the Arctic, making use of the new Svalbard station. Each activity is connected to an existing global polar initiative, says Zhang, who adds that China welcomes foreign collaborators on any and all projects.

    The Gamburtsev drilling project is probably China's best bet to carve out a niche for itself in the polar regions. Most of the country's scientific work to date has been derivative, notes Dong Zhaoqian, the former director of PRIC, who led China's first Antarctic expedition in 1984. But the Gamburtsevs are virgin territory. Obtaining samples and doing on-the-ground measurements would be a real coup, say geologists. “It would be a big scientific advance,” says Slawek Tulaczyk of the University of California, Santa Cruz, an expert on subglacial drilling.

    The Chinese team faces formidable challenges, to be sure, beginning with the logistics of setting up and maintaining a station to support the multiyear effort. In contrast to the air support that's available to scientists working at the U.S. Amundsen-Scott Station at the South Pole, the Chinese team must lug all their equipment overland. And keeping the hydraulics and electronic equipment in working order during the brutal Antarctic winter won't be easy. But the drilling should be relatively straightforward, says Tulaczyk, aided by the technical advantage of going from cold ice to bedrock without passing through an intervening layer of water. “They're playing it smart,” he adds, by picking a place where the ice is relatively thin.

    Spanning the globe.

    China's new Yellow River station in Svalbard, Norway, will bolster studies of polar lights and other upper atmospheric phenomena being conducted at Zhongshan station in Antarctica.


    A renovated polar research ship would also enhance China's ability to conduct all kinds of climate change studies. The Snow Dragon (Xue Long in Chinese), purchased from Russia in 1993, is China's first ship with ice-breaking capabilities. The spacious (167 meters long) ship is a real workhorse of the country's polar program, serving as both a supply vessel and a research platform. Xiaojun Yuan, a research scientist at Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York, for example, has piggybacked on the ship's biannual visits to Zhongshan and China's Great Wall station on King George Island to take measurements of salinity and temperature in an understudied portion of the Southern Ocean. She hopes to get one more set of data this season from the project, funded jointly by China's Arctic and Antarctic Administration and the U.S. National Science Foundation, before the ship goes into dry dock. “It's filling a gap in the global picture of interannual sea variability,” she says.

    But its equipment dates from the 1980s, and the $20 million upgrade would give it a modern navigational system, more lab space, and the ability to accommodate two helicopters. At the same time, the government has pledged $20 million to provide a dedicated berth and warehouse facilities for the ship, and there is talk of eventually moving PRIC to the new site, too. There's also an effort to drum up support for a second research vessel, a domestically built ship that would be smaller and better suited to work in the open ocean.

    The additional funding is also expected to draw scientists into the field, a step that Rapley and others say is necessary if China hopes to take advantage of its improved scientific infrastructure. “This gives them the opportunity to really ramp up their capacity,” he says. Jihong Cole-Dai, a geochemist at South Dakota State University who has worked with PRIC scientists on Antarctic ice cores, agrees that “they need more people” to broaden their work at the poles. But he predicts that “if the decision [to make polar research a priority] has been made, then the resources will be provided.”

    The increased support can't come soon enough for Chinese polar scientists. “Sometimes when foreign researchers talk about joint operations, we just shy away because we're unable to raise our share of the funds,” says one scientist who requested anonymity. “But now I'm feeling more optimistic.”


    Getting Inside Your Head

    1. Elizabeth Pennisi

    Drawing on the latest technology, Susan Herring is revealing how the push and pull of muscles and other forces shape skulls

    Sue Herring ponders weighty matters. To be more specific, this functional morphologist at the University of Washington (UW), Seattle, probes how skull bones respond to weight, pressure, and other forces. During her 30-year career, she's brought increasingly sophisticated approaches to bear on this question, achieving an ever more precise accounting of how physical factors shape the overall skull. While chewing exerts one such force, there are many others that compress, tug, and push on skull bones.

    Unlike arm and leg bones, which are relatively straight and simple in design, the skull's bones curve and twist so as to mold around the developing brain and set up the scaffolding that muscles need for chewing, smiling, or snarling. Genes and proteins only go so far in shaping a skull, according to Herring. She and others have found that the forces inside the skull, created by the expanding brain, the tongue, or head muscles, also influence skull bone growth and continue to modify these bones throughout life. “We think of skeletons as being permanent, but they are really dynamic,” she says.

    Herring's experiments have corrected misconceptions about the jaw and other parts of the skull that she and many others have long had—and in doing so, her data have found practical use. She has recently revised the traditional view of the nasal septum, the wall of cartilage and bone between the nostrils, and inspired new thoughts about the design of prosthetic jaws. Her work “has great applications in terms of dentistry and medicine,” says Anthony Russell, an evolutionary morphologist at the University of Calgary, Canada.

    Many little pigs

    Herring's fascination with skulls began as a graduate student at the University of Chicago during the 1960s after she had studied the distinctive head shapes of the warthog and other pigs. She decided to focus on the source of the differences. At first, the work “wasn't very satisfactory,” she recalls. Herring dissected and measured jaw and facial muscles and compared her pig skulls to those of fossil pigs, getting ideas about how muscles had shaped the skulls. But she had no way to test whether her ideas were correct.

    She persevered, focusing on a miniaturized pig breed, which gave her insights into the human skull. Both pigs and people, she notes, are omnivores, and the chewing forces that shape their jaws and buffer their skulls and brains are similar.

    In 1971, Herring moved to another Chicago school, the University of Illinois, where she began measuring the electrical activity of moving jaw muscles. At the time, she recalls, “I didn't really understand the way muscles move the jaw” for biting or for changing facial expressions. When animals chew, they move their jaws side to side to enable teeth to grind and cut food. She and others had long assumed that this sideward motion was achieved by one set of muscles pulling left, followed by another set that pulled right, in reciprocating pulses. But her experiments showed that chewing requires circular movements of the jaw. Muscles controlling one side of the jaw, say, the left side, pull the jaw back, whereas on the other side of the jaw, different muscles pull the jaw forward. Then the sides of the jaw move in the reverse direction. “It's like turning a steering wheel,” Herring explains.

    Even more important was her discovery that a single jaw muscle can have several regions of activity. Electrodes “gave a different recording when you put them in a slightly different place,” she says. That meant the brain viewed what looked like one muscle as several muscles and could make specific areas contract at different times.

    Always on the prowl for better technologies, Herring tried several new methods of monitoring the muscular forces acting on the skull. For a while, she turned to strain gauges, devices typically used to measure the forces buffeting airplane wings in wind tunnels. But they could not be used on soft or wet tissues, such as cartilage. Motion sensors called differential variable reluctance transducers solved that problem and became part of Herring's technology portfolio about a decade ago. Attached to tissue by a barb, these devices, originally used to measure structural strain in buildings, contain magnetic coils that enabled Herring to measure how much a tissue deformed under stress. By 1998, she was using miniaturized flat piezoelectric pressure transducers to measure compression against a bony surface. And most recently, she and her colleagues have begun implanting crystals that emit and receive ultrasonic signals. The resulting data enable Herring and her colleagues to track changes in muscle shape in three dimensions.

    Over the past 5 years, Herring and her colleagues have used these approaches to measure small, specific forces acting on various parts of the skull—and track them over a relatively long period. Most other functional morphologists typically look at animals at just one point in time. In contrast, Herring starts tracking growth in piglets, building a coherent picture of how stress and strain influence skull bones. “She's established a correlation between mechanical events and what the bone cells [are doing],” says Johan van Leeuwen of Wageningen University in the Netherlands.

    Skull mechanics

    Herring's investigation of the nasal septum reflects her team's questioning approach. For decades many morphologists thought that the nasal septum was a strut that supported the nose—or in the pig, the front part of the snout—against deformation during chewing. Drawing upon studies of piglet skulls, Herring, UW oral biologist Tracy Popowics, and graduate student Rosamund Wealthall now challenge this idea. “The septum is much, much less stiff than the bones,” says Herring. “So if it's serving as a strut, it's not a very good one.”


    Sue Herring overcame technical challenges to understand skull mechanics.


    The group initially showed this weakness by cutting out and analyzing small rectangles of the porcine septum and nasofrontal suture—the bone it is supposed to support. By determining at what point the two skeletal elements ceased to hold up against a given stress, they showed that the septum didn't do the job expected. It deformed when forces equivalent to chewing were applied and buckled under strain equivalent to clenching, they reported in August at the 7th International Congress of Vertebrate Morphology in Boca Raton, Florida.

    Herring also attached transducers inside snouts of young, anesthetized pigs and stimulated each pig's jaw muscles, causing them to contract as if biting. The transducers confirmed the earlier deformation results. During chewing, the septum bends as if squished from the front, says Herring. She concludes that the nasofrontal suture provides its own support.

    Another critical tissue that has drawn Herring's attention is the periosteum, a membrane that covers all bones. It works like a sculptor, producing cells that make or resorb bone until a bone is the right shape and size. Depending on where along a bone it is located, the periosteum produces different proportions of these builders and destroyers, Herring reported in Florida. Contracting muscles that attach to and tug the periosteum can stimulate bone-building. In contrast, compression, caused perhaps by tissue pushing against the periosteum, leads to bone loss.

    The structure of sutures, the seams that join bone to bone, is also affected by physical forces, Herring's work shows. Their straight seams develop interlaced fingers when muscle contractions jam them together. Herring has found that internal pressure from the expanding brain, which threatens to push pieces of bone apart, causes these “fingers” to develop in sutures.

    While other morphologists acknowledge the importance of the forces that shape a skull, colleagues credit Herring for being one of the few to actually measure what is going on and to put the data together into an increasingly coherent picture. “This is really a valuable synthesis,” says Elizabeth Brainerd, a functional morphologist at the University of Massachusetts, Amherst.

    Based on this synthesis, one group is trying to come up with a remedy for temporomandibular joint disorder, a painful problem created by malfunction of the jaw joints. Although jaw prostheses can correct this disorder, currently available devices tend to break, says Herring. To remedy this, a group at the University of Michigan is using her data on the functioning of the jaw's joints to design replacement bones.

    Herring's studies have even caught the attention of those studying dinosaurs and ancient birds. They “throw new light on many things I see in fossils,” hinting at how muscles might have attached and caused skulls to look the way they do, says Andrzej Elzanowski, a paleontologist at the University of Wroclaw in Poland, who adds that Herring's research has given colleagues in “many, many” areas plenty to chew on.


    Temperature Rises for Devices That Turn Heat Into Electricity

    1. Robert F. Service

    Long-sought materials that can harness waste heat and revolutionize refrigeration are on track to become more than an engineer's dream

    Imagine throwing away 65% of every paycheck. Not an inviting prospect. But that's essentially what happens every time we turn on our cars, lights, and many other modern conveniences. Roughly two-thirds of the energy that is fed into these gizmos radiates away as heat without doing any useful work. In the United States alone, that's a whopping $265 billion a year worth of power that, “poof,” is just gone. But, thanks to a decade of steady progress in a once sleepy field of semiconductor engineering, that may soon change. Researchers around the globe are working to improve “thermoelectric” materials that convert waste heat to usable electricity. Such chips aren't efficient enough yet to be an economical power source. But after decades of stagnation, “this field is moving very fast right now,” says thermoelectric pioneer Mildred Dresselhaus of the Massachusetts Institute of Technology (MIT).

    If this progress continues, it could pay big dividends by allowing everything from power plants to cars to turn some of their waste heat into power. “If you can save 10% using thermoelectrics for waste heat recovery, it means a lot,” says Gang Chen, a mechanical engineer at MIT. Thermoelectrics also operate in reverse, using electricity to cool things down or heat them up. Thermoelectric chips are already used to cool everything from light-emitting diodes and lasers to picnic coolers, and researchers are pushing hard to create solid-state home refrigerators that will be free of noisy, bulky pumps and ozone-depleting gases. Many researchers hope that marrying thermoelectrics with nanotechnology will spark another round of dramatic improvements. “I think there is between 5 and 10 years of very intense research that is going to happen,” says R. Ramesh, a materials scientist at the University of California, Berkeley.

    Fits and starts

    That's just a blink of an eye for a field that has already been around for nearly 200 years. In 1821, an Estonian-born physicist named Thomas Johann Seebeck discovered that when he joined two dissimilar conductors in a loop or circuit and heated one, it caused a compass needle to deflect. (Researchers later determined that the experiment produced an electric voltage that in turn created a magnetic field that tweaked the needle.) In 1834, French physicist Jean Peltier found the reverse was also true: If you fed enough electricity to a circuit composed of two different conductors, you could push electrons to carry heat from one to the other, causing the first conductor to cool while the other warmed. In the early 1900s, other investigators discovered that the key to making efficient thermoelectric materials is to boost their electrical conductivity while keeping their thermal conductivity as low as possible. That allows power to move easily through the device while maintaining the temperature difference between the junctions necessary to produce the effect. These properties were later incorporated into the thermoelectric figure of merit known as ZT, which researchers use to compare different thermoelectrics much as baseball fans track ERAs to compare pitchers. In particular, ZT depends on several factors: a material's thermopower (how much voltage is created when a temperature gradient is put across it), its electrical and thermal conductivity, and the temperature.
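    The relationship among those factors can be written as a single formula, ZT = S²σT/κ, where S is the thermopower (Seebeck coefficient), σ the electrical conductivity, κ the thermal conductivity, and T the absolute temperature. As a rough illustration (the material values below are assumed, textbook-style numbers for bulk bismuth telluride near room temperature, not measurements from any study in this article):

```python
def figure_of_merit(seebeck_v_per_k, elec_cond_s_per_m, therm_cond_w_per_mk, temp_k):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck_v_per_k**2 * elec_cond_s_per_m * temp_k / therm_cond_w_per_mk

# Illustrative, assumed values for bulk bismuth telluride at room temperature:
S = 200e-6      # thermopower, volts per kelvin
sigma = 1.0e5   # electrical conductivity, siemens per meter
kappa = 1.5     # thermal conductivity, watts per meter-kelvin
T = 300         # temperature, kelvin

zt = figure_of_merit(S, sigma, kappa, T)
print(f"ZT = {zt:.2f}")  # prints ZT = 0.80
```

A result near 1 matches the ZT of roughly 1 quoted below for bismuth telluride; note how halving κ, with everything else fixed, would double ZT, which is why so much of the work described here focuses on blocking heat flow.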

    Cool chip.

    Prototype “superlattice” device, made from 1000 semiconducting sheets, can generate power or pump heat.


    In the early 20th century, researchers investigated all sorts of combinations of metals for their thermoelectric potential. To their frustration, they found that in metals the two kinds of conductivity are linked: Trim the thermal conductivity, and the electrical conductivity drops as well. By the 1950s, however, researchers had shown that by engineering different semiconductor alloys they could control the thermal and electrical conductivity of their materials separately.

    That was good news for would-be devicemakers. For makers of thermoelectric generators, it held out hopes of simply heating up a material and sitting back as it produced a voltage that could drive a device or charge a battery. For refrigeration experts, it held the prospect of creating solid-state coolers that worked when plugged into a standard outlet.

    Hopes rode high in the 1950s and '60s that researchers would be able to create thermoelectrics that generate large amounts of power. And ZTs rose from a middling 0.2 or 0.3 to about 1 for materials such as bismuth telluride. Unfortunately, despite the development of thermoelectric generators for spacecraft that use heat generated by radioactive elements to produce a trickle of electricity, practical applications needed higher ZTs than even semiconductors could provide. “From the 1960s to the 1990s there was not much development,” Chen says.

    But that story began to change in the early 1990s with the rise of nanotechnology. In the mid-1990s, Dresselhaus's group and another team led by physicist Gerald Mahan at Pennsylvania State University, University Park, independently determined that if thermoelectric materials could be made on the nanoscale, their ZT should shoot up dramatically, potentially even above 6. “For ZT, the expectation is if we could get above 5, it would enable a wide range of applications,” including solid-state refrigeration and power generation aboard cars, says Heiner Linke, a physicist at the University of Oregon, Eugene. “Now there is a new pathway to approach that.”

    Walking the path

    As with the previous era's focus on semiconductors, the ability to walk that path depends on independently controlling the electrical and thermal behaviors of a material. Dresselhaus and Mahan's simulations suggested that this control would come about by limiting at least one dimension of a thermoelectric material to the nanoscale. That means crafting thermoelectrics either out of stacks of thin planes or, better yet, out of long, thin wires. This approach, they found, would bring several benefits. First, confining electrons in one or more dimensions allows researchers to tune their electrical properties and make them more conductive. If controlled properly, that same confinement could also lower the material's thermal conductivity. In this case, vibrations of a crystalline lattice, called phonons, carry heat through a material. A critical measure is the so-called mean free path: the average distance that phonons, as well as electrons, travel in a material before scattering and changing direction. If researchers create materials in which one dimension is smaller than the mean free path of the phonons but larger than that of the electrons, then the electrons will zip through the material with few collisions, while the phonons will slow to a crawl, knocking into obstacles wherever they turn.
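    The selection rule in that last sentence can be sketched as a toy comparison of a structure's size against the two mean free paths. The numerical values here are hypothetical placeholders chosen only to show the three regimes, not data from the studies described:

```python
def carrier_regime(d_nm, electron_mfp_nm, phonon_mfp_nm):
    """Classify which carriers are boundary-scattered in a structure of size d_nm.

    A carrier is strongly boundary-scattered when the structure is smaller
    than its mean free path.
    """
    scatters_electrons = d_nm < electron_mfp_nm
    scatters_phonons = d_nm < phonon_mfp_nm
    if scatters_phonons and not scatters_electrons:
        return "blocks heat, passes charge (the regime that boosts ZT)"
    if scatters_phonons and scatters_electrons:
        return "blocks both (electrical conductivity suffers too)"
    return "blocks neither (bulk-like behavior)"

# Hypothetical mean free paths: ~10 nm for electrons, ~100 nm for phonons.
for d in (5, 50, 500):
    print(f"{d} nm film: {carrier_regime(d, electron_mfp_nm=10, phonon_mfp_nm=100)}")
```

Only the middle case, a dimension between the two mean free paths, delivers the desired combination: phonons rattle off the boundaries while electrons barely notice them.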

    Over the past couple of years, experimenters have begun making impressive strides toward harnessing those ideas. In the 11 October 2001 issue of Nature, for example, Rama Venkatasubramanian and his colleagues at the Research Triangle Institute in Research Triangle Park, North Carolina, reported creating a chip-based semiconductor sandwich thermoelectric with a ZT of 2.4, more than twice that of the commonly used bulk semiconductor bismuth telluride. The sandwich, made with computer-chip manufacturing techniques, consists of ultrathin layers of two alternating semiconductors, bismuth telluride and antimony telluride. The interfaces between these alternating layers, the researchers found, acted like additional speed bumps to slow the progression of phonons as they attempted to travel along with the electrons vertically through the sandwich. In their Nature paper, Venkatasubramanian's team reported crafting tiny computer chip-sized refrigerators capable of cooling a room-temperature heat source by as much as 32°C. Since then, Venkatasubramanian says that his team has data suggesting that they may be able to increase the ZT to over 3.5, although the work is not yet published. And for now, Venkatasubramanian says, his team is focusing on making working modules for cooling chips and other applications.

    On 27 September 2002, Ted Harman and colleagues at MIT's Lincoln Laboratory added their own new twist, reporting another type of layered semiconductor called a quantum dot superlattice in which they grew layers of nanometer-sized islands of an alloy of lead, selenium, and tellurium in layers of lead telluride. Those superlattices displayed a ZT of 2 at room temperature. But just a year later, Harman reported at the Materials Research Society meeting in Boston that his team had created a similar superlattice with a ZT of 3 when tested at 600 K. Not only do the islands help scatter phonons and therefore reduce the thermal conductivity of the material, but Harman says he suspects they also force electrons to have tightly controlled amounts of energy. As a result of that restriction, the quantum dot superlattices boast a high density of electrons at a particular energy level, a condition favorable to increasing the conductivity of the material.


    Heat drives electrons (inset) through a thermoelectric module to generate power.


    Dresselhaus says the new superlattices are impressive but have a long way to go before making it out of the lab. “This has to be consolidated and put into practice on a much higher level,” Dresselhaus says. Even more daunting, says Mercouri Kanatzidis, a chemist at Michigan State University in East Lansing, is turning such tiny devices into the bulk materials needed for large-scale applications such as generating power from a car's or factory's waste heat.

    But Kanatzidis's team has progressed at least partway to a solution. In the 6 February issue of Science (pp. 777, 818), they reported creating a bulk crystalline semiconductor made from silver, lead, antimony, and tellurium with a ZT of 2.2 when working at 800 K. Although that temperature is far too high to be of much use in household refrigerators, the material—or its future kin—may be of use in turning waste heat to power in, say, hot engines. Such materials “could be of very significant interest to the automotive industry,” says Mark Verbrugge, who directs the materials and process lab at GM's Research and Development Center in Warren, Michigan. Thermoelectrics aren't quite ready to break open the heat-recovery business, he says, but they are getting a lot closer: “We've seen some significant materials changes in the last few years.”

    The good news, Dresselhaus and others say, is that materials engineers and nanotechnologists have a few more tricks up their sleeves that could boost ZTs even higher. One, says physicist Terry Tritt of Clemson University in South Carolina, is simply to try more combinations of materials when making bulk semiconductor alloys. Researchers have spent decades testing two-member alloys such as bismuth telluride and antimony telluride, but they've only recently begun testing three- and four-member alloys, such as Kanatzidis's recent success story.

    Another approach now being hotly pursued, says Dresselhaus, is combining the success of the superlattices with nanowire semiconductors. Several groups around the globe, such as one led by Lars Samuelson at Lund University in Sweden, have recently begun growing nanowire superlattices: materials consisting of wires composed of alternating semiconductors abutting one another like boxcars in a train. As with the chip-style superlattices, the numerous interfaces between the different semiconductors should slow heat transport in the materials, while electrons should still zip through the wires, thereby giving them a high ZT. But so far that's been difficult to confirm, partly because hooking these wires up to tiny circuits to test their ZT is a considerable challenge.

    Going the next step and turning forests of these nanowires into devices has been even harder. One hurdle here, Venkatasubramanian points out, is that any matrix material that holds these nanowires must be as good a phonon blocker as the nanowire superlattices themselves so heat leaking from one side of the device to the other doesn't just bypass the nanowires and slip through the matrix. Although these challenges haven't been solved yet, groups around the globe are now bearing down and expect results soon. If they succeed, the results will undoubtedly act as a double espresso for this once sleepy field and perhaps awaken entirely new industries in energy recovery and solid-state refrigeration.
