News this Week

Science  08 May 1998:
Vol. 280, Issue 5365, pp. 823



    Recipe for a Kilogram

    David Kestenbaum

    By counting atoms and generating standard forces, physicists are closing in on an absolute standard for the kilogram, the one fundamental unit still tied to an artifact

    Richard Deslattes has a problem with the kilogram: It's a thing. Although the meter, the second, and the other four “base units” are all defined in terms of universal constants such as the speed of light or the quantum ticks of an atom, the official kilogram is just a hunk of platinum-iridium locked in a safe in France. It could be lost or damaged. It is probably gaining weight. It is definitely provincial. “If we wanted to communicate our knowledge of the world to some extraterrestrial group, we could encode everything in binary, except the kilogram,” says Deslattes. As for the mass standard, “we'd have to throw it.”

    So partly as a matter of principle, and partly to stay a step ahead of the world's requirements for a stable measure of mass, Deslattes wants to change the standard. The physicist at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland, and his colleagues in the standards community are trying to replace the lump of metal with a simple number that would specify exactly the same mass. They hope to define the kilogram as the mass of a specific number of atoms—say, 10^25-odd atoms of silicon—or as a weight equivalent to the force generated by a wire carrying a standard current wrapped around a magnet. The result would be a universal kilogram standard that could be recreated anywhere and that would be guaranteed by the laws of physics.

    Beating out the original won't be easy. Any replacement will have to vary by less than one part in 10^8, a level of precision that will force scientists to worry about effects as small as missing atoms and the gravitational tug of the tides. But at the upcoming Conference on Precision Electromagnetic Measurements (CPEM) in Washington, D.C., this July, physicists plan to announce that they are on the verge of breaking into that last decimal place. If they can push a little further, the old artifact may be headed for the scrap heap.

    By any standards, the kilogram (also known as “Le Grand K”) has held up remarkably well during its 100 years at the International Bureau of Weights and Measures (BIPM) in Sèvres, France. In 1989 scientists at the BIPM removed the kilogram from its triple bell jar for only the third time, weighed it against some copies, and concluded that its mass was drifting by less than 5 millionths of a gram per year. The metal artifact sits in “the ordinary humid air of the suburbs of Paris,” which has all kinds of contaminants that could build up on its surface, explains BIPM director Terry Quinn. Official policy is to clean the kilogram before using it to calibrate any copies (Science, 12 May 1995, p. 804), but its weight still wanders. “It's amazing it's as stable as it is,” Quinn says.

    The standards community could have an absolute guarantee of stability, however, if it could peg the macroscopic kilogram to unchanging microscopic quantities. There are two camps: The “atom counters” (as Deslattes calls them) would link it to the unchanging atomic mass of a specific element (times a very large number). The “force measurers” would rely on electrical units of voltage and resistance, which ultimately rest on immutable numbers such as Planck's constant, which describes the spacing of the electron orbitals in an atom. Both techniques confront weighty problems. “The standards racket is tough,” says Deslattes, who is widely regarded as the first atom counter.

    Atom by atom

    Establishing a standard based on atom counting sounds simple: Just figure out how many atoms are in a known mass of a particular material. Atom counters approach the problem roughly like a geometry student estimating how many gum balls are in a gum-ball machine. From the size of the gum balls and of the glass sphere that houses them, it's possible to get a rough idea of how many fit inside. In the atomic version, the sphere is solid and made of crystal silicon whose atoms are arranged in a regular cubic lattice. Scientists measure the size of the sphere and the distance between the atoms in the lattice.

    That's enough information to count the atoms in the sphere but not enough to give a cookbook recipe for constructing another one that will have exactly the same mass. Silicon comes in various isotopes, which have slightly different masses, so to completely describe one of these spheres from the atom up, scientists also have to know its exact isotope concentrations—the proportion of its atoms that have 15 neutrons instead of the usual 14, for instance.

    Subtle details like that have made atom counting an international enterprise, as each step requires equipment and expertise available in only a few places. The spheres begin as ultrapure silicon rods manufactured by companies in Germany or Japan. Then Australia's National Measurement Laboratory in Sydney polishes the rods into 93-millimeter-diameter spheres, perfectly round to one part per million. Standards labs in Germany, Italy, and Japan each take a sphere home and measure its mass relative to the kilogram, its volume, and, using x-rays, the distance between atoms. Bits of silicon also go to the Institute for Reference Materials and Measurement (IRMM) in Geel, Belgium, where researchers vaporize them and sift out the isotope concentrations by mass spectrometry. Each component “requires an absolute, deep, exquisite knowledge of measurement science,” says IRMM's Paul De Bièvre.
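
    The gum-ball arithmetic can be sketched in a few lines. The lattice constant, the eight atoms per diamond-cubic cell, and silicon's average molar mass below are assumed textbook values, not the labs' measured ones; only the 93-millimeter sphere diameter comes from the text:

```python
import math

# Back-of-the-envelope atom count for a 93-mm silicon sphere.
# Assumed values (not the labs' measurements): silicon's diamond-cubic
# unit cell holds 8 atoms with lattice constant a ~ 0.54310 nm, and
# the mean molar mass of natural silicon is ~ 28.0855 g/mol.
A_LATTICE = 0.54310e-9       # m, cubic lattice constant (assumed)
ATOMS_PER_CELL = 8           # diamond-cubic structure
DIAMETER = 93.0e-3           # m, sphere diameter from the text
MOLAR_MASS = 28.0855e-3      # kg/mol, natural isotope mix (assumed)
AVOGADRO = 6.0221e23         # mol^-1, for the mass estimate

volume = (4.0 / 3.0) * math.pi * (DIAMETER / 2.0) ** 3
n_atoms = ATOMS_PER_CELL * volume / A_LATTICE ** 3
mass = n_atoms / AVOGADRO * MOLAR_MASS

print(f"atoms in sphere = {n_atoms:.3e}")   # roughly 2.1e25
print(f"mass = {mass:.3f} kg")              # roughly 1 kg
```

    The count lands at roughly 2 x 10^25 atoms, which is why a 93-millimeter silicon sphere weighs in at just about a kilogram.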

    Over the years that understanding deepened, and the error bars shrank. But in 1994 the field hit a wall. Researchers from the labs in Japan, Germany, and Italy came together at an international conference to compare their results. The labs converted their silicon atom counts into a common currency—the Avogadro constant, or the number of carbon-12 atoms in 0.012 kilograms. Their values were miles apart. “The Japanese number was very different from the two Italian and German numbers,” recalls Giovanni Mana of Italy's metrology lab, the IMGC in Torino. The difference was some 10^18 atoms, or three parts in 10^6—10 times what the estimated uncertainties would allow.
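
    The common-currency conversion is, in essence, the x-ray crystal density method: the Avogadro constant follows from the atoms per unit cell, the molar mass, the density, and the cell volume. A minimal sketch with assumed textbook values for silicon, not the labs' measured numbers:

```python
# X-ray crystal density route to the Avogadro constant:
# N_A = n * M / (rho * a^3), with n = 8 atoms per cubic cell.
# All three inputs below are assumed textbook values.
RHO = 2329.0             # kg/m^3, silicon density (assumed)
MOLAR_MASS = 28.0855e-3  # kg/mol, natural isotope mix (assumed)
A_LATTICE = 0.54310e-9   # m, lattice constant (assumed)

n_avogadro = 8 * MOLAR_MASS / (RHO * A_LATTICE ** 3)
print(f"N_A = {n_avogadro:.4e} per mol")

# A three-parts-in-10^6 disagreement between labs corresponds to:
print(f"spread = {3e-6 * n_avogadro:.1e} atoms per mole")
```

    The spread works out to just under 2 x 10^18 atoms per mole, the size of the gap the labs were staring at.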

    So the groups retraced their steps, looking for something that could have tripped them up. They swapped bits of silicon to compare lattice spacings, rechecked the isotope measurements, and reweighed their spheres, but couldn't find a hitch. “It was torturous,” Deslattes recalls. Last year the researchers gave up and tried a different tack. Maybe, they thought, the problem lay not with the scales or mass spectrometers, but inside the lattice itself. Maybe the lattice had holes.

    Deslattes tested the hole hypothesis recently, using a high-resolution x-ray technique to see inside the silicon. To his delight, he found what appeared to be small voids a few micrometers wide in some of the rods, where trillions upon trillions of atoms were missing. “It's as if someone had taken a bite out of the crystal and closed it up,” he says. Deslattes thinks the voids could have been formed by gas bubbles trapped in the silicon during the manufacturing process. The density of voids in the Japanese silicon, Deslattes says, is enough to explain the discrepancy. “It's premature to declare victory,” he says, “but when I saw those images, I actually let out a little whoop! For a very old man, that's really something.”
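
    A rough feel for the numbers involved in the void hypothesis, using illustrative assumptions (the 1-micrometer void radius and the three-parts-in-10^6 deficit stand in for Deslattes's actual measurements):

```python
import math

# How many atoms does one "few micrometers wide" void swallow, and
# how many such voids would explain a 3-parts-in-10^6 mass deficit
# in a 93-mm sphere?  Illustrative numbers, not Deslattes's data.
A_LATTICE = 0.54310e-9                       # m (assumed)
VOID_RADIUS = 1.0e-6                         # m (assumed)
SPHERE_VOL = (4 / 3) * math.pi * (93e-3 / 2) ** 3

void_vol = (4 / 3) * math.pi * VOID_RADIUS ** 3
atoms_per_void = 8 * void_vol / A_LATTICE ** 3
missing_vol = 3e-6 * SPHERE_VOL              # the observed discrepancy

print(f"atoms lost per void = {atoms_per_void:.1e}")
print(f"voids needed = {missing_vol / void_vol:.1e}")
```

    Each micrometer-scale void accounts for on the order of 10^11 missing atoms, so it takes hundreds of millions of them to shave three parts in a million off the sphere's mass.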

    Mana is also excited by the news: “If Deslattes's results are correct, we will be put on solid ground and we can [explain] the anomalous results.” Michael Gläser at the German standards lab, the PTB in Braunschweig, agrees. “If you could measure the density of holes,” he says, then that might bring the accuracy of the method to four parts in 10^7, just a factor of 10 away from mounting a challenge to the kilogram artifact.

    Deslattes thinks it might be possible to measure the hole density by diffusing copper into the holes and then separating out and weighing the copper. Peter Becker, who leads the atom-counting effort at the PTB, is also looking into a technique that would use neutrons to probe and gauge the size of the holes. If the holes can be conquered, the next step would be to improve the measurement of the isotope abundances. “It's the [next] bottleneck,” says Kan Nakayama, a researcher at Japan's National Research Lab of Metrology (NRLM).

    Force field

    Although the atom counters appear to be back on track, the force measurers are also inching forward. Their goal is to find a way to reliably manufacture an electromagnetic force that exactly balances the force of gravity on a kilogram. Then that electromagnetic force could be used instead of the metal standard. Bryan Kibble, a physicist at the National Physical Laboratory (NPL) in Teddington, U.K., first proposed the modern “Watt balance” in the mid-1970s. “It's been a rock on my back for the past 15 years,” he confides. “The work is very slow and painstaking.”

    Fortunately, Kibble's group has some company. At NIST, a group led by Edwin Williams recently finished new measurements with its own Watt balance. The device is a full two stories tall and sits in an isolated wooden building to minimize vibrations and electromagnetic disturbances. Like a conventional scale, the balance has two pans and tips toward the one with the heavier contents. But instead of being hung from either end of a beam, the pans are essentially suspended from the ends of a flat cable running over a half-meter pulley. The pulley allows the pans to move up and down without moving sideways as well, as they do in a conventional balance.

    Hanging below each pan from rigid rods is a wire coil, which fits around a cylindrical magnet that is fixed to the base. Running a current through a coil generates an electromagnetic force, which drives it up or down the magnet, pulling the pan along with it. When a kilogram of gold (a copy of the official one) is placed on one of the pans, the currents are adjusted to keep it from tipping. Because gravity—and hence the weight of the gold kilogram—changes with the tides, the researchers have a state-of-the-art gravimeter nearby to measure the local gravitational tug. Folding all these numbers together, the researchers can measure the kilogram's mass in terms of standard units of voltage and current.
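
    The trick behind folding those numbers together is that in Kibble's scheme the balance is run in two modes: a weighing mode, where the coil current balances the weight (m·g = BL·I), and a moving mode, where the coil is driven at a known velocity and the induced voltage is measured (U = BL·v). The hard-to-characterize geometric factor BL cancels, leaving m = U·I / (g·v). A sketch with made-up illustrative numbers:

```python
# Watt-balance principle: weighing mode gives m*g = (B*L)*I, moving
# mode gives U = (B*L)*v, so the geometry factor B*L cancels and
# m = U * I / (g * v).  All numbers here are illustrative only.
G_LOCAL = 9.80101    # m/s^2, local gravity from the gravimeter (assumed)
BL = 500.0           # T*m, geometry factor (drops out of the result)
VELOCITY = 2.0e-3    # m/s, coil speed in moving mode (assumed)

current = 1.0 * G_LOCAL / BL     # A, current that balances 1 kg
voltage = BL * VELOCITY          # V, induced in moving mode
mass = voltage * current / (G_LOCAL * VELOCITY)
print(f"recovered mass = {mass:.6f} kg")
```

    Note that changing BL changes both the balancing current and the induced voltage, but the recovered mass is untouched; only voltage, current, gravity, and velocity need to be known absolutely.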

    Understanding the nuances of the balance has taken years. Just recently the group realized that the kilogram sat in a cold spot and was probably gaining a bit of weight from moisture condensing from the air. They solved that problem, but last September the kilogram weighed in a little on the heavy side for a couple of months before returning to its old value. “We still don't know exactly what fixed it,” Williams says, shrugging. “Time seemed to help.”

    After years of tinkering, the NIST group now thinks its balance could be used to monitor fluctuations in an object's mass at the level of 1.5 parts in 10^7, a two-fold improvement over Kibble's earlier work. The real test is to compare their definition of the kilogram with the NPL experiment. The NIST group won't release final numbers until the CPEM conference, but Williams says the news is good—the two experiments agree. His team is now building a new balance that will sit in a vacuum, which they hope will eliminate convection currents that could disturb the balance and should also keep variations in air pressure from throwing off the laser interferometer that measures the position of the balance pans. In England, Kibble is already running his balance in a vacuum chamber and thinks he can attain an accuracy twice as good as NIST's.

    Indeed, physicists from both camps think they may reach an accuracy of one in 10^8 in the next decade, which would put them within shooting range of the kilogram artifact. And several variations of the two approaches are in the works. The Swiss Federal Office of Metrology is currently working on a scaled-down version of the Watt balance that would be easy to transport, and a group at the NRLM is hoping to replace the Watt balance with a magnetic levitation system.

    On the atom-counting front, Gläser is also working on a new method that would send a stream of gold atoms into a box, counting them like beans as they pile up. By giving the atoms an electric charge and monitoring the current they carry, he says, it's possible to get a good head count. After a week or so 10 grams would accumulate—enough to weigh. Gläser thinks his approach could rival the others in a decade or so. Other scientists are less optimistic, but encouraging. “It's going to be very challenging,” says the IRMM's De Bièvre, “but probably fruitful.”
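
    Gläser's head count can be sketched directly: if each gold ion is singly charged, the integrated current divided by the elementary charge gives the number of atoms deposited. The week-long timescale and 10-gram target come from the text; the physical constants are assumed textbook values:

```python
# Ion-accumulation head count: N = (integrated current) / e for
# singly charged gold ions, and mass = N * (molar mass / N_A).
# Constants below are assumed textbook values.
E_CHARGE = 1.60218e-19     # C, elementary charge
MOLAR_AU = 196.967e-3      # kg/mol, gold
AVOGADRO = 6.0221e23       # mol^-1
WEEK = 7 * 24 * 3600       # s

target_mass = 0.010        # kg, the "10 grams" from the text
n_ions = target_mass * AVOGADRO / MOLAR_AU
current = n_ions * E_CHARGE / WEEK

print(f"ions needed = {n_ions:.2e}")
print(f"steady current = {current * 1e3:.1f} mA over one week")
```

    At these numbers the beam must deliver about 3 x 10^22 ions, a steady current of roughly 8 milliamps sustained for the full week.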

    “Ideally,” says Williams, “we would like to have both methods work.” That way they could have a new kilogram and something to double-check it with. Terry Quinn, the keeper of Le Grand K, says that the new method would probably serve first for monitoring the old standard to see how its weight wanders. And when the BIPM does change the definition for the kilogram, he says, it won't be handing out Watt balances or silicon spheres. Even if Le Grand K is dethroned, metal kilograms will still set the standard for laboratory balances and supermarket scales.


    AIDS Researchers Negotiate Tricky Slopes of Science

    Michael Balter

    PARK CITY, UTAH—Outdoors, the air was bracing, the powder fine, and the skiing sublime. Indoors, 500 AIDS researchers, fortified with coffee and hot cocoa, discussed and debated the latest research on HIV and SIV, the viruses that cause AIDS in humans and other primates. The recent Keystone meeting* at this well-known ski resort served as a reminder that, like the white mountain slopes, AIDS research has its share of both fast schusses and mogul fields.

    Cornucopia of Coreceptors

    In 1996, everything seemed clear and simple. After a decade-long search, AIDS researchers had found two elusive “coreceptors,” cell surface proteins that HIV uses to dock onto its target cells. The discoveries solved a puzzle posed by earlier experiments, which showed that HIV's primary receptor, known as CD4 and found plentifully on some T lymphocyte immune cells, was not sufficient to unlock the door for viral entry into the cell. The first coreceptor identified, a protein called CXCR4 that normally serves as a receptor for an immune-system molecule called a chemokine, appeared to be used by viruses from patients in later stages of the disease; the second, a chemokine receptor called CCR5, seemed to dominate in early stages of HIV infection.

    These discoveries led to an appealing model for how HIV-infected patients eventually progress to AIDS after years of harboring the virus. “Two years ago you could tell a beautiful story,” says Dan Littman of New York University in New York City, a leading coreceptor researcher. Early in infection, mildly virulent viruses use CCR5 as an entry into cells, and these slowly evolve in the body into more highly virulent strains that prefer CXCR4. But as new work presented at Park City emphasized, the picture may not be quite so simple.

    Today, more than a dozen coreceptors for HIV and SIV have been identified in the lab, and researchers are engaged in a lively debate about what role these new molecules might be playing in AIDS. Nor is the discussion merely academic: If the virus could easily gain entry to cells using one or more of these other coreceptors, researchers say, earlier hopes of developing drugs to block viral attachment to these molecules might prove more difficult to realize.

    In a talk at the meeting, Edward Berger, whose team at the National Institute of Allergy and Infectious Diseases in Bethesda, Maryland, first identified CXCR4, described his lab's work with two new members of the coreceptor stable—CCR8, which is abundant in the thymus gland, and CX3CR1, which is found in the brain as well as on the surfaces of immune cells called natural killer cells. In laboratory experiments, Berger and his co-workers found that both CCR8 and CX3CR1 could serve as coreceptors for a variety of HIV strains. Berger speculated that these coreceptors might be playing a role in HIV infection in babies and children, whose thymuses are still actively developing, as well as in viral infection of the brain and central nervous system, which occurs in more than a third of HIV-infected patients.


    In spite of these laboratory results, researchers are divided over whether coreceptors other than CCR5 and CXCR4 play anything more than a minor role in actual patients. “I don't think there is any reason yet to think these other coreceptors are physiologically important,” says molecular biologist Ned Landau at the Aaron Diamond AIDS Research Center in New York City. Landau notes that people with genetic mutations that cause production of defective CCR5 molecules are highly resistant to infection with HIV—implying that the virus cannot easily switch to other, even closely related, receptors, such as CCR3. “This is a natural experiment that tells us CCR5 is central,” Landau says.

    But some scientists are not so sure. “We don't really know what all these coreceptors mean,” says Littman, adding, “I have a feeling it is important.” He speculates that the binding of HIV to CCR5, CXCR4, and the other coreceptors might not only enable the virus to enter cells but also trigger signals that affect how easily the virus replicates in its target cells, as well as harm nontarget “bystander” cells. And although “none of us disputes” the key role of CCR5 in initial HIV infection, Littman says, “other receptors expressed on as yet undefined cell populations may hold the key to disease progression.”

    In spite of the proliferation of coreceptors, Littman, Berger, and others agree that therapeutic strategies to block CCR5, one of the two original coreceptors, are still worth pursuing. “A CCR5 antagonist continues to make sense despite this increasing repertoire,” says hematologist James Hoxie of the University of Pennsylvania in Philadelphia. People with CCR5 mutations are not only resistant to HIV infection but seem to have normal immune systems, implying that blocking the receptor would not have serious side effects.

    But the effects of blocking CXCR4, presumably the other major coreceptor, might be less benign. Littman and his co-workers created genetically engineered mice in which the gene coding for CXCR4 was deleted. These “knockout” mice did not survive long enough to be born, and their embryos showed serious developmental abnormalities, not only in their immune systems but also in their hearts and central nervous systems. This finding “has a lot of people worried,” says Berger—as does recent work by Tadamitsu Kishimoto of Japan's Osaka University Medical School and his colleagues showing that knocking out the gene for SDF-1, the chemokine that naturally binds to CXCR4, causes similar developmental problems in mice.

    On the other hand, Berger adds, “it is possible that you absolutely need [CXCR4] during development, but once the fetus is born it doesn't need it anymore.” And the 10 March issue of Current Biology reported encouraging news: Nikolaus Heveker of the Cochin Institute in Paris, along with French and German colleagues, designed a modified version of SDF-1 that could block HIV binding to CXCR4 without interfering with the chemokine's ability to trigger normal signaling through this receptor. So it might be possible to design drugs targeted at CXCR4 that would not have deleterious side effects.

    But so far AIDS researchers are far from ready to predict such a happy ending to the coreceptor affair. “Right now there are a lot of observations, and we can ask a lot of questions,” says Littman. “But there's no story yet.”

    Partners in Protection

    In most HIV patients the infection appears to be under control for many years before it begins progressing to AIDS. Why they eventually lose control is one of the great riddles of AIDS research. Last fall, immunologist Bruce Walker and his team at Massachusetts General Hospital in Boston, including postdoc Eric Rosenberg, reported what looked like one piece of this puzzle: Patients whose immune systems still harbor CD4 T cells, also known as T helper cells, that specifically recognize HIV proteins seem able to control their infections, while patients who have lost these anti-HIV T helpers cannot (Science, 21 November 1997, pp. 1400 and 1447). At the time, Walker speculated that these T helpers keep the virus in check by ganging up with another breed of T cells, called cytotoxic T lymphocytes (CTLs), which home in on and destroy virus-infected cells.

    At the Park City meeting, Walker reported new research from his group that seems to support this picture—and might help lift a barricade or two from the obstacle-strewn road to an effective AIDS vaccine. In a cohort of patients who had not yet been treated with antiviral therapy, immunologist Spyros Kalams and other members of Walker's team measured the relationship between virus levels in the blood and CTL and T helper responses against HIV. They found that patients who had strong T helper responses against an HIV protein called p24, which is found in the virus's inner core, had the highest levels of anti-HIV CTLs and the lowest virus concentrations in their blood.

    “[Walker] has made an important contribution in showing the importance of T helpers in maintaining CTL activity,” says Jay Berzofsky, an immunologist at the National Cancer Institute in Bethesda, Maryland, who adds that the findings “are right on target in terms of showing an important direction to take in maintaining the immune system of HIV-infected people.” The finding that T helpers specific to p24 appear crucial to the CTL response is especially significant, Walker told the meeting, because most efforts at vaccine development have focused on the so-called envelope proteins that make up HIV's outer coat. “If patients respond to internal proteins rather than envelope proteins, that is very important to know,” agrees immunologist Rodney Phillips of the University of Oxford.

    By testing the HIV-infected patients for T helper responses against synthetic peptides corresponding to small segments of p24, the group homed in on four segments that were most responsible for triggering the immune response. The researchers have now teamed up with the Boston biotech company Peptimmune to see if some of these peptides could form the basis of a vaccine to boost the T helpers and CTLs—either in people already infected with HIV or in people at risk of infection. If T helpers and CTLs can indeed be provoked to gang up on the virus, nobody is going to root for the underdog.

    • *HIV Pathogenesis and Treatment, Park City, Utah, 13–19 March.


    Navigating Chernobyl's Deadly Maze

    Joseph Alper
    Joseph Alper is a free-lance writer in Louisville, Colorado.

    In a serene forest in northeastern Ukraine is a room as forbidding as the lair of a folktale ogre. The room is in the bowels of the Chernobyl nuclear power plant—the scene of the world's worst nuclear accident when one of its reactors exploded on 26 April 1986. Filled with fiercely radioactive slag and detritus, room 305 has beaten back all comers, human and robot alike.

    A new assault on 305 and other chambers in the ruined reactor is planned for this fall, when a U.S.-Ukrainian team will send in a robot fittingly named Pioneer to take samples and measure the environment. The goal of the $2.7 million effort is to map the guts of the damaged reactor building, now covered by a concrete sarcophagus that some experts fear could collapse in a moderate earthquake, sending radioactive dust into the air (Science, 19 April 1996, p. 352). Such a map would be invaluable to engineers attempting to stabilize the sarcophagus and prepare it for cleanup before a more sturdy covering can be built after the turn of the century.

    Plumbing the depths.

    New robot will probe the radioactive guts of the Chernobyl sarcophagus, which covers a destroyed nuclear reactor.

    R. STONE

    But the foray into the sarcophagus may have other payoffs as well for the eight institutions taking part in the project. By testing the robot's ability to withstand radiation and navigate a complex environment, the mapping effort “is going to be a proving ground for many systems that we hope will have use in future planetary and asteroid exploration missions,” as well as in nuclear weapons cleanup, says Pioneer project leader Maynard Holliday of Lawrence Livermore National Laboratory in California. “It's a real test and demonstration of the usefulness of robotics in situations where human activity is difficult,” adds S. Venkat Shastri, robotics director at SRI International, a private research institute in Palo Alto, California, who is not involved in the project.

    After Chernobyl's number 4 reactor exploded, much of its core burned through the floor and into control rooms below. Molten uranium oxide fuel mixed with graphite rods and building materials—metal and concrete—then cooled into an amalgam called corium. Some 190 tons of this highly radioactive mineral is thought to lurk in the damaged building; its distribution has been roughly gauged by plucky Ukrainian physicists who have dashed through the dark, wet sarcophagus—even pausing for perilous seconds in room 305. But engineers need a complete, detailed look at the interior to be able to fabricate reinforcing beams and other structures that can be assembled robotically.

    “The main challenge is building the robot so that it can withstand high radiation and navigate irregular terrain,” says Tim Denmeade of Red Zone Robotics, a Pittsburgh-based firm collaborating on the project.* The warren of rooms beneath the reactor hall is littered with at least 10 tons of equipment-fouling dust and 2000 tons of twisted steel, concrete, and plumbing debris. But the thorniest problem is radioactivity: In room 305, radiation levels exceed 3500 rads per hour, enough to deliver a lethal dose in minutes. Most materials, lubricants, and electronics will succumb in short order when exposed to such intense radiation. Indeed, says Holliday, after the explosion “the West sent over robots that just weren't made for that environment, and they failed.”

    Pioneer is designed to fare better. Chernobyl engineers will steer the 450-kilogram robot, which resembles a small bulldozer, through the rubble by remote control. Along the way, Pioneer will bore into the concrete walls and floors of the reactor rooms to measure their structural integrity, using a sensor to measure resistance to the drill bit and thus calculate the material's hardness. Other sensors will generate three-dimensional (3D) profiles of temperature, humidity, neutron flux, and gamma radiation flux. And a digital 3D imaging system, using three radiation-hardened video cameras aimed by a remote computer, will create range maps using algorithms originally developed for NASA's Mars Lander.

    “We're going to be at the very edge of what we can do,” says Geb Thomas, an industrial engineer at the University of Iowa, Iowa City, who oversees the image processing system. If the imaging system performs well, he says, it will be used in NASA's Mars 2001 mission.

    Pioneer's first test will come in August, when it runs through a mock-up of the Chernobyl reactor hall, sans radioactivity, that Red Zone is building. If that simulation goes well, Pioneer could be plumbing the heart of the sarcophagus by November.

    • *Other participants include Silicon Graphics and NASA's Ames Research Center, both in Mountain View, California; the Jet Propulsion Laboratory in Pasadena, California; Carnegie Mellon University's Robotics Engineering Consortium in Pittsburgh; the University of Iowa, Iowa City; and the Ukrainian National Academy of Sciences.


    Writing, Speech Separated in Split Brain

    Evelyn Strauss
    Evelyn Strauss is a free-lance writer in San Francisco.

    Like unruly children in a noisy classroom who shout answers and pass notes from one side to the other, neurons in the brain are constantly chattering. When it comes time to see just who can do what, it helps to stop the cross talk and test individuals. Neuroscientists can't yet examine single neurons in living people, but on page 902 researchers describe a case where they have in effect isolated one side of the “classroom” from another—and made some surprising discoveries about how the brain organizes the components of language.

    By studying an epileptic patient whose brain was surgically divided to control her seizures, Kathleen Baynes, a cognitive neuroscientist at the University of California, Davis, and her colleagues found that the centers for speech and writing, long thought to be in the same side of the brain, can reside in different hemispheres. It's hard to generalize from this single case. But the findings suggest that spoken and written language can develop separately, and may lead to a new understanding of learning disorders.

    “The typical view is that all the components of language hang together on the same side of the brain,” says Alfonso Caramazza, a cognitive neuropsychologist at Harvard University. “This shows that you can take them apart.”

    The patient, V.J., had suffered severe seizures. By cutting her corpus callosum, the fibrous portion of the brain that carries messages between the hemispheres, surgeons hoped to create a firebreak to prevent the seizures from spreading. The operation did decrease the frequency and severity of V.J.'s attacks. But V.J. developed an unexpected side effect: She lost the ability to write at will, although she could read and spell words aloud.

    To explore what had happened, the researchers tested which skills each side of her brain could perform. For example, when they showed words and pictures to V.J.'s left hemisphere (by flashing them in her right visual field), she could read and name them aloud, but she couldn't write the corresponding words. The researchers concluded that her left hemisphere controls speech and reading, but not writing.

    Reader's block.

    V.J.'s left brain didn't see the word, so she couldn't name it.


    In contrast, when words were displayed to V.J.'s right hemisphere, she could write them—although not as well as before surgery—but she couldn't read them aloud. Nor could she write or name the word for a picture. Thus it seems that her right hemisphere controls writing, but not reading, speech, or the neural functions that allow people to find the right word for an object.

    “Here's someone whose right hemisphere has all the motor information for controlling writing, but it's useless to her even for simple activities like making a grocery list,” says Baynes. “She can't look at an empty butter dish and write ‘butter’ because her right hemisphere can't make the connection between butter itself and the word. Her left hemisphere might know that she needs butter, but it can't write that down.”

    It's difficult to know how far to extrapolate from one person, particularly someone with a history of seizures, cautions Baynes. Indeed, the brain organization of V.J., who is left-handed, differs markedly from that of the few other split-brain patients studied; they retained the ability to write and speak in one hemisphere and completely lost it in the other.

    Still, neuroscientists are intrigued. “The fact that it's possible [to separate speech and writing] means that the brain is made up of a mosaic of autonomous parts,” says Caramazza. “If it were a completely integrated system, you couldn't move writing from one hemisphere to the other—even in one person.”

    This insight has implications for learning disorders and language development. “To understand dyslexia, people want to figure out the connection between oral and written language skills,” says Richard Ivry, a cognitive neuroscientist at the University of California, Berkeley. This work shows that “the writing system is not necessarily scaffolded on top of the phonological system.” Moreover, the fact that spoken and written language are not linked supports the idea that they evolved independently, says Baynes. Indeed, in V.J.'s brain at least, the side passing notes carries on independently from the side calling out the answers.


    Beam Splitter Teases Out Phase Secrets

    1. Robert F. Service

    X-ray crystallography, the workhorse of structural biology, may soon be hefting a lighter load. The technique is widely used to produce three-dimensional maps of the jumble of atoms that make up proteins and other molecules. Although enormously powerful—these days it can decipher the precise structure of molecules containing tens of thousands of atoms—it is also immensely laborious. Mapping out a large molecule often requires researchers to add heavy metals to some crystals of the molecule, scatter x-rays off each crystal in many different orientations, and then compare all of the resulting diffraction patterns. The process can take years of work. But now a Cornell University researcher has come up with an approach that should speed things up considerably.

    Qun Shen, an x-ray physicist at the Cornell High Energy Synchrotron Source in Ithaca, New York, reports in the 13 April Physical Review Letters (PRL) that he has developed a new technique that may do away with the need to make heavy-metal versions of the crystal. Shen uses the crystal itself to split the x-ray beam in two, which results in two beams passing through the crystal at slightly different angles. This produces diffraction patterns that provide the data needed to calculate the molecule's structure.

    “There is a lot of potential here,” says Michael Rossman, an x-ray crystallographer at Purdue University in West Lafayette, Indiana. He adds that the new technique still has a few bugs to work out before it can be widely adopted. Most important, it requires painstaking work to align the crystal in the first place. But if that procedure can be automated, Rossman and others say the technique could substantially speed the work of protein mappers.

    Split vision.

    Tilting the crystal (bottom) splits the x-ray beam and generates diffraction patterns containing phase information.


    The process of diffraction is the key to x-ray crystallography: X-rays ricochet in slightly different directions from different types of atoms depending on the number of electrons they carry. In a conventional setup, x-rays scatter off atoms in the crystal and hit a detector in a characteristic pattern of dots arrayed in concentric ellipses. This diffraction pattern is the result of interference between ricocheting x-ray waves. Where the peaks and troughs of separate waves rise and fall in unison, they add together, producing a bright spot on the detector. Where the peaks and troughs are out of sync, they cancel each other out, leaving the detector blank. The intensity of the spots and the space between them reveal essential information about the distances between atoms in the crystal. But to know exactly where each atom sits relative to its neighbors, researchers must also know how the peaks and troughs of the scattered waves are offset relative to one another, a quantity known as their phase, which the detector does not record.
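    The loss of phase can be seen in a few lines of arithmetic. In this hypothetical one-dimensional "crystal" (atomic positions and scattering strengths invented for illustration), each diffraction order has a complex amplitude, but a detector records only its squared magnitude:

```python
import numpy as np

# Hypothetical 1-D "crystal": fractional atomic positions and scattering
# strengths (values invented for illustration).
positions = np.array([0.10, 0.35, 0.70])
strengths = np.array([6.0, 8.0, 16.0])   # roughly proportional to electron count

def structure_factor(h):
    """Complex amplitude scattered into diffraction order h."""
    return np.sum(strengths * np.exp(2j * np.pi * h * positions))

for h in (1, 2, 3):
    F = structure_factor(h)
    # The detector records only |F|^2; the phase angle of F is never measured.
    print(f"order {h}: intensity {abs(F)**2:8.1f}, phase {np.angle(F):+.2f} rad (lost)")
```

    Shifting every atom by the same amount changes all the phases but none of the recorded intensities, which is why intensities alone cannot pin down atomic positions.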

    One of the most common ways researchers get this information is by making a version of the protein crystal with heavy-metal atoms inserted in the core of each protein. The large number of electrons around these metals deflects x-rays moving through the crystal in a distinctive manner. By comparing patterns from crystals with and without the metal, researchers can triangulate the position of the metal atom and then work out the positions of the other atoms in the structure relative to that point.

    Shen's setup extracts the phase information in a very different way. First, he painstakingly aligns the crystal so that the incoming x-ray beam grazes the crystal planes at a very shallow angle. Each plane acts like a mirror, reflecting a large portion of the x-rays in one direction and so producing two beams passing through the crystal at different angles. Both beams are diffracted by the atoms of the crystal, and as they emerge they interfere with each other, creating a diffraction pattern. The key is that this pattern is now sensitive to phase: Slightly changing the tilt of the crystal changes the angle—and hence phase—of the reflected second beam, which in turn alters the intensity of each spot in the final diffraction pattern. "By looking at the interference between the two beams, we can get the relative phase information of the diffracted waves," says Shen. That information can then be plugged into a software program to solve the protein's structure.
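    A toy numerical sketch (complex amplitudes invented for illustration, not Shen's actual data) shows why interference with a second beam recovers phase: as the tilt sweeps the extra phase on the reflected beam, the spot intensity oscillates, and the position of the maximum reveals the relative phase between the two diffracted waves:

```python
import numpy as np

# Hypothetical complex amplitudes for one diffraction spot (values invented):
F_main = 10.0 * np.exp(1j * 0.7)   # wave diffracted from the direct beam
F_ref  =  2.0 * np.exp(1j * 0.2)   # wave diffracted from the reflected second beam

def spot_intensity(delta):
    """Spot intensity when the tilt adds an extra phase delta to the reflected beam."""
    return np.abs(F_main + F_ref * np.exp(1j * delta)) ** 2

# Sweep the tilt-induced phase; intensity peaks where delta cancels the phase
# difference between the two waves (0.7 - 0.2 = 0.5 rad in this toy example).
deltas = np.linspace(0.0, 2.0 * np.pi, 3601)
recovered = deltas[np.argmax(spot_intensity(deltas))]
print(f"recovered relative phase: {recovered:.3f} rad")
```

    A single-beam measurement sees only the constant term of this oscillation; it is the cross term between the two beams that carries the phase.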

    In the PRL paper, Shen reports using this technique to get the phase information needed to solve the structure of semiconductor materials, which are simple to interpret because they have only a small number of atoms in each unit cell. But he and his Cornell colleagues are now trying the technique on a crystallized protein known as lysozyme. If the experiment succeeds, it could help molecular cartographers draw their maps with ease and speed.


    A Dusty Ice Age Trigger Looks Too Weak

    1. Richard A. Kerr

    A study of interplanetary dust has dealt a setback to an upstart proposal about what drives the ice ages. Most climate researchers believe that a slow redistribution of the sun's heat across the globe, driven by cyclic variations in the shape of Earth's orbit, drives the advance and retreat of ice sheets every 100,000 years. But physicist Richard Muller of the University of California, Berkeley, has argued that another kind of orbital cycle, the changing tilt of the orbital plane, could dip Earth in and out of a cloud of interplanetary dust that could somehow alter climate (Science, 11 July 1997, p. 183).

    Now, on page 874 of this issue, two planetary scientists report calculations suggesting that dust should indeed fall to Earth in a 100,000-year cycle that matches the ice ages. What drives the cycle, however, is the changing ellipticity of Earth's orbit—the orbital change commonly thought to drive the ice ages—rather than Muller's tilting. And the relatively tiny amount of dust delivered to Earth varies by only a factor of 2 to 3. That's hardly enough to trigger anything so dramatic as an ice age, says geochemist Kenneth Farley of the California Institute of Technology in Pasadena: “It's awfully small to do much of anything.”

    Speck from an asteroid.

    Earth's gravity swept up this interplanetary dust particle.


    The tenuous dust cloud that envelops the inner planets is visible to the naked eye on a clear night, but to calculate how much dust may fall to Earth, the orbital behavior of individual dust particles must be known. So Stephen Kortenkamp of the Carnegie Institution of Washington's Department of Terrestrial Magnetism and Stanley Dermott of the University of Florida, Gainesville, set out to calculate dust orbits by simulating how planetary gravity, sunlight, and the solar wind disperse dust as it is generated by collisions among asteroids. Then they simulated how Earth gravitationally sweeps up the dust as its orbit takes it through the cloud.

    The researchers found that the dust cloud spreads too far above and below the plane of the solar system for the tilting of Earth's orbit by a few degrees—Muller's mechanism—to change the amount of dust that reaches Earth. But changing the orbital shape could make a difference. When the orbit is more round, Earth moves more slowly through the cloud and its gravity can more easily pull in dust particles. When the ice ages were at their zenith, so was the amount of atmospheric dust, according to their calculations.
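    The sweep-up effect resembles textbook gravitational focusing, in which a planet's effective capture cross-section grows as the encounter velocity drops. A rough sketch of that standard formula (the relative velocities are invented for illustration; the paper's full simulation tracks individual particle orbits):

```python
V_ESC = 11.2  # Earth's escape velocity in km/s

def capture_enhancement(v_rel):
    """Gravitational focusing: effective capture cross-section divided by the
    geometric cross-section, for dust approaching at relative speed v_rel."""
    return 1.0 + (V_ESC / v_rel) ** 2

# A rounder orbit lowers the Earth-dust relative velocity (numbers illustrative):
slow = capture_enhancement(10.0)   # nearly circular orbit, gentle encounters
fast = capture_enhancement(25.0)   # more elliptical orbit, faster encounters
print(f"enhancement: {slow:.2f} vs {fast:.2f} (ratio {slow / fast:.1f})")
```

    A modest change in encounter speed thus changes the swept-up flux by only a factor of a few, consistent with the twofold-to-threefold modulation the authors calculate.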

    But even then the flux of dust would have been too low to chill the planet, say most observers. Dust that hits the upper atmosphere could affect climate, says atmospheric physicist Donald Hunten of the University of Arizona in Tucson, by vaporizing and recondensing into tiny particles that might alter the amount of sunlight Earth reflects. But Hunten has calculated that the current dust influx produces a negligible amount of these light-reflecting particles—“[five] orders of magnitude too small to be detected by any means we could imagine,” he says, “let alone [great enough] to have an effect on climate.” The two- or threefold increase that Kortenkamp and Dermott calculate for the height of the ice ages wouldn't make much difference, he adds.

    Muller isn't ready to give up on his tilt mechanism, saying that although Kortenkamp and Dermott's work "is a great start in doing these sorts of calculations, there's still a lot of room for uncertainty." He notes that Farley's studies of the amount of extraterrestrial dust actually preserved in marine sediments also reveal a 100,000-year cycle—but its timing is opposite to that of Kortenkamp and Dermott's calculated cycle. "We don't yet understand the accretion of dust onto the Earth," he says.

    Kortenkamp and Dermott agree that there's room for more dusty analysis. But the only way they see to boost the dust flux to Earth by orders of magnitude would be a catastrophic collision that blows a sizable asteroid to smithereens in the asteroid belt. The resulting pulse of dust could be four orders of magnitude higher than normal for up to a million years, they say, and might account for several intervals of increased dust flux in Farley's 70-million-year sedimentary record; the resulting climate change might even have triggered gradual extinctions. But that's another speculation begging for a test.


    NAS Elects New Members

    The National Academy of Sciences last week announced the election of 60 new members—eight women and 52 men—and 15 foreign associates. The election brings the total number of current active members to 1798 and the number of foreign associates to 310. Following are the newly elected members and their affiliations at the time of election.

    David E. Aspnes, North Carolina State University; Bruce J. Berne, Columbia University; William A. Brock, University of Wisconsin, Madison; A. Welford Castleman, Pennsylvania State University, University Park; William L. Chameides, Georgia Institute of Technology; Ronald R. Coifman, Yale University; Douglas L. Coleman, Jackson Laboratory, Bar Harbor, Maine; Elizabeth Anne Craig, University of Wisconsin, Madison; Roy G. D'Andrade, University of California (UC), San Diego; Ingrid Daubechies, Princeton University; Charles A. Dinarello, University of Colorado School of Medicine; David L. Donoho, Stanford University; William F. Dove, University of Wisconsin, Madison; Robert N. Eisenman, Fred Hutchinson Cancer Research Center, Seattle; Morris P. Fiorina Jr., Harvard University; E. Norval Fortson, University of Washington, Seattle; Perry A. Frey, University of Wisconsin, Madison; Susan Gottesman, National Cancer Institute; Norma Graham, Columbia University; Charles G. Gross, Princeton University.

    Donald A. Gurnett, University of Iowa; John M. Hayes, Woods Hole Oceanographic Institution; Roman Jackiw, Massachusetts Institute of Technology (MIT); Thomas H. Jordan, MIT; Robert P. Kirshner, Harvard University; Miles V. Klein, University of Illinois, Urbana-Champaign; Michael S. Levine, UC Berkeley; Malcolm A. Martin, National Institute of Allergy and Infectious Diseases; Douglas S. Massey, University of Pennsylvania, Philadelphia; John J. Mekalanos, Harvard Medical School (HMS); James K. Mitchell, Virginia Polytechnic Institute and State University, Blacksburg; James M. Moran, Harvard-Smithsonian Center for Astrophysics; Craig Morris, American Museum of Natural History; Eva J. Neer, Brigham and Women's Hospital, HMS; Carl O. Pabo, Howard Hughes Medical Institute (HHMI) and MIT; Paul H. Rabinowitz, University of Wisconsin, Madison.

    Sherwin Rosen, University of Chicago; Joan V. Ruderman, HMS; Martin Saunders, Yale University; J. William Schopf, UC Los Angeles; William R. Schowalter, University of Illinois, Urbana-Champaign; Lu Jeu Sham, UC San Diego; Brian J. Staskawicz, UC Berkeley; Paul J. Steinhardt, University of Pennsylvania; Melvin E. Stern, Florida State University; Audrey Stevens, Oak Ridge National Laboratory; Kenneth N. Stevens, MIT; Nobuo Suga, Washington University, St. Louis; Jack W. Szostak, Massachusetts General Hospital, HMS; Lewis G. Tilney, University of Pennsylvania; Roger Y. Tsien, HHMI and UC San Diego; David B. Wake, UC Berkeley; David C. Ward, Yale University School of Medicine; Robert G. Webster, St. Jude Children's Research Hospital, Memphis; Susan R. Wessler, University of Georgia, Athens; Michael S. Witherell, UC Santa Barbara; Richard L. Witter, Michigan State University; H. Boyd Woodruff, Soil Microbiology Associates Inc., Watchung, NJ; Andrew C. Yao, Princeton University; Jan A.D. Zeevaart, Michigan State University.

    Newly elected foreign associates, their affiliations, and country of citizenship:

    Duilio Arigoni, Swiss Federal Institute of Technology, Zurich (Switzerland); Peter Doherty, St. Jude Children's Research Hospital (Australia); Bryan D. Harrison, University of Dundee, and Scottish Crop Research Institute, Dundee, Scotland (U.K.); Richard Henderson, Medical Research Council Laboratory of Molecular Biology, Cambridge (U.K.); Hans R. Herren, International Centre of Insect Physiology and Ecology, Nairobi, Kenya (Switzerland); Bert Hölldobler, University of Würzburg (Germany); Anthony R. Hunter, Salk Institute for Biological Studies, La Jolla, CA (U.K.); Edward Irving, Geological Survey of Canada, Sidney, B.C. (Canada); Kiyosi Ito, Kyoto University (Japan); Maarten Koornneef, Wageningen Agricultural University, Netherlands (Netherlands); Olli V. Lounasmaa, Helsinki University of Technology, Espoo (Finland); Lelio Orci, University of Geneva Medical School (Italy); Roger Penrose, Oxford University (U.K.); Romuald Schild, Polish Academy of Sciences, Warsaw (Poland); Yasuo Tanaka, Max Planck Institute for Extraterrestrial Physics, Garching, Germany (Japan).
