News this Week

Science  17 Oct 1997:
Vol. 278, Issue 5337, pp. 383

    Patterning Electronics on the Cheap

    Robert F. Service


    From industry to communication to entertainment, the microchip is king today. But the power behind the throne is a technique called photolithography, the method used to pattern the chips' tiny features. Since it was invented in 1959, it has shrunk the size of circuit features some 400-fold. Computing power—in smaller and smaller packages—has gone through the roof.

    Print it.

    Researchers are turning to simple printing techniques to pattern electronic devices such as these screen-printed, centimeter-sized polymer circuits.


    But all is not well in the Lilliputian land of lithography. It is expensive and, because it is designed to work on crystalline silicon, which comes in wafers of limited size, it can pattern only small areas at a time. Researchers have been looking for alternatives that are cheaper, faster, and better suited to patterning the new semiconducting plastics now emerging from the laboratory. Lately, they have been turning to a host of new patterning methods, some with a decidedly low-tech flavor: stamping, screen printing, or ink-jet printing. And as a recent spate of meeting reports and journal articles—many of them still in press—shows, these techniques have begun to move out of the tinkering stage and toward real-world devices.

    First impressions.

    Patterned with an ink-stamping technique, this 2-cm “light valve” transmits a backlight when an electric field is applied to just the right or left electrodes (left and center) or to both.


    The new techniques are now getting so good, says Princeton University electrical engineer Jim Sturm, that “it's now realistic to think about using them commercially” to create simple electronic devices such as throwaway plastic “smart” cards and data-packed check-out labels. And down the road, researchers expect that the techniques could improve enough to meet demanding patterning applications, such as making large, flat-screen displays.

    Particularly for the low-end applications, photolithography is far too complex to be cost-effective. It typically begins with a single flat wafer of silicon coated with a thin, light-sensitive polymer layer, called a photoresist. Manufacturers then place a stencillike mask over the resist and shine light through its slits, exposing select regions of the polymer to light and changing the chemical properties in those regions. The resist is doused with an etchant that removes either the exposed or unexposed regions as well as a thin section of the underlying silicon. To create transistors and other electronic devices, these steps must be repeated over and over as additional layers of insulators and conductors are successively laid down.
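The subtractive cycle described above can be sketched in a few lines of code. This is a toy illustration only: the grid, the "positive" resist behavior, and the one-unit etch depth are assumptions made for the sketch, not a model of a real fabrication process.

```python
# Toy model of one subtractive photolithography cycle (illustrative only):
# expose resist through a mask, then etch away the exposed regions
# together with a thin slice of the silicon beneath them.

def expose(resist, mask):
    """Mark resist cells that receive light through the mask slits."""
    return [[resist[r][c] and mask[r][c] for c in range(len(resist[0]))]
            for r in range(len(resist))]

def etch(exposed, silicon_depth):
    """A 'positive' resist: the etchant removes exposed cells and thins
    the silicon under them by one unit."""
    return [[d - 1 if exposed[r][c] else d
             for c, d in enumerate(row)]
            for r, row in enumerate(silicon_depth)]

resist = [[True] * 4 for _ in range(2)]    # uniform photoresist layer
mask = [[False, True, True, False]] * 2    # slits admit light in the middle
silicon = [[3] * 4 for _ in range(2)]      # silicon thickness, in units

exposed = expose(resist, mask)
print(etch(exposed, silicon))   # prints [[3, 2, 2, 3], [3, 2, 2, 3]]
```

Building a transistor means repeating this expose-and-etch loop with a fresh mask for each layer, which is exactly the expense the printing techniques aim to avoid.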

    The technique can carve features hundreds of times smaller than most of the alternatives can, at least so far, so it is not likely to be displaced anytime soon as the mainstay for making computer chips. “However, there are applications that [this technology] can't touch,” says Harvard University chemist George Whitesides, such as plastic-based electronics and cheap, large-area silicon devices. And for these cheaper, larger scale applications, researchers would like to eliminate some or all of the “subtractive” steps of masking and etching, and simply lay down the features they want.

    For example, a team at Philips Research Laboratories in Eindhoven, the Netherlands, reported last month at the European Conference on Molecular Electronics in Cambridge, England, that it has managed to use light itself, without etching, to pattern an electronic circuit composed of 40 polymer-based transistors, the largest of its kind to date. The device replaces the components of present-day electronics with different flexible polymers. In place of a transistor's source, drain, and gate electrodes—which control the flow of electrons through the device and are normally made of aluminum—is a conductive polymer; a semiconducting polymer replaces silicon as the electron-carrying channel between the source and the drain; and an insulating polymer substitutes for the silicon oxide insulator in conventional chips.

    To lay out these features, the Eindhoven team starts with a flexible substrate and lays down a thin, uniform layer of a conducting polymer on top. The researchers then pattern the source and drain electrodes into this layer by exposing it to light through a mask. The light, explains the group's leader, Emiel Staring, creates reactive compounds known as free radicals in the polymer, which convert the material from a conductor into a nonconducting insulator. So, just as the unexposed areas remain bright in a photographic negative, the regions left unexposed remain conducting, and hence become the electrodes.

    This step is repeated for the very top layer of the device, which contains the gate electrodes. In between, before the top layer is laid down, the researchers lay down uniform layers of a semiconducting polymer followed by an insulating polymer. These layers need not be patterned, because the patterned electrodes above and below control the electronic properties of the materials in these layers.

    Staring says the performance of the all-polymer circuit “is not up to the usual standards of [silicon] integrated circuits,” primarily because the current-carrying speed of today's polymers still lags well behind that of crystalline silicon. Nevertheless, says Zhenan Bao, a chemist at Lucent Technologies' Bell Laboratories in Murray Hill, New Jersey, “it's a very important result,” because the circuit's complex pattern of interconnected transistors is a big step toward applications. And because the polymers are cheap and easy to lay down over large areas, unlike crystalline silicon, the Philips researchers hope to be able to use them one day to drive large-area flexible devices such as flat-screen displays that can be rolled up and stored in a drawer.

    In press

    The Philips group is not alone in pursuing that goal. At last month's American Chemical Society (ACS) meeting in Las Vegas, Bao reported that she and her Bell Labs colleagues have made simpler all-polymer circuits by using an entirely different patterning approach: a screen-printing technique akin to the prosaic method for imprinting T-shirts. But instead of laying down a series of colored inks, the group prints successive layers of conducting, semiconducting, and insulating polymers that start out as liquids but quickly cure to form flexible plastics.

    To make the master patterns for this printing process, the team has borrowed a little from photolithography. They coat a metal mesh with a light-sensitive film, then expose the film to a pattern of light shining through a mask and etch away the exposed polymer. The result is a plastic pattern that can then be used repeatedly to print one layer of their circuits. By making a different master pattern for each layer, the researchers can build up complete microcircuits, made up of features as small as the width of a human hair.

    Bao says that like the polymer circuits made by Philips, her team's devices can't match the performance of comparable silicon circuits. But for low-end applications such as memories for smart cards, she thinks their performance “may be good enough already.”

    Other groups are turning to more modern printing techniques for help in simplifying the initial patterning steps in photolithography on silicon. At Princeton, for example, electrical engineer Sigurd Wagner and his colleagues are using a computer-controlled laser printer to create a pattern of toner—a powdered ink—on a flexible substrate topped with a thin layer of semiconducting amorphous silicon. The patterned toner then acts as an etch-masking material, thus eliminating the need to pattern a separate polymer layer for this purpose.

    Meanwhile, Princeton's Sturm has been trying to do away with both masks and etching by printing electronic materials only where they are wanted in the first place. With his colleagues, he is using an ink-jet printer to pattern polymers that, when layered and sandwiched with electrodes, become arrays of tiny, light-emitting devices for flexible, full-color displays.

    Making a full-color display requires combining large sets of red, green, and blue light-emitting pixels. In polymer-based devices, each of these three colored lights has a core of a different semiconducting polymer. Patterning small dots of these separate polymers very close to one another in a repeated array is Sturm's goal. But thus far he has made only single-color devices, consisting of a single printed layer, to demonstrate the feasibility of the process. One stumbling block is the equipment: Like Wagner, Sturm is using an off-the-shelf printer, designed to pattern ink rather than electronic materials.

    Stamp it out

    A final set of patterning techniques is based on an even more mundane strategy: stamping out a circuit pattern in the same way as a rubber stamp prints a label. At the ACS meeting, for example, Ralph Nuzzo, a chemist at the University of Illinois, Urbana, reported using the technique to make silicon-based transistors. Nuzzo and his Illinois colleagues, along with Whitesides and his team at Harvard, had previously used a stamping method called microcontact printing to pattern single layers of semiconductors, metals, and polymers (Science, 19 July 1996, p. 312; 15 December 1995, p. 1760). At the meeting the groups reported collaborating to print the multiple layers needed for working electronic devices.

    The researchers first use conventional photolithography to etch a pattern in a polymer photoresist or in silicon. This serves as a mold for a liquid polymer that is then heated, cured, and peeled off. The polymer now carries a negative image of the photoresist so that it can be repeatedly “inked” with organic molecules and stamped onto a chip material's surface to act as an etch mask layer, thereby eliminating the initial patterning steps for each chip. The Illinois-Harvard researchers have made a series of working devices, using this and related techniques to shape the various layers of semiconductors, metals, and insulators.

    Stampers, too, are trying to do away with the need for masks and etching altogether. At the ACS meeting, for example, Alan MacDiarmid of the University of Pennsylvania, Philadelphia, reported that his team, together with Whitesides's group, has used the stamping technique to print “light valves.” These devices, which are at the heart of digital displays, electrically alter the transparency of a liquid-crystal layer, changing it from opaque to clear. The Pennsylvania-Harvard team simply stamped out liquid conducting polymers that rapidly cured to form the patterns of electrodes and leads that control the liquid crystal.

    It is still too early to tell whether all these new patterning techniques will progress far enough to push electronics into a new realm of cheap, throwaway applications. For one thing, the devices produced to date are relatively primitive. For another, most of the techniques are now limited to creating devices in which the individual features are roughly a tenth of a millimeter across, about 1000 times larger than those made by lithography.

    But Nuzzo and others argue that in time, they will be able to reduce feature size dramatically and also turn out the complex architectures needed for real-world applications. Contrasted with the billions of dollars spent over the last decade to engineer the Pentium and other advanced chips made by photolithography, efforts to develop alternative patterning techniques have been modest indeed and rather recent—none has been under way more than a few years. And even if the new patterning approaches don't displace photolithography as the tool of choice for computer chips anytime soon, that might be irrelevant, says Whitesides. “It may not be a question of plastic-based microelectronics catching up [with silicon], but making something different instead.”


    Researchers Find Signals That Guide Young Brain Neurons

    Marcia Barinaga

    Building a brain during embryonic development is no easy matter. Like fans searching for their seats in an overcrowded football stadium, newborn nerve cells trying to find their final destinations in the complex layers of the brain have to navigate a tortuous route, traveling through hordes of other neurons while constantly staying on the alert for the signposts that will direct them to their own sections and seats. Neurobiologists had few clues to the molecular machinery guiding these neuron migrations until 2 years ago, when they discovered a protein that seems to act as a signpost for some neurons. Now, they have found another protein that may help the neurons read that sign.

    The apparent signpost is the product of a gene called reelin, recently identified as the mutant gene in a strain of mice dubbed reeler because of their reeling gait. The mutated reelin gene causes the mice to have partially scrambled brains in which many neurons don't make it to their proper destinations. Because Reelin, the protein product of the gene, is released near the destination of those migrating neurons, researchers hypothesized that its normal job is to somehow help them reach their final home in the brain.

    In the new work, reported by Chris Walsh of Harvard Medical School in Boston and his colleagues in the August issue of Neuron, and by teams headed by Tom Curran of St. Jude Children's Research Hospital in Memphis, Tennessee, and by Jonathan Cooper at the Fred Hutchinson Cancer Research Center in Seattle in this week's Nature, researchers have uncovered a protein that may work in partnership with Reelin. The protein, called mDab1, is made by a gene that is mutated in mice known as scrambler, which have a behavior and brain disorganization similar to those of reeler mice.

    That resemblance between the two mouse strains, plus the results so far with mDab1, raises the possibility that the new protein is part of a signaling pathway triggered by Reelin. Researchers have found, for example, that mDab1 interacts with intracellular signaling enzymes known as tyrosine kinases. And the protein is located in the right cells: the migrating brain neurons that respond to Reelin.

    Neurogeneticist Karl Herrup of Case Western Reserve University in Cleveland cautions that it's far from certain that Reelin and mDab1 work together in the same cascade of molecular signals. Still, he says, the findings are “extraordinarily exciting,” because they provide “genetic handles on the process of moving neurons.” Those handles could ultimately help researchers decipher the cellular biochemistry that controls the movements of young neurons—and perhaps understand how those movements are disrupted in certain rare human genetic conditions that result in a seriously disorganized brain (Science, 15 November 1996, p. 1100).

    The path to the current findings began nearly 50 years ago, with the discovery of the reeler mutation in mice. Studies of the brains of reeler mice revealed that neurons in both the cerebral cortex and the cerebellum—a brain area important for motor coordination—pile up in jumbles rather than migrating to their proper locations. Researchers were clueless as to the molecular cause of this defect until 1995, when Curran and his colleagues identified the mutant gene.

    Analysis of the normal gene's protein product, which they called Reelin, revealed that it is made and secreted by cells in the vicinity of where the abnormal neurons should end up in the brain. Apparently, under normal conditions the protein helps guide the movements of the traveling neurons. But beyond that, Reelin gave no hints about how it exerts its effects. “We couldn't connect it with any biochemistry,” Curran says.

    So researchers turned to another mouse mutation, scrambler, discovered a few years ago by Muriel Davisson at Jackson Laboratory in Bar Harbor, Maine. Mice with the scrambler mutation behave like reeler mice, and their brains look the same too, with piled-up neurons that never reach their destinations. Perhaps, researchers hoped, the protein product of the mutant gene would turn out to be the Reelin receptor or some other molecule that the migrating neurons require to follow Reelin's command. Curran's team, as well as Walsh's group in collaboration with Andre Goffinet of the FUNDP School of Medicine in Namur, Belgium, independently set out to find the scrambler gene, using a technique called “positional cloning,” in which genetic crosses between mice are used to identify ever smaller chromosome regions containing the mutant gene. By early this year, after much hard work, both teams were close.

    But then, in a serendipitous discovery, Brian Howell, a postdoc with Cooper at the Hutchinson Center, learned that a gene he was studying is in fact the one mutated in scrambler mice. Howell was not looking for the scrambler gene, but instead was searching for mouse proteins that bind to the tyrosine kinase Src, an enzyme that transmits signals inside the cell. He found one such protein, encoded by the mouse version of a fruit fly gene called disabled (dab), which causes flies to have defective nervous systems.

    To see whether the mouse gene, which they called mdab1, might also affect neural development, Cooper's team created mutant mice missing the gene. Surprisingly, the animals behaved like reeler mice, and had similarly scrambled brains, making the team wonder whether the loss of mDab1 protein could somehow block Reelin production. But tests showed that the animals make normal amounts of Reelin. “That led us to the more exciting model that mDab might be acting downstream of Reelin,” Howell says.

    Aided by news of the Cooper team's finding, postdoc Mike Sheldon in Curran's group, which had begun a collaboration with Cooper's, and a separate team including graduate students Marcus Ware and Jeremy Fox in Walsh's group, soon identified mdab1 as the mutant gene they were closing in on with their positional cloning. Additional evidence for the role of mDab1 in scrambler mice came when Sheldon, collaborating with Katsuhiko Mikoshiba's team at RIKEN in Tsukuba, Japan, and the University of Tokyo, found a defect in mdab1 RNA in the brains of mice with a mutation called yotari (for a Japanese word describing a drunken gait) that Mikoshiba's lab had identified. The Japanese group had already shown by mating experiments that the gene affected by the yotari mutation is the gene mutated in scrambler mice.

    To learn more about how the gene influences neural migration, the Curran and Cooper teams investigated which brain cells make the mDab protein. The results were gratifying: “mDab is in the cell types affected in the reeler and mdab mutants,” says Howell—the cells that normally respond to Reelin. “Therefore it is possible that [mDab] is acting in those cells to relay a signal.”

    Researchers now hope they can trace the pathway in which the mDab1 protein apparently acts. The protein's structure and ability to bind to Src suggest it is a “docking protein” that can link a tyrosine kinase like Src to another protein in a signaling pathway. The pathway may be triggered when Reelin binds to an unidentified cell surface receptor, but there's no evidence for that so far.

    But even if mDab is not activated by Reelin, it is involved in neuronal migration, says Howell, and so may help researchers answer the question of what goes on in the migrating neurons to keep them on target, or, in the case of mutations, throw them off. “What is going wrong isn't that well understood,” Howell says. “Having this protein that appears to be working [in the affected neurons] may be very helpful.”


    The Balance of Power in Ancient Ireland

    Sean Duke
    Sean Duke is a science writer in Dublin.

    DUBLIN—According to ancient texts, before Christianity came to Ireland about A.D. 400, the country was dominated by three principal kingdoms, the most powerful of which at any one time was the home of the “high” king or queen of Ireland. Their centers of power—at Navan Fort in what is now County Armagh, Northern Ireland; Tara, in County Meath, near Dublin; and Rathcroghan, in County Roscommon in the west—date back as far as 2500 B.C. The kingdoms' struggles for power and prestige are the stuff of Celtic legends that are deeply embedded in the modern Irish consciousness.

    Archaeologists have focused most of their attention on Tara and Navan Fort. Rathcroghan, in contrast, has long been considered something of a poor relation. Many archaeologists thought it was less significant than the other two, and was built around a mound formed by nature, rather than by human excavation. Recent geophysical studies of Rathcroghan may, however, change perceptions about the balance of power in ancient Ireland. In its heyday, Rathcroghan may have been more impressive than its two rivals.

    A 3-year study by researchers from University College Galway (UCG) has shown that the broad, flat, 7-meter-high mound appears to have been built for ritual purposes, and the enclosure surrounding it is in fact larger than those at Tara and Navan Fort. “Rathcroghan mound is spectacular. It is 90 meters across, and within the mound there are three concentric rings that may represent ring fort settlements from the early Christian period,” says John Waddell, one of the team leaders. “Rathcroghan is the [royal] site we know least about.…It is a spectacular site in terms of the mound and the structures that have been found,” says Jim Mallory of Queen's University Belfast, who has worked on the Navan Fort site.

    The UCG project is the first study of Rathcroghan since archaeologist Michael Herity surveyed its topography in the 1960s. Rather than embarking on excavations straightaway, UCG geophysicist Kevin Barton carried out a detailed subsoil survey. Barton and his colleagues used standard geophysical techniques, including ground-penetrating radar and magnetic gradiometry, which measures the magnetic properties of subsoil materials, as well as a new tool in the surveyor's armory, electrical tomography. “We were the first in Ireland to use this technique,” says Barton.

    To carry out electrical tomography, the team placed metal electrodes into the ground and passed a current between them through the subsoil, measuring its resistivity, which varies depending on what it is made of. Using a large number of such measurements taken in different directions and at various depths, the team used computer modeling to construct vertical “slices” of the subsurface and then built these up into a three-dimensional image of the interior of the mound.
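The resistivity measurement at the heart of this technique can be illustrated with the standard four-electrode (Wenner) formula for apparent resistivity. The electrode spacings and readings below are invented for illustration; they are not the team's data.

```python
import math

def wenner_apparent_resistivity(spacing_m, voltage_v, current_a):
    """Apparent resistivity (ohm*m) for a Wenner four-electrode array:
    two outer electrodes inject current I, two inner ones measure V,
    and all four sit at a uniform spacing a.  rho_a = 2 * pi * a * V / I."""
    return 2 * math.pi * spacing_m * voltage_v / current_a

# Hypothetical readings: wider spacings probe greater depths, so a jump
# in apparent resistivity at depth can flag a buried wall or ditch fill.
for a, v, i in [(1.0, 0.25, 0.01), (2.0, 0.10, 0.01), (4.0, 0.08, 0.01)]:
    rho = wenner_apparent_resistivity(a, v, i)
    print(f"spacing {a:.0f} m -> apparent resistivity {rho:.1f} ohm*m")
```

Tomographic inversion then fits many such readings, taken in different directions and depths, into the vertical "slices" the article describes.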

    The geophysical survey produced a wealth of new information about Rathcroghan: evidence of ditches, walls, postholes indicating structures and fences, and different phases of building in the mound. “Geophysics has completely changed our interpretation of Rathcroghan mound,” says Waddell. “We now know that it is a very complicated site with a prolonged history of human activity.…The mound is, without doubt, [humanmade]…indicating that a large amount of labor was invested here, suggesting an organized society with an element of leadership.” Eoin Grogan, an archaeologist working for the Irish government's Discovery Programme who has studied the Tara site, says, “We now know that geophysics works. This is an impressive piece of research that has produced exciting evidence.”

    The size of the Rathcroghan complex was a big surprise. It is 370 meters from the middle of the central mound to a circular enclosure where the team found postholes, indicating the presence of a wooden perimeter fence. This is almost double the size of the 200-meter enclosures at Tara and Navan Fort.

    Many of the team's discoveries are reminiscent of features found at Tara and Navan Fort and support archaeologists' earlier conclusions that these sites were used for important rituals, such as the inauguration or burial of kings and queens. For example, the team found, through the use of magnetic techniques, what they believe to be repeated burnings on and around the mound and also linear earthworks leading into the mound—similar to the “ritual roadways” found at Tara. In contrast to other large Celtic sites, there is no evidence of settlement within the enclosure in the pre-Christian period, suggesting that the huge area of open ground between the mound and the enclosure was used for large-scale rituals in pagan times. “We are now asking, ‘What is the significance of the sheer complexity of Rathcroghan?’” says Waddell. “This will be one of the hardest nuts to crack. How do we interpret the beliefs of ancient peoples?”

    This month, the team will present a final report on the study to the Irish Heritage Council, which funded the work, identifying areas for future digs. But Waddell says the prevailing mood is against rushing into excavations, as there is still much to be learned with geophysical techniques. Says geophysicist Andrew David of the British conservation organization English Heritage: “If these techniques are used judiciously in the future, I believe that Ireland has tremendous potential for new discoveries.”


    Two Spacecraft Track the Solar Wind to Its Source

    James Glanz

    “The wind bloweth where it listeth,” says the King James Bible, “but [thou] canst not tell whence it cometh.” That surely holds true for the wind of particles that starts somewhere in the tangled forest of magnetic fields and turbulent gases near the sun's surface and blows throughout the solar system. Solar physicists' best guess has been that one component, a steady, fast wind that blows at up to 800 kilometers a second, originates near the sun's poles. The wind's other component, more capricious and slow, seems to come from somewhere within a broad region around the solar equator called the streamer belt. Now, by tracing the source of the solar wind from several different vantage points, a new study may have finally pinpointed the source of the slow wind—while calling into doubt the standard wisdom on the fast wind.

    The study relied on two spacecraft: the Solar and Heliospheric Observatory (SOHO), which observes the sun from near Earth, and Galileo, which passed behind the sun while in orbit around Jupiter and transmitted radio signals through the gases near the solar surface. Together, the observations caught the slow wind dribbling from long, narrow structures called stalks, which tower over the arched magnetic fields of the streamer belt like the spikes on a Prussian helmet. The findings also hinted that the fast wind emerges in patches over most of the sun—not just from near the poles.

    Reported in Astrophysical Journal Letters by Shadia Habbal of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Massachusetts, Richard Woo of the Jet Propulsion Laboratory in Pasadena, California, Silvano Fineschi of CfA, and others, the study's conclusions about the fast wind are controversial—“revolutionary,” as a skeptical Jack Gosling of Los Alamos National Laboratory in New Mexico puts it. But most researchers say the mystery of the slow wind is as good as solved. “It's a giant step forward,” says Alan Title, a solar physicist at the Stanford-Lockheed Institute for Space Research.

    The wind escapes from the solar corona—the sun's hairy halo of ionized gas, or plasma, which is somehow heated to temperatures hundreds of times higher than the solar surface itself (see sidebar).

    Much of the corona is bottled up by magnetic field lines that loop back to the surface. Near the solar poles, however, field lines wander off into space from regions called coronal holes. Those holes, researchers assumed, allow the hot particles to rush out unimpeded and form the steady, fast wind that spacecraft sometimes detect. Meanwhile, the slow, ragged wind seen at other times somehow billows out from the constrained corona closer to the sun's equator.

    In earlier work, Woo and collaborators had probed for the source of the slow wind by monitoring the communications beams of interplanetary spacecraft, such as the solar probe Ulysses, as they dipped behind the sun. The researchers looked for “scintillation,” or scattering, of the signals close to the edge of the sun. “It's like a headlight scattered by fog: The headlight looks bigger,” says Woo of one scintillation effect. The scintillation should pinpoint regions of unsteady flow in the plasma near the solar surface—possible sources for the ragged, unsteady slow wind. Woo and his colleagues measured the strongest effects where the radio waves passed through the stalks towering over magnetic arches.

    To prove that the stalks really are the wellsprings of the slow solar wind, Habbal, along with Woo, Fineschi, and their colleagues, set out to measure the wind speed there. That's where SOHO came in. The team used SOHO's Ultraviolet Coronagraph Spectrometer (UVCS) as a kind of solar speedometer. The UVCS, explains its principal investigator, co-author John Kohl of CfA, collected light that is emitted by oxygen ions deep in the solar atmosphere, then scattered in all directions off other oxygen ions flowing outward in the solar wind. The faster the wind, the weaker the scattering, because the so-called Doppler shift lengthens the wavelengths seen by the fast-moving ions, making them unable to resonate with the light and scatter it.
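The "solar speedometer" logic can be sketched numerically. The Gaussian resonance profile and the 100-kilometer-per-second line width below are illustrative assumptions for the sketch, not the UVCS instrument model.

```python
import math

def scattering_efficiency(wind_speed_kms, line_width_kms=100.0):
    """Relative resonant-scattering strength for ions flowing outward.

    The outflow Doppler-shifts the incident line by an equivalent velocity
    equal to the wind speed, sliding it down an assumed Gaussian resonance
    profile; the 100 km/s width is a stand-in, not the measured value."""
    return math.exp(-(wind_speed_kms / line_width_kms) ** 2)

# The faster the wind, the weaker the scattering -- the dimming that
# lets the spectrometer serve as a speedometer.
for v in (0, 100, 300, 800):
    print(f"wind {v:3d} km/s -> relative scattering {scattering_efficiency(v):.4f}")
```

A stationary plasma scatters at full strength, while an 800-kilometer-per-second outflow is shifted far off resonance and scatters almost nothing.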

    The team correlated their speedometer readings with images of the magnetic arches and stalks made by another SOHO instrument, the Large Angle Spectroscopic Coronagraph. They also did a new search for scintillation by analyzing measurements of the Galileo communications signal. Strong radio scintillation, slow wind, and the stalks all turned up in the same places. “It's real exciting,” says Steven Suess of NASA's Marshall Space Flight Center in Huntsville, Alabama. “I think 90% of their conclusions are absolutely accurate.”

    The other 10%, says Suess, concerns the fast wind. Although the fast wind is thought to originate in small regions near the poles, spacecraft far from the sun detect it at low latitudes. Solar physicists have explained this by assuming that field lines emerging from the polar coronal holes splay steeply outward, like the petals of a daisy, allowing the wind to spread out after it escapes. But to sustain high speeds even after it spreads out, the wind must get a tremendous boost from some unknown acceleration mechanism in the coronal holes.

    The new UVCS measurements show, however, that the fast wind blows at a wide range of latitudes, even near the sun. What's more, radio scintillation combined with observations from Ulysses suggest that when gusts of plasma and other features rise from the sun's surface near the poles, they don't expand much in latitude. That suggests, say Woo and Habbal, that the fast wind must originate not just near the poles, but from patches of open field lines all over the sun.

    That's heresy to many solar physicists. Although some models have predicted the stray fields, says Randolph Levine, who did early work on the subject at CfA, researchers have tended to think of the sun's magnetic field as well anchored except in the polar regions. But the idea that the fast solar wind blows from all across the sun “simplifies our ideas about what's going on,” says Habbal, because it would eliminate the need for some special boost in polar regions. What comes out, fast or slow, would depend only on the local magnetic topology.

    The new evidence makes this idea “hard to dismiss,” says Lockheed's Title. But he, like other solar physicists, isn't ready to follow that shift in the scientific winds just yet.


    Turning Up the Heat in the Corona

    1. James Glanz

    Last year, an instrument aboard the Solar and Heliospheric Observatory (SOHO) spacecraft dazzled solar physicists by finding oxygen ions in the sun's corona at temperatures of 100 million degrees Celsius—many times hotter than expected. In a forthcoming issue of Solar Physics, the team that operates SOHO's Ultraviolet Coronagraph Spectrometer (UVCS) drops another bombshell: a strong hint of the mysterious mechanism that heats the corona and drives the solar wind (see main text).

    The new UVCS findings show that those sizzling temperatures reflect mainly the ions' whirling motion around the magnetic field lines that stretch from the sun. Along the field lines, the ions slide outward at much lower energies, moving at velocities corresponding to temperatures from 15 to 100 times cooler. The disparity favors one of several contending theories about what boosts the corona as a whole to temperatures hundreds of times hotter than the solar surface, says John Kohl of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Massachusetts, and principal investigator of UVCS. “This is exactly what you would expect to see” if waves on the field lines were creating the energetic ions, like a person's gyrating hips twirling a Hula Hoop, he says.

    This “cyclotron resonance” theory (Science, 21 June 1996, p. 1738) predicts rapid gyrations but slower longitudinal movement, just as the Hula Hoop receives little up- or downward kick from the hips. UVCS can clock the ions in both directions by collecting the light they scatter or emit, which is affected differently—through the train-whistle, or Doppler effect—by the two kinds of motion, explains Silvano Fineschi of CfA, a UVCS team member. The measurements confirm that, in spite of the oxygen ions' searing temperatures measured across the field lines, they are moving along the lines at much lower speeds. A similar but smaller disparity, which is also consistent with the cyclotron resonance theory, turned up in the much more numerous hydrogen ions in the corona.
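    The scale of the disparity follows from the fact that kinetic temperature goes as the square of thermal speed. A back-of-the-envelope sketch, using the figures quoted above (perpendicular temperatures near 100 million kelvin and parallel temperatures up to 100 times cooler) and the standard thermal-speed relation — the specific numbers here are illustrative, not the UVCS team's published values:

    ```python
    import math

    K_B = 1.380649e-23       # Boltzmann constant, J/K
    M_O = 16 * 1.66054e-27   # approximate mass of an oxygen ion, kg

    def thermal_speed(temp_kelvin, mass_kg):
        """Most-probable thermal speed for a given kinetic temperature:
        v = sqrt(2 k T / m)."""
        return math.sqrt(2 * K_B * temp_kelvin / mass_kg)

    t_perp = 1e8          # gyration about field lines, from the text
    t_par = t_perp / 100  # motion along the lines, up to 100x cooler

    v_perp = thermal_speed(t_perp, M_O)
    v_par = thermal_speed(t_par, M_O)

    print(f"perpendicular: {v_perp / 1e3:.0f} km/s")
    print(f"parallel:      {v_par / 1e3:.0f} km/s")
    ```

    Because temperature scales as speed squared, a 100-fold temperature gap corresponds to only a 10-fold gap in speed — still easily large enough for the Doppler measurements to separate the two motions.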

    Other researchers are impressed by the results but cautious about claiming that the long-standing coronal-heating problem has been solved. “It's getting pretty hard to get away from cyclotron heating,” says Joseph Hollweg of the University of New Hampshire, Durham. But he says that other possibilities—such as heating by shock waves or frequent, tiny flares—haven't yet been eliminated.


    Terribly Tiny Transistor Unveiled

    1. Dennis Normile

    TOKYO—Circuit designers at NEC Corp. have probed what some thought was a lower limit on the size of microelectronics—and found some give. A transistor's gate—the narrow electrode that controls the flow of electrons through the device depending on whether it is “on” or “off”—can be made only so small. Too small, and electrons will manage to sneak through even when the device is off. Some researchers had put this lower limit at 30 nanometers (billionths of a meter).

    Hat trick.

    A hat-shaped upper gate electrode opened the way to shrinking the gate itself in this experimental transistor.


    But at a recent conference* in Japan, a group at NEC's Fundamental Research Laboratories in Tsukuba led by Hisao Kawaura announced that, by combining a novel design with high-precision techniques for carving semiconductors, it has developed an experimental transistor with a gate just 14 nanometers across. That's 20 times smaller than in the transistors found on the densest commercially available chips. “Nobody has yet reported work at such small dimensions,” says Sandip Tiwari, a small-device researcher at IBM's T. J. Watson Research Center in Yorktown Heights, New York. The device is mainly a proof of principle, however, as its design isn't suited to being packed in large numbers on a chip.

    To create an ultrasmall gate that doesn't leak electrons, the NEC group varied a common transistor design known as a metal-oxide-semiconductor field-effect transistor, or MOSFET. In these devices, a central semiconducting channel lies between current-carrying source and drain regions. These so-called n+ regions are created by “doping” the base material with impurities that carry an excess of electrons. Above the channel is the gate electrode. The gate turns the transistor on or off by controlling the conductivity of the channel, either allowing electrons to flow from source to drain, or cutting off the current.

    The NEC team added a second gate, shaped something like a top hat, with the crown above the lower gate and brims above both sides of the channel. Ordinarily, the n+ regions abut the channel, but this second gate allowed the researchers to leave insulating gaps between the channel and the n+ regions. To create a route for current, a voltage applied to the upper gate attracts electrons to the surface of the base material, forming conductive ultrashallow source and drain regions. These electrically induced regions are far shallower than anything that can be formed with present doping techniques; and the shallower source and drain confine the current so that a narrower gate can control it.

    Microcircuit designers usually lay out such features by shining light through a stencillike mask. The light imprints the features onto a light-sensitive coating, or resist, on the semiconductor, establishing the pattern for later fabrication steps. To make their ultrasmall device, the NEC group replaced the light and mask with a tightly focused electron beam, says team member Toshitsugu Sakamoto. They also developed a new high-resolution organic resist, which yielded even greater precision, much as a sharper pencil on finer grained paper yields more precise and consistent lines.

    IBM's Tiwari says the new transistor “is very encouraging.” But both Tiwari and the researchers themselves caution that it will take a lot more work to turn this strategy of electrically induced source and drain regions into a practical device. For one thing, the present double-gate design would make it difficult to pack these transistors densely. The device also takes a lot of power and doesn't switch on and off with the needed speed and consistency. But even if the new transistor won't make it out of the laboratory, says Sakamoto, “This is a good structure for testing the limits for gates.”

    • * International Conference on Solid State Devices and Materials, Hamamatsu, 19 Sept.


    Biologists Catch Their First Detailed Look at NO Enzyme

    1. Ingrid Wickelgren

    Nitric oxide is a Jekyll-and-Hyde substance. In tiny puffs inside the body, it helps cement memories, control blood pressure, and battle disease. But it has also been implicated in ailments as varied as Alzheimer's disease, diabetes, and impotence. The key to this character switch is an enzyme called a nitric oxide synthase (NOS), which makes NO inside the body. If it does its job correctly, NO is the benign Dr. Jekyll; if it produces too much or too little, NO becomes the perfidious Mr. Hyde.

    In hand.

    Heme group (horizontal structure) sits in baseball mitt-shaped pocket of NO synthase. Small blue molecules above the heme are inhibitors.


    Now researchers have gotten their first detailed look at this crucial enzyme's key section: the catalytic site at which NO is produced. On page 425 of this issue, a team led by John Tainer at The Scripps Research Institute in La Jolla, California, and Dennis Stuehr of the Cleveland Clinic in Ohio reports determining the three-dimensional structure of NOS's active site, bound to substances known to inhibit its catalytic action. This information could help researchers design drugs to dampen NO production in diseases linked to excess NO.

    “This is a magnificent achievement,” says Carl Nathan, an immunologist at Cornell University Medical College in New York City who has studied NOS for a decade. “For drug development, a picture of the active site is indispensable.” Drugs that inhibit NOS, he notes, could be useful in treating septic shock, Alzheimer's disease, multiple sclerosis, stroke, inflammatory bowel disease, rheumatoid arthritis, and many forms of inflammation. “Probably every major drug company is looking for drugs to inhibit this enzyme,” adds Solomon Snyder, a neuroscientist at Johns Hopkins University whose team was the first to clone and sequence one form of NOS.

    The shape of the catalytic part of NOS—which turned out to be quite different from what experts expected—should also provide insights into a class of chemical reactions that produce steroids and break down toxins. Both processes, like the production of NO, entail adding single oxygen atoms to molecules. “It is a very exciting structure,” says Douglas Rees, a crystallographer at the California Institute of Technology in Pasadena.

    Long studied as an ingredient of air pollution, NO first achieved biological notoriety in the mid- to late 1980s, as hints emerged of its myriad physiological roles. In 1987, for example, John Hibbs's team at the University of Utah in Salt Lake City found that immune cells called macrophages released it as a toxic defense against tumor cells. But in smaller amounts and other contexts, NO seemed to function as a signaling molecule important to regulating blood pressure or transmitting messages between nerve cells.

    In the early 1990s, research teams isolated and cloned three variant forms, or isozymes, of NO synthase. Two of them, extracted from the brain or the lining of the blood vessels, produced NO for signaling purposes. But the enzyme isolated from macrophages, dubbed iNOS for inducible NOS, churned out NO in much larger amounts, to cope with threats from cancer or invading microbes. It's this isozyme that is largely responsible for NO's dark side: In the large amounts made by the enzyme, NO can also trigger deadly blood pressure drops, causing toxic shock, or destroy healthy tissue in the brain, bowel, or joints, leading to neurodegenerative or inflammatory diseases.

    In 1993, the Cleveland Clinic's Stuehr, a pioneer in NO biology, co-authored an article on the many properties of this gas in Chemical & Engineering News, and the article piqued the interest of Scripps's Tainer, a crystallographer. “I was fascinated,” Tainer recalls. “Whatever it took, I wanted to solve the structure of this enzyme.”

    He wrote to Stuehr in hopes of collaborating with him. But at the time, neither Stuehr nor anyone else had enough of any NOS isozyme to conduct structural studies. So Stuehr left Tainer's letter unanswered for 18 months, until he had coaxed bacteria to mass-produce iNOS, or rather, the part of it called the oxygenase domain. He sent Tainer a solution of the enzyme fragment in 1995.

    Tainer, along with colleagues Brian Crane, Andrew Arvai, and Elizabeth Getzoff, then began the painstaking process of crystallizing the fragment and determining its structure by measuring the diffraction of x-rays beamed through the crystal. They crystallized it in a complex with inhibitors to show where the substrate and drugs would bind. They were startled by what they found.

    Tainer and his colleagues had expected NOS's active site to look much like that of similar enzymes such as cytochrome p-450, which shares NOS's penchant for splitting and shuttling oxygen molecules, and also contains a “heme,” or iron-containing group. In those enzymes, the active site is embedded in a protective thicket of protein coils. In NOS, however, the researchers found an exposed pocket made from overlapping protein sheets shaped like a catcher's mitt, with the heme clasped inside the pocket like a ball in a glove.

    The shape, says Crane, “was something no one had seen before.” It provides critical clues to how the enzyme works. For example, the position of the heme in the pocket beside a binding site for the amino acid arginine strongly suggests that the heme produces NO by catalyzing a reaction between oxygen and arginine. Such an understanding may someday enable chemists to mimic the enzyme's activity to make complex compounds such as steroids, which require similar chemistry. In addition, the pocket's exposed position indicates that NOS may be more vulnerable to inhibitors than was previously thought.

    Another surprise was that two inhibitor molecules, instead of one, bind in the palm of this enzymatic glove. This suggests, says Tainer, that the most effective and least harmful drug inhibitors would be dumbbell-shaped molecules that could attach to both sites. Ideally, such inhibitors would specifically interact with iNOS and not the other NOS isozymes, so that they don't compromise signaling in the nervous and vascular systems.

    But more pieces of this enzyme may have to be crystallized before researchers can craft superspecific iNOS inhibitors. In the complete molecule, the oxygenase Tainer's group crystallized is linked to other enzymatic machinery, including a reductase, which shunts electrons to it, and another oxygenase-reductase pair. “What they have is a piece of a huge enzyme,” Nathan cautions. “We need to know how all the pieces fit together.” Even so, this piece should open many eyes. “For me this is a watershed,” says Stuehr. “Seeing this structure is like seeing somebody's face after knowing him for years as just a pen pal.”


    Biodiversity in a Vial of Sugar Water

    1. Virginia Morell

    ARNHEM, THE NETHERLANDS—Most biologists head into the field to test hypotheses about the processes that generate biodiversity. But for a few researchers, the insights are coming from trips to the lab. At the recent meeting of the European Society for Evolutionary Biology here, researchers described how microbes living in a vial full of nutrient broth can form a rainforest in miniature, quickly diversifying into a range of new forms.

    The diversity is nothing like the dizzying array of forms that emerged from, say, the Cambrian explosion half a billion years ago, notes Paul Rainey of Oxford University, who did the work with his colleague Michael Travisano. But “it nevertheless bears some of the hallmarks of such macroevolutionary events,” he says. And that means evolutionary biologists can study these miniature adaptive radiations for clues to what drives them in nature. Rainey and Travisano “have devised this fantastic, apparently repeatable experiment that shows exactly how it happens,” notes Andrew Read, an evolutionary biologist at the University of Edinburgh in the United Kingdom. “It's bound to become a classic.”

    Rainey and Travisano staged their demonstration by placing Pseudomonas fluorescens, a common aerobic bacterium that thrives in the soil as well as on plants, in an unfamiliar habitat: a broth-filled vial. The vial, says Rainey, “offers a variety of environments, all differing in the amount of available oxygen,” which varies with depth in the broth. Five days later, the original ancestor, which the researchers dubbed the “Smooth Morph” (SM) because its colony has a smooth surface, had undergone rapid morphological change, giving rise to numerous new forms, each one presumably adapted to a specific niche in the vial. In its small way, the vial thus mirrored Earth's seas “after a major extinction event,” says Rainey. “The bacteria had lots of opportunity for diversifying, and bang!, they just went.”

    As long as the vial sat undisturbed, that microbial diversity was maintained. But when Rainey shook the tube, he destroyed the structure of the bacterial colonies and the variations in oxygen. The multiple niches vanished—and so did the accompanying diversity. Rainey let the vial sit again, and within a week's time, the diversity reappeared.

    To show that good, old-fashioned natural selection was driving this diversification, the researchers first set out to identify the specific habitats of two of the most common morphs. On their own, these formed distinctive colonies. One grew a wrinkly, sticky surface (which the duo labeled “Wrinkly Spreader,” or WS), and one looked like a dust ball (the “Fuzzy Spreader,” or FS). The team then placed a few cells from these colonies as well as from the ancestral (SM) stock in a pristine vial environment, where their habitats would be easier to identify than in the original, more complex ecosystem.

    Within 3 days, WS had colonized the broth's surface, forming a thick mat; FS had settled in at the vial's bottom; and the ancestral SM was thriving in the middle. To further establish that the morphs had adapted to distinctive habitats, the team set them against each other in mano a mano competitions. For instance, they placed 100 cells of WS into a vial with a single cell of SM, and vice versa. In all cases, the minority cell survived and multiplied. “That could only happen if each morph [had adapted to occupy] a separate niche; otherwise, the rare form would go extinct,” says Rainey.

    “It's a very exciting experiment,” says Richard Lenski, an evolutionary geneticist at Michigan State University in East Lansing. “Other people have shown the complexity and rapidity of evolution, but [Rainey's team] is demonstrating that all together in one experiment—as well as showing the importance of natural selection.”


    Natural Selection's Capricious Ways

    1. Virginia Morell

    New habitats are sure to produce a gamut of new adaptations, as test tube experiments show (see main text). But predicting just how an organism will respond to a new environment can be dicey, Michael Travisano, an evolutionary biologist at Oxford University, has shown in another evolution-in-miniature project.

    Travisano studied 12 populations of Escherichia coli bacteria that have spent 10 years living on a low-sugar diet in Richard Lenski's lab at Michigan State University in East Lansing. To see how these populations had adapted to their meager diet, Travisano grew the glucose-adapted microbes in 11 new, slightly different environments: broths containing other sugars, such as lactose, fructose, and melibiose. After only 1 day, he measured their fitness by counting the number of descendants they had produced in that short amount of time. The numbers varied wildly, as the microbes thrived on some sugars, while stumbling badly on others.
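    Fitness measurements of this kind are typically expressed as a ratio of realized growth rates over the assay period. A minimal sketch of that calculation — the cell counts below are hypothetical, chosen only to illustrate the arithmetic, not Travisano's data:

    ```python
    import math

    def relative_fitness(evolved_start, evolved_end,
                         reference_start, reference_end):
        """Relative fitness as the ratio of realized (Malthusian)
        growth rates over one growth cycle, a standard measure in
        experimental-evolution work."""
        evolved_rate = math.log(evolved_end / evolved_start)
        reference_rate = math.log(reference_end / reference_start)
        return evolved_rate / reference_rate

    # Hypothetical one-day counts: both strains start at 1e5 cells/ml;
    # the evolved strain reaches 2e8, the reference strain 1e8.
    w = relative_fitness(1e5, 2e8, 1e5, 1e8)
    print(f"relative fitness: {w:.3f}")
    ```

    A value above 1 means the evolved strain outgrew the reference on that sugar; values below 1 capture the “stumbling badly” cases.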

    E. coli normally fares well on all of these sugars, so the varying responses suggested that the glucose-adapted microbes had undergone some physiological change, Travisano says. He suspects that Lenski's original strain adapted to living on a low-glucose diet by improving specific uptake mechanisms for glucose, which would enable the bacteria to digest the sugar more efficiently. Those mechanisms “preadapt” the bacteria to thrive on glucoselike sugars, such as fructose. But place them in a different sugar, such as melibiose, and the glucose-evolved bacteria falter.

    Yet the uptake mechanisms apparently did not change in the same way in all 12 strains, because the pattern of fitness they showed on the new sugars differed from strain to strain. “Travisano has put identical individuals through the same selection regime—and come out with greatly varying responses,” notes Peg Riley, an evolutionary biologist at Yale University. In other words, faced with one evolutionary pressure—in this case, a low-sugar environment—an organism can evolve several different solutions to improve its fitness.

    Travisano also let bacteria from Lenski's original E. coli ancestor evolve for 1000 generations on a restricted maltose diet instead of glucose. He then switched the maltose-evolved microbes to glucose, where they continued to thrive. “That might lead you to predict that the reverse is true, too,” he says: “that glucose-evolved bacteria will do well in maltose.” In fact, they vary greatly in their response, with some improving, others becoming substantially worse, and others not changing at all—a finding that underscores the different ways in which the glucose-evolved microbes had adapted to the same environment.

    Concludes Paul Rainey, Travisano's colleague: “Predicting the outcome of adaptive evolution is a risky business.”
