News this Week

Science  27 Jun 1997:
Vol. 276, Issue 5321, pp. 1960


    Corn Genome Pops Out of the Pack

    1. Jon Cohen

    Congress is poised to launch a corn genome project, but plant geneticists want to make sure other, related cereal grains aren't ignored

    IRVINE, CALIFORNIA—In the next few weeks, key members of the U.S. Congress are planning to plant seed money into appropriations bills to launch a new genome project. This ambitious effort, focused on corn, or maize, the quintessential American food crop, could mean tens of millions of dollars for crop genetics research in the next few years.

    The prospect of such an initiative has grabbed the attention of plant scientists, who hope it could do for crop genetics what the multibillion-dollar Human Genome Project is beginning to do for human genetics. By helping to unravel the multitude of genetic mysteries hidden in corn's crunchy kernels, the project could aid in understanding and combating common diseases of grain crops. It could also provide a big boost for efforts to engineer plants to improve grain yields and resist drought, pests, salt, and other environmental insults. Such advances are critical for a world population expected to double by 2050, says Robert Herdt, director of agricultural sciences at the Rockefeller Foundation. “Four species provide 60% of all human food: wheat, rice, maize, and potatoes,” says Herdt. “And we don't have good strategies for increasing the productivity of plants.”

    Herdt was one of 50 plant scientists who gathered at the National Academy of Sciences' center here 2 to 5 June for a meeting billed as “Protecting Our Food Supply: The Value of Plant Genome Initiatives.” Although there's widespread enthusiasm for a plant genome project focused on a crop, the meeting revolved around the question of how much emphasis should be put on corn—which has a huge, complex genome. Several researchers believe the project should also intensively study rice, which has a much simpler genome, along with several other species from the genetically similar grass family.

    Unlike many meetings in which scientists dreamily discuss prospects in their field, the talks at this one were integral to the policy-making process. One of the meeting's co-organizers, Ronald L. Phillips of the University of Minnesota, St. Paul, is also chief scientist for the competitive grants program at the U.S. Department of Agriculture (USDA) and chair of a task force preparing a report to Congress about how the government should proceed on such an initiative. “This is the only meeting that I knew was going to be important from the outset,” said Michael Freeling, a University of California, Berkeley, geneticist and the meeting's co-organizer, at the opening session.

    Ear ye, ear ye

    As much as scientists would like to see a crop genome project that studies several grasses, organizing it around corn makes a great deal of political sense. Corn is a potent economic force, providing much of the feed for the country's livestock, basic ingredients for everything from drugs to ethanol, and up to $8 billion in exports. And the industry is represented by a formidable political lobby, the National Corn Growers Association.

    Old friends.

    Rice, wheat, and corn—key grains on a list of grass genomes to be sequenced—are believed to have diverged some 60 million years ago.


    Indeed, the idea for a publicly funded corn genome project began to take root late in 1995, when the growers' association put its muscle behind it. The sales pitch includes a 70-page business plan and a slick video promoting a “national corn genome initiative.” They have won the backing of Senator Christopher “Kit” Bond (R-MO), who chairs the subcommittee that funds the National Science Foundation (NSF). “One of the reasons we're here [in Irvine] is because of the corn growers and the political momentum they created,” said corn geneticist Joachim Messing of Rutgers University in Piscataway, New Jersey.

    Bond says he is keeping an open mind about the project and that he welcomes input from scientists. “Give us a game plan,” Bond told presidential science adviser Jack Gibbons and NSF director Neal Lane at a 22 April hearing on NSF's 1998 budget request. Bond asked Gibbons to assemble a panel to come up with such a plan, and the White House responded by creating the task force that Phillips chairs. An interim report is due this month, with a final report by December.

    Several hundred academic researchers in the United States are studying corn genetics, estimated Ed Coe of the University of Missouri, Columbia, and three companies have projects under way to identify corn genes. But there is scant coordination between any of these efforts, and much of the data from the private companies are not widely available (see sidebar). The solution, according to the corn growers, is a federally funded, $143 million research program that would stitch together these varied efforts.

    Given a chance to put science in the driver's seat, plant geneticists are trying to block out the key issues. The most fundamental question is the same one that faced researchers who launched the Human Genome Project a decade ago: What level of detail is needed? One group would like to fish out from corn and other model plants just the sequences of genomic DNA most likely to code for genes. That's typically a small part of any genome. Another camp says that sequencing the entire genome is the only way to find all the genes and understand their relation to each other. “This is ‘déjà vu all over again,’” says David Cox, who co-directs a center working on the Human Genome Project at Stanford University. The lesson from the human experience, he says, is that “you need both.”

    However, the big problem with sequencing the entire corn genome is just that—it's big. Corn has about 3 billion pairs of bases (the building blocks of DNA), which makes it comparable in size to the human genome. It also has an abundance of repeated sequences, which probably contain little useful information. Rice, on the other hand, is the smallest of the crop grasses (see table), with only 430 million base pairs. Moreover, rice has a great deal in common with corn, wheat, oats, barley, and other members of the grass family, says rice researcher Susan McCouch of Cornell University. “Rice is the closest thing to the ancestral version of the grass genome,” says McCouch. “Yet it still embodies the essential set of genes for grasses.”

    Gnats and giants.

    Grain genomes range widely in size, but all are much larger than the genomes of the nonhuman species being sequenced.


    Researchers have already found broad similarities between the genes of different members of the grass family. “It's no longer OK for me to be a wheat geneticist and for you to be a rice geneticist,” says Michael Gale, a plant molecular biologist from the John Innes Centre in Norwich, U. K. “We all have to be cereal geneticists.” But he says “it's still an open question” how many genes the different grasses actually share.

    Obtaining the entire rice sequence would clarify how much “synteny”—similar genes that appear in similar locations of the genome—exists between the grasses. But how to get the most bang for the bucks that researchers hope Congress will devote to the project is another question. “If we get $100 million, do we suck it all into the rice genome?” asked Jeff Bennetzen, a corn geneticist at Purdue University. “For me, whole-genome sequencing is a low priority.” Timothy Helentjaris of Pioneer Hi-Bred emphasized that rice has little political muscle. “If you got down to details and said 70% [of the budget] is for rice sequencing, you'd set off red flags,” said Helentjaris.

    Cornucopia project

    By the meeting's end, participants had cobbled together a plan that seemed to satisfy the majority. Its first element was a proposal for an international rice genome sequencing effort, with the United States putting up half of the money and inviting China and Japan, both of which are funding rice genome projects, to join. The scientists also suggested building up a database of short sequences, called “expressed sequence tags” (ESTs), that can be used to identify expressed genes. They recommended sequencing 500,000 ESTs for corn and 100,000 each for rice, wheat, oats, barley, and sorghum. The group also called for computer databases to share data as they are generated and stock centers where researchers can freely receive the clones used to study the various plants.

    The plan won plaudits from government officials eager to avoid a congressional mandate. “I'm very enthusiastic about what I've heard at this meeting,” said Mary Clutter, head of the biology directorate at the NSF. “That is, focus on the science and let us build a program” to present to Congress. Clutter would like several agencies to participate in a project led by the USDA, with NSF funding a steering committee that would draw up a request for the 1999 fiscal year that begins on 1 October 1998.

    That's not soon enough for Kellye Eversole, a lobbyist for the corn growers at the meeting. “We don't want to spend another year on planning,” says Eversole. “We want to see this get off the ground in 6 to 7 months.” But James McLaren of Inverizon International, which drew up the business plan for the corn growers, signaled a willingness to be flexible about the scope of the project. “If you all tell me the best way to improve corn is to sequence rice, I'll support you,” said McLaren. “But you'd better be right, because [the corn growers] are standing out front.”

    Congress also seems eager to get started. A staffer in Bond's office who asked not to be named told Science that legislators plan to designate $10 million for the effort in two separate parts of the spending bill for the agriculture department. Another earmark might appear in the appropriations bill that funds NSF. But Clutter takes issue with that approach. “Earmarking … is anathema to the Administration,” said Clutter. “It means taking away money from something planned.”

    Indeed, says Cliff Gabriel of the White House's Office of Science and Technology Policy, starting a genome project means curbing or ending an existing program. And although he didn't propose any candidates for the chopping block, he told the group that the Administration supports a grain initiative. “The time is right to do something,” he said.


    Please Pass the Data

    1. Jon Cohen

    IRVINE, CALIFORNIA—Deciphering the genetics of the mustard plant won't by itself meet the world's increasing demand for food. But plant biologist Christopher Somerville thinks that it can teach his colleagues a lot about sharing as they embark on a grain genome project (see main text).

    Somerville is part of a coordinated, international effort to decode, or sequence, all of the DNA in the genome of Arabidopsis (Science, 4 October 1996, p. 30). But a slide he presented at a recent meeting on food crops (see below) makes clear that the extent of collaboration has been uneven. While several groups were sharing sequence information fully, he says, Japanese and European researchers had yet to put any sequences into public databases. “From the beginning, we've had a lot of international cooperation, and this is not in that spirit,” says Somerville, who heads the Carnegie Institution of Washington's plant research branch in Stanford, California.

    Unequal portions.

    Arabidopsis collaborators follow different practices about sharing databases.


    Leaders of both the Japanese and European Arabidopsis projects acknowledge the shortfall, but they say there's good reason. Michael Bevan of Britain's John Innes Centre, who heads the international consortium on the plant that Somerville and others contribute to, notes that European researchers, unlike their U.S. counterparts, don't release data until they have verified their accuracy. “The rapid release of highly accurate, annotated sequence is a goal we all aim to achieve,” says Bevan. Satoshi Tabata, who heads the Japanese project at the Kazusa DNA Research Institute in Chiba, Japan, says money has been a big obstacle to the posting of data. Tabata says the project will begin releasing data next month and “will keep releasing data without delay after that.”

    Regardless of which view prevails, the Arabidopsis experience illustrates the obstacles to any effort to coordinate the sharing of genomic data. And even when scientists profess fidelity to the idea of sharing, the interests of industry and nationalism can be overwhelming. Michael Gale, a plant molecular biologist also at the Innes Centre, is much worried by what he says is recent pressure from the European Union (EU) to give industry first crack at any genome data. “The EU wants to protect its databases,” says Gale. “It's something we should all fight very vigorously.”

    Rice genome researchers have long complained that Japan's 7-year-old rice genome project has been slow in sharing data (Science, 18 November 1994, p. 1187). While tensions have eased as the Japanese researchers have made their data and materials more widely available, similar concerns are being raised about the availability of data from a Chinese project. In particular, says Susan McCouch of Cornell University, Chinese researchers have had little interaction with foreign colleagues. “We have almost no information out of the Chinese project,” said McCouch. “No one I know has ever seen any of that data.”

    Hong Guofan, director of the National Center for Gene Research at the Chinese Academy of Sciences in Shanghai, told Science that he expects the situation to improve shortly. An Internet site will offer data “as soon as the relevant financial arrangement has been settled.” Hong noted that Chinese rice researchers also submitted an abstract of their work at a meeting in South Carolina last October and have a related paper in press. McCouch replies that Hong canceled his talk at the South Carolina meeting and again at a meeting held in San Diego this January. “As far as I am concerned, an abstract is not ‘sharing data,’” says McCouch.

    Other genomic data about grains are being held close to the vest because industry is directly funding the work. Three U.S. companies—Pioneer Hi-Bred, Monsanto, and DuPont—have corn genome projects under way. The highest profile belongs to Pioneer's project, a 3-year, $16 million deal with Human Genome Sciences of Rockville, Maryland, to pluck out pieces of corn genes and assemble them in a database.

    Setting the rules for access to such databases can be tricky. Pioneer's Steven Briggs says the data, while not in the public domain, are available. “We've granted everyone's request for access,” said Briggs about the project, which began in January 1996. Briggs urged the U.S. government to negotiate with industry to gain access to private databases.

    James Cook of the U.S. Department of Agriculture had some advice for the feds, too: Be tough with collaborators who withhold information. “If our partners are not holding up their end, pull the plug,” said Cook. For their part, most scientists prefer the carrot to the stick. “A data-release war [would be] a disaster for the genome project,” says Rob Martienssen, who co-leads the Arabidopsis sequencing project at Cold Spring Harbor Laboratory in New York. “We can only encourage them and lead by example.”


    Alzheimer's Maverick Moves to Industry

    1. Eliot Marshall

    Since 1994, the British drug company Glaxo Wellcome has been buying bits and pieces of U.S. biotech firms as part of a push into genetics. On 17 June, the company announced a surprising choice to direct its growing genetics empire: Allen Roses of Duke University, a prominent neuroscientist and controversial Alzheimer's disease researcher. Roses will run this $47 million directorate from Glaxo's U.S. headquarters in Research Triangle Park, North Carolina.

    Roses, an outspoken researcher whose ideas about the genetics of Alzheimer's have drawn a mixed reception from his peers, has been at Duke for 27 years and was named director of the university's Center for Human Genetics in 1996. He says the main reason he took the job with Glaxo is that “We are at a point now in the understanding of Alzheimer's disease [at Duke] that we are targeting” therapeutic products. “Universities don't make drugs and governments don't make drugs,” Roses says, but “Glaxo Wellcome does.” Glaxo Wellcome has funded Roses's work at Duke, and he says his research program will “be accelerated by my being inside” the company. Glaxo has agreed to allow Roses to continue some research at Duke as an adjunct professor.

    As director of Glaxo's international genetics program, Roses will command a program based in labs in three countries (the United States, Britain, and Switzerland), comprising 150 researchers. According to Glaxo, the staff is expected to double over the next 18 months, as new departments are created to “ensure that genetics plays its part not only in drug discovery but also in development and in the commercialization of medicines.” Roses's job will be to forge a coherent strategy, linking combinatorial chemistry at Affymax Research Institute of Palo Alto, California (purchased by Glaxo in 1995), gene expression research at Incyte Pharmaceuticals Inc. of Palo Alto (as of last month, a partner of Glaxo's), and clinical genetics studies at Spectra Biomedical of Menlo Park, California (purchased by Glaxo this month).

    Roses says one of the reasons the company chose him is that he's not a fence straddler. Indeed, he notes, some of his peers have called him a “street fighter.” For example, he recently spoke out at a Senate subcommittee hearing about what he called lack of vision in the public biomedical funding agencies. He says his grant requests to the National Institutes of Health received poor ratings from “narrowly focused scientists” with “dogmatic belief systems.” His lab would have closed, he added, had it not received funding from Glaxo Wellcome.

    Roses may be best known for showing that a protein involved in cholesterol transport (apolipoprotein E) is a factor in Alzheimer's disease. Roses and his colleagues also linked genes that encode variants of the protein (the apoE genes) to varying degrees of risk for Alzheimer's disease. Alison Goate, an Alzheimer's researcher at Washington University in St. Louis, says that while most researchers would agree that the gene known as apoE 4 is “the single most important risk factor” for Alzheimer's disease in the under-70 population, some of Roses's other conclusions are not widely accepted. Most controversial, Goate says, is a theory of Roses and his Duke colleague Warren Strittmatter that “good” versions of the apoE gene (E2 and E3) produce a protein that helps maintain healthy nerve cells, while the “bad” variant (E4) fails to do so, leading to Alzheimer's disease (Science, 19 November 1993, p. 1210). Because some Alzheimer's patients do not have the apoE 4 gene, and some people who have the gene do not have the disease, many researchers doubt that a test for apoE 4 would have value in predicting whether a healthy person will get the disease.

    While Roses may seem an iconoclast to some, his colleague Peter St. George-Hyslop of the University of Toronto says he's really “not all that outrageous … he likes to play that angle.” Goate agrees: “He thrives on controversy.” As for Roses's move to Glaxo, St. George-Hyslop comments: “It's good for them, bad for academic science.”


    Varmus Grilled Over Breach of Embryo Research Ban

    1. Eliot Marshall

    Like a brainy kid getting a lesson from the neighborhood enforcers, Harold Varmus, director of the National Institutes of Health (NIH), endured 3 hours as the lone witness before a House investigative panel last week. He was grilled about an NIH-funded researcher, Mark Hughes, accused last year of violating a federal ban on embryo research. The 19 June inquiry before the oversight and investigations subcommittee—the panel once chaired by the fearsome John Dingell (D-MI)—also served as a reprimand of NIH's top brass, seated behind Varmus in the audience, who were faulted by legislators for lax management. The subcommittee chair, Representative Joe Barton (R-TX), called it a “friendly hearing,” but at times the questioning was anything but amiable.

    Barton and other panel members, including Representative Ron Klink (D-PA), a sharp interrogator, concluded that Hughes, a molecular geneticist, had violated the ban on embryo research from 1995 to 1996. Hughes had searched for disease-causing mutations in DNA from embryos created by in vitro fertilization (IVF) to determine if they should be implanted in the mother's uterus. Barton said in an opening comment that he was concerned that Hughes had “conducted this prohibited [embryo] research openly on the NIH campus.” Barton implied that the NIH chiefs had looked the other way, allowing Hughes's research to go on “with a wink and a nod”—until it became a burning issue. Barton said, “It appears some at NIH believe they are above the law. They are wrong.”

    Varmus confirmed that Hughes had violated the embryo research ban and other rules designed to protect human subjects. But, he insisted, “there was no wink and a nod.” Varmus maintained that he and other NIH leaders had been unaware of Hughes's alleged misconduct because Hughes was careful to hide it. “When Dr. Hughes's surreptitious pursuit of prohibited research was discovered,” Varmus said, “the NIH moved swiftly and decisively to terminate its research relationship with him and to ensure that no other similar violations were occurring.” But in an awkward moment, Varmus disclosed that he had not even learned details of the Hughes scandal until it surfaced in the newspapers in January 1997—3 months after officials of the National Human Genome Research Institute (NHGRI) had severed all ties with Hughes because of his apparent misdeeds. “It was unfortunate,” Varmus said: “My people assumed that someone else had [told me]” about the controversy, but no one had.

    “Friendly” fire.

    House investigators pepper NIH chief Varmus with questions as NIH staffers look on.


    Hughes did not testify. But his lawyer, Scott Gant of Crowell & Moring in Washington, D.C., issued a statement in which Hughes claims, “I never intended to violate the ban on embryo research.” In interviews with Science, Hughes has insisted that NIH chiefs did not make it clear that federal rules forbade him to use NIH resources to practice his specialty—preimplantation genetic diagnosis (PGD) of DNA taken from a single cell of a human embryo. “The NIH leadership may believe that they expressly told me in person that my PGD research was barred,” says Hughes, “but that is not my recollection. …” He adds, “I was never given any written statements or policies indicating that I could no longer do my PGD work. I believe there was simply a miscommunication. …”

    Hughes's charge of poor communication at NIH was supported by evidence at the 19 June hearing, but not his claim that he didn't know the rules. As Varmus noted, embryo research—particularly the IVF studies Hughes was involved in—had been widely debated. Since 1980, it had been off limits for NIH researchers. Congress cleared the way for funding of embryo research in 1993, but before approving any projects, Varmus, as NIH's new director, sought advice from a panel of experts on what should be allowed. Hughes was a member of the advisory group.

    When the panel issued recommendations in late 1994 calling for limited use of embryos in research (Science, 9 December 1994, p. 1634), President Clinton stepped in with a new prohibition: He ruled that federal funding could not be used to create embryos for research. Then in 1995, the Republican Congress ruled that no funds could be used for “research in which a human embryo or embryos are destroyed, discarded, or knowingly subjected to risk of injury or death greater than that allowed under” other laws. NIH interpreted this to mean it could not fund any research on human embryos.

    Varmus met Hughes—who had been recruited to the NIH campus in Bethesda, Maryland, in 1994 while the policies were in flux—in the NIH director's office on 12 June 1995 to clarify the rules. Hughes recalls the session as being packed with top officials. Varmus and his staff say that Hughes was told explicitly that he could not use NIH resources for DNA analysis of single cells taken from embryos. Hughes claims that this was not made clear. He says he continued to believe that, while research on embryos was off limits, analysis of DNA from single embryo cells was permitted. Unfortunately for NIH, the meeting produced no written memo to Hughes or NIH staff on the rules.

    Hughes continued to analyze DNA—taken from single cells extracted from embryos in IVF clinics—in a lab he had set up at Suburban Hospital near NIH and, in at least one case, on the NIH campus. By chance, the test at NIH went wrong: Hughes had determined that DNA from an embryo did not carry a mutation that causes cystic fibrosis, but when the embryo was implanted and brought to term, Hughes confirms, the child tested positive for the disease. Complaints about research procedures conveyed to NIH by Hughes's postdocs triggered an internal inquiry by NHGRI staff in August 1996. The inquiry found that Hughes had violated the embryo research ban, given NIH fellows unapproved tasks, shipped NIH equipment on loan to an unapproved site, and failed to obtain proper ethics reviews for research protocols. NIH severed ties with Hughes on 21 October 1996.

    After conceding that NIH had been slow to ask the Department of Health and Human Services inspector general to look into this case—an investigation that is still under way—Varmus pledged to do a better job of enforcing research limitations in the future. Barton approved, saying, “We can be much more unfriendly” if NIH doesn't show signs of enforcing the rules more strictly.


    Senate Raps NASA on Cost Overruns

    1. Andrew Lawler

    A powerful Senate committee chair said last week that he may try to curb cost overruns on the international space station by imposing a new ceiling on annual outlays for the project. Agency managers fear a new cap could trigger yet another redesign, and station supporters worry that any major changes could doom the program, which is scheduled to launch its first components in June 1998. The move comes just 1 month after NASA managers succeeded in calming angry House lawmakers, who felt Russia was reneging on its promised contribution.

    The latest threat to the $30 billion project came at an 18 June hearing, when Senate Commerce Committee Chair John McCain (R-AZ) surprised NASA officials by calling for a comprehensive cap to hold the space agency more accountable for cost overruns. However, McCain, who supports the station, did not propose a specific figure. The same day, the General Accounting Office (GAO) warned that U.S. contractor overruns have reached $300 million, triple the level of a year ago. The problem, combined with Russia's tardiness in meeting its commitments, has caused a contingency fund to shrink much faster than planned, GAO said. The bad news for scientists is that NASA has used space shuttle and station science-facility funds to keep the lab on track.

    The Administration and Congress informally agreed to a $2.1 billion annual limit on station funding in 1993. But GAO's Thomas Schulz warned Congress that “the station is going to cost more than the [current] cap allows.” A Senate aide was even more critical. “It's not a real cost cap—it's a fiction,” says the aide. “NASA is playing a shell game.”

    While admitting that the station's problems are severe, NASA managers say they are reluctant to alter the program. “I have no problems with reviews, but I worry about redesigns,” NASA Administrator Dan Goldin told McCain. “I don't think this program could survive another one.”

    McCain's proposal would include shuttle and civil service costs associated with the station program, which do not fall within the $2.1 billion figure. The new number—which almost certainly would be higher than the current limit because of the additional elements—could appear in Senate legislation authorizing NASA's 1998 funding, say congressional staffers. NASA managers say they will cooperate, but privately complain that there is too much uncertainty in the program to commit to a specific figure.

    The GAO report, meanwhile, had harsh words for Boeing, the station's prime U.S. contractor, whose performance it said “showed signs of deterioration last year, [and] has continued to decline virtually unabated.” The company washed out in its last performance review, scoring zero out of 100 and forfeiting up to $33 million in incentive payments, say NASA officials. In an 18 June letter to NASA space flight chief Wilbur Trafton, Boeing defense and space group president Alan Mulally pledged to tighten oversight of subcontractors and keep to the schedule for major hardware.

    The problems with Russia and Boeing leave NASA with little room to maneuver. A reserve fund has already dropped from almost $3 billion last spring to $2.2 billion, the GAO noted, and it is expected to shrink to $1.4 billion by October.

    Some are skeptical, however, that McCain's proposal will make a difference. “Congress has voted 16 times over the past 14 years to keep the program and has shelled out $18 billion,” says Marcia Smith, an analyst at the Congressional Research Service. With the first launch only a year away, she says it seems unlikely that Congress would cancel the effort, even if NASA exceeded a revised cap.


    Labs Form Biomedical Network

    1. Dennis Normile

    TOKYO—Biomedical scientists from more than a dozen institutions throughout the Pacific Rim are meeting here this weekend to launch an association to foster molecular biology and related research throughout the region. If all goes well—and if they can find the money—the researchers hope someday to develop into an organization with its own world-class laboratories.

    The idea for the tentatively named International Molecular Biology Network of Asia Pacific Rim grew out of a 4-year-old collaboration between the University of Tokyo's Institute of Medical Science and the Institute for Molecular Biology and Genetics at Seoul National University (SNU). Reciprocal annual meetings have spawned activities that have aided both institutions. When SNU virologist Sunyoung Kim wanted to study the interaction between two proteins, for example, he sent a graduate student to Tokyo to take advantage of an assay developed there. “If we wanted to set up [the experiment] in my lab in Korea, it would take 6 months,” Kim says. “We did it [in Tokyo] in 10 days.”

    The goal of the new organization, says Ken-ichi Arai, a molecular biologist at the Tokyo institute, is “to extend this type of interaction to other countries in the region.” The organizers received an enthusiastic response from a collection of institutions so loosely defined that it included Israel's Weizmann Institute of Science. “For those of us in the Asia-Pacific region, it will be very useful to have an organization that will allow us to interact a bit more closely,” says Nick Nicola, assistant director of the Walter and Eliza Hall Institute of Medical Research in Parkville, Australia. Y. H. Tan, director of the Institute of Molecular and Cell Biology at the National University of Singapore, thinks that the effort to enhance collaboration is “a great idea,” although he worries about the organization being dominated by one or two countries.

    One way to avoid that situation, says microbiologist Jeongbin Yim, director of SNU's institute and a co-organizer of the nascent organization, is to have scientists chart the association's future. For that reason, the first step is likely to be a series of get-acquainted research conferences, along with an exchange of postdoctoral researchers. Arai also anticipates using a World Wide Web page to help participants keep in touch. His model is the European Molecular Biology Organization, which started as a loose network of European institutions and has since acquired laboratories and a means to support research.

    Obtaining a dependable source of funding will be a major challenge, however. Yim and Arai tapped corporate contributions and government grants to subsidize the inaugural gathering, but participants probably will have to pay their own way to future meetings. Still, Arai hopes that the positive reaction to the initial meeting will convince governments that supporting this fledgling organization is in their own, long-term interest.


    Cold Wind Blows Through Arctic Climate Project

    1. Jeffrey Mervis

    Hydrologists Larry Hinzman and Vladimir Romanovsky were packing up to leave Russia's Far East earlier this month at the end of a 4-week research trip when disaster struck. The trip had already been a bit of an ordeal for the University of Alaska, Fairbanks, researchers, who were taking part in an international project involving Japanese and Russian scientists to monitor ground water and climate inside the Arctic Circle. Clearing their gear through customs and obtaining the necessary permits for their equipment, which included a differential global positioning system (GPS), took three times longer than they had expected, and a portion of their research had to be abandoned. But the fieldwork itself—staking out a 1-kilometer square of tundra and installing markers and sensors—had gone well.

    The first inkling that something was amiss came on Friday, 30 May, when customs officials in Yakutsk told them that the Federal Security Bureau, Russia's internal police, was interested in their activities. After being ordered to transfer their equipment to the Geodesic Supervision Commission and spending an anxious weekend worrying about its fate, they learned on Monday that the instrument's ability to obtain precise geographic coordinates made their data a state secret that could not leave the country. There was worse to come on Tuesday: a 3-hour interrogation by the security forces, who assumed the scientists were spies, and the seizure of Hinzman's logbook. On Wednesday, the police claimed his laptop computer and disks. With only 4 days to go on his visa, Hinzman and Romanovsky (a Russian citizen) decided the next day to leave Yakutsk. On 7 June, they arrived back in Fairbanks, safe but badly shaken, minus their data and equipment.

    Frozen out.

    Romanovsky (right) and Japan's Norifumi Sato with the GPS that was seized. Behind them is a grid marker.


    U.S. government officials are weighing how to respond to Hinzman's treatment. “We're extremely concerned about this interruption in the free flow of scientific information,” says Douglas Siegel-Causey of the National Science Foundation's (NSF's) Office of Polar Programs, who says Hinzman “did nothing wrong” during his visit. But a strong formal complaint or a threat to pull out of joint activities might prejudice future joint ventures in a region that U.S. researchers are eager to study. “We spend a lot of time promoting collaborations, and any problem can put a real damper on things,” says Cathy Campbell of the White House Office of Science and Technology Policy. But “without all the facts, it's hard to know how [Hinzman's case] should be resolved.”

    NSF, which had awarded Hinzman $242,000 for the 2-year project, would like to know not just the facts of this case but whether it fits into a pattern. Several U.S. scientists working in Russia's Far East in the past few years have reported problems ranging from unilateral restrictions on their activities by provincial authorities to last-minute financial demands. NSF has asked its grantees to submit information on such episodes. If the reports do point to a pattern, NSF officials say, the next step may be to air the problem before a commission, headed by Vice President Al Gore and Russian Prime Minister Viktor Chernomyrdin, formed in 1993 to foster cooperation in science and other areas between the two former Cold War enemies.

    One complicating factor is the growing independence of local and regional officials from the central government in Moscow. In June 1995, for example, the governor of the Chukotka region that borders the Bering Sea enacted a new policy requiring permits for all scientific activity in his jurisdiction. Over the next few months, three U.S.-Russian projects were canceled and eight more were delayed as scientists scrambled to meet the new rules. In the Hinzman episode, it is unclear whether the local security police were acting on their own or under orders from Moscow.

    Some government officials say the episode is no cause for alarm, however. “From where I sit, it's not a recurring problem,” says Environmental Protection Agency (EPA) official Gary Waxmonsky, who staffs the environmental committee of the Gore-Chernomyrdin Commission. “There are other scientists using GPS equipment in Russia, so I don't see [the security police's reaction] as a generic problem.” He says problems are less likely to occur when federal officials are involved from the start, as with most of the projects EPA sponsors.

    U.S. researchers hoping to conduct research in Russia's Far East—an area rich with opportunities that was largely closed to outsiders during the Cold War—are anxious that any official action by the U.S. government not make an already difficult situation worse. “There are still lots of opportunities, but some U.S. scientists have given up because it's too difficult to overcome the bureaucratic obstacles,” says Dale Taylor of the U.S. Geological Survey's biological resources division in Fairbanks, a driving force behind an effort to create a Beringian Heritage National Park that spans the two countries. “And that's very sad for Russia, where good scientists are working under very difficult conditions.” Collaborations, says Taylor, also are an essential ingredient in keeping Russian researchers afloat.

    For glacial geologist Julie Brigham-Grette of the University of Massachusetts, Amherst, Hinzman's experience has forced her to examine the balance between her quest for discovery and her concern about safety. Brigham-Grette's students encountered problems last fall during a field expedition in Chukotka, and she had to postpone a project scheduled for this spring after it became snarled in local politics. “There's an unexplored meteorite crater lake formed 4 million years ago that may have a continuous climate record, and I want to get a core sample,” she says about the Russian site. “It could be a major research project that would also contribute a lot to the local economy. … But I don't want my family to have to worry that I might be arrested just for doing my work.”

    Hinzman is also taking stock. “When I got home, I said I'd never return. But last night, my wife bet me $1000 that I'd be back in Russia within a year.” He paused, turning the idea over in his head. “There's just so much science that needs to be done, and so much to learn.”


    Centers Fear Self-Sufficiency Is Prelude to Government Cuts

    1. Elizabeth Finkel
    1. Elizabeth Finkel is a free-lance writer in Melbourne.

    MELBOURNE—Success breeds success. At least, that's the way it's supposed to work. But try telling that to the managers of Australia's Cooperative Research Centres (CRCs), set up in 1991 to unite government, industry, and academic researchers in an effort to boost the nation's high-tech economy. The most successful centers have attracted industrial support and laid the groundwork for new products, but they fear that this is only encouraging the government to cut their funding in the hope that industry will pick up a larger share of the bill. “It's a Catch-22 situation,” says Nick Nicola, former director of the CRC for Cellular Growth Factors in Melbourne. “If your CRC looked like it could be self-sustaining, they'd say there was no need for government funds. If it didn't look like it could be self-sustaining, they'd say you didn't deserve [the money].”

    CRC directors are running scared in part because the government recently announced that it would trim the program's overall operating budget by $10 million over 2 years and begin a yearlong review aimed at making centers more self-sufficient and more attuned to commercial applications. The program has also just gone through a bruising competition involving 13 of the original 15 centers and 23 new applicants; only six of the existing centers won full funding for another 7 years, and some that had won high marks from outside reviewers either lost out or had their government funds trimmed sharply.

    The competition, moreover, is likely to get even more intense. Center directors say they worry that pressure to trim a persistent federal deficit will put a big squeeze on the $146-million-a-year program, which has grown to 65 sites. “There is a belief around Canberra that the government would like to reduce the program by half,” says one former science manager who requested anonymity. In addition, the Department of Industry, Science, and Tourism has recently dropped two other programs offering R&D incentives to industry, leaving the CRC program as the sole holdover in a portfolio begun by the previous Labor government. While government sources deny the rumor, they say that increased accountability and value for money are essential. They calculate that industry contributes only 15% of the CRC's overall budget, once government subsidies are subtracted, and they will be looking for ways to increase that share.

    The centers were set up to correct a situation, common in many countries, that featured strong fundamental research institutions with few ties to industry and a private sector averse to innovation. The program represents nearly 4% of the government's overall R&D investment, and supporters say a continued flow of public funds is needed to grease the wheels of cooperation with industry and to support projects not yet ready for the market. “A lot of people think CRCs are incubator companies,” says Geoff Vaughan, ex-vice chancellor of Melbourne's Monash University and chair of the CRC's policy-setting body. But that was never their intention, he says. “They are the intellectual bases for the nation's R&D.”

    The program has surpassed expectations in bringing together industrial, academic, and government researchers. It has helped to raise industry's contribution to the country's overall R&D effort from 25% to 46% in the past decade. It has prompted changes in graduate training, adding courses on intellectual property to the usual fare of research techniques. “Our students don't share the view of some of their supervisors that pure research is sullied by an industrial partner,” says Nicola.

    The CRC program's scientific results have also made a big splash. In the past few weeks alone, CRC-based research has grabbed global attention for several new findings, including a mathematical description of solitons, or solitary waves, that may mean a brighter future for telecommunications (Science, 6 June, p. 1538) and a new biosensor that can take assays of individual molecules (Nature, 5 June, p. 580).

    One CRC director unsettled by the shifting currents is Mark Sceats of the Australian Photonics Institute, whose scientists were responsible for the recent publication on the mathematical description of solitons. The Photonics CRC lays claim to two showpiece examples of technology transfer: A signal-dispersion compensator for the telecommunications market is being commercialized by Siemens, and Asea Brown Boveri is planning to commercialize a fiber-optic current sensor. But any commercial application of solitons is a decade or more into the future, Sceats says, a time frame that deters companies with their eyes glued to quarterly financial reports. Such research also requires an investment in human capital, he notes, adding that “industries are not going to support Ph.D. students.”

    Two CRCs whose funding was recently cut are trying to keep industry interested by putting a stronger commercial focus on their work, but the outlook is not bright. “With government funding, we could approach companies like AgrEvo to carry out high-risk research looking at genetic manipulation of important crops,” says Chris Buller, manager of the CRC for Plant Science in Canberra. But those talks are now in limbo, he says, and the center's future is uncertain. Says Robert Bitmead, director of the CRC for Robust and Adaptive Systems in Canberra, “It's hard to imagine collaborative research without government funding.”

    Eric Huttner, chief operations officer for Groupe Limagraine Pacific Pty. Ltd., says that the government's failure to renew the Plant Science CRC is a big disappointment. “We had gotten to know the scientists and established a track record,” he says. Being able to share the risk with government, Huttner adds, “is what attracts industry to invest in precompetitive research.”

    But while Sceats thinks that most centers would need at least 10 years of government funding to keep the lifeblood of innovation flowing, two centers that have received wind-down funding say they're optimistic about their chances of survival. “We've always proclaimed our potential for self-sufficiency,” says John Ballard, the director of the CRC for Tissue Repair and Growth Factors in Adelaide, which hopes to profit from milk-derived growth factors through a spin-off company, GroPep P/L. David Nairn, the director of the GK Williams CRC for Extractive Metallurgy in Melbourne, views the situation as a challenge: “If we've delivered to industry, they'll continue our funding.”

    Government officials will examine these issues in an upcoming review of the CRC program by the finance and industry/science departments. Advocates hope the review will clarify the government's role. “I suspect the push to self-sufficiency is coming from groups that don't understand the background, structure, and potential of CRCs,” says Vaughan. “The review will give [program officials] a chance to explain that.”


    Marijuana: Harder Than Thought?

    1. Ingrid Wickelgren

    Contrary to the popular view that marijuana is a relatively benign drug, new evidence suggests its effects in the brain resemble those of “hard” drugs such as heroin

    For decades, policy-makers have debated whether to legalize marijuana. Compared to drugs such as heroin and cocaine, many people—scientists and teenagers alike—consider marijuana a relatively benign substance. Indeed, there was little evidence to indicate that it is addictive the way those drugs are. But now, two studies in this issue demonstrate disturbing similarities between marijuana's effects on the brain and those produced by highly addictive drugs such as cocaine, heroin, alcohol, and nicotine.

    In one study, which appears on page 2050, a team of researchers from the Scripps Research Institute in La Jolla, California, and Complutense University of Madrid in Spain trace the symptoms of emotional stress caused by marijuana withdrawal to the same brain chemical, a peptide called corticotropin-releasing factor (CRF), that has already been linked to anxiety and stress during opiate, alcohol, and cocaine withdrawal. And on page 2048, Gaetano Di Chiara of the University of Cagliari in Italy and his colleagues report that the active ingredient in marijuana—a cannabinoid known as THC—results in the same key biochemical event that seems to reinforce dependence on other drugs, from nicotine to heroin: a release of dopamine in part of the brain's “reward” pathway.

    Together, the two sets of experiments suggest that marijuana manipulates the brain's stress and reward systems in the same way as more potent drugs, to keep users coming back for more. “These two studies supply important evidence that marijuana acts on the same neural substrates and has the same effects as drugs already known to be highly addictive,” says David Friedman, a neurobiologist at Bowman Gray School of Medicine in Winston-Salem, North Carolina. They also, he adds, “send a powerful message that should raise everyone's awareness about the dangers of marijuana use.”

    But the results may have a more hopeful message as well, because they may guide scientists in devising better strategies for treating marijuana dependence, for which some 100,000 people in the United States alone seek treatment each year. For instance, chemicals that block the effects of CRF or even relaxation exercises might ameliorate the miserable moods experienced by people in THC withdrawal. In addition, opiate antagonists like naloxone may, by dampening dopamine release, block the reinforcing properties of marijuana in people.

    Scripps neuropharmacologists Friedbert Weiss and George Koob first began thinking that stress systems might be involved in drug dependence in the early 1990s, after noticing that withdrawal from many drugs produces an anxious, negative disposition that resembles an emotional response to stress. They reasoned that drug withdrawal might recruit the same brain structures and chemicals that are involved in the stress response. Because Koob's team had associated emotional stress with the release of CRF in a brain structure called the amygdala, they thought that drug withdrawal might also trigger CRF release.

    Beginning in 1992, the Scripps researchers amassed evidence showing that this is indeed the case. First, Koob and his colleagues found that injecting chemicals that block CRF's effects into the amygdalas of alcohol-dependent rats reduces the anxiety-related symptoms, such as a reluctance to explore novel settings, that develop when the animals are taken off alcohol. Then in 1995, Weiss, Koob, and their colleagues showed that CRF levels quadruple in the amygdalas of rats during the peak of alcohol withdrawal.

    After similar experiments demonstrated that elevated CRF underlies emotional withdrawal from opiates and cocaine, Weiss, Koob, and M. Rocío Carrera of Scripps, along with two visiting Spanish scientists, Fernando Rodríguez de Fonseca and Miguel Navarro, set out to investigate whether CRF might mediate the stressful malaise that some long-term marijuana users experience after quitting.

    The researchers injected a synthetic cannabinoid into more than 50 rats once a day for 2 weeks to mimic the effects of heavy, long-term marijuana use in humans. Normally, marijuana withdrawal symptoms develop too gradually to be recognized easily in rats, because the body eliminates THC very slowly. But the researchers were able to produce a dramatic withdrawal syndrome lasting 80 minutes by injecting the rats with a newly developed drug that counteracts THC. The drug does this by binding to the receptor through which cannabinoids exert their effects.

    The group found that the cannabinoid antagonist greatly increased the rats' anxiety, as measured in a standard behavioral test, and exaggerated such signs of stress as compulsive grooming and teeth chattering during withdrawal. What's more, when the scientists measured CRF levels in the rats' amygdalas, they found that rats in withdrawal had two to three times more CRF than controls not given the antagonist, and that the increase paralleled the apparent anxiety and stress levels of the rats.

    The results, experts say, provide the first neurochemical basis for a marijuana withdrawal syndrome, and one with a strong emotional component that is shared by other abused drugs. “The work suggests that the CRF system may be a part of a common experience in withdrawal—that is, anxiety,” says Alan Leshner, director of the National Institute on Drug Abuse. A desire to avoid this and other negative emotions, Weiss suggests, may prompt a vicious cycle leading to dependence.

    But withdrawal is just one component of addiction. Addictive drugs also have immediate rewarding, or reinforcing, effects that keep people and animals coming back for more. The drugs produce these effects, scientists believe, by hijacking the brain's so-called reward system. A key event in the reward pathway is the release of dopamine by a small cluster of neurons in a brain region called the nucleus accumbens. Researchers think the dopamine release normally serves to reinforce behaviors that lead to biologically important rewards, such as food or sex. Addictive drugs are thought to lead to compulsive behavior because they unleash a dopamine surge of their own.

    But no one had been able to show convincingly that marijuana could induce that telltale dopamine rush, until Di Chiara and his colleagues put THC to the test. When the Cagliari team infused the cannabinoid into a small group of rats and measured dopamine levels in the nucleus accumbens, they found that the levels jumped as much as twofold over those in the accumbens of control rats infused with an inactive cannabinoid. The magnitude of the surge was similar to what the researchers saw when they gave heroin to another set of rats.

    Further work confirmed that cannabinoids, rather than other factors such as the stress of being handled by the experimenters, were responsible for the dopamine release. For example, the researchers observed no dopamine increase in animals that were given a receptor blocker before the THC.

    Then Di Chiara and his colleagues found an additional parallel between THC and heroin. They showed that naloxone, a drug that blocks brain receptors for heroin and other opiates, prevents THC from raising dopamine levels, just as it does with heroin. This indicates that both marijuana and heroin boost dopamine by activating opiate receptors. Marijuana, however, presumably does so indirectly, by causing the release of an endogenous opiate: a heroinlike compound made in the brain. “Marijuana may provide one way of activating the endogenous opiate system,” explains Di Chiara.

    Di Chiara speculates that this overlap in the effects of THC and opiates on the reward pathway may provide a biological basis for the controversial “gateway hypothesis,” in which smoking marijuana is thought to cause some people to abuse harder drugs. Marijuana, Di Chiara suggests, may prime the brain to seek substances like heroin that act in a similar way. Koob and Weiss add that the stress and anxiety brought on by marijuana withdrawal might also nudge a user toward harder drugs.

    More work will be needed to confirm these ideas, as well as to find out exactly how marijuana influences the stress and reward systems. For instance, nobody knows how THC interacts with neurons in the amygdala to alter the release of CRF. Nor do scientists understand the molecular steps by which THC triggers the dopamine release in the nucleus accumbens.

    But despite these uncertainties, both papers should help revise the popular perception of pot as a relatively—although not completely—safe substance to something substantially more sinister. “I would be satisfied if, following all this evidence, people would no longer consider THC a ‘soft’ drug,” says Di Chiara. “I'm not saying it's as dangerous as heroin, but I'm hoping people will approach marijuana far more cautiously than they have before.”


    Climate-Evolution Link Weakens

    1. Richard A. Kerr

    We mammals have come a long way since our ancestors were a motley group of small creatures scurrying about in the shadows of the dinosaurs. We owe much of it to climate change, or so goes the conventional wisdom. Researchers have speculated that the innumerable warmings and coolings of climate pushed unfit mammals to extinction and spurred the evolution of new, better adapted species. But the best compilation of fossil evidence on mammal evolution to date now shows that climate had little effect on most of the evolutionary churnings of the past 80 million years.

    Mammal ascent.

    Climate had little to do with the rise of mammal diversity after the impact 65 million years ago.


    “This is counterintuitive; I wanted to find a connection,” says paleontologist John Alroy of the Smithsonian Institution's National Museum of Natural History. Only during a few brief periods did climate seem to drive evolution—although those periods are turning points in the history of mammals. Instead, the main determinant of the rate of evolution was the number of existing species, with new species appearing more slowly as the ark got more crowded. Alroy's results, presented at last month's meeting of the American Geophysical Union in Baltimore, are “pretty impressive,” says paleontologist David Jablonski of the University of Chicago, “because it's been hard to get large-scale studies where you can look at” rates of evolution.

    Alroy gained this overview by putting together a unique record of mammals. “It's the best piece of work in terms of methodology I've ever seen,” says paleontologist Michael McKinney of the University of Tennessee, Knoxville. Alroy consulted 4015 lists indicating when and where 3181 North American mammal species lived during the past 80 million years. Then he adapted the record for statistical analysis by creating standard time intervals of 1 million years each and by dropping fossils from the most heavily sampled intervals, which would otherwise tend to look more diverse than sparsely sampled periods.
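    The sampling-standardization step lends itself to a short sketch. The code below is only a minimal illustration of the idea—drawing an equal quota of fossil occurrence lists from each time interval so that heavily sampled intervals do not look artificially diverse; the function name and toy data are invented for illustration, and Alroy's actual protocol was more elaborate.

```python
import random

def subsample_diversity(bins, quota, trials=200, seed=0):
    """Estimate species diversity per time interval by repeatedly
    drawing an equal number of fossil occurrence lists from each
    interval and counting the species seen, so heavily sampled
    intervals are not inflated relative to sparse ones."""
    rng = random.Random(seed)
    estimates = {}
    for interval, lists in bins.items():
        if len(lists) < quota:
            estimates[interval] = None  # too few lists to standardize
            continue
        total = 0
        for _ in range(trials):
            drawn = rng.sample(lists, quota)  # equal quota per interval
            species = set()
            for occurrence_list in drawn:
                species.update(occurrence_list)
            total += len(species)
        estimates[interval] = total / trials  # mean subsampled richness
    return estimates

# Toy data: one interval sampled twice as heavily as the other.
bins = {
    "65-64 Ma": [["A", "B"], ["B", "C"], ["A", "C"], ["A", "B", "C"]],
    "64-63 Ma": [["D", "E"], ["E", "F"]],
}
estimates = subsample_diversity(bins, quota=2)
```

With the quota applied, both toy intervals yield comparable richness estimates even though one contributed twice as many lists.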

    Alroy's final record of mammal evolution shows that mammal species were consistently scarce 80 million to 65 million years ago in the Cretaceous period, and the numbers dropped even lower during the mass extinction 65 million years ago at the time of the great impact. During the next 10 million years or so, diversity rose sharply, and then it settled into a more or less stable but higher plateau for the past 50 million years. Isotopic clues in the deep-sea sediments show numerous climate shifts over the same period, but Alroy found that most left no mark on mammal diversity.

    The reason mammals generally failed to respond to climate change, Alroy suspects, is that they were already adapted to an unsteady climate. Throughout the interval, cyclical variations in Earth's orbit have driven climate changes every 20,000, 40,000, and 100,000 years, he notes. The average species, surviving a couple of million years, would have to deal with repeated climate shifts.

    Alroy's analysis may have put to rest the old saw about climate driving every twitch of evolution, but it could give new life to another old idea: that new species are more likely to form when ecological niches are unoccupied, as they were after the great impact catastrophe. His analysis shows that new mammal species originate at the highest rate when existing species are few.
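    The diversity-dependent pattern can be illustrated with a toy model—emphatically not Alroy's analysis, and with all parameter values invented: if the per-species origination rate falls as standing diversity approaches a ceiling while extinction stays constant, diversity rises quickly after a crash and then settles onto a plateau, much like the fossil curve.

```python
def simulate_diversity(steps, base_orig, extinct, capacity, n0=5.0):
    """Toy diversity-dependent model: per-species origination falls
    linearly as diversity approaches a ceiling, while extinction is
    constant, producing a rise to a stable plateau."""
    n = float(n0)
    trajectory = [n]
    for _ in range(steps):
        orig = base_orig * max(0.0, 1.0 - n / capacity)  # crowding slows origination
        n += n * (orig - extinct)  # net per-species rate times diversity
        trajectory.append(n)
    return trajectory

# Start from low post-extinction diversity; parameters are arbitrary.
traj = simulate_diversity(steps=200, base_orig=0.3, extinct=0.1, capacity=120)
```

At equilibrium the origination and extinction rates balance, so the plateau sits where base_orig * (1 − n/capacity) equals extinct; here that is two-thirds of the ceiling.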

    Still, some researchers point out that climate has not been totally impotent. “It's fine that climate isn't important 95% of the time,” says paleontologist Steven Stanley of The Johns Hopkins University, “but the things we have to focus on are the intervals when interesting things did happen.” In fact, Alroy did find three short intervals—55, 34, and around 6 million years ago—when drastic global temperature shifts and heightened rates of diversity change did coincide. All three were times when mammal evolution took a major turn.

    The diversity change Alroy identified 55 million years ago was modest, for example, but qualitatively it was a “critical interval,” as Stanley has dubbed it. A host of modern mammals from primates to ungulates abruptly appeared in North America, in time with a sudden burst of warming that may have been driven by a sharp gush of greenhouse gas from the ocean's sediments (Science, 28 February, p. 1267). Climate may leave few marks on evolution, but they are lasting ones.


    Strings Unknot Problems in Particle Theory, Black Holes

    1. James Glanz

    PHILADELPHIA—In a story by Guy de Maupassant, a little piece of string takes on an almost cosmic significance when its discovery changes a French peasant's life forever. For almost 30 years, a group of physicists have been trying to establish the cosmic significance of their own brand of string: 10-dimensional (10D) abstractions with vibrations and interactions that may describe the structure of our universe at its most fundamental level. The obscure mathematics of string theory has made it seem remote from the problems other physicists grapple with. But a recent meeting here showed that even though the theory's ultimate significance remains uncertain, some physicists are paying it a telling compliment: They are finding ways to apply it.

    Geometric reasoning.

    By wrapping so-called D-branes (purple) around intersecting spheres, string theorists devise geometric representations for theories of particles and forces. (The diagram omits eight dimensions.)


    So far, says Cumrun Vafa of Harvard University, the problems solved by string theory are “still one little step removed from experiments.” That may be an understatement. Like several others who spoke on the topic at the 5th International Conference on Supersymmetries in Physics, held at the University of Pennsylvania from 27 to 31 May, Vafa used the string formalism to solve abstract problems in particle physics and relativity. Still, the work is turning some heads. “There was an enormous number of skeptics in the particle-physics community about the whole enterprise,” says Thomas Banks, a string theorist at Rutgers University in Piscataway, New Jersey. “I think some of these new results have changed [that].”

    Physicists are discovering that string theory sometimes opens the way to solving seemingly intractable problems in other realms of theory by simple geometrical reasoning. As theorists explained at the meeting, entire field theories—equations describing arrays of particles and their interactions—correspond to sets of shapes in the high-dimensional space occupied by strings. Intersections between the shapes can then reveal how particles behave within those field theories. String theorists have also made progress in calculating a seemingly abstruse quantity: the entropy, or information content, of black holes, and what happens to the information when a black hole “evaporates.”

    These glimmerings of practical application are a return to string theory's basic mission. For all its abstraction, the theory aims to resolve fairly concrete problems in the theory underpinning the Standard Model—particle physicists' current picture of the fundamental structure of matter. Among them is the Standard Model's tendency to explode with “infinities.” The strange rules of quantum mechanics are to blame: They allow any given particle to spend part of its life as another particle—which can, in turn, sometimes exist as still other particles, ad infinitum. The resulting “loop corrections” add up to divergent values for the particle's total mass. “It's very uncomfortable,” says Mirjam Cvetic, who organized the conference along with her Penn colleague Paul Langacker.

    Mainstream theorists have found ways of manipulating those infinities to get sensible answers. But some are finding a more satisfying approach in a largely untested theory called supersymmetry, or SUSY, which is actually an offshoot of an early version of string theory. For each known particle, SUSY posits a massive partner that makes opposite contributions to the loops, canceling most of the infinities. Experimentalists are still searching for those partners, but meanwhile, theorists have looked for ways to nix the lingering infinities in SUSY.

    String theory, says Edward Witten of the Institute for Advanced Study in Princeton, New Jersey, “gets rid of the whole kit, cat, and caboodle of infinities.” It does this by identifying particles with resonances—vibrational harmonics, in essence—of the 10D strings. The procedure introduces an infinite “tower” of particles of increasing mass that fix all the loop corrections, like a series of ever-tinier shims used to make a bookcase stand upright on an uneven floor. (The more massive a particle, the smaller its influence on the loop corrections of particles that are light enough to measure.) String theory achieves these cancellations “in an automatic way, without any arbitrary adjustment of anything,” says Witten. It also gives rise to a particle called the graviton, which conveys the force of gravity, making string theory physicists' best hope for unifying gravity with other forces.

    To get from this abstract, 10D world to our 4D universe, string theory holds that the remaining dimensions are shrunken, or “compactified,” in the string theorists' argot. In our world, a string might be as small as 10⁻²⁰ the size of an atomic nucleus. To bridge that huge gap in scale, physicists have been turning to even quirkier entities called D-branes, which were discovered by Joseph Polchinski of the Institute for Theoretical Physics in Santa Barbara, California.

    A multidimensional counterpart to a 2D or 3D membrane, a D-brane is “a submanifold in space where strings can end,” says Witten. Strings normally form loops, but a D-brane provides a kind of anchoring surface. This ability to sprout many strings means that a D-brane can represent a vast array of particles and their behavior, and configurations of D-branes can stand for entire classes of so-called quantum field theories—particular variants of SUSY, for instance. As a result, simple manipulations of the D-branes can replace horrendously complicated calculations within those theories.

    The trick, says Witten, is to “embed” a calculation into a 10D geometric configuration, “where it's less obvious what the 4D physics question is, but more obvious what the answer is.” At the conference, Witten showed how one such question—the couplings between particles in a version of SUSY—could be solved by manipulating what looked like a series of D-brane “ladders” embedded in 10D space. Because strings wind like vines within these ladders, they represent all the enormously complicated particles and interactions in that field theory. Vafa went further, coming up with basic principles of what he called “geometric engineering”: translating a given field theory into a set of geometrical shapes wrapped in D-branes (see illustration). “We found the dictionary, basically,” says Vafa.

    Because D-branes can have mass, momentum, and a kind of charge, they can also be compactified to represent physical objects—“things” made of real particles rather than theories containing abstract particles. One target of this effort is black holes—objects whose gravity has caused them to collapse behind an “event horizon” from which nothing can escape. More than 2 decades ago, using techniques unrelated to string theory, Stephen Hawking of Cambridge University and others showed that black holes have a kind of entropy, or information content, and emit radiation. But Hawking's result posed two puzzles. The formula for the entropy suggested that all the black hole's information was concentrated at its event horizon, rather than spread throughout its volume, as it is in ordinary objects. Moreover, black-hole radiation raised worrisome questions about information loss.
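    The area scaling behind that puzzle comes from the standard Bekenstein-Hawking relation, a textbook result not quoted in the article, which ties a black hole's entropy to the area of its event horizon rather than to its volume:

```latex
S_{\mathrm{BH}} = \frac{k_B \, c^3 A}{4 G \hbar}
```

    where A is the horizon area. Because the entropy of an ordinary object scales with its volume, this area law hints that the hole's information is somehow stored on the horizon surface.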

    The radiation, said Hawking, stems from the pairs of virtual particles that, according to quantum mechanics, continually wink in and out of existence throughout space. Just outside the event horizon, the black hole's enormous gravity might convert one of these normally fleeting events into a single particle that flies off into space, leading to the radiation—and an effective mass loss for the black hole. Thus, the black hole shrinks, and its information content dwindles. But the laws of quantum mechanics imply that information is never destroyed, only dispersed or rearranged, so Hawking's picture could be viewed as a paradox.

    Last year, Andrew Strominger of the University of California, Santa Barbara, and Vafa tested the theory by constructing black holes from scratch out of D-branes. They compactified massive, charged D-branes by wrapping them around 6D tori to create small, massive objects resembling black holes. The team then considered possible excitations of the D-branes—“ripples” along the compactified dimensions—which could encode information about the black hole's internal state. By counting up the number of possible vibrations for a given black hole, they found an entropy that agreed exactly with the value Hawking and others had calculated.
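    The “counting up” step uses the standard statistical-mechanics definition of entropy, which the article leaves implicit:

```latex
S = k_B \ln \Omega
```

    where Ω is the number of distinct D-brane vibrational states sharing the black hole's mass and charge. It is this microstate count that agreed exactly with the value Hawking and others had calculated.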

    But they also found, says Vafa, that the information existed not just on the event horizon, but “hidden in those extra [six] dimensions.” At Penn, Cvetic and Finn Larsen went further. Building on earlier work Cvetic did with Donam Youm of the Institute for Advanced Study, Larsen reported, they showed how the entropy is sometimes split between the usual horizon and a cloaked, “inner” horizon that forms a second point of no return: Light rays that pass inside it cannot even return to the outer horizon. “Our calculations would say that at least mathematically, the inner horizon plays at least as big a role as the outer horizon,” explains Larsen.

    Later work by Vafa and Strominger even suggested a way out of the information paradox Hawking had posed. Their D-brane-based analysis showed that the “hidden” information might not be lost as the black hole shrinks: It might be escaping along with the Hawking radiation. If string theory can plug a cosmic information leak, it may start getting more attention on Earth.

  12. OPTICS

    Tripping the Light at Fantastic Speeds

    1. Robert F. Service

    In today's world of high-speed telecommunications, researchers are always on the lookout for faster ways to send information. The speediest schemes today encode data as pulses of laser light fired through glass optical fibers. But the comparatively slow electronic switches that pulse the light on and off limit the overall speed of these systems. Now researchers at the University of Utah in Salt Lake City and Osaka University in Japan have come up with a new polymer-based optical switch that has the potential to dramatically boost the data rate.

    Quick change.

    A pulse of green laser light on a polymer film creates charge pairs (black dots), which block an infrared, data-carrying beam (taupe). Red laser light collapses the pairs, allowing the beam to pass through the film.

    Beams of laser light trip and reset this speedy switch. One laser fills the polymer with evanescent charge pairs called excitons, which block an information-carrying infrared beam; a second laser can collapse the pairs and open the switch again in just a trillionth of a second. “It's something I find very interesting,” says Joseph Perry, a chemist at the Jet Propulsion Laboratory and the California Institute of Technology, both in Pasadena, who has worked on designing other polymers for high-speed switching applications. Perry and others caution, however, that a series of technical hurdles—such as the polymer's tendency to break down when hit repeatedly with laser light—must be overcome before the new switches are ready for the market.

    Present-day optical switches apply an electric field to an inorganic crystal to change its optical properties, turning light on and off. Such devices can generate light pulses at about 20 gigahertz, or 20 billion times per second. In recent years, Perry and others have been working to design polymers that can boost the switching speed of the same basic setup to more than 100 gigahertz. But because of the hair-trigger response of excitons, the new polymer switch has the potential to switch 10 times faster still, at 1 terahertz, or a trillion times a second.
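    The terahertz figure is simply the reciprocal of the switch's recovery time: a device that resets in about a picosecond can, as a rough upper bound that ignores the speed of the control lasers, cycle at

```latex
f_{\max} \sim \frac{1}{\tau} = \frac{1}{10^{-12}\ \mathrm{s}} = 10^{12}\ \mathrm{Hz} = 1\ \mathrm{THz}
```

    or 50 times the 20-gigahertz rate of today's electro-optic switches.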

    The new switch relies on polymers that can conduct electricity and emit light, derivatives of a compound known as poly(p-phenylene vinylene), or PPV. Several research teams recently used these materials to make the first polymer-based laser, which absorbs laser light of one color and reemits it as a beam of a different color (Science, 27 September 1996, p. 1800). In their new work, which they report in the 2 June Physical Review Letters, Utah physicists Sergey Frolov and Z. Valy Vardeny and their colleagues exploit these light-handling talents to create their switch. Other high-speed exciton-based switches have been reported in the past, but they rely on different optical effects for their switching.

    To make the conducting polymer opaque and turn the switch “off,” the researchers hit it with a pulse of green laser light. The pulse excites electrons in the material to a higher energy state, leaving behind positively charged electron vacancies, or “holes.” These newly created energetic electrons and holes stick close together to form excitons, which themselves absorb light at an infrared wavelength. The absorption essentially blocks the infrared data beam.

    To turn the switch back “on” again, the researchers zap the polymer with a pulse from a red laser that is precisely tuned to stimulate the excitons' electrons and holes into recombining. That makes the polymer transparent again to the infrared data beam. In their initial demonstration, Vardeny and company created only 80 million pulses per second. Raising this to a trillion would of course also require the control lasers triggering the switches to be pulsing at the same speed. Conventional setups can accomplish that by splicing together separate, rapid-fire laser pulse trains, although such systems are difficult to set up.

    For this and other reasons, even Vardeny admits that the scheme has a long way to go before it could become a real-world technology. One “particularly difficult” problem, says electrical engineer Mohammed Islam of the University of Michigan, Ann Arbor, is that conducting polymers tend to break apart quickly when triggered to give off photons of light. Another obstacle, he adds, is that the polymer films heat up when they absorb infrared light, which may cause further degradation. But if researchers can work out these problems, telecommunications could be in for a big switch.


    Are Pushy Axons a Key to Spinal Cord Repair?

    1. Wade Roush

    When moving heavy loads at high speed, as any truck driver or railway engineer will tell you, it's wiser to pull than to push. Nature apparently learned this lesson long before humans did. In the developing nervous system, growing axons—the tendrils that transmit electrochemical signals from one neuron to the next and from the spinal cord to the body's muscles—are dragged to their destinations by oozing extremities called filopodia. These cellular locomotives can whisk their neural freight through the embryo at the breakneck pace of millimeters per day. But researchers studying one adult nervous system have now found a curious counterexample to this rule. In the primitive fish called the sea lamprey, some axons can push their way forward—a trait apparently responsible for the ability of the lamprey, unlike any higher vertebrate, to repair its spinal cord when it is severed.

    In the 1 July issue of The Journal of Neuroscience, a group led by University of Pennsylvania neurologist Mickey Selzer reports that the lamprey axons seem to owe their mobility to neurofilaments, rods of protein previously thought to play a purely supporting role in axon growth. For unknown reasons, only some lamprey spinal cord neurons are adept at regenerating; the group found that in these cells neurofilament production bounces back after injury, while in those that stay put, production of the protein does not recover. They speculate that growing neurofilaments slowly push each axon forward, like poles gradually raising the canopy of a circus tent.

    The finding could lead neuroscientists to revise their view of how neurons regrow after injury. The limited regrowth that does occur in higher vertebrates has been attributed mainly to actin and microtubules, the same components of the cell skeleton that drive filopodia during embryonic development. Now it appears that “we [may have] underestimated the dynamics of neurofilaments,” says Itzhak Fischer, a cell biologist at Allegheny University of the Health Sciences in Philadelphia who studies the neuronal cytoskeleton. “It's intriguing to think about them as generating some kind of force.”

    The discovery may also put researchers one step closer to what New York University neurosurgeon Wise Young calls “the Holy Grail of neurobiology”: a way to heal humans with injured spinal cords. “It's pure speculation at this point,” explains Selzer, “but it may be that temporarily overexpressing neurofilament in people with central nervous system injuries would help the nerve fibers to grow, if we also can eliminate some of the extracellular barriers to regeneration.” Many scientists remain skeptical of neurofilaments' healing potential, however, pointing to the large evolutionary and physiological gap between sea lampreys and humans. “It's certainly more complicated than saying ‘If we could turn on this [neurofilament] molecule, Christopher Reeve would walk,’” says molecular biologist Nisson Schechter of the State University of New York (SUNY), Stony Brook.

    Still, the finding opens a crack in the standard picture of how axons grow—and what restricts their growth. Over the decades, researchers have shown that during embryonic development these neuronal tendrils are led on their long journeys—which can stretch from the spinal cord to the big toe—by prickly, fanlike structures called growth cones. Filopodia on the growth cones, composed mainly of the structural protein actin, elongate and then contract like inchworms, tugging the axon forward. Just behind come microtubules, hollow rods built from the protein tubulin, which stiffen the trailing axon. Hours later, the axon is infiltrated by neurofilaments, which set up a permanent, rigid cytoskeleton.

    Axons in the peripheral nervous systems of mammals retain some of this embryonic wanderlust throughout life, which is why surgery to reconnect severed fingers and other body parts often succeeds. But the central nervous system (CNS—the brain, eyes, and spinal cord) in adult mammals is soaked through with proteins that inhibit axonal growth.

    Sea lamprey neurons flout these restrictions. As Selzer first reported in Science in 1985, many of the 2000 neurons in the lamprey brain that project axons to the spinal cord can regenerate after the cord is cut, forming functioning connections with the correct neurons on the opposite side of the gap. More recently, researchers in Selzer's lab found that the growth cones responsible for this feat have some unusual features: They lack filopodia, contain little actin, move extremely slowly compared to axons in embryos, and are packed with neurofilaments. “For all of those reasons, we realized that the actin-based, filopodia type of axon growth cannot be the mechanism of regeneration” in lampreys, says Selzer.

    To dig further, Alan Jacobs, a former Selzer graduate student now at the University of California, San Francisco, decided to study whether regenerating lamprey neurons produce unusual levels of neurofilament protein. Jacobs cloned the gene encoding lamprey neurofilament protein and used its nucleotide sequence to construct complementary DNA probes that would bind to the gene's messenger RNA (mRNA) product. He then cut halfway through several lampreys' spinal cords, and while the primitive fish convalesced, he tracked neuronal mRNA levels to monitor the production of neurofilament protein.

    In axons that were “bad regenerators,” Jacobs and Selzer found, neurofilament mRNA production fell after the axons were cut and stayed low. In “good regenerators,” however, neurofilament expression showed a smaller decrease and then—about 4 weeks later—climbed back up. The mRNA levels recovered even when Jacobs and Selzer made the spinal cord gap so broad that axons couldn't grow across it, suggesting that the revitalization isn't merely a consequence of axon regeneration but may help drive it.

    The Penn researchers still can't explain why neurofilament production recovers in some neurons and doesn't in others. But the implication that evolution has given at least one vertebrate species a backup mechanism for neuronal growth in the CNS accords with similar hints emerging from a few other neuroscience labs. Ben Szaro at SUNY Albany, for example, has found that severed optic nerves of Xenopus frogs—another rare example of CNS neurons that usually regenerate—fail to regrow if they are exposed to antibodies that prevent the assembly of neurofilament protein subunits. Neurobiologist Dennis O'Leary at the Salk Institute for Biological Studies in La Jolla, California, has also found that the axons connecting the pontine nuclei—structures in the mammalian brain that link each hemisphere to the opposite half of the cerebellum—to the spinal cord grow very slowly and lack growth cones. “Something about the structure of neurofilaments allows for this [slow but sure] kind of growth,” speculates SUNY Stony Brook's Schechter.

    But other researchers caution that the hints of neurofilament-driven axon growth in other species don't mean that their axons can match the regenerative prowess seen in the lamprey. Lamprey nerves lack the myelin sheath that protects mammalian nerves, for example, and lampreys have only one type of neurofilament protein, while humans have at least three. Nor has Selzer figured out yet how to boost neurofilament production in lamprey nerve cells, let alone human paraplegics.

    Thanks to the new finding, though, neurofilaments may not be the only things trading in their passive stability for dynamism and flux. The doctrines of many scientists studying neural growth and regeneration, says Allegheny's Fischer, “are less rigid now.”


    Longer Tusks Are Healthy Signs

    1. Pallava Bagla
    1. Pallava Bagla is a science writer in New Delhi.

    NEW DELHI—The long tusks of some male Asian elephants may advertise the genetic vigor of their bearers, shows a new study by two Indian researchers. Unfortunately, long tusks are also a come-on for poachers, who take a heavy toll on the endangered elephant. Ivory hunters may thus be depleting the elephant populations of the individuals with the healthiest genes.

    Size matters.

    Bigger tusks mean fewer parasites for Asian elephants.

    P. BAGLA

    The study's finding—that male elephants with longer tusks have fewer parasites—supports a theory explaining such secondary sex characteristics put forward in 1982 by evolutionary biologist William Hamilton of the University of Oxford in the U.K. Hamilton proposed that males carrying genes for resistance to parasites will be healthier and, hence, in a better condition to develop expensive secondary sexual characteristics, which then enable females to choose mates carrying the best genes. Studies of invertebrates, fishes, reptiles, and birds have all supported the theory.

    The elephant findings, which appear in a recent issue of India's Current Science, provide what co-author Raman Sukumar, an ecologist at the Indian Institute of Science in Bangalore, calls “the first demonstration” of its kind in mammals. For Raghavendra Gadagkar, a sociobiologist and chair of the Centre for Ecological Sciences at the Bangalore institute, it also suggests a pressing concern. “The importance of the present work lies not in its conceptual novelty, but in its implications for conservation of elephants,” he says (see sidebar), “for ivory hunters are most likely to cull the best males.”

    The 3-year study was carried out at Mudumalai Wildlife Sanctuary in southern India. The researchers identified elephants from photographs and unique body markings and collected fresh dung samples from as many as 38 animals. They then tested the dung samples for intestinal helminth parasites, finding as many as 20 million parasite eggs per dropping. Although these densities “may not be life threatening,” Sukumar says, in lean periods and in stressed conditions, the parasites could significantly weaken the elephants.

    Sukumar and his colleague, microbiologist Milind Watve, also developed a standard growth curve of tusks using a series of field techniques—enlarged photographs, height measurements of live individuals and museum specimens, and postmortem examinations. They then plotted the amount by which each male elephant's tusk length exceeded the standard curve against the parasite densities found in its dung. The scientists found that the longer an elephant's tusks, the fewer parasites are found in the animal's dung.

    Hamilton says he's “pleased to see these results from the ‘king of mammals,’” adding that this study may convince skeptics, who offer other explanations for the correlation between exaggerated sex characteristics and parasites. In particular, the characteristics become exaggerated with age, while the number of parasites declines with age in most animals. Sukumar agrees that the findings are “compatible with, but not necessarily a substantial proof of,” Hamilton's hypothesis in elephants.

    Among other unanswered questions is whether longer tusks really do attract females, although Hamilton says females “contentedly mate with a dominant male” and that older and more dominant bulls usually display longer tusks. Nor is it clear yet that parasite resistance in elephants has a genetic basis.

    But if tusk length is a sign of good genes, poaching may be weakening the elephant gene pool by removing parasite resistance genes from the population—something that could become a “serious health problem” for wild populations, says Sukumar. And Hamilton says that the solution, while obvious, isn't likely to be implemented. “Never cull the top bulls; cull old but small-tusked males,” he says. “Of course, that is the opposite of what hunters do if they want to make a profit.”


    Ivory Trade Seen as Threat

    1. Pallava Bagla

    The decision last week by an international body to permit trade with Japan in ivory taken from elephants in three African nations is expected to put additional pressure on the dwindling number of Asian elephants, too. Although only stockpiled ivory from Botswana, Namibia, and Zimbabwe can be sold, the difficulty in identifying ivory's source could put all animals at greater risk, say environmentalists.

    “A legal chink has been opened up in the international market,” says Vinod Rishi, chief of a large conservation effort by the Indian government to protect its 27,000 elephants. “Now there is a chance of a large-scale massacre of elephants in India.” Belinda Wright, of the Wildlife Protection Society of India, worries about a “dramatic and disastrous spate of poaching” and a further decline in the ratio of males to females, already as low as one to 400 in some parts of the country.

    Ironically, the elephant's downlisting by the Convention on International Trade in Endangered Species comes at the same time the U.S. Congress is moving ahead with legislation to create an Asian Elephant Conservation Fund. The bill, sponsored by Representative Jim Saxton (R-NJ), would support research and conservation efforts to protect the animal and its environment. It is modeled after a program initiated in 1989 to help the African elephant. Although the bill would provide up to $5 million a year, a Saxton aide says that an annual budget of $1 million is more likely.


    Gene Discovery Offers Tentative Clues to Parkinson's

    1. Gretchen Vogel

    Neurobiologists long ago identified the defect underlying the tremors, halting movements, and other cruel symptoms of Parkinson's disease: the gradual die-off of a set of brain neurons that make the neurotransmitter dopamine. But what causes those neurons to degenerate is still a mystery. Now, scientists may have taken a big step toward answering that question.

    Body of evidence.

    Lewy bodies (stained tan, in center) are characteristic of Parkinson's disease.


    On page 2045, researchers from the National Human Genome Research Institute (NHGRI) in Bethesda, Maryland, the Robert Wood Johnson Medical School in Piscataway, New Jersey, and colleagues in Italy and Greece report pinpointing the gene that, when defective, causes a hereditary form of Parkinson's in a large Italian family. The gene, which encodes a brain protein of unknown function called α-synuclein, will probably account for only a few percent of all Parkinson's cases. But if researchers can pin down how the mutated α-synuclein exerts its effects, they may learn just what kills off the crucial dopamine-producing neurons in the much larger number of patients with nonhereditary Parkinson's.

    Other researchers hail the finding. “It is a marvelous piece of work—of great importance,” says C. Warren Olanow, a neurologist at Mount Sinai School of Medicine in New York City. “This is a major clue as to what might be going on.” Demetrius Maraganore, a neurologist at the Mayo Clinic in Rochester, Minnesota, agrees: “It's the first major breakthrough in the understanding of the disease in 30 years.”

    Ultimately, a better understanding of what causes the neurodegeneration of Parkinson's may lead to better therapies. Although current treatments with drugs that replace the missing dopamine control the symptoms for a time, they become useless as the patients' brains deteriorate. But learning what causes the deterioration might make it possible to design drugs to prevent it.

    The current work is an outgrowth of a discovery made late last year. For many years, most evidence suggested that an environmental factor, not heredity, is responsible for Parkinson's. Last November, however, the NHGRI, Robert Wood Johnson, and Italian teams reported that the disease afflicting the Italian family, which develops at an unusually early age, showed strong genetic linkage to a region on chromosome 4 (Science, 15 November 1996, p. 1085).

    By analyzing blood samples from more family members, the researchers were able to locate additional chromosomal markers that seemed to be inherited along with the disease. The additional markers allowed the group to narrow the suspect region to about 6 million base pairs of DNA. That stretch of DNA contained a promising gene: the α-synuclein gene, which had previously been mapped to that area. “All along, we knew that synuclein was a good candidate,” says NHGRI geneticist Mihael Polymeropoulos, because other researchers had shown that the protein is expressed in the brain areas that deteriorate in Parkinson's.

    Sure enough, when the researchers sequenced the gene in the Italian family, they found that the affected members had a mutation not present in unaffected members. The group found the same mutation in three of five Greek families with strong histories of the disease. But it did not appear in any of the nearly 100 controls from southern Italy or in 52 Italian patients with sporadic Parkinson's disease.

    The link to α-synuclein does not provide any easy answers to what causes Parkinson's, but it does offer some tantalizing clues. Proteins must fold up into a three-dimensional structure to function, and the researchers suggest that the mutation causes α-synuclein to misfold. It might then produce abnormal deposits in the brain, much as the accumulation of a protein called β amyloid in neurotoxic deposits may contribute to nerve degeneration in Alzheimer's disease. Just as the brains of Alzheimer's patients are riddled with these plaques, Parkinson's brains are studded with inclusions called Lewy bodies.

    Polymeropoulos notes that the α-synuclein mutation, which replaces the amino acid alanine with a threonine, has the potential to cause the protein to misfold. Although the exact conformation of α-synuclein is not known, alanine is commonly found in a coiled structure, called an α helix, while threonine is common in a more rigid and insoluble structure called a β sheet—the same kind of structure suspected in the formation of Alzheimer's plaques. The next step, Polymeropoulos says, is to find out if the Lewy bodies actually contain α-synuclein.

    The possibility that α-synuclein misfolding leads to Parkinson's is very much an unproven hypothesis, however, and it does not jibe with a current leading theory of what causes the disease. Evidence has been building that oxidative stress—the buildup of cell-damaging compounds called free radicals—may be behind the neuron loss. “It's not apparent how to link the two theories,” says John Trojanowski, a pathologist at the University of Pennsylvania Medical Center in Philadelphia who has studied both Alzheimer's disease and Lewy bodies.

    Deepening the mystery of how the mutation causes disease are the versions of α-synuclein found in the rat and mouse. These proteins already have a threonine where the normal human protein has an alanine, yet the threonine causes no apparent problems in the animals. The rodents' short life-span may be one explanation for the paradox, the authors suggest: The animals may simply die before they can develop the disease. The researchers also raise the possibility that the mutation in humans may disrupt—or encourage—the interaction of α-synuclein with another protein not present in rodents. Finally, Polymeropoulos notes that the mutation in the Italian family is dominant—only one of the two gene copies is mutated—and the normal and mutant forms of the protein may have to interact to cause problems.

    To try to figure out just what the α-synuclein mutation does, researchers would like to transfer the mutant gene into mice to see whether they can re-create Parkinson's in the animals. The natural threonine in the mouse suggests that the transgenic animals might not get sick, Polymeropoulos concedes. The researchers hope they might succeed by mimicking the genetic endowment of the Italian and Greek patients: knocking out both copies of the mouse gene and then substituting one normal human gene and one with the threonine substitution.

    But even if researchers can pin down how the α-synuclein gene mutation leads to Parkinson's in the Italian family, they won't completely solve the riddle of the disease. “Parkinson's disease is going to be a 100-piece puzzle,” Polymeropoulos says. “α-Synuclein may be a central piece of the puzzle and will hopefully give us a picture of what it will look like when it is done. But we should be prepared for another 99 pieces.”


    New Imaging Methods Provide a Better View Into the Brain

    1. Marcia Barinaga

    In addition to providing new insights into brain function, improved imaging methods are revolutionizing other scientific fields as well. The Special News Report that begins on page 1981 and selected research Reports provide a look at some of these recent developments.

    For a field that didn't even exist 20 years ago, human brain imaging has developed at a mind-boggling pace. Thanks to one advance after another, neurobiologists can peer into the living human brain and produce pictures that shed new light on brain functions ranging from the processing of sensory information to higher level thinking tasks. But breathtaking as the developments have been, improvements already under way will soon give imagers new perspectives on how the brain goes about its business.

    Most of these advances are based on functional magnetic resonance imaging (fMRI), a technique that spots the increases in blood oxygenation that reflect a boost in blood flow to active brain areas. Because of its advantages—including greater speed and higher resolution—fMRI has in recent years largely eclipsed positron emission tomography (PET), the method that got the imaging field rolling nearly 20 years ago. Now, researchers are devising a whole new wave of modifications, several of which were showcased at a recent conference on brain imaging,* that will allow fMRI to be used to even better advantage.

    Some modifications will permit more sophisticated experimental designs that link brain images more closely to a subject's perceptions and behavior. In addition, increased magnet strengths will give even greater spatial resolution of activated brain areas. By combining fMRI with other techniques, researchers are now able to answer previously unaddressable questions about the timing with which brain areas are activated. Those answers will yield insights into how information moves through the brain.

    As these advances invigorate the field, a next wave is waiting in the wings: methods that may image neural activity directly by following the flux of sodium ions, or by measuring the scattering of light by brain tissue. These newest directions are as yet unproven, but in a field where methods go from inconceivable to commonplace in a couple of years, researchers have learned never to say never. “I hesitate to say [any technique] isn't going anywhere, because I could be writing a grant to try to buy the equipment in 2 years,” jokes cognitive neuroscientist George Mangun of the University of California (UC), Davis.

    Mangun learned that lesson, he says, from the fast ascent of fMRI. When he heard of an early form of the technique in 1990, Mangun recalls, he thought the technique was “interesting … but [would] never go anywhere.” Within a year, additional advances had paved the way for it to become the mainstay of the field.

    But even as fMRI was catapulting to its position of prominence, researchers using the technique unwittingly handicapped themselves with old habits carried over from PET imaging that have prevented them from taking full advantage of fMRI. PET, which uses radioactive tracers to detect the increased blood flow to activated brain regions, is slow, taking up to a minute to gather the data for a brain image. As a result, neuroscientists using the method do “block trials,” in which the subject performs a string of similar short tasks, causing the brain to repeat the same mental process while the data are gathered. Researchers continued to use block trials with fMRI, although that technique takes only 2 seconds to collect an image. “We always assumed if we only looked at one trial, the signal would be so small we wouldn't be able to see it,” says neuroimager Randy Buckner of Harvard Medical School in Boston.

    That assumption evaporated in 1995, when Robert Savoy and his colleagues at Harvard Medical School reported that fMRI could detect brain activations in response to a visual stimulus lasting only 30 milliseconds. The next year, Buckner and his colleagues did a similar experiment with a cognitive task. They used the word-stem completion test, in which subjects are given three-letter parts of words and asked to complete the word. A single word-stem completion, the researchers found, activated brain areas nearly identical to those activated by a block trial.

    Thus was born a new method, “event-related” fMRI, in which researchers collect brain image data from individual trials, which they can then sort and pool as they wish. It opens up many avenues for cognitive experiments, says Buckner. For example, some tests don't work well in block trials, because they involve an element of surprise.

    Studies with electroencephalograms (EEGs), which record the brain's electrical activity through electrodes on the scalp, have demonstrated that if you show a person a series of pictures, say, of geometric shapes, and then throw in something different, like a picture of an animal, the oddball picture produces a bigger neural response than the others. Neuroimagers would like to know which brain areas react to the surprise, but they can't find out from a block experiment, says Buckner, because “if you do that kind of surprise three or four times, [the response] goes away.” With event-related fMRI, researchers can mix “surprise” trials with other types of trials, and then afterward pool the data from the surprise trials to analyze together.

    Research groups have leaped to use the new approach, not only to identify brain areas that react to unusual events, but also to relate brain activity directly to subjects' responses or perceptions, which can only be determined once the experimental trial is over. For example, the technique allows researchers to sort brain images based on whether a subject got the right or wrong answer in a test, and see how the brain activation differed. Because of its ability to address some of these important questions with imaging for the first time, the technique “is catching on incredibly rapidly,” says Buckner.
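    The post-hoc sorting that event-related designs permit can be illustrated with a minimal sketch. Everything here is invented for illustration: single-trial responses are stored alongside behavioral outcomes recorded during the run, then split by outcome and averaged after the fact.

```python
import numpy as np

# Hypothetical sketch of post-hoc pooling in an event-related design:
# each trial's activation is kept separately, tagged with the subject's
# behavioral outcome, and sorted only after the experiment is over.
rng = np.random.default_rng(3)
n_trials = 400
correct = rng.random(n_trials) < 0.5             # outcome recorded per trial

# Simulated single-trial activations: correct trials respond more strongly.
responses = np.where(correct, 2.0, 1.0) + 0.5 * rng.standard_normal(n_trials)

mean_correct = responses[correct].mean()         # pooled after the fact
mean_wrong = responses[~correct].mean()
```

With block trials, the two trial types would be averaged together at acquisition time and the comparison would be impossible; here the split costs nothing.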

    It is likely to catch on even faster, thanks to some recent troubleshooting done by Buckner and his Harvard colleague Anders Dale. The problem they addressed is this: The fMRI response to a single trial takes more than 10 seconds to run its course, so it seemed that individual trials would have to be separated by 16 seconds or so to be sure the response to one trial was finished before the next one was presented. That is not only time-consuming, but could also alter the results. “Sixteen seconds is a long time to do nothing,” says Buckner. “People have more time to work on the problem, more time to prepare.” Dale and Buckner have a paper in press in Human Brain Mapping showing that trials can be presented as fast as every 2 seconds, and an algorithm can then be used to extract the overlapping brain activation data associated with each trial.
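    The overlap-correction idea can be sketched in a few lines, assuming, purely for illustration, that every trial evokes the same response and that overlapping responses add linearly; the measured signal is then a sum of shifted copies of one unknown response, which ordinary least squares can recover provided the trial onsets are jittered. This is a toy reconstruction of the principle, not Dale and Buckner's published algorithm.

```python
import numpy as np

# Toy overlap correction: the signal is modeled as shifted copies of a
# single unknown response, one copy per trial onset, and the response is
# recovered by least squares.
def estimate_response(signal, onsets, resp_len):
    n = len(signal)
    X = np.zeros((n, resp_len))
    for t0 in onsets:
        for k in range(min(resp_len, n - t0)):
            X[t0 + k, k] += 1.0       # each trial contributes a shifted copy
    coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return coef

# Made-up 10-sample "hemodynamic" response; trials arrive every 2-4 samples,
# so successive responses overlap heavily.
true_resp = np.array([0.0, 0.3, 1.0, 0.8, 0.4, 0.2, 0.1, 0.05, 0.02, 0.0])
rng = np.random.default_rng(0)
onsets = np.cumsum(rng.integers(2, 5, size=30))
signal = np.zeros(onsets.max() + len(true_resp))
for t0 in onsets:
    signal[t0:t0 + len(true_resp)] += true_resp

est = estimate_response(signal, onsets, len(true_resp))
```

The jitter in the onsets is what makes the design matrix full rank; with perfectly regular spacing the shifted copies become indistinguishable and the recovery fails.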

    Is bigger better?

    Advances like event-related fMRI have opened up countless questions for cognitive neuroscientists to address. Most can be tackled using standard fMRI machines, which have magnetic field strengths of 1.5 or 3 teslas (T) and can distinguish activated brain areas separated by as little as half a centimeter. But “there will be a time when we will definitely have to look beyond” that resolution, says brain imager Kamil Ugurbil of the University of Minnesota, Minneapolis. Ugurbil, who feels that time is rapidly approaching, is leading the charge to higher magnetic fields. The imaging center at Minnesota has a 4-T MRI machine, one of only a handful in the world. Work with those machines has already revealed several strengths of higher fields.

    At higher fields, Ugurbil says, event-related fMRI can be developed to its full potential, producing robust images from single trials, and reducing the need for researchers to sort and pool their results. What's more, researchers have already seen two brain features with 4-T machines that have eluded those using lesser magnetic fields. One, the so-called early oxygenation dip, is an apparent drop in blood oxygenation in active brain areas before the rise in blood flow (Science, 11 April, p. 196). Researchers using 4 T have also seen ocular dominance columns—columns of neurons in the visual cortex with diameters on the order of 1 millimeter—which respond selectively to images from one eye or the other. All these effects are sure to be even clearer at fields higher than 4 T, Ugurbil says.

    Indeed, neuroimager Roger Tootell, of Harvard Medical School in Boston, calls Ugurbil's sighting of ocular dominance columns a “watershed” in brain imaging. Imagers now focus on activity in whole brain areas, but the opportunity to see the activity of individual columns within those areas promises “a quantum jump in insight,” Tootell says. “There are columns all over the brain, and we don't know what they do.”

    Ugurbil's team hopes to pursue both the oxygenation dip and column resolution with a 7-T human imaging machine they are due to receive in December. It will be the first of its kind in the world. But no human has ever been exposed to a 7-T field, and safety tests on animals, already begun by Ugurbil's group, will be needed before that can be done.

    Even if those colossal fields are deemed safe, neuroimager Marcus Raichle of Washington University in St. Louis warns that bigger is not better for everyone, because the bigger machines require an engineering team devoted to tinkering and tuning constantly. “You become hostage to the equipment if you're not careful,” he says, likening a high-field magnet to a “Ferrari that needs a $5000 tune-up every year” and isn't really suited to just going for a ride. “A well-equipped, well-running 1.5-T machine, with … people who know how to ask the questions, is an enormously powerful piece of equipment,” Raichle argues. “There is a tremendous amount of neurobiology that can and will be done on such machines.” Nevertheless, he says, someone, preferably with Ugurbil's level of experience, needs to be developing higher field machines to pave the way for a time when the biological questions demand the next wave in resolution. “We may, 5 years from now, say, ‘Gosh, we all have to be at 5 T.’”

    A timely union

    One trick not even the biggest MRI machine can presently pull off on its own is following precisely when brain areas become active during a cognitive process. That's because the neurons themselves respond within 10 milliseconds of a triggering stimulus, while the blood-flow changes measured by fMRI or PET take several seconds to develop. This limitation has been a great frustration for neuroimagers. “Timing is everything in the brain,” says UC Davis's Mangun. Without timing information, researchers can only guess about how different brain areas build on each other's work as they perform a task.

    To remedy this problem, Mangun's group and others have recently arranged a marriage of convenience between fMRI and PET imaging techniques and a pair of brain-recording methods whose forte is timing: EEG, which measures the electrical fields produced by brain neuron activity, and magnetoencephalography (MEG), which measures neurally generated magnetic fields. Both methods can take readings at more than 100 points on the scalp and can track how neural activity changes with time along the surface of the head. But they have a big weakness: They can't pinpoint the source of the electromagnetic signal.

    Mathematical equations can point to brain areas where the activity might be, but the equations yield multiple solutions, with no way to tell which one is right. But “if you can calculate a [candidate] position, and then show that neuroimaging shows that there are active cells in that particular place, then that increases your confidence that you've got it right,” says EEG researcher Steven Hillyard of UC San Diego.

    Hans-Jochen Heinze at Otto von Guericke University in Magdeburg, Germany, along with Mangun and Hillyard, did just that with a cognitive task in 1994. They presented subjects with pairs of symbols in both their right and left visual fields and directed their attention to either the right or left field by asking them to judge whether the symbols appearing there were the same or different. Earlier work in Hillyard's lab had shown that the EEG wave evoked by the symbols differs, depending on whether the subject is paying attention to them or not: A bump in the wave beginning about 80 milliseconds after the symbols were flashed, known as the P1 component, gets bigger when the subject pays attention.

    To find the source of the activity that creates P1, the Heinze team had the subjects do the task once while the researchers took EEG recordings, and again in the PET scanner. The PET data showed two areas in the so-called “extrastriate” portion of the visual cortex that could be the source of P1, and the team then went back to the mathematical model to see whether these spots would work as possible sources that would explain the EEG data. Those sites, says Mangun, explained the data “very, very well.” Mangun has since shown that making the perceptual task easier selectively reduces both P1 and the attention-associated extrastriate activation seen in PET, further support that the two techniques are measuring the same brain function.

    That experiment showed that imaging and electromagnetic techniques can work together, says Harvard's Dale. But the math used by the Heinze team could consider only two or three simultaneously active brain areas as possible sources of the EEG signal. And while that was fine in the case they had chosen, Dale points out that in most cognitive processes, many brain areas are activated. Dale is one of several researchers deriving a new generation of mathematical models that can pose thousands of sites of brain activity as potential sources and contain other improvements as well.

    Like the model used by the Heinze group, Dale's model begins with electromagnetic data recorded on the scalp and predicts which configuration of active areas in the brain could best explain that activity. But instead of relying just on EEG recordings, it can use MEG and EEG data taken simultaneously. And while older methods model the brain as a sphere inside the skull, Dale's limits the potential sources of activity to the cerebral cortex. Moreover, because each brain is unique in how its cortex is folded, Dale uses a structural MR image to tailor the calculations to the individual brain.

    The result is a localized, though fuzzy, estimate of combined activity in the brain that could produce the EEG and MEG signals at any point in time. Dale then takes fMRI data on brain activity during an identical experimental trial and uses those data to “weight” the solutions by having the equations favor areas shown to be active by the fMRI. The end result is a set of crisp images with the spatial resolution of fMRI that show changes in brain activity on a time scale of tens of milliseconds. “You can make a movie animating this,” Dale says.
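    The weighting step can be illustrated with a minimal linear-algebra sketch. The gain matrix, source layout, and weighting scheme below are all invented for illustration, not Dale's actual model: with far more candidate sources than sensors, the inverse problem is underdetermined, and a weighted minimum-norm solution biases energy toward the sites fMRI flagged as active.

```python
import numpy as np

# Hypothetical fMRI-weighted source estimation: 8 sensors, 50 candidate
# sources, so many source patterns explain the data equally well. The
# weighted minimum-norm estimate prefers solutions concentrated on
# fMRI-active sources.
rng = np.random.default_rng(1)
n_sensors, n_sources = 8, 50
A = rng.standard_normal((n_sensors, n_sources))   # invented gain matrix

true_s = np.zeros(n_sources)
true_s[[10, 30]] = 1.0                            # the truly active sources
y = A @ true_s                                    # simulated EEG/MEG reading

def min_norm(y, A, weights, lam=1e-6):
    R = np.diag(weights)                          # prior variance per source
    G = A @ R @ A.T + lam * np.eye(A.shape[0])
    return R @ A.T @ np.linalg.solve(G, y)

uniform = min_norm(y, A, np.ones(n_sources))      # no fMRI prior
w = np.full(n_sources, 0.1)
w[[10, 30]] = 1.0                                 # fMRI flags the true sites
weighted = min_norm(y, A, w)

def energy_on_active(s):                          # fraction of energy at the
    return (s[[10, 30]] ** 2).sum() / (s ** 2).sum()  # truly active sources
```

The unweighted solution smears activity across many sources; the fMRI-weighted one concentrates it on the flagged sites while still reproducing the sensor data.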

    Dale, Tootell, and Jack Belliveau, also of Harvard, have validated the technique by using it to look at the timing of the brain's response to a moving image, and Dale and Eric Halgren of UC Los Angeles have studied the time course with which the brain responds to novel versus repeated words. “It is an important wedding of techniques,” says Washington University's Raichle, and it is likely to become a staple of the field for researchers who want to know the pathways information takes in the course of a thinking process.

    Future frontiers

    Millisecond movies of neural activity using fMRI, EEG, and MEG might seem visionary enough, but some in the field think such wonders will someday be possible with a single technique—either a new form of fMRI or a much less expensive alternative: imaging with ordinary light beams.

    Keith Thulborn's team at the University of Pittsburgh Medical Center is working to devise a way to get real-time images of neural activity directly from MRI, by measuring changes in the sodium magnetic resonance signal. “Sodium imaging may be a very direct way of looking at neuronal activity,” says Thulborn, because sodium ions flow into neurons when they fire. The passage of ions into the neurons changes sodium's magnetic resonance properties in a way that should be detectable by MRI, Thulborn says.

    The imaging center at Pittsburgh already uses sodium imaging clinically to assess brain damage in patients with strokes, epilepsy, and tumors. Because the sodium signals are weak, it takes 10 minutes to create a reliable three-dimensional image, says Thulborn. But because MR images are built up from many individual snapshots, Thulborn says it would be possible to construct images that capture the immediate neural response by taking repeated snapshots timed at a very short interval after a repeated stimulus. Thulborn and a team of engineers and physicists have been working for 6 years to improve the MRI machine's ability to detect sodium. Their work has reduced the detection time from 45 minutes to 10, while increasing spatial resolution an order of magnitude, and they plan to test the experimental approach on a 3-T machine within the next few months.
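    The snapshot-averaging logic is the standard signal-averaging argument: a response locked to the stimulus adds coherently across repetitions, while noise averages away, so the uncertainty of the mean shrinks as the square root of the number of snapshots. A minimal sketch with invented numbers:

```python
import numpy as np

# Signal-averaging sketch: a weak stimulus-locked response (0.05) buried
# in noise (sd 1.0) emerges after averaging many timed snapshots. After
# 10,000 repeats the standard error falls to 1.0 / sqrt(10000) = 0.01.
rng = np.random.default_rng(2)
true_signal, noise_sd, n_snapshots = 0.05, 1.0, 10_000

snapshots = true_signal + noise_sd * rng.standard_normal(n_snapshots)
estimate = snapshots.mean()
sem = noise_sd / np.sqrt(n_snapshots)   # expected standard error: 0.01
```

A single snapshot is hopeless here (the noise is 20 times the signal), but the averaged estimate resolves the response, which is why the weak sodium signal becomes usable when snapshots are locked to a repeated stimulus.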

    Still other researchers are hoping to image neural activity directly without the $1-million-per-tesla price tag of fMRI. Their preferred medium: light. Studies in living brain slices have shown that the light-scattering properties of neurons change when they become active. Cognitive neuroscientists Gabriele Gratton and Monica Fabiani of the University of Missouri, Columbia, lead one of several labs trying to take advantage of that property by using near-infrared light from a fiber-optic source to image activity changes in living human brains. Their system, which they call EROS, for event-related optical signals, has a bargain-basement cost of less than $50,000.

    When a fiber-optic source placed on the scalp shines light into the head, the light penetrates the skull and is scattered by brain tissues before some of it reemerges. EROS uses light sensors placed on the scalp just centimeters from the source to measure the time the light takes to emerge. Because that time is influenced by light scattering, which in turn is affected by neural activity, the system can detect changes induced by an experimental task. And it does it with a temporal resolution similar to that of an EEG. EROS can also locate the source of the scattering changes, based on detector placement and timing of the light's emergence, with spatial resolution of less than a centimeter. Using EROS, Gratton repeated the experiment by which Heinze, Mangun, and Hillyard first showed the power of combining PET with EEG. EROS produced the same results, localizing the effects of attention to the extrastriate cortex.

    One limitation of EROS is that the light can only penetrate several centimeters into the head, and so the technique is unable to register activity from deep brain areas. Indeed, some researchers worry that it will not reliably image parts of the cortex that are buried in folds. “If it is limited to the superficial cortex, it will never replace fMRI,” says cognitive neuroscientist Steven Luck, of the University of Iowa, Iowa City. But Gratton and Fabiani say they have already imaged cortical areas deep in a fold and have ideas about how to reach even deeper regions.

    “My eye is on optical techniques in terms of the next wave,” says neuroimager Bruce Rosen, of the magnetic resonance imaging center at Harvard Medical School. “In 10 years, I wonder if we will all be doing optical imaging and throwing away our magnets.” While most brain imagers might think that unlikely, this is a field that has learned never to say never.

    • * “Neuroimaging of Human Brain Function,” The Arnold and Mabel Beckman Center of the National Academy of Sciences, Irvine, California, 29-31 May.

    New Eyes on Hidden Worlds

    1. Tim Appenzeller,
    2. Colin Norman

    Picture this: the inner workings of a living cell, or the surface of another star. Once, scientists could see them only in their mind's eye, but now these and other unseen realms no longer need to be imagined. Thanks to advances in optics and electronics—among them lasers, ultrasensitive charge-coupled device detectors, and computerized control and image-processing systems—researchers in many fields are enjoying what amounts to a golden age of imaging. In this Special News Report, Science takes a look at some of the imaging technologies that are opening scientists' eyes to new worlds or letting them see familiar worlds in a new light.

    The scanning tunneling microscope and its offspring are revealing atoms, molecules, cell-membrane channels, and other minuscule objects as individuals, each in its own environment. New microscopy techniques are allowing researchers to play peeping Tom on the inner lives of cells, watching organelles, genes, and even individual proteins go about their business. Molecules, too, need no longer be seen in still life, as demanding new techniques turn x-ray structures into movies that capture proteins in the process of changing shape. Some of the same technologies that are opening new views of the very small are also revolutionizing astronomers' views of the very large, as lasers, computer controls, and other wizardry multiply the seeing power of the world's largest telescopes.

    Imaging is so pervasive in science that no single news report could do it justice. The rest of this issue of Science is testimony to the pace of progress: It contains no fewer than six Reports on advances in imaging or results obtained by new imaging techniques.

    • “Optical biopsy” for high-resolution medical imaging (p. 2037);

    • Magnetic force microscopy for studying the magnetic structure of materials that change their conductivity in a magnetic field (p. 2006);

    • Near-field microwave microscopy of an electrically polarizable material (p. 2004);

    • Fluorescence microscopy of the dynamics of stretched DNA molecules (p. 2016);

    • Confocal microscopy combined with magnetic resonance to map light-emitting defects in diamond (p. 2012); and

    • Multiphoton imaging to track communication between plant-cell organelles (p. 2039).

    This issue also includes a Research News story on new directions in brain imaging, an area where feats of seeing the unseeable—human thought processes—are becoming almost routine.


    Candid Cameras for the Nanoworld

    1. Ivan Amato

    A 16-year-old atomic microscope has spawned a family of instruments that allow biologists, materials scientists, and chemists to see their fields from the bottom up

    In 1982, a soft-spoken electrical engineer at Stanford University named Calvin Quate was flying 10 kilometers above the broad-brushstroke landscape on his way to a meeting in London. He was catching up on his technical reading when he came across a short article in the April issue of Physics Today. It is no exaggeration to say that what he read changed his life.

    Landscape with clouds.

    In this cross section of a semiconductor laser, white patches of oxidation are forming near a junction between gallium arsenide (upper right) and aluminum gallium arsenide.


    “I was going to stay for a week in London,” he recalls, “but when I got there, I immediately flew to Zurich.” He felt compelled to meet the protagonists of the article—Heinrich Rohrer and Gerd Binnig—at IBM's Zurich Research Center in Rüschlikon. Quate knew no one at the 200-researcher facility, but he managed to track down Christoph Gerber, a laboratory technician working for Rohrer and Binnig. “He showed me a notebook with all of these traces,” Quate says. “I was absolutely astonished. I knew that something big was happening.”

    The traces were the first images from a new instrument called the scanning tunneling microscope (STM). They resembled topographic maps of gently wavelike territory, but the terrain was nothing like the vast expanses Quate had seen from his airplane; indeed, it would hardly have covered a protozoan's underbelly. The subtle rises and falls of the traces corresponded to atomic-scale steps and corrugations on the surfaces of gold and other metals. Isolated bumps might even have marked individual atoms that had settled onto the surfaces, although Binnig and Rohrer weren't quite ready to claim they were seeing atoms.

    Chemical sensitivity.

    A 60-micrometer-wide pattern of water-loving methyl groups and water-shunning carboxyl groups. A methyl-bearing force microscope tip stuck to the methyl regions (red and green in image at bottom). With a carboxyl-bearing tip, the attraction was reversed.


    Atomic resolution or not, Quate suspected that the STM, which builds an image by scanning an atomically sharp probe across a surface, was about to change researchers' perspective on materials as dramatically as a passenger's vantage changes when an airplane lands. Instead of getting broad-brush pictures of atomic landscapes—the best that techniques such as neutron diffraction, x-ray diffraction, and electron microscopy could do—Quate guessed that researchers using the STM might be able to crawl through every atomic nook and cranny.

    Any uncertainties were dispelled a year later when Binnig, Rohrer, Gerber, and their colleague Edmund Weibel published a now-classic Physical Review Letters paper that included an image showing a repeating pattern of atoms on what is known as the silicon 7-by-7 surface. As far back as 1957, scientists had used electron diffraction to probe this particular crystal surface, but the results were vague enough to be consistent with several different models for how the surface atoms are arranged. The STM image almost single-handedly settled the issue, dramatically illustrating the power of the new tool. Binnig was so moved, he says, that “it was a little too much to feel joy.”

    “That 7-by-7 image of silicon was the revolution,” Quate says. Since then, the revolution has continued without letup. In 1986, Rohrer and Binnig shared the Nobel Prize for their invention. Quate's own laboratory back at Stanford soon became a key stage for the scientific and technological drama that opened at Rüschlikon. And for researchers in disciplines from surface science to cell biology, the STM and its progeny, collectively known as scanning probe microscopes (SPMs), are ending the need to infer the structure and behavior of ultratiny entities—atoms, molecules, unit cells of crystals, pores and channels in cell membranes—from measurements made on crowds. “Quite often, macroscopic measurements average over ensembles so you don't learn much about physics at small scales,” says Donald Eigler, another of IBM's celebrity STM scientists. Now researchers can get downright personal with the tenants of the nanoworld, monitoring individual atoms or molecules and seeing how their particular environments affect them.

    Atomic rank and file.

    The contours of a gold surface.


    Just as light microscopes can see different properties of a material depending on how it is stained, SPMs can map such properties as magnetism, surface roughness, and electrical potential, depending on the kind of probe they are equipped with. They even allow researchers to reach into the nanoworld and change it, turning nanoscience into nanotechnology. Says Mark Ketchen, director of physical sciences at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York, “Some 50 years from now, people will look back and say this was a major turning point.”

    Fine discrimination

    Finesse was what made the STM possible. At the heart of today's version of the device is an electrically conductive tip (ideally tapered to a point a single atom across). When the tip is maneuvered to within a few atom-widths of a conductive surface, a quantum-mechanical phenomenon known as electron tunneling begins to occur. At these minuscule separations, the quantum-mechanically defined region of space that should contain an electron from an atom on the tip overlaps with the similarly defined region for electrons in the sample atoms just beneath the tip. This overlap behaves as a conduit through which electrons from the tip can tunnel into the sample. Because tiny changes in the overlap drastically speed or slow the electronic traffic through the conduit, the technique is exquisitely sensitive to surface relief.

    To exploit these effects, Binnig and Rohrer developed a precise and stable control system for the stylus. Piezoelectric elements, based on ceramics that change size ever so slightly in response to an electric field, nudge the tip this way and that in increments of only a fraction of an atom-width. The piezoelectrics, in turn, are controlled by an electronic feedback system that lowers or raises the tip to maintain a constant tunneling current. The result is that the tip ends up tracing over a sample's surface contours at a constant, tiny altitude. A decade earlier, Russell Young, an engineer at the National Bureau of Standards—as the National Institute of Standards and Technology (NIST) was then known—had come close to inventing the STM when he devised the same basic scheme but fell short of perfecting a system for stabilizing the tip (see sidebar on p. 1984).
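    The feedback scheme can be caricatured in a few lines, with every constant invented rather than physical: the tunneling current falls off exponentially with the tip-sample gap, a simple integral controller nudges the tip up or down to hold the current at its setpoint, and the recorded tip height then traces the surface topography.

```python
import math

# Toy sketch of STM constant-current feedback (illustrative constants).
DECAY = 2.0          # exponential sensitivity of current to the gap
SETPOINT = 1.0       # target tunneling current (arbitrary units)
GAIN = 0.3           # integral-controller gain

def scan(surface_heights, steps_per_point=200):
    tip_z = surface_heights[0] + 5.0          # start at the target gap
    trace = []
    for h in surface_heights:
        for _ in range(steps_per_point):      # let the feedback loop settle
            gap = tip_z - h
            current = math.exp(-DECAY * (gap - 5.0))  # = SETPOINT at gap 5.0
            tip_z += GAIN * (current - SETPOINT)      # too much current: lift
        trace.append(tip_z)                   # recorded height = topography
    return trace

# A surface with a one-angstrom atomic step halfway along the scan line.
surface = [0.0] * 10 + [1.0] * 10
trace = scan(surface)
```

Because the current depends so steeply on the gap, even this crude loop holds the tip at a constant tiny altitude, and the recorded trace reproduces the atomic step exactly, which is the essence of how the STM converts a current measurement into a topographic map.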

    Open-and-shut case.

    In images from an atomic force microscope, doughnut-shaped channels in the membrane of a cell nucleus are open (top) when the membrane calcium content is high and closed when the calcium is released (bottom).


    By scanning the tip in a tight, back-and-forth pattern while the feedback system maintains a constant current, the system can create arresting topographic maps of surfacescapes at resolutions all the way down to the level of individual atoms. Repeated images can be assembled into videos that show freshly exposed metal surfaces, in search of equilibrium, rearranging themselves into atomic steps or virus-sized islands. The STM can even identify different kinds of atoms by their electronic idiosyncrasies.

    This world didn't remain an exclusive preserve for materials scientists for long. The STM can image only electrically conductive samples, so its first devotees churned out surface images of gold, platinum, silicon, graphite, gallium arsenide, and pretty much any other metal, semiconductor, or superconductor they could lay their hands on. But that was not enough for Binnig, who says, “I always was a bit disappointed that this instrument could not work on insulators,” which include the surfaces and molecules of the biological world.

    Binnig brought this dissatisfaction with him when he and Gerber took a sabbatical from IBM's Zurich facility in 1985 to work with Quate at Stanford and at IBM's Almaden Research Center in nearby San Jose. There, Binnig says, he intuited the design of what became the most influential offspring of the STM, the atomic force microscope (AFM). The inspiration came one day when he was staring up from a sofa in his apartment at the stippled stucco ceiling and envisioned a fine stylus tracing its bumps. “I could have drawn out a design [of the AFM] with a pencil,” Binnig recalls.

    Instead of a conductive tip interacting with a sample by way of tunneling electrons, the tip of Binnig's new instrument would be attached to a tiny cantilever—a wee diving board—that would bend and flex in response to the minute mechanical, electrostatic, and other atomic- or molecular-scale forces it encountered as it was scanned over a surface. A feedback system like the STM's would ensure that the force on the cantilever remained constant. Once Binnig and his colleagues had defined the principle of the AFM, it took only a few days for Gerber to build the first prototype. The AFM has since become the most popular type of scanning probe microscope.

    Opening the floodgates

    The AFM was just one of the STM's ever-enlarging brood of children, each specialized for mapping a different property across submicroscopic landscapes. Even before Binnig invented the AFM, Dieter Pohl, also at IBM Zurich, introduced the near-field scanning optical microscope, which replaces the probe with a special light-gathering optical fiber that can image surfaces optically, at resolutions smaller than the wavelength of light illuminating the sample.

    The pace of innovation only accelerated after the AFM appeared. For one thing, the AFM's talent for imaging nonconductive materials served as a welcome mat to enormous research communities in areas such as polymer chemistry and molecular biology. For another, researchers quickly found that by modifying the AFM tip, say with a magnetic coating or a hydrophobic chemical coating, the basic instrument could be specialized for measuring almost any physical property on unprecedentedly fine scales.

    Among its instrumental siblings are the magnetic force microscope, the frictional force microscope, the electrostatic force microscope, the scanning ion-conductance microscope, the scanning chemical-potential microscope, and the scanning plasmon near-field microscope, to name only some of them. “New ideas keep coming out—new things to scan, new detectors, new innovative ways of using these techniques,” remarks IBM's Ketchen, who is part of a team developing yet another variation on the original: the scanning SQUID (superconducting quantum interference device) microscope, for imaging the minuscule magnetic fields that are the staple of data-storage technologies.

    As a result, scanning probe microscopes are turning up in some surprising settings. Consider work by the husband-and-wife team of Paul and Helen Hansma, of the University of California, Santa Barbara, on biological molecules in solution. In 1989, they and six colleagues reported the first AFM-made video of a biological polymerization process. By making repeated scans and stitching them together, they visualized the linking and aggregation of fibrin, a blood-clotting protein, first into strands and ultimately into a netlike molecular fabric.

    More recently, the Hansmas have spied on the progressive slicing and dicing of DNA molecules by the DNA-digesting enzyme DNase I. They have also watched an RNA polymerase molecule bind to and travel along a DNA molecule, reading the code of its nucleotide bases one by one and transcribing them into messenger RNA. “One DNA polymerase molecule can replicate DNA at a rate hundreds of times that at which everyone in the genome project can decode DNA,” says Paul Hansma, who dreams of using scanning probe microscopes “to learn the operating mechanism of these incredible little machines.”

    And last February in the Biophysical Journal, Helen Hansma, graduate student Daniel E. Laney, and two other colleagues reported that they had mapped not only the shapes but also the “feel” of individual synaptic vesicles—tiny sacs of neurotransmitter that release their cargo when one neuron communicates with its neighbor. By tapping the probe of an AFM across vesicles from the electric organ of the marine ray (Torpedo californica), Hansma and her colleagues learned that the vesicles became turgid in the presence of calcium ions—the signal that triggers the vesicles to burst and release neurotransmitter. The feat could open the way for directly monitoring the biophysical changes that lead to the release of neurotransmitter, says Hansma.

    Chemist Robert Dunn of the University of Kansas and his colleagues have been probing the workings of other cellular machinery: the channels and pores that riddle cellular membranes. These channels, made up of small bundles of molecules, open and close like locks on a canal to regulate the flow of substances into or out of the cytoplasm and nucleus. In one set of experiments, the Kansas researchers placed an AFM probe directly above so-called nuclear pore complexes in membranes from a frog egg cell to monitor their workings. “In the open state you see a channel, but after triggering the pore [with calcium ions] you see something like a piston stick up and block the central part of the channel,” Dunn says. That's a mechanism some biophysicists had speculated about, but Dunn's images lend it direct support.

    AFMs have made a foray into chemistry in the hands of chemist Charles Lieber of Harvard. He uses them to examine the molecular basis of phenomena that include adhesion, lubrication, and wear. “People were always making difficult arguments about what was going on [on the molecular level] when you couldn't see it,” Lieber notes. “By attaching molecules to the force microscope tip, we can get chemical sensitivity” and tease apart those molecular interactions, he says. He and his co-workers scan the chemically coated AFM tip over surfaces micropatterned with different chemical toppings. They measure the forces the coated tip “feels” as it treks through chemically different neighborhoods, experiencing various amounts of hydrogen bonding and other chemical interplay.

    One recent STM feat builds on its first: that famous silicon 7-by-7 surface. Phaedon Avouris and In-Whan Lyo at the T. J. Watson Research Center, with collaborator Yukio Hasegawa of the Institute for Materials Research at Tohoku University in Japan, used an STM to trace the movement of electrons through the silicon surface's “dangling bonds,” where an atom isn't fully bound to its neighbors. By examining subtle variations in the tunneling current depending on whether the dangling bonds are occupied or not, the group was able to monitor the electron traffic through the bonds—information that could be critical for designers of ultrasmall circuits.

    Remaking the nanoworld

    Having opened these windows on the nanoworld, researchers are now reaching through them to modify that world to their liking, turning atoms into minuscule switches, batteries, and data-storage sites. In this effort, SPMs serve as both measuring devices and tools.

    Two examples can be found in adjacent laboratories at NIST. One goes by the name Molecular Measuring Machine, or M3, the handiwork of Clayton Teague of the precision engineering group. This onion-structured instrument, whose concentric layers isolate the STM nestled in the middle from physical perturbations, relies on a laser-based tip navigation system to measure the distance between any two points on a surface area centimeters across to within 1 billionth of a meter, or just a few atomic widths. That's a feat akin to locating two widely separated grains of sand in a 2500-square-kilometer patch of desert and then measuring the distance between them to within 1 millimeter. Ordinary STMs, in contrast, are unable to map sites within such large areas.
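The desert analogy can be checked with a few lines of arithmetic. Taking the M3's "centimeters across" range as roughly 5 centimeters (an assumed figure; the article does not give an exact span), both measurements come out to the same relative precision, about one part in fifty million:

```python
# Sanity check of the analogy's scale ratios; the 5-cm survey span is an
# illustrative assumption, not a figure quoted in the article.
m3_range_m = 0.05           # M3 survey area: "centimeters across"
m3_precision_m = 1e-9       # 1 billionth of a meter

desert_area_km2 = 2500
desert_side_m = (desert_area_km2 ** 0.5) * 1000   # a 50-km-square desert
sand_precision_m = 1e-3                           # within 1 millimeter

print(f"M3:     1 part in {m3_range_m / m3_precision_m:.0e}")
print(f"Desert: 1 part in {desert_side_m / sand_precision_m:.0e}")
```

Both ratios work out identically, which is what makes the comparison apt.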

    This ability to survey the nanoworld should aid efforts to engineer it by planting or dislodging atoms with the tip of an STM or AFM. This interventionist brand of microscopy began in earnest with Eigler and his colleagues at IBM Almaden back in 1990. They made headlines by maneuvering 35 xenon atoms into Big Blue's famous company acronym atop a surface of crystalline nickel. Since then, Eigler and others have built more scientifically interesting structures; these include rings of iron atoms that behave like quantum corrals, constraining electronic motions on surfaces in accordance with the ways of quantum mechanics.

    Now, across the hall from M3 in an acoustically isolated room with a massive metal door, Joseph Stroscio and his colleagues are trying to take the next step in building atom-scale test-beds for quantum mechanics. Their instrument, not yet completed, should be able to modify and scan samples under ultrahigh vacuums, at 2 degrees above absolute zero, and in extreme magnetic fields. Stroscio and his colleagues expect that these experimental conditions will enable them to probe quantum effects to which standard STMs operating under less demanding conditions are blind. “We hope to make [ultrasmall] structures and then ask questions like … what energy does it take to put an electron into the structure,” Stroscio says. The answer could feed into the design of prototype data-storage devices that rely on the presence or absence of a single electron.

    Costly, specialized devices such as Teague's and Stroscio's, however, are only a small part of what scanning probe microscopy has become since its invention. Thanks to large-scale production at companies including Digital Instruments, Park Scientific, and Topometrix, scanning probe microscopes can cost $50,000 or even less—complete with a computer package for creating wall-poster images of your favorite sample. Thousands of the instruments are now touring atomic landscapes.

    After opening up the nanoworld, scanning probe microscopes have proceeded to democratize it. Without these tools, says Stroscio, “it would be like the Dark Ages.”

    Ivan Amato's book on materials science, Stuff, was just published by Basic Books.




    The Man Who Almost Saw Atoms

    1. Ivan Amato

    Heinrich Rohrer and Gerd Binnig were not the first to spy on the world of atoms when the two IBM scientists invented the scanning tunneling microscope (STM) in the early 1980s. In the late 1950s, for example, Erwin Mueller at Pennsylvania State University had invented an atom-resolving device called the field ion microscope. In a vacuum chamber, a strong electric field tore charged atoms from the surface of a sample, sending them careening into a detector in positions that reflected their arrangement in the sample. Field ion microscopy, however, was limited to metal samples drawn into very sharp points. But in the late 1960s, one of Mueller's former students invented a device that could have anticipated the STM directly—if only he had completed it.

    Russell Young, then at the National Bureau of Standards (NBS, now the National Institute of Standards and Technology), called his instrument the “topografiner,” after a Greek word meaning to describe a place. And to STM aficionados, its basic scheme has a familiar ring. Piezoelectric elements scanned its fine metal tip across a surface, with feedback and control systems maintaining the tip at a constant height. Young and colleagues John Ward and Fredric Scire even measured tunneling currents—the basis of the STM—when they brought the topografiner's tip sufficiently close to a metal sample. They published reports claiming that the effect could, in principle, be used to measure a surface position to within about 0.3 nanometer, or about atomic resolution. “One can honestly say that the instrument developed [at NBS] and the instrument that achieved atomic resolution [at the IBM Zurich Research Center] looked very similar,” says Roland Wiesendanger of the University of Hamburg.

    But Young ran into technical and bureaucratic difficulties. Vibrations and other perturbations were preventing the topografiner from seeing atoms. In a 1972 paper in the Review of Scientific Instruments, the NBS researchers gave some sense of the difficulties by pointing out that they were able to achieve tunneling currents only by running experiments during odd hours when their building's air conditioner was off and by operating the instrument remotely so that their own movements would not generate resolution-killing disturbances.

    Binnig and Rohrer later solved these problems with multitiered isolation systems. Young never had a chance to try. In 1971, NBS management took him off the topografiner project in a resource-allocation decision. But Young never forgot that he was once on the verge of seeing atoms—and neither did the Royal Swedish Academy of Sciences. In awarding the 1986 Nobel Prize in physics to Binnig and Rohrer, the Nobel committee acknowledged Young's close approach to the STM and blamed his failure to beat Binnig and Rohrer on “exceptionally large experimental difficulties.”


    Monitoring a Killer Volcano Through Clouds and Ice

    1. Daniel Clery

    Late in the evening of 30 September last year, seismometers in Iceland detected the beginnings of a volcanic eruption beneath the Vatnajökull Glacier, the largest in Europe. By 2 October, the eruption had forced its way through the 500-meter-thick ice sheet—spewing steam and gas thousands of meters into the air. An estimated 2.3 cubic kilometers of meltwater was trapped beneath the ice and would soon burst out, threatening communities, roads, and communication links in this remote corner of southeast Iceland.


    Because of the inaccessibility of the region and the constant cloud cover, which made aerial surveillance difficult, the Icelandic authorities could not tell which way the meltwater would go. They got some timely help, however, from an unlikely source: a cloud-piercing radar satellite and a new image-processing technique that allows researchers to see movements in Earth's surface down to a scale of a few centimeters.

    University of Munich geographer Bettina Müschen and several colleagues at the German Aerospace Research Establishment (DLR) in Oberpfaffenhofen were involved in a project sponsored by the European Space Agency (ESA) to study radar images of Iceland from its ERS spacecraft. The team quickly realized that the satellites could help Iceland's disaster management by tracking the meltwater buildup. The synthetic aperture radar on ERS-2 had taken its first images of the Vatnajökull eruption in early October, but the Munich researchers believed that processing radar images of the eruption by a technique called SAR interferometry could generate even more valuable clues for disaster relief.

    In this technique, two images are taken from the same vantage point, say, 24 hours apart, and superimposed. The resulting interferogram shows graphically any movement that has occurred in that 24-hour period. ERS mission managers agreed to provide the services of an older spacecraft, ERS-1, then in the process of having its systems checked. On 21, 22, 23, and 24 October, both ERS spacecraft passed over Iceland acquiring images that were then processed into interferograms at DLR. “We detected subsidence of just centimeters per day, which was not visible to the eye. A few days later, [these movements] were confirmed on the ground,” says DLR's Achim Roth. “We could see the water going south,” says Müschen.
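The arithmetic behind an interferogram is compact: multiplying one complex SAR image by the conjugate of the other gives the phase change at each pixel, and the standard repeat-pass relation d = φλ/(4π) converts that phase into line-of-sight motion. The sketch below uses the ERS C-band wavelength; the tiny 4-by-4 "images" and the 5-millimeter subsidence are invented purely for illustration:

```python
import numpy as np

# ERS C-band radar wavelength; d = phase * wavelength / (4*pi) is the
# standard repeat-pass InSAR relation (sign convention depends on geometry).
WAVELENGTH_M = 0.0566

def los_displacement(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    """Line-of-sight surface motion between two complex SAR acquisitions."""
    interferogram = img1 * np.conj(img2)       # superimpose: phase differences
    phase = np.angle(interferogram)            # radians, wrapped to (-pi, pi]
    return phase * WAVELENGTH_M / (4 * np.pi)  # meters along the radar's line of sight

# Toy scene: a patch whose surface moved 5 mm between the two passes.
rng = np.random.default_rng(0)
scene = np.exp(1j * rng.uniform(-np.pi, np.pi, (4, 4)))
motion_m = 0.005
later = scene * np.exp(-1j * 4 * np.pi * motion_m / WAVELENGTH_M)

# Each pixel recovers the 5-mm motion from phase alone.
print(np.round(np.abs(los_displacement(later, scene)) * 1000, 2))
```

Motions much larger than half a wavelength wrap around in phase, which is why the technique excels at centimeter-scale deformation rather than large displacements.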

    As a result, the Icelandic authorities focused their monitoring and flood defenses to the south of the glacier. On 4 November, meltwater building up in a volcanic crater under the ice lifted the ice sheet and flooded southward. Over the next few days, floodwater and ice blocks of up to 1000 tons took out bridges and power and communication cables en route to the sea, but avoided a nearby village.

    Tracking the Vatnajökull eruption was the most dramatic use yet of SAR interferometry, a technique pioneered by researchers at NASA's Jet Propulsion Laboratory in Pasadena, California, in the 1970s that has taken off since the launch of ERS-1 in 1991. “ERS-1 provided the first reliable source of data,” says Steve Coulson, who coordinates SAR interferometry research for ESA. Besides making static topographical maps with unprecedented resolution, SAR interferometry is also being used to measure ground movement after earthquakes and volcanic eruptions, the creep of glaciers, landslides, and subsidence caused by coal mining. According to Müschen, German insurance companies are looking into using SAR to assess the risk of natural disasters in different areas. Coulson says the latest application, still at the research stage, is to use interferometry to detect deforestation and different kinds of agricultural land use. SAR interferometry, he says, “seems to be the big thing at the moment.”




    Fast-Action Flicks Draw Chemists' Rave Reviews

    1. Robert F. Service

    Looking for the fastest paced action-adventure film of the summer? Forget Spielberg and Lucas. Keith Moffat and Michael Wulff have a couple of new, action-packed thrillers that are knocking the socks off viewers worldwide. They're leaving audiences “breathless,” proclaims one reviewer. But don't bother with the popcorn. The movies are over so fast, you won't even have a chance to reach for a kernel. Still, what they lack in duration, they make up for in splendid closeup shots: They reveal—in exquisite, three-dimensional, atomic detail—how the show's stars, protein molecules, change shape as they undergo simple reactions.

    Double take.

    Superimposed models of part of myoglobin before (red) and after (blue) dissociation of carbon monoxide (double spheres). Residues in green contact dissociated CO.


    These two new action flicks aren't the only ones on release. Scientists around the globe are turning out molecular movies of proteins, semiconductor crystals, and even simple molecules reacting in a gas, using short pulses of x-rays and electrons to freeze the action of molecules in motion. “We've dreamed of doing these kinds of things for decades,” says Moffat, a biochemist at the University of Chicago. “Now they're finally happening.”

    The new movies are by no means the first efforts to detect high-speed changes in molecules. Researchers have used laser-based techniques for years to track the knitting and breaking of chemical bonds, events that occur in just a few quadrillionths of a second. Those techniques essentially just detect when the events occur, however. By contrast, the new x-ray and electron-beam schemes take repeated frames of the position of each atom in the spotlighted molecule, giving scientists successive snapshots of its complete atomic structure and a direct look at exactly how a host of chemical reactions unfold.

    Just as film grew out of still photography, the new movies are an outgrowth of a molecular snapshot-taking technique known as diffraction, which is widely used to produce still lifes of molecules. Researchers begin by firing a beam of x-rays or electrons at a sample of aligned molecules—such as innumerable copies of a protein lined up in a crystal. By recording and analyzing how the waves or particles ricochet off the sample, the investigators can determine the precise location of each atom in a molecule.

    Diffraction is traditionally done with a continuous beam of x-rays or electrons. But by pulsing their beam instead of leaving it on, researchers can create an effect like that of a strobe light, freezing the action of molecules in motion. And in recent years, by using shorter and shorter strobe pulses, researchers have been able to capture faster and faster action sequences. One problem, however, is that molecular moviemakers need roughly the same number of x-rays or electrons to make each diffraction image. So, as the pulses get shorter—and that's no easy feat in itself—researchers have had to find ways to boost the flux of their x-ray and electron beams.
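That trade-off is easy to quantify: if every frame needs roughly the same photon count, then halving the pulse duration doubles the flux the source must deliver. A sketch, with an arbitrary photon budget standing in for real beamline numbers:

```python
# If each diffraction frame needs roughly the same number of photons N,
# the required flux (photons per second) scales inversely with pulse length.
N_PHOTONS = 1e10   # illustrative photons per frame, not a measured value

def required_flux(pulse_duration_s: float) -> float:
    return N_PHOTONS / pulse_duration_s

ms_flux = required_flux(10e-3)     # a ~10-ms synchrotron pulse
ps_flux = required_flux(150e-12)   # a 150-ps ESRF-class pulse

# Going from 10 ms to 150 ps demands a beam tens of millions of times brighter.
print(f"{ps_flux / ms_flux:.1e}x brighter")
```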

    For x-ray movies—at this point the most popular of the genre—that has meant turning to high-powered synchrotrons, which produce ultrabright flashes of x-rays. At most current synchrotrons, x-ray pulses bright enough for diffraction last about 10 milliseconds. But even that isn't short enough to capture the fastest action of proteins in real time, which occurs in just billionths of a second, or nanoseconds. So molecular moviemakers have had to come up with a variety of tricks to slow down the action, such as cooling their protein crystals to ultralow temperatures, which can drop the speed of a reaction 10 billion-fold (Science, 21 October 1994, p. 364).

    Researchers worry, however, that the frigid temperatures may affect the behavior of their molecular actors. “You may not merely slow down the normal reaction,” says Moffat. “You may get abnormal reactions at that temperature.” But x-ray moviemakers have begun to solve that problem using a trio of new synchrotrons, such as the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, which are so bright that they can pack enough energy into pulses lasting just 150 picoseconds (1000 picoseconds equal 1 nanosecond). In just the past 6 months, for example, Moffat and Wulff, an ESRF physicist, along with other colleagues, have used the 150-picosecond x-ray pulses at ESRF to produce the first pair of real-time movies of proteins in action at room temperature.

    In the group's first such movie, the researchers unveiled the most detailed picture yet of how the iron-containing protein myoglobin—a common muscle protein—carries out its task of storing and releasing small molecules such as oxygen. In this case, the film tracked the release and rebinding of a single carbon monoxide molecule—a stand-in for oxygen—from myoglobin's central iron atom.

    For each frame the Chicago-ESRF team shot, the action was triggered by blasting a crystal of myoglobin molecules in a vacuum chamber with a short burst of laser light. The light was precisely tuned to be absorbed by the molecule's iron-containing heme group, breaking the iron atom's bond to CO. A fraction of a second later, the researchers opened a high-speed shutter, steering a 150-picosecond x-ray pulse into the chamber to produce a diffraction image of the crystal. By taking multiple snapshots, varying the length of time between the trigger pulse and the probing x-rays for each one, the researchers assembled a movie that shows how CO drifts away from the iron and later rebinds. It revealed such details of the action as how the iron atom and others close by in the protein withdraw slightly from the protein's center after the CO-iron bond breaks, allowing the CO to drift away (Science, 6 December 1996, p. 1726).
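The pump-probe loop behind such a movie can be sketched in a few lines. The function and the stub "hardware" below are hypothetical stand-ins for real beamline control software, not the Chicago-ESRF team's actual code:

```python
from typing import Callable, Dict, List

def film_reaction(delays_ps: List[float],
                  pump: Callable[[], None],
                  wait_ps: Callable[[float], None],
                  probe: Callable[[], Dict]) -> List[Dict]:
    """One diffraction frame per pump-probe cycle, at a series of delays."""
    frames = []
    for delay in delays_ps:
        pump()            # trigger: laser pulse breaks the Fe-CO bond
        wait_ps(delay)    # let the reaction evolve for `delay` picoseconds
        frame = probe()   # 150-ps x-ray pulse -> one diffraction image
        frame["delay_ps"] = delay
        frames.append(frame)
    return frames

# Stand-in hardware hooks; a real experiment drives the laser, the
# high-speed shutter, and the detector through these steps.
movie = film_reaction([0, 100, 1000, 10000],
                      pump=lambda: None,
                      wait_ps=lambda ps: None,
                      probe=lambda: {"image": "diffraction pattern"})
print([f["delay_ps"] for f in movie])
```

Stepping the delay from frame to frame is what turns a stack of still diffraction images into a movie of the reaction.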

    The team's second feature shows the gyrations of a bacterial photoreceptor called photoactive yellow protein, as a small organic group in the protein absorbs a photon and drastically changes its shape in response. Together, these two releases show that high-speed x-ray movies have come of age for tracking real-time atomic changes in proteins, but there's still plenty of action that even these fast cameras can't capture. For that reason, other teams are working to come up with still faster x-ray pulses for tracking even speedier events.

    At last month's Quantum Electronics Laser Sciences Conference in Baltimore, for instance, researchers led by physicist Chris Barty at the University of California, San Diego, reported creating subpicosecond x-ray pulses by firing ultrafast laser pulses at a moving copper wire, which then sheds the excess energy as x-rays. They used those pulses to produce x-ray-diffraction movies that track the initial atomic motions involved in the melting of laser-heated, gallium arsenide semiconductor crystals. And last fall, researchers led by Charles Shank at the Lawrence Berkeley National Laboratory (LBNL) in California reported producing x-ray pulses lasting just 300 femtoseconds (or 3/10 of a picosecond) by firing near-infrared laser light across an accelerated beam of electrons; the energetic electrons essentially give the infrared photons a kick, boosting them to x-rays (Science, 11 October 1996, p. 236). The LBNL team has yet to produce images with its ultrashort, but as yet relatively low-flux, pulses. But “ultimately, we want to be able to look at chemical reactions as they occur,” says Shank.

    No matter how short, x-ray pulses will still miss plenty of action, such as reactions in gases rather than in solids like protein crystals. That's because x-rays interact only weakly with atoms: Generally, researchers need huge numbers of atoms lined up in crystals to deflect enough x-rays to provide high-quality images, explains Ahmed Zewail, a physicist at the California Institute of Technology (Caltech) in Pasadena. So Zewail and other moviemakers interested in tracking chemical reactions in gases have turned to short pulses of electrons, which interact with atoms more readily than x-rays do.

    While electron-diffraction experts have also been making movies for years—in this case of gas-phase chemical reactions—techniques here, too, have been advancing rapidly. In the 13 March issue of Nature, for example, Zewail and his Caltech colleagues reported making the fastest paced electron-diffraction movies to date by shortening their electron pulses about 1000-fold, from several nanoseconds to about 10 picoseconds, using a femtosecond light pulse to create short bursts of electrons that were then focused on their target.

    The Caltech team shot movies of molecules as they are torn apart by laser light. To do so, the researchers first used a laser to fire a pulse of photons into a vacuum chamber filled with a methane derivative containing two iodine atoms. The photons began breaking apart some of the methane molecules—essentially starting a reaction stopwatch. A second pulse, fired a fraction of a second later, hit a metal-coated cathode ray tube, stripping away the electrons and creating an ultrashort electron pulse, which was channeled into the vacuum chamber to produce a diffraction image of the dissociating methanes. In this case, those images don't reveal the exact position of each atom, because molecules in a gas are oriented randomly and are not lined up like those in a crystal. Nevertheless, because all the molecules have the same constituent atoms, the diffraction images are able to reveal the precise distance between atomic neighbors within the molecules.

    Buoyed by this success, the Caltech researchers hope to do even better. At this point, the movie only shows the methanes just before and after they break apart. To capture the bonds in the process of breaking will require shortening the pulses another 1000-fold, says Zewail's postdoc Jianming Cao. Nevertheless, says Carl Lineberger, a chemist at the University of Colorado, Boulder, “it's very exciting to see people taking steps toward seeing electron diffraction in real time.” That excitement will undoubtedly grow if, as expected, the number of new movie releases begins to take off.


    Catching Speeding Electrons in a Circuit City

    1. Alexander Hellemans
    1. Alexander Hellemans is a science writer in Paris.

    Take a long-exposure aerial photograph of a city at night, and you will see traffic patterns traced in the bright streams and dense clusters of car headlights. The image (below), obtained by a team of researchers at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York, is the equivalent shot of a functioning microprocessor, the 1997 S/390 used in the current generation of IBM mainframes. The “traffic” consists of electrons emitting light as they pass through the transistors, or crossroads, of this silicon city. By directly viewing such traffic patterns, circuit designers can look for weak spots and bottlenecks in the millions of components on a chip.


    “Researchers have known since the 1980s that electrons emit light as they pass through the field-effect transistors [FETs] at the heart of most modern microchips,” says Jeffrey Kash, of the IBM team that made the images. The light, which is in the near infrared and is extremely weak, can be detected only with cooled charge-coupled devices or special photomultiplier imaging tubes. Kash and his colleague James Tsang investigated this particular microprocessor because it consumed two orders of magnitude more current than it should have when it was not performing any operations. Kash and his colleagues obtained images of the chip in this “quiescent” state that showed a series of spots indicating that this excessive current was confined to a small portion of the chip. They couldn't tell which of the 7.8 million transistors were at fault, however, because they couldn't identify individual transistors in their images. “The issue was, how do you know where you are, how do you navigate,” says Kash.

    The team solved the navigation problem by spying on the individual FETs as they shuttled electrons around the chip when it was operating normally. “The only time a current is flowing is when you have a change of logic state,” says Kash. The FETs produce picosecond light pulses as they switch on and off, so by photographing the chip in normal operation, Kash and his colleagues obtained a “road map” of the positions of the FETs. When the researchers superimposed this image on the photo showing the excess leak currents, they pinpointed exactly where the leaks occur.
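The superposition step itself amounts to a per-pixel logical AND between the transistor "road map" and a thresholded image of the quiescent emission. The arrays, positions, and threshold below are toy values chosen only to illustrate the idea:

```python
import numpy as np

# Road map: pixels where FETs lit up during normal switching operation.
road_map = np.zeros((4, 4), dtype=bool)
road_map[1, 2] = road_map[3, 0] = True

# Quiescent image: light emitted while the chip should be idle.
quiescent = np.zeros((4, 4))
quiescent[1, 2] = 5.0   # excess emission at one mapped transistor

# Leaky transistors are bright quiescent pixels that coincide with the map.
leaky_fets = np.argwhere(road_map & (quiescent > 1.0))
print(leaky_fets.tolist())   # -> [[1, 2]]
```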

    The technique is useful for more than troubleshooting. “We can look at hundreds of thousands of FETs on a chip,” says Kash, “and that is very helpful” in improving the design of subsequent chips. Indeed, Ingrid De Wolf of IMEC, Belgium's Interuniversity Microelectronics Center in Leuven, says optical-emission diagnosis of chips is beginning to spread throughout the microelectronics industry. Researchers are going beyond simple imaging, she adds: “We are now also trying to get more information from the spectrum of the emitted light, so we can measure the energy of the electrons.”


    Biologists Get Up Close and Personal With Live Cells

    1. Trisha Gura
    1. Trisha Gura is a science writer in Cleveland.

    When Dutch naturalist Anton van Leeuwenhoek constructed a simple microscope in 1683, he opened a window to the previously hidden universe of microorganisms. Although it took biologists 200 years to understand fully the true nature of the specks that van Leeuwenhoek saw for the first time, his invention forever changed scientists' perspectives on the natural world. Now, the naturalist's intellectual descendants are bringing into focus an even tinier biological world: the inner workings of single living cells. And their efforts are promising another fundamental shift in perspective as biologists view processes that, until now, have been seen only in the mind's eye.

    In-depth view.

    These 3D images of a mouse embryo were obtained with Carnegie Mellon's Automated Interactive Microscope. Images are processed in different ways to reveal different structural features and the spatial relationships between them.


    Armed with a battery of lasers, photodetectors, and computers, plus a novel crew of powerful and adaptable fluorescent reagents (see sidebar), microscopists are literally bringing to light live cells, tissues, and even whole organisms. With a technique called multiphoton imaging, which gently excites fluorescence from cellular components without killing the cells, researchers are following fertilized zebrafish egg cells as they mature to larvae, seeing nerve cells exchange signals deep in the brains of live rats, and viewing organelles communicating in hard-to-image plant cells. “Every time we turn around, we find a new application,” says multiphoton-imaging pioneer Watt Webb at Cornell University in Ithaca, New York.

    Others are pushing the frontiers of inner space with a version of a technique that astronomers are using to probe outer space: interferometry. Interference microscopes use two interacting beams of light to tease apart tiny cellular structures with a resolution of less than 100 nanometers—sharp enough to see cytoskeletal proteins pushing forward migrant cells and clusters of genes lined up on separate chromosomes. And a few pioneers are now combining a variety of these cutting-edge technologies into a single instrument: a microscopist's dream machine that will allow researchers to pick and choose imaging modes, and even interact with their specimens, without moving their samples from under the instrument's lens.

    Revolution in resolution.

    Stefan Hell's 4Pi microscope, combined with computerized image restoration (right), yields 10- to 15-fold improvement in 3D resolution of actin filaments of a mouse fibroblast, compared with confocal image (left).


    “There is a small revolution going on in light microscopy,” says Stefan W. Hell, a physicist at the Max Planck Institute for Biophysical Chemistry in Göttingen, Germany, who recently patented and licensed his own version of an interference-based microscope called a 4Pi-confocal. “This will definitely change how biological imaging is being done.”

    Multiplier effect. The driving force behind this small revolution is molecular biology itself, says D. Lansing Taylor, director of the Center for Light Microscope Imaging and Biotechnology at Carnegie Mellon University in Pittsburgh. “Once you clone a gene, overexpress it, and knock it out, you still need data that can tell you when and where things are happening,” he says.

    Until recently, biologists generally had to infer what is happening from electron micrographs of thin, specially prepared specimens or by tagging molecules with fluorescent dyes and flooding specimens with light. But traditional fluorescence imaging, like electron microscopy, often limits observations to dead tissue. Many dyes and cellular proteins fluoresce only when they are zapped by short-wavelength, high-energy photons, which can be highly damaging to living cells. And because the entire specimen is illuminated, photons bouncing off other cellular components can greatly reduce the contrast of the image. Researchers have partially solved the contrast problems with the so-called confocal microscope, a device that illuminates only one section of the specimen and has a pinhole in front of the photodetectors to block out much of the stray light. But multiphoton microscopy tackles both the contrast and the photodamage problems.

    Nervous action.

    Multiphoton image of living Purkinje cell filled with fluorescent dye; calcium ions released by dendritic spine (bottom) of stimulated cell.

    DENK AND SVOBODA, NEURON 18, 351 (1997)

    The key to this technique is the use of special pulsed lasers to fire precisely focused bursts of lower energy photons at the sample. If two or three photons strike the target molecule almost simultaneously, they produce the same effect as one photon with two or three times the energy. This double or triple punch lights up proteins that have been tagged with special dyes, and it can make some proteins fluoresce on their own. The lower energy of the individual photons cuts down collateral damage, allowing the cell to be kept alive for hours instead of minutes. And the tight focus of the laser beam—compared with bathing the entire specimen with light—greatly reduces effects of light scatter.
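    The energy bookkeeping behind multiphoton excitation can be checked with a quick back-of-the-envelope calculation (the wavelengths below are illustrative, not from the article): two near-infrared photons at 800 nanometers together carry exactly the energy of one 400-nanometer photon.

    ```python
    import math

    H = 6.626e-34   # Planck's constant, J*s
    C = 2.998e8     # speed of light in vacuum, m/s

    def photon_energy_joules(wavelength_nm: float) -> float:
        """Energy of a single photon, E = h*c/lambda."""
        return H * C / (wavelength_nm * 1e-9)

    one_uv_photon = photon_energy_joules(400)        # one high-energy photon
    two_ir_photons = 2 * photon_energy_joules(800)   # two low-energy photons

    # The two infrared photons together deliver the same excitation energy,
    # while each photon on its own is too weak to damage the cell.
    print(f"400 nm photon:  {one_uv_photon:.3e} J")
    print(f"2 x 800 nm:     {two_ir_photons:.3e} J")
    ```

    The same arithmetic extends to three-photon excitation: three 1200-nanometer photons also sum to the energy of a single 400-nanometer photon.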

    “The technology is catching on like mad,” says Webb, who, with his colleagues Winfried Denk and James Strickler, reported the invention and first use of two-photon excitation in 1990 (Science, 6 April 1990, p. 73) and—along with several other groups—has since extended the technique to three-photon excitation (Science, 24 January, p. 530). Warren Zipfel, a biophysicist in Webb's lab, recently teamed up with plant biologist Maureen Hanson's group at Cornell to use multiphoton imaging to track communication signals between plant organelles called plastids (see sidebar on p. 1989). “This is an illustration of what you can do with multiphoton excitation,” says Webb. “It's enabling plant biologists to see processes in living cells in ways never possible before.”

    Neuroscientists also are using multiphoton imaging to look at previously unseen processes. “If you want to go even 500 micrometers into brain tissue, it is impossible with confocal [microscopy],” says Denk, a physicist-turned-biologist now at Bell Laboratories in Murray Hill, New Jersey. “You lose so much of the excitation energy because of light scattering.” What that means is fluorescence gets lost as the light bounces around inside tissues such as the brain. The situation is analogous to walking into a cave illuminated only by sunlight from the entrance—the farther inside you go, the darker and fuzzier the surroundings appear.

    Denk and his colleagues have gotten around the problem by using a multiphoton microscope to peer into the brains of live rats, homing in on the dendritic spines, spiky branchlike structures at the junctions between nerve cells. After filling individual neurons with a calcium-sensitive fluorescent dye and tickling the animals' whiskers, the researchers followed calcium ions as they traveled between the spines. The results provide information about the biochemistry of learning and memory formation that Denk says could not be obtained by studying neurons in petri dishes or in slices of brain mounted on slides. “There has been a big debate in neuroscience as to whether results in primary culture can really be generalized to live tissues,” Denk says. “People want to know, ‘How does a cell really work with other cells around it?’”

    While Denk's work and that of others, including Steve Potter at the California Institute of Technology (Caltech) in Pasadena, are beginning to answer that question for neurobiologists, developmental biologists are also hot on the multiphoton trail. Developing embryos are hard to image because egg yolk tends to scatter light, says cell biologist Victoria Centonze, deputy director of the Integrated Microscopy Resource Center at the University of Wisconsin, Madison. But multiphoton microscopy can cut through the scatter, and its relatively gentle probing can provide up to 20 hours of observation of a live embryo. This allows researchers to follow fertilized eggs as they develop into embryos or track a gene that controls limb development.

    Interfering lasers. While multiphoton imaging is opening a window into cellular interiors, interference microscopy is beginning to sharpen biologists' view of individual structural components. The technique involves splitting laser light into two beams and focusing them on a sample, so that their light waves interact with each other as they pass through the specimen. The resulting pattern of light-dark interference fringes, in essence, illuminates the sample in layers instead of lighting up the whole sample at once. “Where there are regions of brightness, it stimulates fluorescence, and where there is a null, there is no excitation,” says Taylor of Carnegie Mellon. That pattern also improves contrast, says Taylor, because neighboring zones will not fluoresce and blur the one being imaged. And the layers can be moved up and down to provide three-dimensional information by shifting the angle of the laser beams and thus the interference pattern.

    Biophysicist Fred Lanni and his colleagues at Carnegie Mellon have used a simple version of an interference microscope to follow fibroblasts as they crawl into a healing wound. At the same time, several groups in Germany, including Hell's team at the Max Planck Institute, are modifying interference principles in other ways: to monitor cell-scaffolding proteins during mitotic division or to look at aberrations, tagged with fluorescent dyes, in DNA from patients with genetic illnesses such as Prader-Willi syndrome and certain types of leukemia. Hell says that when computers are used to correct blurring of the image, the three-dimensional (3D) resolution of cellular components such as F-actin proteins can be improved up to 15 times over that of confocal or multiphoton imaging alone.

    In another modification of interference microscopy, researchers at the Max Planck Institute for Psychiatry in Munich have devised a system to peer at the neuronal networks in rats. German physician Hans-Ulrich Dodt, who had an avid interest in astronomy, teamed up with Walter Zieglgänsberger, a pharmacologist and physiologist, and created an instrument that uses an infrared-imaging technique similar to one used by astronomers. “We went from light-years to micrometers,” Zieglgänsberger says.

    Their instrument illuminates the sample with obliquely angled beams of infrared light whose waves are out of phase. The image is captured directly by a special infrared-sensitive camera, without requiring damaging dyes or fluorescent markers. Dodt and Zieglgänsberger have used the technique to visualize living brain structures down to the resolution of single spines, which allows them to perform a variety of observations with far greater precision. “It's like fishing in a pond blindly and then having the lake clear so that I can see all the fish,” Zieglgänsberger says.

    While infrared video microscopy has combined several existing techniques into one device, Taylor and his colleagues at Carnegie Mellon, along with others, are working on bringing together several existing microscopes into one instrument. “The goal is to allow researchers to interact with a dynamic system, like a developing embryo, in real time,” says Taylor, whose instrument, known by the acronym AIM—for automated interactive microscope—is currently in the late development stages.

    By hooking up the instrument to a powerful computing system, researchers can piece together the 3D images—which otherwise take hours to sort through—very rapidly, Taylor says. That way, researchers could watch a fertilized egg divide, say with multiphoton imaging, then add a drug or reagent and switch to interference microscopy to see how the drug takes effect. “The whole purpose of the AIM is that the data are collected, processed, and displayed during the time of a biological event,” Taylor says. “That way a researcher can change the course of an experiment all in the time frame of a biological process such as cell division or locomotion.”

    While interference and multimodal imaging devices are still in the developmental stages, multiphoton technology is now commercially available. But it comes at a high price. The Cornell Research Foundation has patented the technology and licensed it exclusively to BioRad Microscience Ltd., in Hemel Hempstead, U.K., which is selling the complete instruments for anywhere from $300,000 to $450,000. About $100,000 of that stems from the cost of the laser, while the rest of the major costs are based on meeting European codes for the instrument, according to Webb. Even with that price tag and BioRad's decision not to sublicense, many researchers are optimistic that multiphoton imaging will become widespread. “Its advantages are so clear,” says biochemist Steve Potter, who has set up a multiphoton instrument in Scott Fraser's lab at Caltech. “Once the lasers become mass-produced, I predict every confocal [microscope] will become multiphoton.”




    Jellyfish Proteins Light Up Cells

    1. Trisha Gura

    The power of multiphoton imaging to illuminate components deep within living cells and tissues rests almost as much on improving fluorescent reagents as it does on high-tech lasers (see main text). Although some cellular molecules, such as serotonin, fluoresce naturally when zapped with multiple photons, others must be tagged with fluorescent dyes in order to become visible.

    Colorful mutants.

    Green fluorescent protein comes in different hues.

    Photo courtesy of R. Y. Tsien and R. Heim (HHMI/UCSD and Aurora Bioscience Corp.)

    The dyes themselves, or the procedures to get them into cells, tend to kill everything before a laser beam ever touches the sample. However, a new type of marker, based on a protein from a jellyfish that glows brilliantly when hit with 500-nanometer-wavelength light, is helping to light up living cells. Called green fluorescent protein (GFP), it was first cloned in 1992 by Doug Prasher's group at Woods Hole Oceanographic Institution in Massachusetts, and can be genetically engineered into cells—eliminating the need to apply toxic stains to the specimen.

    The cDNA of this protein can be hooked up to any gene and expressed along with that gene's protein product. The only thing needed to make GFP fluoresce is a single dose of high-energy ultraviolet light or its lower energy multiphoton equivalent. “The crucial difference between GFP and [earlier, widely used dyes] is that it works [better] in live cells or animals,” says GFP pioneer Roger Tsien of the University of California, San Diego. What's more, mutant forms of the protein glow in different colors—from yellow-green to bright blue—which enables researchers to follow the workings of several molecules simultaneously.

    Tsien and others are now working frantically to make mutants that glow brighter, fluoresce in more colors, or hook onto calcium ions and phosphate groups in cells and tissues. A recent map of the protein's physical structure is aiding the task (Science, 6 September 1996, p. 1392), and already multiphoton-imaging pioneer Watt Webb's group at Cornell University has found that one of the GFP mutants glows more brightly under multiphoton excitation than rhodamine, the brightest synthetic dye. “It's becoming a sort of mutual synergy,” says Winfried Denk, of Bell Laboratories in Murray Hill, New Jersey. “Once you can see effectively, then there is the incentive to develop new [fluorescent markers].”


    Spectral Technique Paints Cells in Vivid New Colors

    1. Gary Taubes

    You can think of image enhancement as the art of helping the eye do what it does naturally. Take the two images below. On the top is a conventional micrograph of cells from a pap smear. The cells have been stained to bring out the contrast between different types: Mature epithelial cells are pink-orange, while younger cells stain blue-green, as does the precancerous dysplastic cell in the middle. A pathologist would identify it by its abnormally large nucleus, but it would be easy to miss.


    On the bottom is the same image, spectrally classified. Richard Levenson and Daniel Farkas of Carnegie Mellon University in Pittsburgh created individual spectra for each pixel in the image with the help of a microscope called the SpectraCube. The microscope divides light from each pixel into beams that travel along paths of varying lengths, then are recombined and allowed to interfere. Mathematical analysis of the resulting interference patterns yields a spectrum.

    By comparing each pixel's spectrum to those of reference pixels (boxes on original micrograph), Levenson and Farkas's system identifies groups of pixels with similar spectra and assigns them distinctive colors, making them much easier to tell apart than they are in the original stained micrograph. The nucleus of the dysplastic cell, only subtly different in color from that of a normal cell in the traditional micrograph, is here colored a unique and fiery red, befitting its threatening nature.
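    The classification step Levenson and Farkas describe can be sketched as a nearest-reference-spectrum search; the reference spectra, wavelength bands, and class names below are invented for illustration, not taken from their system.

    ```python
    import math

    # Hypothetical reference spectra: intensity at four wavelength bands,
    # one spectrum per tissue class (values are made up for this sketch).
    REFERENCES = {
        "normal_nucleus":     [0.20, 0.50, 0.90, 0.40],
        "dysplastic_nucleus": [0.30, 0.80, 0.60, 0.20],
        "cytoplasm":          [0.70, 0.30, 0.20, 0.60],
    }

    def classify_pixel(spectrum):
        """Assign the pixel to the reference class with the closest spectrum
        (smallest Euclidean distance)."""
        def dist(ref):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(spectrum, ref)))
        return min(REFERENCES, key=lambda name: dist(REFERENCES[name]))

    # A pixel only subtly different from its reference is still pulled to
    # the nearest class, which then gets a distinctive display color.
    print(classify_pixel([0.28, 0.75, 0.62, 0.22]))
    ```

    Every pixel assigned to the same class is then painted one color, which is why the dysplastic nucleus stands out in fiery red rather than a subtle shade.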

    The technique had already been applied to cytogenetics by Thomas Ried and Evelin Schröck of the National Center for Human Genome Research in Bethesda, Maryland. They color-coded and differentiated all 24 distinct types of human chromosomes after labeling them with tracers that endowed each one with a slightly different spectrum (Science, 26 July 1996, p. 494). Having extended the technique to pathology, Levenson and Farkas say it could be used throughout biology to increase the differentiation power of stains, dyes, and fluorescent molecules. “It divides the spectrum into a whole slew of new colors that otherwise couldn't be appreciated by the eye,” says Levenson. “It's our belief that important information resides in those colors, and spectral classification can bring it out.”


    Play of Light Opens a New Window Into the Body

    1. Gary Taubes

    A light bulb or a laser beam is not the first tool you'd think of using to get a look inside an opaque object such as a brain or a breast. That may be why the idea seems to attract researchers for unusual reasons. City University of New York physicist Robert Alfano, for instance, says he entered the field because one of his students sent a light beam through a glass of milk and saw the shadow of a bead suspended in the milk. Britton Chance, a biochemist at the University of Pennsylvania, got started because his synchrotron broke. With little to do, he had the curious thought that it might be possible to propagate laser light through his own brain, so he tried it. Enrico Gratton of the University of Illinois then took to the research because Chance came to him wondering why lasers took so much longer to go through his students' brains than they did his own. “As a physicist,” says Gratton, “I never thought a laser would go through a head. But it does, and when it goes, you can learn something about what is inside.”

    The brain lights up.

    Finger movement on the opposite side of the body causes a change in oxygen content, detected by conventional MRI (left) and by pulses of light that diffuse through the brain and reveal changes in absorption (right).


    The fact is you can learn a lot about what is inside, and so these unorthodox beginnings have created a field of research in which light at optical and near-infrared wavelengths is used to image and probe inside human tissue. Some techniques haven't made it out of the lab. But others, such as ways of mapping the oxygenation of the brain and other organs with light, are already in clinical tests. Ultimately, researchers hope light will complement x-rays in mammography and maybe even eliminate the surgeon's knife for biopsies, tagging a tumor as benign or malignant simply by its optical properties. While some of these goals may be distant ones, light's advantages for imaging and diagnosis make it worth pursuing, says Stanford University engineer and physician David Benaron.

    “Most imaging modalities are not only expensive but they're potentially harmful,” says Benaron. “And those who need imaging are often critically ill people who can't easily be transported. They want the imaging modality to come to them. Optics is perfect: Light bulbs are small; they don't emit x-rays; and they're low power.” Perhaps light's greatest advantage is its colors. Other imaging techniques rely on contrast agents—such as chemical or radioactive dyes—to make them sensitive to different structures or tissues. With light, “every wavelength you use is a new contrast agent,” says Benaron. “You have the ability to gain contrast chemically, to see whether you're looking at hemoglobin, or bilirubin, or to analyze the water and fat content of tissues.”

    Optical mammograms.

    Two views of the same breast, derived from pulses of light that diffused through the tissue, reveal a tumor.


    The catch to imaging with optical wavelengths is the obvious one. Shine a light bulb or a laser pulse through a breast, and very little of it will go straight through. Most of the light, considerably more than 99.9999% of it, will be absorbed by molecules or will scatter off cells and cell organelles and at best may end up lighting up the tissue like a dim street light illuminating a dense fog. The challenge of optical imaging is either to eke a signal out of the few photons, known as coherent or ballistic photons, that make it straight through the tissue without scattering, or to reconstruct an image from the deluge of photons that scatter hundreds or thousands of times in the course of a few centimeters. Better yet, says General Electric physicist Deva Pattanayak, is combining the two, “and doing it at multiple wavelengths.”
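    The scale of that attenuation follows from Beer-Lambert-style exponential loss. As a rough sketch (the attenuation coefficient below is an assumed order of magnitude for soft tissue, not a figure from the article), even a centimeter or two leaves far less than one photon in a million traveling straight through.

    ```python
    import math

    MU_T = 10.0  # assumed total attenuation coefficient, 1/cm (illustrative)

    def ballistic_fraction(depth_cm: float) -> float:
        """Fraction of photons that cross `depth_cm` of tissue without
        being absorbed or scattered: exp(-mu_t * d)."""
        return math.exp(-MU_T * depth_cm)

    # The unscattered fraction collapses with depth, consistent with the
    # "considerably more than 99.9999%" lost in a breast-thick sample.
    for d in (0.5, 1.5, 4.0):
        print(f"{d} cm: fraction ballistic = {ballistic_fraction(d):.2e}")
    ```

    This is why ballistic-photon techniques top out at a centimeter or two, and deeper imaging must reconstruct images from the diffuse, multiply scattered glow instead.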

    First light. In the late 1980s, Alfano was among the first to try to capture ballistic photons when he sent light pulses through a highly scattering medium—in his case, a glass full of polystyrene beads—and looked for photons that sneaked through without scattering. Alfano assumed that any photons that get a clear shot would arrive earlier than the photons that scattered along the way. And photons that “snake their way through the matter” with only a little scattering, he says, will follow shortly thereafter. “If you capture the early portion of the light, then you get a clear image,” he says. The trick was to catch those early photons, which is what Alfano has spent the last 7 years doing.

    “We have developed various ways of selecting the earliest portion of the light, the first couple hundred picoseconds [trillionths of a second],” he says. One is a blindingly fast shutter consisting of two pieces of oppositely polarized glass—a combination that ordinarily blocks any light—separated by what's known as a Kerr medium, which reacts to light by changing its optical properties.

    To trip this shutter, Alfano sends two pulses of light through the material to be studied. The first is a trigger pulse: It passes through the first polarizer and then hits the Kerr medium, making it birefringent. The result is that for the next few picoseconds, the medium will flip the polarization of any light that passes through it, allowing the light to slip through the second polarizer and in effect opening the shutter. The first part of the next pulse makes it through, but by the time the bulk of the pulse arrives, the Kerr medium has stopped being birefringent and the second polarizer again blocks the light. “We carve out a section of the scatter profile by using this system,” says Alfano.
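    The effect of the shutter is easiest to see as an arrival-time filter. In this toy sketch (arrival times and the 200-picosecond gate width are illustrative; only the gate duration echoes Alfano's "first couple hundred picoseconds"), early ballistic and "snake" photons pass while late, heavily scattered photons are rejected.

    ```python
    GATE_PS = 200  # shutter stays open for the first 200 picoseconds

    # Hypothetical detection record: (arrival time in ps, photon label).
    detections = [
        (45, "ballistic"), (120, "snake"), (600, "scattered"),
        (90, "ballistic"), (1500, "scattered"), (180, "snake"),
    ]

    # Keep only photons arriving while the Kerr shutter is open; these are
    # the image-forming photons that traveled (nearly) straight through.
    gated = [label for t, label in detections if t <= GATE_PS]
    print(gated)
    ```

    The later the gate is set, the more scattered light leaks in and the fuzzier the image becomes, which matches the fading of the "ballistic cat" when later photons are captured.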

    Ballistic cat.

    Capturing the first, “ballistic” photons from a light pulse sent through a turbid medium reveals the outline of a cat, embedded in the material. The image fades when later photons are captured.

    By converting the first few photons of the second pulse into an image, Alfano says he has been able to see droplets of water floating inside a glass of milk with millimeter resolution. To image actual tissue that is thicker than a centimeter or so, he will have to use scattered light as well, which is what he is working on now.

    Irving Bigio and his collaborators at the Los Alamos National Laboratory in New Mexico, however, believe they can see deeper into tissues by replacing the pulsed light with a continuous beam, which delivers more photons. The Los Alamos technique sorts out the ballistic photons by splitting the light into two matching beams. One goes through the sample while the other, the reference beam, is routed around it; then the two are recombined and allowed to interfere. “We time the reference beam so that it has the same path length as the ballistic photons in the probe beam,” says Bigio. Only the ballistic photons interfere with it. “Then the scattered photons will have a longer path length, so they will no longer be in phase with the reference beam and therefore will not be able to produce an interference pattern.” The interference pattern produced is literally a hologram, which can be read out by a third laser and will show “a shadow image of whatever is inside that scattering medium.”

    Bigio says that so far he and his colleagues have “generated fairly sharp images of cross hairs, 300 micrometers in diameter, embedded in a scattering medium that is 2 centimeters long and has half the scattering power of real tissue.” They hope to optimize the method until they can image a millimeter-sized object through 4 or 5 centimeters of real tissue.

    Until then, however, the use of ballistic light is likely to be limited to surface or near-surface imaging. Indeed, one system based on roughly the same principle—but limited to extremely thin slices of tissue—is already in the clinic for biopsies of the retina. James Fujimoto of the Massachusetts Institute of Technology and his colleagues shine light on tissue with a fiber-optic source and gather the photons reflected from the first few hundred micrometers, looking for light that comes back without significant scattering. Like Bigio, he knows he has minimally scattered light when it interferes with a reference beam. The results, says Benaron, are “images of amazing resolution in living tissue, although the depth is limited to probably less than 2 millimeters.”

    As Fujimoto and his colleagues describe in this issue of Science, they now have expanded the concept to general optical biopsy and have figured out a way to use the system in catheters, endoscopes, and surgical microscopes (see Report on p. 2037 and Perspective on p. 1999). “In principle,” he says, “you can image at micrometer-scale resolution any part of the body you can access optically through instruments.”

    Fog light. Getting even coarse resolution at depths of more than a couple of centimeters, however, means exploiting photons that have been scattered so many times that they form a diffuse glow. But this diffusive light can supply surprising amounts of information. The proportion of light absorbed at different wavelengths reflects the chemical makeup of the tissue—fat, water, or blood content, for example—while the deflection or scattering of the light depends on how the cells are organized. For example, the regularly ordered cells in healthy tissues will scatter light differently than will the tumultuous explosion of cells in a tumor. “These different effects can be used to deduce the structure of the tissue you're looking at,” says Benaron. “That is the crux of it.”

    One technique for exploiting diffusive light relies on the time it takes photons in a pulse of light to penetrate the tissue. Fire in a flash of light at a single wavelength, and “what you detect is a pulse of light with a delay over time,” explains Benaron. “It's like an echo. It peaks after a period of time and then decays. That curve allows you to separate out how much absorption there is in your sample and where it is, and how much scattering there is in the sample and where that is.” The higher the absorption, for example, the faster the signal decays, while the average time the light takes to diffuse through the tissue depends more heavily on scattering. Probing the tissue with multiple pulses at different wavelengths can reveal chemical composition. And by moving the sources and detectors around, the system can gather enough information to reconstruct a three-dimensional image.
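    One way to see how decay reveals absorption: in simple diffusion models the late tail of the detected pulse falls off roughly as exp(-mu_a·c·t), so the log-slope of the tail recovers the absorption coefficient. The numbers below are synthetic and the single-exponential tail is an assumption of this sketch, not a claim about any group's algorithm.

    ```python
    import math

    MU_A = 0.1      # "true" absorption coefficient, 1/cm (assumed)
    C_TISSUE = 22.0 # approximate speed of light in tissue, cm/ns (assumed)

    # Synthesize the late-time tail of a diffused pulse: I(t) ~ exp(-mu_a*c*t)
    times_ns = [0.5, 1.0, 1.5, 2.0]
    signal = [math.exp(-MU_A * C_TISSUE * t) for t in times_ns]

    # Estimate the decay rate from the log of the tail, then invert for mu_a.
    slope = (math.log(signal[-1]) - math.log(signal[0])) / (times_ns[-1] - times_ns[0])
    mu_a_estimate = -slope / C_TISSUE
    print(f"recovered mu_a = {mu_a_estimate:.3f} /cm")
    ```

    Repeating the measurement at several wavelengths turns recovered absorption values into chemistry, since hemoglobin, water, and fat each absorb differently across the near-infrared.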

    Benaron and his Stanford colleagues have created a fiber optic-based head band that allows light to be emitted and detected at 32 locations on the skull. This, he says, is enough to provide a “unique solution that allows you to solve for an image.” Because oxygen-starved brain tissue absorbs light at different wavelengths compared to healthy brain tissue, this kind of imaging could allow physicians to watch a patient for a possible stroke during surgery or monitor the effectiveness of drugs designed to restore blood flow after a stroke.

    Another diffusion technique also uses light at one or a few frequencies, but modulated in a smooth sine wave. In this case, the key is to watch how the shape of the wave changes as it passes through tissue, explains Arjun Yodh, a Penn physicist who collaborates with Britton Chance: “None of the photons themselves travel more than about a millimeter before they get their direction completely randomized.” But the overall pattern is preserved, he says. “If you look at the photon number density, the number of photons per unit volume in this tissue, it is going to vary as a function of position and time.”

    By measuring the amplitude and the phase of the photon-density wave as it reaches different points on the tissue surface, the researchers can figure out how much scattering and absorption the photons experienced. Mathematically, the analysis is no different from analyzing the scattering of any waves. It's analogous, says Yodh, to watching a wave on a lake scattering from piers and rocks, and then reconstructing the position and the size of the objects from the scattering pattern.

    This technique, too, can map tissue oxygenation, based on differences in the absorption patterns of density waves at several different wavelengths. Chance says he and his collaborators can now “make pictures, with a resolution of about a half-centimeter, of regions of the brain, breast, or legs that don't have oxygen and therefore don't function well.” They can also detect hemorrhage, because the oxygenation of leaking blood is different from that of blood flowing normally through veins and arteries.

    Chance is collaborating with researchers at Baylor University in Waco, Texas, in a clinical study that compares the diffusive-light method with x-ray and computerized tomography (CT) scans on subjects brought into the emergency room with potential brain damage. In Chance's technique, a light source and detector are placed against the skull and the transmission of light at two wavelengths is measured. The procedure, which takes a few seconds per measurement, is then repeated on the other side of the brain. “What they find is a huge change in differential absorption at those two wavelengths when there's a stroke, or bleeding,” says Yodh. The light-absorption signal, measured with a cheap and portable system, could serve as an early warning telling physicians when a CT scan is urgently needed.

    The Holy Grail in this field, as Chance puts it, is developing light-based systems that could detect breast tumors and even determine whether a tumor is malignant or benign based solely on its response to light. Many researchers are skeptical that this will ever be done, if for no other reason than the near impossibility of getting sufficient resolution out of tissue more than a few centimeters thick.

    Both Gratton and Bruce Tromberg of the University of California, Irvine, however, are working on systems to do just that. Gratton says they have demonstrated that absorbed and scattered light can reveal tumors, but “the real question is can we see all tumors?” As for the task of distinguishing malignant from benign tumors, he describes it as “another order of magnitude.” Blood oxygenation might be one basis for the distinction, he says; it may be lower in malignant tumors because the tissue is growing faster. The cells' mitochondria—which are more abundant in cancers—could also provide a clue, because the density of mitochondria should affect light scattering. “We're trying to understand fundamentally what it is about tissue that changes” in a cancer, says Tromberg, “and why it looks like it does.”

    In spite of the technical hurdles, researchers persist because they believe optical imaging will be simplicity itself in practice. Benaron describes one vision: “If someone comes into an office and says ‘I have this lesion,’ you stick a light probe onto it and image the lesion. And the computer, using the absorption and scattering characteristics, can tell you whether this is normal or a cancer. That's more than just a pipe dream.”




    Firefly Gene Lights Up Lab Animals From Inside Out

    1. Gary Taubes

    What better way to look inside your resident lab animal than to put a light source inside it and detect the light seeping out? A team of researchers at Stanford University has done just that by genetic engineering.

    In a proof of concept, physician and engineer David Benaron, virologist Christopher Contag, and microbiologist Pamela Contag spliced the gene for luciferase, the enzyme that puts the fire in fireflies, into a salmonella bacterium. The photos below, made with no more than a souped-up video camera, show mice infected with the glowing salmonella. Taken 5 hours apart, they trace the course of the infection when untreated (top pair) and when treated with antibiotics (bottom pair).


    A transgenic mouse created by John Morrey of Utah State University represents the next step: an animal with the luciferase gene in every cell of its body. The photo below shows the glow that appears in the ears of this mouse when the gene is turned on. In this mouse, the luciferase gene is tied to a genetic switch that, in human cells, is activated when HIV, the AIDS virus, is replicating. Mice aren't susceptible to HIV infection, but the Stanford researchers simulated its effect with a chemical known as DMSO, which turns on the genetic switch and, with it, the light. “We can image in the intact animal where and when the gene is activated by watching the lights,” says Benaron. He adds that with the right animal model for HIV infection—which is still “a huge step,” says Contag—the scheme might be used, for instance, to test HIV drug treatments. “We would no longer have to wonder if the drug is effective in vivo; we could watch the virus replicate and see what happens when we give an antiviral,” says Benaron.


    He adds that the spatial resolution of the technique is limited to roughly 10% of the depth—which means that a glowing cell 5 centimeters deep can be resolved to within a half-centimeter. Even so, Benaron sees unlimited potential. “You can use bioluminescent approaches to study processes in vivo which cannot otherwise be visualized at any resolution,” he says. “You could use it to study gene expression in real time. Want to know when a gene turns off and on during development? Add luciferase. Or evaluate genetic therapy. Right now we have no real-time information on genetic therapy. This would give you a way to track genetic therapies in vivo.”
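    Benaron's rule of thumb is simple enough to state as a formula: resolvable feature size is about one-tenth of the source depth. The helper name below is ours, not the researchers'.

    ```python
    def bioluminescence_resolution_cm(depth_cm: float) -> float:
        """Rough spatial resolution of the bioluminescence technique:
        about 10% of the depth of the glowing source."""
        return 0.10 * depth_cm

    # The article's example: a glowing cell 5 cm deep resolves to ~0.5 cm.
    print(bioluminescence_resolution_cm(5.0))
    ```

    So surface or near-surface sources can be pinned down to fractions of a millimeter, while deep sources blur proportionally, which is why the method is pitched at when-and-where questions rather than fine anatomy.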


    Rethinking the Telescope for a Sharper View of the Stars

    1. Andrew Watson
    1. Andrew Watson is a science writer in Norwich, U.K.

    Twinkling specks in the firmament might be good enough for poets and children's nursery rhymes, but astronomers need to do better. For decades, they have tried to transform these featureless points of light into distinct images—of galaxies, nebulae, binary stars, even features on single stars—by building larger and larger versions of the basic reflecting telescope pioneered by Isaac Newton. But they have run into limits set by the turbulent atmosphere, the effects of gravity and temperature on giant mirrors, and practical constraints on telescope size. Simply making bigger instruments is no longer enough: Astronomers have had to get smarter and rethink the telescope.

    Stripping away the shimmer.

    An adaptive optics system on the Canada-France-Hawaii Telescope sharpens a view of the galactic center (right) by eliminating the atmospheric blurring seen in a normal image (left).


    Inside the domes that are sprouting like high-tech mushrooms on mountaintops in Hawaii, Chile, and the American Southwest are telescopes that embody this new thinking. They combat the warping effects of gravity on their giant mirrors with computer-controlled “active optics” systems. They reclaim images from the ravages of the atmosphere with adaptive optics—what Masanori Iye of the National Astronomical Observatory in Japan calls “a miracle instrument” that precisely undoes the atmospheric distortions. And a few systems now transcend the limits on telescope size by combining light from three or more separate telescopes.

    The goal of all this, says Roger Davies of the University of Durham in the United Kingdom, is “a sharper and deeper view of the universe.” Uncorrected images of, say, the core of our galaxy from the largest ground-based telescopes show mainly haze. But images made with the adaptive optics-equipped Canada-France-Hawaii Telescope (CFHT) on Mauna Kea, Hawaii, show distinct stars swarming around a source of intense gravity, probably a black hole. And by combining light from three small, widely spaced telescopes, Ken Johnston and his team at the U.S. Naval Observatory recently made the highest resolution picture ever created in optical astronomy, showing two stars that are such close companions no single optical telescope has ever been able to separate them.

    Sizing up a giant.

    A view of Betelgeuse from the Hubble Space Telescope is the first direct image showing a star as more than a point.


    The appeal of larger mirrors is as strong as ever: They gather more light, revealing fainter objects, and in principle they also allow a telescope to see finer detail. The biggest mirrors now scanning the heavens are those of the twin 10-meter Keck telescopes on Mauna Kea, each with a light-gathering area as large as a suburban backyard. Close behind are several others under construction. These include the Very Large Telescope (VLT) on the Paranal mountain in Chile—actually four separate telescopes, each with an 8.2-meter mirror—the Japanese Subaru 8.2-meter instrument on Mauna Kea, and the twin telescopes of the Gemini Project, one in Hawaii and one in Chile.

    But the force of gravity can make building such large mirrors a self-defeating exercise, says master mirrormaker Roger Angel of the Steward Observatory in Arizona. “As mirrors get bigger, in the traditional ways of making them the only way to hold their shape was to make them thicker and heavier”—with the result that the telescope structure becomes overly massive, he notes. Angel cites the example of a Russian 6-meter telescope mirror, 60 centimeters thick, that weighed 50 tons. “You cannot build very large telescopes using that philosophy,” adds Davies, a member of the Gemini team.

    Instead of relying on stiff mirrors, designers of the Gemini, VLT, and Subaru telescopes opted for mirrors about 20 centimeters thick, too thin to fight gravity on their own. These mirrors rely on active optics to compensate. As gravity distorts the mirror, a couple of hundred tiny pistons attached to the back push it to within a few tens of nanometers of a perfect shape. The system calculates the necessary adjustments by monitoring the image of a selected star, reshaping the mirror perhaps once an hour to remove distortion from the image. The Keck telescopes, the second of which became operational only a few months ago, rely on a variant of the idea. Each 10-meter mirror has 36 separate, individually controlled, hexagonal mirror segments.
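The correction cycle described above, measuring the figure error and pushing the support pistons to remove it, amounts to a simple feedback loop. The sketch below is a hypothetical illustration of that loop only; the per-actuator errors, the gain, and the cycle count are invented for the example and do not come from any telescope's design.

```python
# Minimal closed-loop sketch of active optics: each support piston
# removes a fraction of its local figure error on every cycle.
# All numbers here are hypothetical illustration.

def correct_figure(errors_nm, gain=0.5, cycles=10):
    """Iteratively reduce per-actuator mirror figure errors (nanometers)."""
    errors = list(errors_nm)
    for _ in range(cycles):
        errors = [(1.0 - gain) * e for e in errors]
    return errors

residual = correct_figure([800.0, -500.0, 300.0])
# After ten cycles the worst residual is well under the few tens of
# nanometers the article quotes as the target figure accuracy.
print(max(abs(e) for e in residual) < 40.0)  # prints True
```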

    So close, so far.

    The sharpest optical images ever made, from the Navy Prototype Optical Interferometer, distinguish the two stars in the binary system Mizar A. Separated by just a quarter of the Earth-sun distance, they lie 80 light-years off.


    Gravity is not the only enemy of large mirrors; temperature changes are another. Large mirrors cool slowly at night, and if a mirror is much warmer than the air layer above, it will drive convection—the process responsible for the shimmer seen over hot pavement—thus blurring the image. Most telescopes are equipped with coolers to help minimize the temperature difference, but Angel and his team at the Steward Observatory Mirror Laboratory, the world's largest research facility for optical fabrication, have an additional stratagem.

    Their mirrors are molded with a honeycomb structure on the back, which supports the glass front face and allows it to be just a couple of centimeters thick. That speeds the response of these mirrors to changes in temperature, says Angel. These “hollow mirrors” are also rigid enough to need less active control than the thin mirrors of the VLT, Gemini, and Subaru. Currently, Angel and his team are casting their largest hollow mirror yet, 8.4 meters across, destined for the Large Binocular Telescope on Mount Graham in Arizona.

    Undoing the atmosphere. Active optics and temperature control can mean that the last few meters of the starlight's journey are untrammeled. But what about the few kilometers before it gets to that point: the trip through the turbulent, convective atmosphere? The answer, adaptive optics, “has been a gleam in the eye for 40 years,” says Angel. Now, thanks in part to the U.S. “Star Wars” (SDI) program of the 1980s, that gleam is becoming sharper. Adaptive optics relies on monitoring the image of a bright star next to the object being observed to measure the atmosphere's blurring effects. The system measures the “wrinkles” in what should be a flat wave front coming from the guide star and manipulates a small deformable mirror off which the image bounces. By taking readings and reshaping the deformable mirror a thousand times a second, the system precisely undoes atmospheric distortion. In principle, this allows the telescope to produce images as sharp as the theoretical “diffraction limit” of its mirror permits, surpassing the clarity of even the Hubble Space Telescope.
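The "diffraction limit" invoked here follows the standard Rayleigh criterion, θ ≈ 1.22 λ/D. A quick calculation (the 550-nanometer wavelength is an assumed visible-light value, not a figure from the article) shows why a fully corrected 10-meter mirror would out-resolve the 2.4-meter Hubble:

```python
# Rayleigh criterion: smallest resolvable angle for a circular aperture.
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0  # about 206,265

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution of a circular mirror."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

# Visible light at an assumed 550 nm:
keck = diffraction_limit_arcsec(550e-9, 10.0)   # 10-m Keck mirror
hubble = diffraction_limit_arcsec(550e-9, 2.4)  # 2.4-m Hubble mirror
print(round(keck, 3), round(hubble, 3))  # prints 0.014 0.058
```

At its diffraction limit, the larger ground-based mirror resolves roughly four times finer detail than Hubble, which is why adaptive optics is worth the trouble.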

    The first adaptive optics systems, developed for military purposes, were declassified and offered to the astronomy community in 1991 (Science, 28 June 1991, p. 1786). “Over the past 1 to 2 years, the first scientific results have now started to come out,” says Ray Sharples, a colleague of Davies's at Durham who helped build an adaptive optics system for the William Herschel 4.2-meter telescope in the Canary Islands. With it, Durham astronomers have been able to resolve stars in the core of the globular cluster M15 and spot the otherwise invisible dwarf companion of the star Gliese 105a. Another pioneering system is sharpening the vision of the CFHT. Called PUEO, after a sharp-eyed Hawaiian owl, it produces “very clean images” approaching the diffraction limit of the telescope, says François Roddier of the Institute for Astronomy in Hawaii, a leader in developing the system.

    Adaptive optics is not a perfect solution, however, because it has a kind of tunnel vision, correcting only along a single line of sight. “Basically, a compensated image consists of a diffraction-limited core surrounded with a halo of uncompensated light,” says Roddier. This means that adaptive optics systems will never render crisp panoramic pictures; those will remain the preserve of the 2.4-meter Hubble Space Telescope in its privileged roost high above the distortions of the atmosphere. What's more, the object of interest has to lie close to a sufficiently bright guide star, and sometimes there's nothing suitable nearby.

    Here astronomers have a technical fix, which will debut in adaptive optics systems planned for the Keck and Gemini telescopes: creating their own guide stars. In the upper atmosphere, some 90 kilometers up, is a layer of sodium atoms. A powerful laser beam reaching the sodium layer can make a tiny spot glow yellow, providing an artificial guide star that can be set aglow next to any object of interest. “Laser guide stars can increase the type of scientific observation you can do,” says Sharples.


    Telescopes in tandem. With active and adaptive optics now firmly established, bringing ground-based telescopes to the peak of their performance, one last obstacle remains: the diffraction limit itself, which is set by the mirror's size. Mirrors are not likely to grow much larger than 8 meters, says Angel, because of the practical difficulties of getting larger mirrors down freeways, into boats, and up mountain tracks. But astronomers have figured out a way to trick any telescope into behaving as if it were bigger: Link it up with others by interferometry, using techniques masterminded by radio astronomers decades ago. The result matches the resolution of a single telescope having a mirror equal in size to the spacing of the separate telescopes.

    Last year, a team at the University of Cambridge, U.K., led by John Baldwin, pooled signals from three small telescopes to make the first optical image ever to reveal the two components of the binary star Capella (Science, 16 February 1996, p. 907). Now Johnston and his group have used their Navy Prototype Optical Interferometer (NPOI) near Flagstaff, Arizona, to tease apart another binary star, Mizar A.

    The NPOI consists of three telescopes, each with a mirror effectively just 10 centimeters in diameter, set as far as 38 meters apart in a Y configuration. Yet the image it delivered has a resolution of 3 milli-arc seconds, more than 10 times sharper than any existing single-mirror telescope image. The two stars it distinguished lie 80 light-years off but are separated by just a quarter of the distance from Earth to the sun. “It seems to me it's absolutely clear now that one can do this sort of stuff,” comments Baldwin.
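These figures can be checked with the same Rayleigh formula, treating the 38-meter baseline as the effective aperture (the 550-nanometer observing wavelength is an assumption, not quoted in the article):

```python
# Check the NPOI numbers: resolution of a 38-m baseline, and the angular
# separation of Mizar A's two stars (a quarter AU seen from 80 light-years).
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0
AU_M = 1.496e11          # mean Earth-sun distance in meters
LIGHT_YEAR_M = 9.461e15  # one light-year in meters

# Baseline resolution at an assumed 550 nm, in milli-arc seconds:
resolution_mas = 1.22 * 550e-9 / 38.0 * ARCSEC_PER_RAD * 1000
print(round(resolution_mas, 1))  # prints 3.6

# Mizar A's separation on the sky, in milli-arc seconds:
separation_mas = (0.25 * AU_M) / (80 * LIGHT_YEAR_M) * ARCSEC_PER_RAD * 1000
print(round(separation_mas, 1))  # prints 10.2
```

The pair's roughly 10-milli-arc-second separation sits comfortably above the array's few-milli-arc-second limit, consistent with the NPOI cleanly splitting the two stars.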

    It's not easy, however; interferometry requires that the light from each telescope travel precisely the same distance, to within a wavelength of light, from mirror surface to combination point. That's a punishing requirement, given light's short wavelengths. What's more, Johnston points out, the small mirrors of these first optical interferometers don't gather much light, limiting them to bright sources. Although interferometry cancels out atmospheric distortions across the entire array, making the mirrors larger than about 10 centimeters reintroduces the atmospheric demon for each individual mirror. But astronomers have yet another ploy in their quest for sharper images of the faintest objects: fitting each of the telescopes in an interferometer with adaptive optics. “That will allow the use of mirrors larger than 10 centimeters as interferometer apertures, so that fainter objects can be detected,” says Johnston.

    All three innovations should one day find themselves mountaintop neighbors. Plans are afoot to turn the Kecks into an interferometer by combining the two big telescopes with four smaller new ones. And, if all goes well, sometime early in the next century astronomers will link the four big mirrors of the VLT, each one equipped with active and adaptive optics systems, into an interferometer capable of producing images 50 times sharper than those of Hubble.

    Beyond the mountaintops, the logical next step in the search for the sharpest images is putting interferometers in space. “It will happen in space, but it will take a long time,” says Baldwin. Angel and his colleague Neville Woolf have already made a proposal for a space-based interferometer, one of several now in play. Theirs, featuring four 4-meter dishes made of glass just 2 millimeters thick, spaced along an 80-meter beam, should be sharp-eyed and sensitive enough to separate the faint glow of an Earth-like planet from its parent star. After their successes at seeing detail in the stars, telescope builders think that goal is in reach. “The technology to build this thing is unquestionably with us,” says Angel.

