# News this Week

Science  27 Jun 1997:
Vol. 276, Issue 5321, pp. 1960
1. PLANT SCIENCE

# Corn Genome Pops Out of the Pack

1. Jon Cohen

Congress is poised to launch a corn genome project, but plant geneticists want to make sure other, related cereal grains aren't ignored

IRVINE, CALIFORNIA—In the next few weeks, key members of the U.S. Congress are planning to plant seed money into appropriations bills to launch a new genome project. This ambitious effort, focused on corn, or maize, the quintessential American food crop, could mean tens of millions of dollars for crop genetics research in the next few years.

The prospect of such an initiative has grabbed the attention of plant scientists, who hope it could do for crop genetics what the multibillion-dollar Human Genome Project is beginning to do for human genetics. By helping to unravel the multitude of genetic mysteries hidden in corn's crunchy kernels, the project could aid in understanding and combating common diseases of grain crops. It could also provide a big boost for efforts to engineer plants to improve grain yields and resist drought, pests, salt, and other environmental insults. Such advances are critical for a world population expected to double by 2050, says Robert Herdt, director of agricultural sciences at the Rockefeller Foundation. “Four species provide 60% of all human food: wheat, rice, maize, and potatoes,” says Herdt. “And we don't have good strategies for increasing the productivity of plants.”

Herdt was one of 50 plant scientists who gathered at the National Academy of Sciences' center here 2 to 5 June for a meeting billed as “Protecting Our Food Supply: The Value of Plant Genome Initiatives.” Although there's widespread enthusiasm for a plant genome project focused on a crop, the meeting revolved around the question of how much emphasis should be put on corn—which has a huge, complex genome. Several researchers believe the project should also intensively study rice, which has a much simpler genome, as well as several other species from the genetically similar grass family.

Unlike many meetings in which scientists dreamily discuss prospects in their field, the talks at this one were integral to the policy-making process. One of the meeting's co-organizers, Ronald L. Phillips of the University of Minnesota, St. Paul, is also chief scientist for the competitive grants program at the U.S. Department of Agriculture (USDA) and chair of a task force preparing a report to Congress about how the government should proceed on such an initiative. “This is the only meeting that I knew was going to be important from the outset,” said Michael Freeling, a University of California, Berkeley, geneticist and the meeting's co-organizer, at the opening session.

## Ear ye, ear ye

As much as scientists would like to see a crop genome project that studies several grasses, organizing it around corn makes a great deal of political sense. Corn is a potent economic force, providing much of the feed for the country's livestock, basic ingredients for everything from drugs to ethanol, and up to $8 billion in exports. And the industry is represented by a formidable political lobby, the National Corn Growers Association. Indeed, the idea for a publicly funded corn genome project began to take root late in 1995, when the growers' association put its muscle behind it. The sales pitch includes a 70-page business plan and a slick video promoting a “national corn genome initiative.”

The growers have won the backing of Senator Christopher “Kit” Bond (R-MO), who chairs the subcommittee that funds the National Science Foundation (NSF). “One of the reasons we're here [in Irvine] is because of the corn growers and the political momentum they created,” said corn geneticist Joachim Messing of Rutgers University in Piscataway, New Jersey. Bond says he is keeping an open mind about the project and that he welcomes input from scientists. “Give us a game plan,” Bond told presidential science adviser Jack Gibbons and NSF director Neal Lane at a 22 April hearing on NSF's 1998 budget request. Bond asked Gibbons to assemble a panel to come up with such a plan, and the White House responded by creating the task force that Phillips chairs. An interim report is due this month, with a final report by December.

Several hundred academic researchers in the United States are studying corn genetics, estimated Ed Coe of the University of Missouri, Columbia, and three companies have projects under way to identify corn genes. But there is scant coordination among these efforts, and much of the data from the private companies are not widely available (see sidebar). The solution, according to the corn growers, is a federally funded, $143 million research program that would stitch together these varied efforts.

Given a chance to put science in the driver's seat, plant geneticists are trying to block out the key issues. The most fundamental question is the same one that faced researchers who launched the Human Genome Project a decade ago: What level of detail is needed? One group would like to fish out from corn and other model plants just the sequences of genomic DNA most likely to code for genes. That's typically a small part of any genome. Another camp says that sequencing the entire genome is the only way to find all the genes and understand their relation to each other. “This is ‘déjà vu all over again,’” says David Cox, who co-directs a center working on the Human Genome Project at Stanford University. The lesson from the human experience, he says, is that “you need both.”

However, the big problem with sequencing the entire corn genome is just that—it's big. Corn has about 3 billion pairs of bases (the building blocks of DNA), which makes it comparable in size to the human genome. It also has an abundance of repeated sequences, which probably contain little useful information. Rice, on the other hand, is the smallest of the crop grasses (see table), with only 430 million base pairs. Moreover, rice has a great deal in common with corn, wheat, oats, barley, and other members of the grass family, says rice researcher Susan McCouch of Cornell University. “Rice is the closest thing to the ancestral version of the grass genome,” says McCouch. “Yet it still embodies the essential set of genes for grasses.”

Gnats and giants. Grain genomes range in size, but all are much larger than the genomes of the nonhuman species being sequenced. [Table omitted.]

Researchers have already found broad similarities between the genes of different members of the grass family. “It's no longer OK for me to be a wheat geneticist and for you to be a rice geneticist,” says Michael Gale, a plant molecular biologist from the John Innes Centre in Norwich, U.K. “We all have to be cereal geneticists.” But he says “it's still an open question” how many genes the different grasses actually share.

Obtaining the entire rice sequence would clarify how much “synteny”—similar genes that appear in similar locations of the genome—exists between the grasses. But how to get the most bang for the bucks that researchers hope Congress will devote to the project is another question. “If we get $100 million, do we suck it all into the rice genome?” asked Jeff Bennetzen, a corn geneticist at Purdue University. “For me, whole-genome sequencing is a low priority.” Timothy Helentjaris of Pioneer Hi-Bred emphasized that rice has little political muscle. “If you got down to details and said 70% [of the budget] is for rice sequencing, you'd set off red flags,” said Helentjaris.

## Cornucopia project

By the meeting's end, participants had cobbled together a plan that seemed to satisfy the majority. Its first element was a proposal for an international rice genome sequencing effort, with the United States putting up half of the money and inviting China and Japan, both of whom are funding rice genome projects, to join. The scientists also suggested building up a database of short sequences, called “expressed sequence tags” (ESTs), that can be used to identify expressed genes. They recommended sequencing 500,000 ESTs for corn and 100,000 each for rice, wheat, oats, barley, and sorghum. The group also called for computer databases to share data as they are generated and stock centers where researchers can freely receive the clones used to study the various plants.

The plan won plaudits from government officials eager to avoid a congressional mandate. “I'm very enthusiastic about what I've heard at this meeting,” said Mary Clutter, head of the biology directorate at the NSF. “That is, focus on the science and let us build a program” to present to Congress. Clutter would like several agencies to participate in a project led by the USDA, with NSF funding a steering committee that would draw up a request for the 1999 fiscal year that begins on 1 October 1998.
That's not soon enough for Kellye Eversole, a lobbyist for the corn growers at the meeting. “We don't want to spend another year on planning,” says Eversole. “We want to see this get off the ground in 6 to 7 months.” But James McLaren of Inverizon International, which drew up the business plan for the corn growers, signaled a willingness to be flexible about the scope of the project. “If you all tell me the best way to improve corn is to sequence rice, I'll support you,” said McLaren. “But you'd better be right, because [the corn growers] are standing out front.” Congress also seems eager to get started. A staffer in Bond's office who asked not to be named told Science that legislators plan to designate $10 million for the effort in two separate parts of the spending bill for the agriculture department. Another earmark might appear in the appropriations bill that funds NSF. But Clutter takes issue with that approach. “Earmarking … is anathema to the Administration,” said Clutter. “It means taking away money from something planned.”

Indeed, says Cliff Gabriel of the White House's Office of Science and Technology Policy, starting a genome project means curbing or ending an existing program. And although he didn't propose any candidates for the chopping block, he told the group that the Administration supports a grain initiative. “The time is right to do something,” he said.

2. PLANT SCIENCE

1. Jon Cohen

IRVINE, CALIFORNIA—Deciphering the genetics of the mustard plant won't by itself meet the world's increasing demand for food. But plant biologist Christopher Somerville thinks that it can teach his colleagues a lot about sharing as they embark on a grain genome project (see main text).

Somerville is part of a coordinated, international effort to decode, or sequence, all of the DNA in the genome of Arabidopsis (Science, 4 October 1996, p. 30). But a slide he presented at a recent meeting on food crops (see below) makes clear that the extent of collaboration has been uneven. While several groups were sharing sequence information fully, he says, Japanese and European researchers had yet to put any sequences into public databases. “From the beginning, we've had a lot of international cooperation, and this is not in that spirit,” says Somerville, who heads the Carnegie Institution of Washington's plant research branch in Stanford, California.

Leaders of both the Japanese and European Arabidopsis projects acknowledge the shortfall, but they say there's good reason. Michael Bevan of Britain's John Innes Centre, who heads the international consortium on the plant that Somerville and others contribute to, notes that European researchers, unlike their U.S. counterparts, don't release data until they have verified its accuracy. “The rapid release of highly accurate, annotated sequence is a goal we all aim to achieve,” says Bevan. Satoshi Tabata, who heads the Japanese project at the Kazusa DNA Research Institute in Chiba, Japan, says money has been a big obstacle to the posting of data. Tabata says the project will begin releasing data next month and “will keep releasing data without delay after that.”

Regardless of which view prevails, the Arabidopsis experience illustrates the obstacles to any effort to coordinate the sharing of genomic data. And even when scientists profess fidelity to the idea of sharing, the interests of industry and nationalism can be overwhelming. Michael Gale, a plant molecular biologist also at the Innes Centre, is much worried by what he says is recent pressure from the European Union (EU) to give industry first crack at any genome data. “The EU wants to protect its databases,” says Gale. “It's something we should all fight very vigorously.”

Rice genome researchers have long complained that Japan's 7-year-old rice genome project has been slow in sharing data (Science, 18 November 1994, p. 1187). While tensions have eased as the Japanese researchers have made their data and materials more widely available, similar concerns are being raised about the availability of data from a Chinese project. In particular, says Susan McCouch of Cornell University, Chinese researchers have had little interaction with foreign colleagues. “We have almost no information out of the Chinese project,” said McCouch. “No one I know has ever seen any of that data.”

Hong Guofan, director of the National Center for Gene Research at the Chinese Academy of Sciences in Shanghai, told Science that he expects the situation to improve shortly. An Internet site will offer data “as soon as the relevant financial arrangement has been settled.” Hong noted that Chinese rice researchers also submitted an abstract of their work at a meeting in South Carolina last October and have a related paper in press. McCouch replies that Hong canceled his talk at the South Carolina meeting and again at a meeting held in San Diego this January. “As far as I am concerned, an abstract is not ‘sharing data,’” says McCouch.

Other genomic data about grains are being held close to the vest because industry is directly funding the work. Three U.S. companies—Pioneer Hi-Bred, Monsanto, and DuPont—have corn genome projects under way. The highest profile belongs to Pioneer's project, a 3-year, $16 million deal with Human Genome Sciences of Rockville, Maryland, to pluck out pieces of corn genes and assemble them in a database.

Setting the rules for access to such databases can be tricky. Pioneer's Steven Briggs says the data, while not in the public domain, are available. “We've granted everyone's request for access,” said Briggs about the project, which began in January 1996. Briggs urged the U.S. government to negotiate with industry to gain access to private databases. James Cook of the U.S. Department of Agriculture had some advice for the feds, too: Be tough with collaborators who withhold information. “If our partners are not holding up their end, pull the plug,” said Cook. For their part, most scientists prefer the carrot to the stick. “A data-release war [would be] a disaster for the genome project,” says Rob Martienssen, who co-leads the Arabidopsis sequencing project at Cold Spring Harbor Laboratory in New York. “We can only encourage them and lead by example.”

3. GENOMICS

# Alzheimer's Maverick Moves to Industry

1. Eliot Marshall

Since 1994, the British drug company Glaxo Wellcome has been buying bits and pieces of U.S. biotech firms as part of a push into genetics. On 17 June, the company announced a surprising choice to direct its growing genetics empire: Allen Roses of Duke University, a prominent neuroscientist and controversial Alzheimer's disease researcher. Roses will run this $47 million directorate from Glaxo's U.S. headquarters in Research Triangle Park, North Carolina.

Roses, an outspoken researcher whose ideas about the genetics of Alzheimer's have drawn a mixed reception from his peers, has been at Duke for 27 years and was named director of the university's Center for Human Genetics in 1996. He says the main reason he took the job with Glaxo is that “We are at a point now in the understanding of Alzheimer's disease [at Duke] that we are targeting” therapeutic products. “Universities don't make drugs and governments don't make drugs,” Roses says, but “Glaxo Wellcome does.” Glaxo Wellcome has funded Roses's work at Duke, and he says his research program will “be accelerated by my being inside” the company. Glaxo has agreed to allow Roses to continue some research at Duke as an adjunct professor.

As director of Glaxo's international genetics program, Roses will command a program based in labs in three countries (the United States, Britain, and Switzerland), comprising 150 researchers. According to Glaxo, the staff is expected to double over the next 18 months, as new departments are created to “ensure that genetics plays its part not only in drug discovery but also in development and in the commercialization of medicines.” Roses's job will be to forge a coherent strategy, linking combinatorial chemistry at Affymax Research Institute of Palo Alto, California (purchased by Glaxo in 1995), gene expression research at Incyte Pharmaceuticals Inc. of Palo Alto (as of last month, a partner of Glaxo's), and clinical genetics studies at Spectra Biomedical of Menlo Park, California (purchased by Glaxo this month).

Roses says one of the reasons the company chose him is that he's not a fence straddler. Indeed, he notes, some of his peers have called him a “street fighter.” For example, he recently spoke out at a Senate subcommittee hearing about what he called lack of vision in the public biomedical funding agencies. He says his grant requests to the National Institutes of Health received poor ratings from “narrowly focused scientists” with “dogmatic belief systems.” His lab would have closed, he added, had it not received funding from Glaxo Wellcome.

Roses may be best known for showing that a protein involved in cholesterol transport (apolipoprotein E) is a factor in Alzheimer's disease. Roses and his colleagues also linked genes that encode variants of the protein (the apoE genes) to varying degrees of risk for Alzheimer's disease. Alison Goate, an Alzheimer's researcher at Washington University in St. Louis, says that while most researchers would agree that the gene known as apoE 4 is “the single most important risk factor” for Alzheimer's disease in the under-70 population, some of Roses's other conclusions are not widely accepted. Most controversial, Goate says, is a theory of Roses and his Duke colleague Warren Strittmatter that “good” versions of the apoE gene (E2 and E3) produce a protein that helps maintain healthy nerve cells, while the “bad” variant (E4) fails to do so, leading to Alzheimer's disease (Science, 19 November 1993, p. 1210). Because some Alzheimer's patients do not have the apoE 4 gene, and some people who have the gene do not have the disease, many researchers doubt that a test for apoE 4 would have value in predicting whether a healthy person will get the disease.

While Roses may seem an iconoclast to some, his colleague Peter St. George-Hyslop of the University of Toronto says he's really “not all that outrageous … he likes to play that angle.” Goate agrees: “He thrives on controversy.” As for Roses's move to Glaxo, St. George-Hyslop comments: “It's good for them, bad for academic science.”

4. NATIONAL INSTITUTES OF HEALTH

# Varmus Grilled Over Breach of Embryo Research Ban

1. Eliot Marshall

Like a brainy kid getting a lesson from the neighborhood enforcers, Harold Varmus, director of the National Institutes of Health (NIH), endured 3 hours as the lone witness before a House investigative panel last week. He was grilled about an NIH-funded researcher, Mark Hughes, accused last year of violating a federal ban on embryo research. The 19 June inquiry before the oversight and investigations subcommittee—the panel once chaired by the fearsome John Dingell (D-MI)—also served as a reprimand of NIH's top brass, seated behind Varmus in the audience, who were faulted by legislators for lax management. The subcommittee chair, Representative Joe Barton (R-TX), called it a “friendly hearing,” but at times the questioning was anything but amiable.

Barton and other panel members, including Representative Ron Klink (D-PA), a sharp interrogator, concluded that Hughes, a molecular geneticist, had violated the ban on embryo research from 1995 to 1996. Hughes had searched for disease-causing mutations in DNA from embryos created by in vitro fertilization (IVF) to determine if they should be implanted in the mother's uterus. Barton said in an opening comment that he was concerned that Hughes had “conducted this prohibited [embryo] research openly on the NIH campus.” Barton implied that the NIH chiefs had looked the other way, allowing Hughes's research to go on “with a wink and a nod”—until it became a burning issue. Barton said, “It appears some at NIH believe they are above the law. They are wrong.”

Varmus confirmed that Hughes had violated the embryo research ban and other rules designed to protect human subjects. But, he insisted, “there was no wink and a nod.” Varmus maintained that he and other NIH leaders had been unaware of Hughes's alleged misconduct because Hughes was careful to hide it. “When Dr. Hughes's surreptitious pursuit of prohibited research was discovered,” Varmus said, “the NIH moved swiftly and decisively to terminate its research relationship with him and to ensure that no other similar violations were occurring.” But in an awkward moment, Varmus disclosed that he had not even learned details of the Hughes scandal until it surfaced in the newspapers in January 1997—3 months after officials of the National Human Genome Research Institute (NHGRI) had severed all ties with Hughes because of his apparent misdeeds. “It was unfortunate,” Varmus said: “My people assumed that someone else had [told me]” about the controversy, but no one had.

Hughes did not testify. But his lawyer, Scott Gant of Crowell & Moring in Washington, D.C., issued a statement in which Hughes claims, “I never intended to violate the ban on embryo research.” In interviews with Science, Hughes has insisted that NIH chiefs did not make it clear that federal rules forbade him to use NIH resources to practice his specialty—preimplantation genetic diagnosis (PGD) of DNA taken from a single cell of a human embryo. “The NIH leadership may believe that they expressly told me in person that my PGD research was barred,” says Hughes, “but that is not my recollection. …” He adds, “I was never given any written statements or policies indicating that I could no longer do my PGD work. I believe there was simply a miscommunication. …”

Hughes's charge of poor communication at NIH was supported by evidence at the 19 June hearing, but not his claim that he didn't know the rules. As Varmus noted, embryo research—particularly the IVF studies Hughes was involved in—had been widely debated. Since 1980, it had been off limits for NIH researchers. Congress cleared the way for funding of embryo research in 1993, but before approving any projects, Varmus, as NIH's new director, sought advice from a panel of experts on what should be allowed. Hughes was a member of the advisory group.

When the panel issued recommendations in late 1994 calling for limited use of embryos in research (Science, 9 December 1994, p. 1634), President Clinton stepped in with a new prohibition: He ruled that federal funding could not be used to create embryos for research. Then in 1995, the Republican Congress ruled that no funds could be used for “research in which a human embryo or embryos are destroyed, discarded, or knowingly subjected to risk of injury or death greater than that allowed under” other laws. NIH interpreted this to mean it could not fund any research on human embryos.

Varmus met Hughes—who had been recruited to the NIH campus in Bethesda, Maryland, in 1994 while the policies were in flux—in the NIH director's office on 12 June 1995 to clarify the rules. Hughes recalls the session as being packed with top officials. Varmus and his staff say that Hughes was told explicitly that he could not use NIH resources for DNA analysis of single cells taken from embryos. Hughes claims that this was not made clear. He says he continued to believe that, while research on embryos was off limits, analysis of DNA from single embryo cells was permitted. Unfortunately for NIH, the meeting produced no written memo to Hughes or NIH staff on the rules.

Hughes continued to analyze DNA—taken from single cells extracted from embryos in IVF clinics—in a lab he had set up at Suburban Hospital near NIH and, in at least one case, on the NIH campus. By chance, the test at NIH went wrong: Hughes had determined that DNA from an embryo did not carry a mutation that causes cystic fibrosis, but when the embryo was implanted and brought to term, Hughes confirms, the child tested positive for the disease. Complaints about research procedures conveyed to NIH by Hughes's postdocs triggered an internal inquiry by NHGRI staff in August 1996. The inquiry found that Hughes had violated the embryo research ban, given NIH fellows unapproved tasks, shipped NIH equipment on loan to an unapproved site, and failed to obtain proper ethics reviews for research protocols. NIH severed ties with Hughes on 21 October 1996.

After conceding that NIH had been slow to ask the Department of Health and Human Services inspector general to look into this case—an investigation that is still under way—Varmus pledged to do a better job of enforcing research limitations in the future. Barton approved, saying, “We can be much more unfriendly” if NIH doesn't show signs of enforcing the rules more strictly.

5. SPACE STATION

# Senate Raps NASA on Cost Overruns

1. Andrew Lawler

A powerful Senate committee chair said last week that he may try to curb cost overruns on the international space station by imposing a new ceiling on annual outlays for the project. Agency managers fear a new cap could trigger yet another redesign, and station supporters worry that any major changes could doom the program, which is scheduled to launch its first components in June 1998. The move comes just 1 month after NASA managers succeeded in calming angry House lawmakers, who felt Russia was reneging on its promised contribution.

The latest threat to the $30 billion project came at an 18 June hearing, when Senate Commerce Committee Chair John McCain (R-AZ) surprised NASA officials by calling for a comprehensive cap to hold the space agency more accountable for cost overruns. However, McCain, who supports the station, did not propose a specific figure. The same day, the General Accounting Office (GAO) warned that U.S. contractor overruns have reached $300 million, triple the level of a year ago. The problem, combined with Russia's tardiness in meeting its commitments, has caused a contingency fund to shrink much faster than planned, GAO said. The bad news for scientists is that NASA has used space shuttle and station science-facility funds to keep the lab on track.

The Administration and Congress informally agreed to a $2.1 billion annual limit on station funding in 1993. But GAO's Thomas Schulz warned Congress that “the station is going to cost more than the [current] cap allows.” A Senate aide was even more critical. “It's not a real cost cap—it's a fiction,” says the aide. “NASA is playing a shell game.”

While admitting that the station's problems are severe, NASA managers say they are reluctant to alter the program. “I have no problems with reviews, but I worry about redesigns,” NASA Administrator Dan Goldin told McCain. “I don't think this program could survive another one.” McCain's proposal would include shuttle and civil service costs associated with the station program, which do not fall within the $2.1 billion figure. The new number—which almost certainly would be higher than the current limit because of the additional elements—could appear in Senate legislation authorizing NASA's 1998 funding, say congressional staffers. NASA managers say they will cooperate, but privately complain that there is too much uncertainty in the program to commit to a specific figure.

The GAO report, meanwhile, had harsh words for Boeing, the station's prime U.S. contractor, whose performance it said “showed signs of deterioration last year, [and] has continued to decline virtually unabated.” The company washed out in its last performance review, scoring zero out of 100 and forfeiting up to $33 million in incentive payments, say NASA officials. In an 18 June letter to NASA space flight chief Wilbur Trafton, Boeing defense and space group president Alan Mulally pledged to tighten oversight of subcontractors and keep to the schedule for major hardware.

The problems with Russia and Boeing leave NASA with little room to maneuver. A reserve fund has already dropped from almost $3 billion last spring to $2.2 billion, the GAO noted, and it is expected to shrink to $1.4 billion by October.

Some are skeptical, however, that McCain's proposal will make a difference. “Congress has voted 16 times over the past 14 years to keep the program and has shelled out $18 billion,” says Marcia Smith, an analyst at the Congressional Research Service. With the first launch only a year away, she says it seems unlikely that Congress would cancel the effort, even if NASA exceeded a revised cap.

6. PACIFIC RIM

# Labs Form Biomedical Network

1. Dennis Normile

TOKYO—Biomedical scientists from more than a dozen institutions throughout the Pacific Rim are meeting here this weekend to launch an association to foster molecular biology and related research throughout the region. If all goes well—and if they can find the money—the researchers hope someday to develop into an organization with its own world-class laboratories.

The idea for the tentatively named International Molecular Biology Network of Asia Pacific Rim grew out of a 4-year-old collaboration between the University of Tokyo's Institute of Medical Science and the Institute for Molecular Biology and Genetics at Seoul National University (SNU). Reciprocal annual meetings have spawned activities that have aided both institutions. When SNU virologist Sunyoung Kim wanted to study the interaction between two proteins, for example, he sent a graduate student to Tokyo to take advantage of an assay developed there. “If we wanted to set up [the experiment] in my lab in Korea, it would take 6 months,” Kim says. “We did it [in Tokyo] in 10 days.” The goal of the new organization, says Ken-ichi Arai, a molecular biologist at the Tokyo institute, is “to extend this type of interaction to other countries in the region.” The organizers received an enthusiastic response from a collection of institutions so loosely defined that it included Israel's Weizmann Institute of Science.
“For those of us in the Asia-Pacific region, it will be very useful to have an organization that will allow us to interact a bit more closely,” says Nick Nicola, assistant director of the Walter and Eliza Hall Institute of Medical Research in Parkville, Australia. Y. H. Tan, director of the Institute of Molecular and Cell Biology at the National University of Singapore, thinks that the effort to enhance collaboration is “a great idea,” although he worries about the organization being dominated by one or two countries. One way to avoid that situation, says microbiologist Jeongbin Yim, director of SNU's institute and a co-organizer of the nascent organization, is to have scientists chart the association's future.

For that reason, the first step is likely to be a series of get-acquainted research conferences, along with an exchange of postdoctoral researchers. Arai also anticipates using a World Wide Web page to help participants keep in touch. His model is the European Molecular Biology Organization, which started as a loose network of European institutions and has since acquired laboratories and a means to support research. Obtaining a dependable source of funding will be a major challenge, however. Yim and Arai tapped corporate contributions and government grants to subsidize the inaugural gathering, but participants probably will have to pay their own way to future meetings. Still, Arai hopes that the positive reaction to the initial meeting will convince governments that supporting this fledgling organization is in their own, long-term interest.

7. U.S.-RUSSIAN COLLABORATIONS

# Cold Wind Blows Through Arctic Climate Project

1. Jeffrey Mervis

Hydrologists Larry Hinzman and Vladimir Romanovsky were packing up to leave Russia's Far East earlier this month at the end of a 4-week research trip when disaster struck.
The trip had already been a bit of an ordeal for the University of Alaska, Fairbanks, researchers, who were taking part in an international project involving Japanese and Russian scientists to monitor ground water and climate inside the Arctic Circle. It took three times longer than they had expected to clear their gear through customs and obtain the necessary permits for their equipment, which included a differential global positioning system (GPS), and a portion of their research had to be abandoned. But the fieldwork itself—staking out a 1-kilometer square of tundra and installing markers and sensors—had gone well. The first inkling that something was amiss came on Friday, 30 May, when customs officials in Yakutsk told them that the Federal Security Bureau, Russia's internal police, was interested in their activities. After being ordered to transfer their equipment to the Geodesic Supervision Commission and spending an anxious weekend worrying about its fate, they learned on Monday that the instrument's ability to obtain precise geographic coordinates made their data a state secret that could not leave the country. There was worse to come on Tuesday: a 3-hour interrogation by the security forces, who assumed the scientists were spies, and the seizure of Hinzman's logbook. On Wednesday, the police claimed his laptop computer and disks. With only 4 days to go on his visa, Hinzman and Romanovsky (a Russian citizen) decided the next day to leave Yakutsk. On 7 June, they arrived back in Fairbanks, safe but badly shaken, minus their data and equipment. U.S. government officials are weighing how to respond to Hinzman's treatment. “We're extremely concerned about this interruption in the free flow of scientific information,” says Douglas Siegel-Causey of the National Science Foundation's (NSF's) Office of Polar Programs, who says Hinzman “did nothing wrong” during his visit. 
But a strong formal complaint or a threat to pull out of joint activities might prejudice future joint ventures in a region that U.S. researchers are eager to study. “We spend a lot of time promoting collaborations, and any problem can put a real damper on things,” says Cathy Campbell of the White House Office of Science and Technology Policy. But “without all the facts, it's hard to know how [Hinzman's case] should be resolved.” NSF, which had awarded Hinzman $242,000 for the 2-year project, would like to know not just the facts of this case but whether it fits into a pattern. Several U.S. scientists working in Russia's Far East in the past few years have reported problems ranging from unilateral restrictions on their activities by provincial authorities to last-minute financial demands. NSF has asked its grantees to submit information on such episodes. If the reports do point to a pattern, officials say, the next step may be to air the problem before a commission, headed by Vice President Al Gore and Russian Prime Minister Viktor Chernomyrdin, formed in 1993 to foster cooperation in science and other areas between the two former Cold War enemies.

One complicating factor is the growing independence of local and regional officials from the central government in Moscow. In June 1995, for example, the governor of the Chukotka region that borders the Bering Sea enacted a new policy requiring permits for all scientific activity in his jurisdiction. Over the next few months, three U.S.-Russian projects were canceled and eight more were delayed as scientists scrambled to meet the new rules. In the Hinzman episode, it is unclear whether the local security police were acting on their own or under orders from Moscow.

Some government officials say the episode is no cause for alarm, however. “From where I sit, it's not a recurring problem,” says Environmental Protection Agency (EPA) official Gary Waxmonsky, who staffs the environmental committee of the Gore-Chernomyrdin Commission. “There are other scientists using GPS equipment in Russia, so I don't see [the security police's reaction] as a generic problem.” He says problems are less likely to occur when federal officials are involved from the start, as with most of the projects EPA sponsors.

U.S. researchers hoping to conduct research in Russia's Far East—an area rich with opportunities that was largely closed to outsiders during the Cold War—are anxious that any official action by the U.S. government not make an already difficult situation worse. “There are still lots of opportunities, but some U.S. scientists have given up because it's too difficult to overcome the bureaucratic obstacles,” says Dale Taylor of the U.S. Geological Survey's biological resources division in Fairbanks and a driving force behind an effort to create a Beringian Heritage National Park that spans the two countries. “And that's very sad for Russia, where good scientists are working under very difficult conditions.” Collaborations, says Taylor, also are an essential ingredient in keeping Russian researchers afloat.

Hinzman's experience has forced glacial geologist Julie Brigham-Grette of the University of Massachusetts, Amherst, to reexamine the balance between her quest for discovery and her concern about safety. Brigham-Grette's students encountered problems last fall during a field expedition in Chukotka, and she had to postpone a project scheduled for this spring after it became snarled in local politics. “There's an unexplored meteorite crater lake formed 4 million years ago that may have a continuous climate record, and I want to get a core sample,” she says about the Russian site. “It could be a major research project that would also contribute a lot to the local economy. … But I don't want my family to have to worry that I might be arrested just for doing my work.”

Hinzman is also taking stock. “When I got home, I said I'd never return. But last night, my wife bet me $1000 that I'd be back in Russia within a year.” He paused, turning the idea over in his head. “There's just so much science that needs to be done, and so much to learn.”

8. AUSTRALIAN PARTNERSHIPS

# Centers Fear Self-Sufficiency Is Prelude to Government Cuts

1. Elizabeth Finkel

1. Elizabeth Finkel is a free-lance writer in Melbourne.

MELBOURNE—Success breeds success. At least, that's the way it's supposed to work. But try telling that to the managers of Australia's Cooperative Research Centres (CRCs), set up in 1991 to unite government, industry, and academic researchers in an effort to boost the nation's high-tech economy. The most successful centers have attracted industrial support and laid the groundwork for new products, but they fear that this is only encouraging the government to cut their funding in the hope that industry will pick up a larger share of the bill. “It's a Catch-22 situation,” says Nick Nicola, former director of the CRC for Cellular Growth Factors in Melbourne. “If your CRC looked like it could be self-sustaining, they'd say there was no need for government funds. If it didn't look like it could be self-sustaining, they'd say you didn't deserve [the money].” CRC directors are running scared in part because the government recently announced that it would trim the program's overall operating budget by $10 million over 2 years and begin a yearlong review aimed at making centers more self-sufficient and more attuned to commercial applications. The program has also just gone through a bruising competition involving 13 of the original 15 centers and 23 new applicants; only six of the existing centers won full funding for another 7 years, and some that had won high marks from outside reviewers either lost out or had their government funds trimmed sharply.

## A timely union

One trick not even the biggest MRI machine can presently pull off on its own is following precisely when brain areas become active during a cognitive process. That's because the neurons themselves respond within 10 milliseconds of a triggering stimulus, while the blood-flow changes measured by fMRI or PET take several seconds to develop. This limitation has been a great frustration for neuroimagers. “Timing is everything in the brain,” says UC Davis's Mangun. Without timing information, researchers can only guess about how different brain areas build on each other's work as they perform a task.

To remedy this problem, Mangun's group and others have recently arranged a marriage of convenience between fMRI and PET imaging techniques and a pair of brain-recording methods whose forte is timing: EEG, which measures the electrical fields produced by brain neuron activity, and magnetoencephalography (MEG), which measures neurally generated magnetic fields. Both methods can take readings at more than 100 points on the scalp and can track how neural activity changes with time along the surface of the head. But they have a big weakness: They can't pinpoint the source of the electromagnetic signal.

Mathematical equations can point to brain areas where the activity might be, but the equations yield multiple solutions, with no way to tell which one is right. But “if you can calculate a [candidate] position, and then show that neuroimaging shows that there are active cells in that particular place, then that increases your confidence that you've got it right,” says EEG researcher Steven Hillyard of UC San Diego.
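The ambiguity Hillyard describes can be seen in a toy calculation. The sketch below (all sizes and values assumed for illustration, not drawn from any actual EEG model) shows why the source-localization equations have multiple solutions: with far fewer sensors than candidate brain sources, very different source patterns can produce exactly the same scalp recording.

```python
import numpy as np

# Toy numerical illustration (sizes and matrix are assumed) of the
# non-uniqueness problem: with far fewer sensors than candidate sources,
# very different source patterns produce identical measurements.
rng = np.random.default_rng(42)
L = rng.standard_normal((10, 50))      # 10 scalp sensors, 50 brain sources

j1 = rng.standard_normal(50)           # one possible pattern of activity
null_dirs = np.linalg.svd(L)[2][10:]   # source directions the sensors can't see
j2 = j1 + 5.0 * null_dirs[0]           # a substantially different pattern...

m1, m2 = L @ j1, L @ j2                # ...yet the same scalp recording
print(np.allclose(m1, m2))
```

Any source pattern shifted along a "silent" direction of the lead field leaves the sensor data untouched, which is why independent evidence from neuroimaging is needed to pick among the candidate solutions.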

Hans-Jochen Heinze at Otto von Guericke University in Magdeburg, Germany, along with Mangun and Hillyard, did just that with a cognitive task in 1994. They presented subjects with pairs of symbols in both their right and left visual fields and directed their attention to either the right or left field by asking them to judge whether the symbols appearing there were the same or different. Earlier work in Hillyard's lab had shown that the EEG wave evoked by the symbols differs, depending on whether the subject is paying attention to them or not: A bump in the wave beginning about 80 milliseconds after the symbols were flashed, known as the P1 component, gets bigger when the subject pays attention.

To find the source of the activity that creates P1, the Heinze team had the subjects do the task once while the researchers took EEG recordings, and again in the PET scanner. The PET data showed two areas in the so-called “extrastriate” portion of the visual cortex that could be the source of P1, and the team then returned to the model to see whether these spots would work as possible sources that would explain the EEG data. Those sites, says Mangun, explained the data “very, very well.” Mangun has since shown that making the perceptual task easier selectively reduces both P1 and the attention-associated extrastriate activation seen in PET, further support that the two techniques are measuring the same brain function.

That experiment showed that imaging and electromagnetic techniques can work together, says Harvard's Dale. But the math used by the Heinze team could consider only two or three simultaneously active brain areas as possible sources of the EEG signal. And while that was fine in the case they had chosen, Dale points out that in most cognitive processes, many brain areas are activated. Dale is one of several researchers deriving a new generation of mathematical models that can pose thousands of sites of brain activity as potential sources and contain other improvements as well.

Like the model used by the Heinze group, Dale's model begins with electromagnetic data recorded on the scalp and predicts which configuration of active areas in the brain could best explain that activity. But instead of relying just on EEG recordings, it can use MEG and EEG data taken simultaneously. And while older methods model the brain as a sphere inside the skull, Dale's limits the potential sources of activity to the cerebral cortex. Moreover, because each brain is unique in how its cortex is folded, Dale uses a structural MR image to tailor the calculations to the individual brain.

The result is a localized, though fuzzy, estimate of combined activity in the brain that could produce the EEG and MEG signals at any point in time. Dale then takes fMRI data on brain activity during an identical experimental trial and uses those data to “weight” the solutions by having the equations favor areas shown to be active by the fMRI. The end result is a set of crisp images with the spatial resolution of fMRI that show changes in brain activity on a time scale of tens of milliseconds. “You can make a movie animating this,” Dale says.
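The weighting idea can be sketched numerically. The code below is an illustrative weighted minimum-norm inverse, not Dale's actual published equations; the lead field, source counts, and weight values are all assumed for demonstration.

```python
import numpy as np

# Illustrative sketch of fMRI-weighted source estimation (a weighted
# minimum-norm inverse; all sizes and values here are assumed). Sensors
# record a linear mixture m = L @ j of many candidate cortical sources j;
# fMRI activations up-weight the sources the hemodynamic data favors.
rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 1000

L = rng.standard_normal((n_sensors, n_sources))   # toy "lead field"
true_j = np.zeros(n_sources)
true_j[[40, 500]] = 1.0                           # two truly active sites
m = L @ true_j                                    # simulated EEG/MEG data

weights = np.full(n_sources, 0.1)                 # low prior weight...
weights[[40, 500, 700]] = 1.0                     # ...except where fMRI lights up
W = np.diag(weights)
lam = 1e-2                                        # regularization strength

# Weighted minimum-norm estimate: j_hat = W L^T (L W L^T + lam I)^(-1) m
j_hat = W @ L.T @ np.linalg.solve(L @ W @ L.T + lam * np.eye(n_sensors), m)
# Plain (unweighted) minimum-norm estimate for comparison
j_mn = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), m)

def energy_at_true_sites(j):
    """Fraction of recovered energy at the two truly active sources."""
    return (j[40]**2 + j[500]**2) / np.sum(j**2)

print(f"unweighted: {energy_at_true_sites(j_mn):.2f}  "
      f"weighted: {energy_at_true_sites(j_hat):.2f}")
```

The weighted estimate concentrates the recovered activity at the fMRI-supported sites instead of smearing it across all 1000 candidates, which is the intuition behind using hemodynamic data to sharpen the electromagnetic inverse solution.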

Dale, Tootell, and Jack Belliveau, also of Harvard, have validated the technique by using it to look at the timing of the brain's response to a moving image, and Dale and Eric Halgren of UC Los Angeles have studied the time course with which the brain responds to novel versus repeated words. “It is an important wedding of techniques,” says Washington University's Raichle and is likely to become a staple of the field for researchers who want to know the pathways information takes in the course of a thinking process.

## Future frontiers

Millisecond movies of neural activity using fMRI, EEG, and MEG might seem visionary enough, but some in the field think such wonders will someday be possible with a single technique—either a new form of fMRI or a much less expensive alternative: imaging with ordinary light beams.

Keith Thulborn's team at the University of Pittsburgh Medical Center is working to devise a way to get images with real-time resolution information directly from fMRI, by measuring changes in the sodium magnetic resonance signal. “Sodium imaging may be a very direct way of looking at neuronal activity,” says Thulborn, because sodium ions flow into neurons when they fire. The passage of ions into the neurons changes sodium's magnetic resonance properties in a way that should be detectable by MRI, Thulborn says.

The imaging center at Pittsburgh already uses sodium imaging clinically to assess brain damage in patients with strokes, epilepsy, and tumors. Because the sodium signals are weak, it takes 10 minutes to create a reliable three-dimensional image, says Thulborn. But because MR images are built up from many individual snapshots, Thulborn says it would be possible to construct images that capture the immediate neural response by taking repeated snapshots timed at a very short interval after a repeated stimulus. Thulborn and a team of engineers and physicists have been working for 6 years to improve the MRI machine's ability to detect sodium. Their work has reduced the detection time from 45 minutes to 10, while increasing spatial resolution an order of magnitude, and they plan to test the experimental approach on a 3-T machine within the next few months.
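The snapshot-averaging strategy rests on a simple statistical fact, sketched below with illustrative numbers (this is not Thulborn's actual pulse sequence): averaging many snapshots taken at the same fixed delay after a repeated stimulus shrinks the noise while the stimulus-locked signal survives.

```python
import random, statistics

# Sketch of the event-locked averaging idea (all values assumed): a weak
# signal change buried in noise emerges after averaging many snapshots
# taken at the same fixed delay after a repeated stimulus.
random.seed(1)
TRUE_RESPONSE = 0.1      # weak sodium-signal change, arbitrary units
NOISE_SD = 1.0           # per-snapshot noise, ten times the signal

def snapshot():
    """One noisy measurement taken at the fixed post-stimulus delay."""
    return TRUE_RESPONSE + random.gauss(0, NOISE_SD)

for n in (1, 100, 10000):
    estimate = statistics.fmean(snapshot() for _ in range(n))
    print(f"{n:6d} snapshots -> estimated response {estimate:+.3f}")
# The noise on the average shrinks as 1/sqrt(n), so enough repetitions
# pull the 0.1-unit response out of unit-level noise.
```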

Still other researchers are hoping to image neural activity directly without the $1-million-per-tesla price tag of fMRI. Their preferred medium: light. Studies in living brain slices have shown that the light-scattering properties of neurons change when they become active. Cognitive neuroscientists Gabriele Gratton and Monica Fabiani of the University of Missouri, Columbia, lead one of several labs trying to take advantage of that property by using near-infrared light from a fiber-optic source to image activity changes in living human brains. Their system, which they call EROS, for event-related optical signals, has a bargain-basement cost of less than $50,000.

When a fiber-optic source placed on the scalp shines light into the head, the light penetrates the skull and is scattered by brain tissues before some of it reemerges. EROS uses light sensors placed on the scalp just centimeters from the source to measure the time the light takes to emerge. Because that time is influenced by light scattering, which in turn is affected by neural activity, the system can detect changes induced by an experimental task. And it does it with a temporal resolution similar to that of an EEG. EROS can also locate the source of the scattering changes, based on detector placement and timing of the light's emergence, with spatial resolution of less than a centimeter. Using EROS, Gratton repeated the experiment by which Heinze, Mangun, and Hillyard first showed the power of combining PET with EEG. EROS produced the same results, localizing the effects of attention to the extrastriate cortex.
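A back-of-envelope calculation shows the time scales involved. The values below (refractive index, pathlength factor, source-detector separation) are typical literature numbers assumed for illustration, not specifications of Gratton and Fabiani's system.

```python
# Back-of-envelope sketch (typical values, assumed) of the timing EROS
# measures: scattering stretches a photon's path well beyond the
# straight-line source-detector separation, and tissue slows light to
# c/n, so mean times of flight land in the sub-nanosecond range.
C = 3.0e8           # speed of light in vacuum, m/s
N_TISSUE = 1.4      # refractive index of tissue (assumed)
DPF = 6.0           # differential pathlength factor from scattering (assumed)

separation = 0.03   # 3 cm between source and detector on the scalp
path = DPF * separation                 # effective photon path, ~18 cm
t = path * N_TISSUE / C                 # mean time of flight, seconds
print(f"mean photon time of flight: {t * 1e12:.0f} ps")
# Activity-dependent scattering shifts this sub-nanosecond figure by
# small amounts that fast timing electronics can resolve.
```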

One limitation of EROS is that the light can only penetrate several centimeters into the head, and so the technique is unable to register activity from deep brain areas. Indeed, some researchers worry that it will not reliably image parts of the cortex that are buried in folds. “If it is limited to the superficial cortex, it will never replace fMRI,” says cognitive neuroscientist Steven Luck, of the University of Iowa, Iowa City. But Gratton and Fabiani say they have already imaged cortical areas deep in a fold and have ideas about how to reach even deeper regions.

“My eye is on optical techniques in terms of the next wave,” says neuroimager Bruce Rosen, of the magnetic resonance imaging center at Harvard Medical School. “In 10 years, I wonder if we will all be doing optical imaging and throwing away our magnets.” While most brain imagers might think that unlikely, this is a field that has learned never to say never.

• * “Neuroimaging of Human Brain Function,” The Arnold and Mabel Beckman Center of the National Academy of Sciences, Irvine, California, 29-31 May.

18. # New Eyes on Hidden Worlds

1. Tim Appenzeller,
2. Colin Norman

Picture this: the inner workings of a living cell, or the surface of another star. Once, scientists could see them only in their mind's eye, but now these and other unseen realms no longer need to be imagined. Thanks to advances in optics and electronics—among them lasers, ultrasensitive charge-coupled device detectors, and computerized control and image-processing systems—researchers in many fields are enjoying what amounts to a golden age of imaging. In this Special News Report, Science takes a look at some of the imaging technologies that are opening scientists' eyes to new worlds or letting them see familiar worlds in a new light.

The scanning tunneling microscope and its offspring are revealing atoms, molecules, cell-membrane channels, and other minuscule objects as individuals, each in its own environment. New microscopy techniques are allowing researchers to play peeping Tom on the inner lives of cells, watching organelles, genes, and even individual proteins go about their business. Molecules, too, need no longer be seen in still life, as demanding new techniques turn x-ray structures into movies that capture proteins in the process of changing shape. Some of the same technologies that are opening new views of the very small are also revolutionizing astronomers' views of the very large, as lasers, computer controls, and other wizardry multiply the seeing power of the world's largest telescopes.

Imaging is so pervasive in science that no single news report could do it justice. The rest of this issue of Science is testimony to the pace of progress: It contains no fewer than six Reports on advances in imaging or results obtained by new imaging techniques.

• “Optical biopsy” for high-resolution medical imaging (p. 2037);

• Magnetic force microscopy for studying the magnetic structure of materials that change their conductivity in a magnetic field (p. 2006);

• Near-field microwave microscopy of an electrically polarizable material (p. 2004);

• Fluorescence microscopy of the dynamics of stretched DNA molecules (p. 2016);

• Confocal microscopy combined with magnetic resonance to map light-emitting defects in diamond (p. 2012); and

• Multiphoton imaging to track communication between plant-cell organelles (p. 2039).

This issue also includes a Research News story on new directions in brain imaging, an area where feats of seeing the unseeable—human thought processes—are becoming almost routine.

19. ATOMIC IMAGING

# Candid Cameras for the Nanoworld

1. Ivan Amato

A 16-year-old atomic microscope has spawned a family of instruments that allow biologists, materials scientists, and chemists to see their fields from the bottom up

In 1982, a soft-spoken electrical engineer at Stanford University named Calvin Quate was flying 10 kilometers above the broad-brushstroke landscape on his way to a meeting in London. He was catching up on his technical reading when he came across a short article in the April issue of Physics Today. It is no exaggeration to say that what he read changed his life.

“I was going to stay for a week in London,” he recalls, “but when I got there, I immediately flew to Zurich.” He felt compelled to meet the protagonists of the article—Heinrich Rohrer and Gerd Binnig—at IBM's Zurich Research Center in Rüschlikon. Quate knew no one at the 200-researcher facility, but he managed to track down Christoph Gerber, a laboratory technician working for Rohrer and Binnig. “He showed me a notebook with all of these traces,” Quate says. “I was absolutely astonished. I knew that something big was happening.”

The traces were the first images from a new instrument called the scanning tunneling microscope (STM). They resembled topographic maps of gently wavelike territory, but the terrain was nothing like the vast expanses Quate had seen from his airplane; indeed, it would hardly have covered a protozoan's underbelly. The subtle rises and falls of the traces corresponded to atomic-scale steps and corrugations on the surfaces of gold and other metals. Isolated bumps might even have marked individual atoms that had settled onto the surfaces, although Binnig and Rohrer weren't quite ready to claim they were seeing atoms.

Atomic resolution or not, Quate suspected that the STM, which builds an image by scanning an atomically sharp probe across a surface, was about to change researchers' perspective on materials as dramatically as a passenger's vantage changes when an airplane lands. Instead of getting broad-brush pictures of atomic landscapes—the best that techniques such as neutron diffraction, x-ray diffraction, and electron microscopy could do—Quate guessed that researchers using the STM might be able to crawl through every atomic nook and cranny.

Any uncertainties were dispelled a year later when Binnig, Rohrer, Gerber, and their colleague Edmund Weibel published a now-classic Physical Review Letters paper that included an image showing a repeating pattern of atoms on what is known as the silicon 7-by-7 surface. As far back as 1957, scientists had used electron diffraction to probe this particular crystal surface, but the results were vague enough to be consistent with several different models for how the surface atoms are arranged. The STM image almost single-handedly settled the issue, dramatically illustrating the power of the new tool. Binnig was so moved, he says, that “it was a little too much to feel joy.”

“That 7-by-7 image of silicon was the revolution,” Quate says. Since then, the revolution has continued without letup. In 1986, Rohrer and Binnig shared the Nobel Prize for their invention. Quate's own laboratory back at Stanford soon became a key stage for the scientific and technological drama that opened at Rüschlikon. And for researchers in disciplines from surface science to cell biology, the STM and its progeny, collectively known as scanning probe microscopes (SPMs), are ending the need to infer the structure and behavior of ultratiny entities—atoms, molecules, unit cells of crystals, pores and channels in cell membranes—from measurements made on crowds. “Quite often, macroscopic measurements average over ensembles so you don't learn much about physics at small scales,” says Donald Eigler, another of IBM's celebrity STM scientists. Now researchers can get downright personal with the tenants of the nanoworld, monitoring individual atoms or molecules and seeing how their particular environments affect them.

Just as light microscopes can see different properties of a material depending on how it is stained, SPMs can map such properties as magnetism, surface roughness, and electrical potential, depending on the kind of probe they are equipped with. They even allow researchers to reach into the nanoworld and change it, turning nanoscience into nanotechnology. Says Mark Ketchen, director of physical sciences at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York, “Some 50 years from now, people will look back and say this was a major turning point.”

## Fine discrimination

Finesse was what made the STM possible. At the heart of today's version of the device is an electrically conductive tip (ideally tapered to a point a single atom across). When the tip is maneuvered to within a few atom-widths of a conductive surface, a quantum-mechanical phenomenon known as electron tunneling begins to occur. At these minuscule separations, the quantum-mechanically defined region of space that should contain an electron from an atom on the tip overlaps with the similarly defined region for electrons in the sample atoms just beneath the tip. This overlap behaves as a conduit through which electrons from the tip can tunnel into the sample. Because tiny changes in the overlap drastically speed or slow the electronic traffic through the conduit, the technique is exquisitely sensitive to surface relief.
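The exponential dependence behind that sensitivity can be put in numbers. The sketch below uses standard textbook values (a 4-eV work function is an assumption, typical of metals) to show how sharply the current responds to a 1-angstrom change in gap.

```python
import math

# Back-of-envelope sketch (typical textbook values, assumed) of why the
# tunneling current is so sensitive to the tip-sample gap d: the current
# falls off as I ~ exp(-2*kappa*d), with kappa set by the work function.
HBAR = 1.0545718e-34    # reduced Planck constant, J*s
M_E = 9.1093837e-31     # electron mass, kg
EV = 1.602176634e-19    # joules per electron volt

def tunneling_current(gap_m, phi_ev=4.0):
    """Relative tunneling current for a gap in meters, work function in eV."""
    kappa = math.sqrt(2 * M_E * phi_ev * EV) / HBAR   # decay constant, 1/m
    return math.exp(-2 * kappa * gap_m)

# Pulling the tip back by just 1 angstrom (1e-10 m) cuts the current
# roughly eightfold, which is why single-atom bumps stand out so clearly.
ratio = tunneling_current(5e-10) / tunneling_current(6e-10)
print(f"current drops by a factor of {ratio:.1f} per angstrom")
```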

To exploit these effects, Binnig and Rohrer developed a precise and stable control system for the stylus. Piezoelectric elements, based on ceramics that change size ever so slightly in response to an electric field, nudge the tip this way and that in increments of only a fraction of an atom-width. The piezoelectrics, in turn, are controlled by an electronic feedback system that lowers or raises the tip to maintain a constant tunneling current. The result is that the tip ends up tracing over a sample's surface contours at a constant, tiny altitude. A decade earlier, Russell Young, an engineer at the National Bureau of Standards—as the National Institute of Standards and Technology (NIST) was then known—had come close to inventing the STM when he devised the same basic scheme but fell short of perfecting a system for stabilizing the tip (see sidebar on p. 1984).
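The constant-current scheme can be simulated in a few lines. The toy loop below (all dynamics, gains, and corrugation values assumed for illustration, not taken from any real instrument) shows the feedback idea: when the current rises above the setpoint, the tip is lifted, so its height record traces the surface.

```python
import math

# Toy simulation (all values assumed) of the STM feedback idea: piezo
# elements raise or lower the tip so the tunneling current stays at a
# setpoint, making the tip trace the surface at a constant, tiny altitude.
KAPPA = 1.0e10        # tunneling decay constant, 1/m (typical value)
REF_GAP = 5e-10       # gap at which the current equals the setpoint
SETPOINT = 1.0        # target current, arbitrary units

def current(gap_m):
    """Tunneling current relative to the setpoint gap."""
    return math.exp(-2 * KAPPA * (gap_m - REF_GAP))

def surface(x_m):
    """Toy atomic corrugation: 0.2-angstrom ripples every 3 angstroms."""
    return 0.2e-10 * math.sin(2 * math.pi * x_m / 3e-10)

tip_z = REF_GAP       # tip height above the mean surface plane
gain = 2e-11          # feedback gain, tuned by hand for this toy loop
trace = []
for step in range(400):
    x = step * 0.05e-10                  # scan position
    error = current(tip_z - surface(x)) - SETPOINT
    tip_z += gain * error                # too much current -> lift the tip
    trace.append(tip_z)

# After settling, tip_z rides the corrugation at constant altitude; the
# record of tip heights is the topographic image the STM produces.
```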

By scanning the tip in a tight, back-and-forth pattern while the feedback system maintains a constant current, the system can create arresting topographic maps of surfacescapes at resolutions all the way down to the level of individual atoms. Repeated images can be assembled into videos that show freshly exposed metal surfaces, in search of equilibrium, rearranging themselves into atomic steps or virus-sized islands. The STM can even identify different kinds of atoms by their electronic idiosyncrasies.

This world didn't remain an exclusive preserve for materials scientists for long. The STM can image only electrically conductive samples, so its first devotees churned out surface images of gold, platinum, silicon, graphite, gallium arsenide, and pretty much any other metal, semiconductor, or superconductor they could lay their hands on. But that was not enough for Binnig, who says, “I always was a bit disappointed that this instrument could not work on insulators,” which include the surfaces and molecules of the biological world.

Binnig brought this dissatisfaction with him when he and Gerber took a sabbatical from IBM's Zurich facility in 1985 to work with Quate at Stanford and at IBM's Almaden Research Center in nearby San Jose. There, Binnig says, he intuited the design of what became the most influential offspring of the STM, the atomic force microscope (AFM). The inspiration came one day when he was staring up from a sofa in his apartment at the stippled stucco ceiling and envisioned a fine stylus tracing its bumps. “I could have drawn out a design [of the AFM] with a pencil,” Binnig recalls.

Instead of a conductive tip interacting with a sample by way of tunneling electrons, the tip of Binnig's new instrument would be attached to a tiny cantilever—a wee diving board—that would bend and flex in response to the minute mechanical, electrostatic, and other atomic- or molecular-scale forces it encountered as it was scanned over a surface. A feedback system like the STM's would ensure that the force on the cantilever remained constant. Once Binnig and his colleagues had defined the principle of the AFM, it took only a few days for Gerber to build the first prototype. The AFM has since become the most popular type of scanning probe microscope.

## Opening the floodgates

The AFM was just one of the STM's ever-enlarging brood of children, each specialized for mapping a different property across submicroscopic landscapes. Even before Binnig invented the AFM, Dieter Pohl, also at IBM Zurich, introduced the near-field scanning optical microscope, which replaces the probe with a special light-gathering optical fiber that can image surfaces optically, at resolutions smaller than the wavelength of light illuminating the sample.

The pace of innovation only accelerated after the AFM appeared. For one thing, the AFM's talent for imaging nonconductive materials served as a welcome mat to enormous research communities in areas such as polymer chemistry and molecular biology. For another, researchers quickly found that by modifying the AFM tip, say with a magnetic coating or a hydrophobic chemical coating, the basic instrument could be specialized for measuring almost any physical property on unprecedentedly fine scales.

Among its instrumental siblings are the magnetic force microscope, the frictional force microscope, the electrostatic force microscope, the scanning ion-conductance microscope, the scanning chemical-potential microscope, and the scanning plasmon near-field microscope, to name only some of them. “New ideas keep coming out—new things to scan, new detectors, new innovative ways of using these techniques,” remarks IBM's Ketchen, who is part of a team developing yet another variation on the original: the scanning SQUID (superconducting quantum interference device) microscope, for imaging the minuscule magnetic fields that are the staple of data-storage technologies.

As a result, scanning probe microscopes are turning up in some surprising settings. Consider work by the husband-and-wife team of Paul and Helen Hansma, of the University of California in Santa Barbara, on biological molecules in solution. In 1989, they and six colleagues reported the first AFM-made video of a biological polymerization process. By making repeated scans and stitching them together, they visualized the linking and aggregation of fibrin, a blood-clotting protein, first into strands and ultimately into a netlike molecular fabric.

More recently, the Hansmas have spied on the progressive slicing and dicing of DNA molecules by the DNA-digesting enzyme DNaseI. They have also watched an RNA polymerase molecule bind to and travel along a DNA molecule, reading the code of its nucleotide bases one by one and transcribing them into messenger RNA. “One DNA polymerase molecule can replicate DNA at a rate hundreds of times that at which everyone in the genome project can decode DNA,” says Paul Hansma, who dreams of using scanning probe microscopes “to learn the operating mechanism of these incredible little machines.”

And last February in the Biophysical Journal, Helen Hansma, graduate student Daniel E. Laney, and two other colleagues reported that they had mapped not only the shapes but also the “feel” of individual synaptic vesicles—tiny sacs of neurotransmitter that release their cargo when one neuron communicates with its neighbor. By tapping the probe of an AFM across vesicles from the electric organ of the marine ray (Torpedo californica), Hansma and her colleagues learned that the vesicles became turgid in the presence of calcium ions—the signal that triggers the vesicles to burst and release neurotransmitter. The feat could open the way for directly monitoring the biophysical changes that lead to the release of neurotransmitter, says Hansma.

Chemist Robert Dunn of the University of Kansas and his colleagues have been probing the workings of other cellular machinery: the channels and pores that riddle cellular membranes. These channels, made up of small bundles of molecules, open and close like locks on a canal to regulate the flow of substances into or out of the cytoplasm and nucleus. In one set of experiments, the Kansas researchers placed an AFM probe directly above so-called nuclear pore complexes in membranes from a frog egg cell to monitor their workings. “In the open state you see a channel, but after triggering the pore [with calcium ions] you see something like a piston stick up and block the central part of the channel,” Dunn says. That's a mechanism some biophysicists had speculated about, but Dunn's images lend it direct support.

AFMs have made a foray into chemistry in the hands of chemist Charles Lieber of Harvard. He uses them to examine the molecular basis of phenomena that include adhesion, lubrication, and wear. “People were always making difficult arguments about what was going on [on the molecular level] when you couldn't see it,” Lieber notes. “By attaching molecules to the force microscope tip, we can get chemical sensitivity” and tease apart those molecular interactions, he says. He and his co-workers scan the chemically coated AFM tip over surfaces micropatterned with different chemical toppings. They measure the forces the coated tip “feels” as it treks through chemically different neighborhoods, experiencing various amounts of hydrogen bonding and other chemical interplay.

One recent STM feat builds on its first: that famous silicon 7-by-7 surface. Phaedon Avouris and In-Whan Lyo at IBM's T. J. Watson Research Center, with collaborator Yukio Hasegawa of the Institute for Materials Research at Tohoku University in Japan, used an STM to trace the movement of electrons through the silicon surface's “dangling bonds,” where an atom isn't fully bound to its neighbors. By examining subtle variations in the tunneling current depending on whether the dangling bonds are occupied or not, the group was able to monitor the electron traffic through the bonds—information that could be critical for designers of ultrasmall circuits.
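The sensitivity behind such measurements comes from the exponential dependence of the tunneling current on the tip-sample gap. A minimal numerical sketch, assuming a typical metal work function of about 4.5 eV (the article gives no figures):

```python
import math

# Tunneling current falls off as I ~ exp(-2*kappa*d), where kappa is set by
# the barrier height. The work function below is an assumed typical value,
# not a figure from the article.

hbar = 1.0546e-34            # reduced Planck constant, J*s
m_e = 9.109e-31              # electron mass, kg
phi_J = 4.5 * 1.602e-19      # assumed ~4.5 eV work function, in joules

kappa = math.sqrt(2 * m_e * phi_J) / hbar   # inverse decay length, 1/m

# Current change when the gap shrinks by 0.1 nm (roughly one atomic step):
ratio = math.exp(2 * kappa * 0.1e-9)
print(f"0.1 nm closer -> current grows ~{ratio:.0f}x")
```

A tenth of a nanometer changes the current nearly tenfold, which is why an STM can distinguish an occupied dangling bond from an empty one.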

## Remaking the nanoworld

Having opened these windows on the nanoworld, researchers are now reaching through them to modify that world to their liking, turning atoms into minuscule switches, batteries, and data-storage sites. In this effort, SPMs serve as both measuring devices and tools.

Two examples can be found in adjacent laboratories at NIST. One goes by the name Molecular Measuring Machine, or M3, the handiwork of Clayton Teague of the precision engineering group. This onion-structured instrument, whose concentric layers isolate the STM nestled in the middle from physical perturbations, relies on a laser-based tip navigation system to measure the distance between any two points on a surface area centimeters across to within 1 billionth of a meter, or just a few atomic widths. That's a feat akin to locating two widely separated grains of sand in a 2500-square-kilometer patch of desert and then measuring the distance between them to within 1 millimeter. Ordinary STMs, in contrast, are unable to map sites within such large areas.
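The desert analogy holds up to a quick back-of-the-envelope check; the 5-centimeter scan span below is an assumption (the article says only "centimeters across"):

```python
# Relative precision of M3 vs. the article's desert analogy.

# M3: 1 nanometer uncertainty over a surface assumed ~5 cm across.
m3_relative = 1e-9 / 5e-2

# Analogy: 1 mm uncertainty between two sand grains in a 2500 km^2 patch.
desert_side_m = (2500e6) ** 0.5   # 2500 km^2 = 2.5e9 m^2 -> 50 km on a side
analogy_relative = 1e-3 / desert_side_m

print(f"M3 relative precision:     {m3_relative:.1e}")   # -> 2.0e-08
print(f"Desert analogy equivalent: {analogy_relative:.1e}")  # -> 2.0e-08
```

Both work out to about two parts in a hundred million.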

This ability to survey the nanoworld should aid efforts to engineer it by planting or dislodging atoms with the tip of an STM or AFM. This interventionist brand of microscopy began in earnest with Eigler and his colleagues at IBM Almaden back in 1990. They made headlines by maneuvering 35 xenon atoms into Big Blue's famous company acronym atop a surface of crystalline nickel. Since then, Eigler and others have built more scientifically interesting structures; these include rings of iron atoms that behave like quantum corrals, constraining electronic motions on surfaces in accordance with the ways of quantum mechanics.

Now, across the hall from M3 in an acoustically isolated room with a massive metal door, Joseph Stroscio and his colleagues are trying to take the next step in building atom-scale test-beds for quantum mechanics. Their instrument, not yet completed, should be able to modify and scan samples under ultrahigh vacuums, at 2 degrees above absolute zero, and in extreme magnetic fields. Stroscio and his colleagues expect that these experimental conditions will enable them to probe quantum effects to which standard STMs operating under less demanding conditions are blind. “We hope to make [ultrasmall] structures and then ask questions like … what energy does it take to put an electron into the structure,” Stroscio says. The answer could feed into the design of prototype data-storage devices that rely on the presence or absence of a single electron.

Costly, specialized devices such as Teague's and Stroscio's, however, are only a small part of what scanning probe microscopy has become since its invention. Thanks to large-scale production at companies including Digital Instruments, Park Scientific, and Topometrix, scanning probe microscopes can cost $50,000 or even less—complete with a computer package for creating wall-poster images of your favorite sample. Thousands of the instruments are now touring atomic landscapes. After opening up the nanoworld, scanning probe microscopes have proceeded to democratize it. Without these tools, says Stroscio, “it would be like the Dark Ages.”

Ivan Amato's book on materials science, Stuff, was just published by Basic Books.

## RELATED WEB SITES:

Directory of scanning probe microscopy links, including companies and research groups

Images and videos from IBM's Almaden Research Center

Links to international scanning tunneling microscopy research groups

Links to companies and international research groups

20. ATOMIC IMAGING

# The Man Who Almost Saw Atoms

1. Ivan Amato

Heinrich Rohrer and Gerd Binnig were not the first to spy on the world of atoms when the two IBM scientists invented the scanning tunneling microscope (STM) in the early 1980s. In the late 1950s, for example, Erwin Mueller at Pennsylvania State University had invented an atom-resolving device called the field ion microscope. In a vacuum chamber, a strong electric field tore charged atoms from the surface of a sample, sending them careening into a detector in positions that reflected their arrangement in the sample. Field ion microscopy, however, was limited to metal samples drawn into very sharp points. But in the late 1960s, one of Mueller's former students invented a device that could have anticipated the STM directly—if only he had completed it.
Russell Young, then at the National Bureau of Standards (NBS, now the National Institute of Standards and Technology), called his instrument the “topografiner,” after a Greek word meaning to describe a place. And to STM aficionados, its basic scheme has a familiar ring. Piezoelectric elements scanned its fine metal tip across a surface, with feedback and control systems maintaining the tip at a constant height. Young and colleagues John Ward and Fredric Scire even measured tunneling currents—the basis of the STM—when they brought the topografiner's tip sufficiently close to a metal sample. They published reports claiming that the effect could, in principle, be used to measure a surface position to within about 0.3 nanometer, or about atomic resolution. “One can honestly say that the instrument developed [at NBS] and the instrument that achieved atomic resolution [at the IBM Zurich Research Center] looked very similar,” says Roland Wiesendanger of the University of Hamburg. But Young ran into technical and bureaucratic difficulties. Vibrations and other perturbations were preventing the topografiner from seeing atoms. In a 1972 paper in the Review of Scientific Instruments, the NBS researchers gave some sense of the difficulties by pointing out that they were able to achieve tunneling currents only by running experiments during odd hours when their building's air conditioner was off and by operating the instrument remotely so that their own movements would not generate resolution-killing disturbances. Binnig and Rohrer later solved these problems with multitiered isolation systems. Young never had a chance to try. In 1971, NBS management took him off the topografiner project in a resource-allocation decision. But Young never forgot that he was once on the verge of seeing atoms—and neither did the Royal Swedish Academy of Sciences. 
In awarding the 1986 Nobel Prize in physics to Binnig and Rohrer, the Nobel committee acknowledged Young's close approach to the STM and blamed his failure to beat Binnig and Rohrer on “exceptionally large experimental difficulties.”

21. RADAR IMAGING

# Monitoring a Killer Volcano Through Clouds and Ice

1. Daniel Clery

Late in the evening of 30 September last year, seismometers in Iceland detected the beginnings of a volcanic eruption beneath the Vatnajökull Glacier, the largest in Europe. By 2 October, the eruption had forced its way through the 500-meter-thick ice sheet—spewing steam and gas thousands of meters into the air. An estimated 2.3 cubic kilometers of meltwater was trapped beneath the ice and would soon burst out, threatening communities, roads, and communication links in this remote corner of southeast Iceland. Because of the inaccessibility of the region and the constant cloud cover, which made aerial surveillance difficult, the Icelandic authorities could not tell which way the meltwater would go. They got some timely help, however, from an unlikely source: a cloud-piercing radar satellite and a new image-processing technique that allows researchers to see movements in Earth's surface down to a scale of a few centimeters. University of Munich geographer Bettina Müschen and several colleagues at the German Aerospace Research Establishment (DLR) in Oberpfaffenhofen were involved in a project sponsored by the European Space Agency (ESA) to study radar images of Iceland from its ERS spacecraft. The team quickly realized that the satellites could help Iceland's disaster management by tracking the meltwater buildup. The synthetic aperture radar on ERS-2 had taken its first images of the Vatnajökull eruption in early October, but the Munich researchers believed that processing radar images of the eruption by a technique called SAR interferometry could generate even more valuable clues for disaster relief.
In this technique, two images are taken from the same vantage point, say, 24 hours apart, and superimposed. The resulting interferogram shows graphically any movement that has occurred in that 24-hour period. ERS mission managers agreed to provide the services of an older spacecraft, ERS-1, then in the process of having its systems checked. On 21, 22, 23, and 24 October, both ERS spacecraft passed over Iceland acquiring images that were then processed into interferograms at DLR. “We detected subsidence of just centimeters per day, which was not visible to the eye. A few days later, [these movements] were confirmed on the ground,” says DLR's Achim Roth. “We could see the water going south,” says Müschen. As a result, the Icelandic authorities focused their monitoring and flood defenses to the south of the glacier. On 4 November, meltwater building up in a volcanic crater under the ice lifted the ice sheet and flooded southward. Over the next few days, floodwater and ice blocks of up to 1000 tons took out bridges and power and communication cables en route to the sea, but avoided a nearby village. Tracking the Vatnajökull eruption was the most dramatic use yet of SAR interferometry, a technique pioneered by researchers at NASA's Jet Propulsion Laboratory in Pasadena, California, in the 1970s that has taken off since the launch of ERS-1 in 1991. “ERS-1 provided the first reliable source of data,” says Steve Coulson, who coordinates SAR interferometry research for ESA. Besides making static topographical maps with unprecedented resolution, SAR interferometry is also being used to measure ground movement after earthquakes and volcanoes, the creep of glaciers, landslides, and subsidence caused by coal mining. According to Müschen, German insurance companies are looking into using SAR to assess the risk of natural disasters in different areas. 
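The centimeter-scale sensitivity that made this possible follows from simple fringe arithmetic. A sketch, assuming ERS's C-band radar wavelength of about 5.66 cm (a value not given in the article):

```python
import math

# Why repeat-pass SAR interferometry resolves centimeter-scale motion.
# Assumption (not in the article): ERS carried a C-band SAR, wavelength ~5.66 cm.
wavelength_cm = 5.66

# Ground motion toward or away from the satellite of half a wavelength shifts
# the interferogram phase by a full cycle (2*pi), because the round-trip radar
# path changes by twice the displacement.
displacement_per_fringe_cm = wavelength_cm / 2

# Phase difference produced by 1 cm of line-of-sight subsidence per day:
phase_shift_rad = 4 * math.pi * 1.0 / wavelength_cm

print(f"One full fringe = {displacement_per_fringe_cm:.2f} cm of motion")
print(f"1 cm of subsidence -> {phase_shift_rad:.2f} rad of phase shift")
```

So the "centimeters per day" of subsidence over the glacier, invisible to the eye, shows up in a 24-hour interferogram as a phase shift of a sizable fraction of a full fringe.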
Coulson says the latest application, still at the research stage, is to use interferometry to detect deforestation and different kinds of agricultural land use. SAR interferometry, he says, “seems to be the big thing at the moment.”

## RELATED WEB SITES:

A recent conference on SAR interferometry

European Space Agency's Earthnet Online page

22. MOLECULAR MOVIES

# Fast-Action Flicks Draw Chemists' Rave Reviews

1. Robert F. Service

Looking for the fastest paced action-adventure film of the summer? Forget Spielberg and Lucas. Keith Moffat and Michael Wulff have a couple of new, action-packed thrillers that are knocking the socks off viewers worldwide. They're leaving audiences “breathless,” proclaims one reviewer. But don't bother with the popcorn. The movies are over so fast, you won't even have a chance to reach for a kernel. Still, what they lack in duration, they make up for in splendid closeup shots: They reveal—in exquisite, three-dimensional, atomic detail—how the show's stars, protein molecules, change shape as they undergo simple reactions. These two new action flicks aren't the only ones on release. Scientists around the globe are turning out molecular movies of proteins, semiconductor crystals, and even simple molecules reacting in a gas, using short pulses of x-rays and electrons to freeze the action of molecules in motion. “We've dreamed of doing these kinds of things for decades,” says Moffat, a biochemist at the University of Chicago. “Now they're finally happening.” The new movies are by no means the first efforts to detect high-speed changes in molecules. Researchers have used laser-based techniques for years to track the knitting and breaking of chemical bonds, events that occur in just a few quadrillionths of a second. Those techniques essentially just detect when the events occur, however.
By contrast, the new x-ray and electron-beam schemes take repeated frames of the position of each atom in the spotlighted molecule, giving scientists successive snapshots of its complete atomic structure and a direct look at exactly how a host of chemical reactions unfold. Just as film grew out of still photography, the new movies are an outgrowth of a molecular snapshot-taking technique known as diffraction, which is widely used to produce still lifes of molecules. Researchers begin by firing a beam of x-rays or electrons at a sample of aligned molecules—such as innumerable copies of a protein lined up in a crystal. By recording and analyzing how the waves or particles ricochet off the sample, the investigators can determine the precise location of each atom in a molecule. Diffraction is traditionally done with a continuous beam of x-rays or electrons. But by pulsing their beam instead of leaving it on, researchers can create an effect like that of a strobe light, freezing the action of molecules in motion. And in recent years, by using shorter and shorter strobe pulses, researchers have been able to capture faster and faster action sequences. One problem, however, is that molecular moviemakers need roughly the same number of x-rays or electrons to make each diffraction image. So, as the pulses get shorter—and that's no easy feat in itself—researchers have had to find ways to boost the flux of their x-ray and electron beams. For x-ray movies—at this point the most popular of the genre—that has meant turning to high-powered synchrotrons, which produce ultrabright flashes of x-rays. Most current synchrotrons can produce bright enough x-ray pulses that last about 10 milliseconds. But even that isn't short enough to capture the fastest action of proteins in real time, which occurs in just billionths of a second, or nanoseconds. 
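The flux problem can be put in rough numbers. In the sketch below, the per-frame photon budget is an illustrative assumption; only the pulse lengths come from the text:

```python
# Each diffraction frame needs roughly a fixed number of photons, so the
# required beam flux scales inversely with pulse length. The photon budget
# below is an illustrative assumption; the pulse lengths are the article's.

photons_per_frame = 1e10   # assumed photons needed for one usable frame

def required_flux(pulse_s):
    """Photons per second needed to fit the frame budget into one pulse."""
    return photons_per_frame / pulse_s

flux_10ms = required_flux(10e-3)     # conventional synchrotron pulse
flux_150ps = required_flux(150e-12)  # newest-generation synchrotron pulse

print(f"10 ms pulse needs  {flux_10ms:.1e} photons/s")
print(f"150 ps pulse needs {flux_150ps:.1e} photons/s")
print(f"Brightness increase required: {flux_150ps / flux_10ms:.1e}x")
```

Whatever the absolute photon budget, cutting the pulse from 10 milliseconds to 150 picoseconds demands a source tens of millions of times brighter during the pulse.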
So molecular moviemakers have had to come up with a variety of tricks to slow down the action, such as cooling their protein crystals to ultralow temperatures, which can drop the speed of a reaction 10 billion-fold (Science, 21 October 1994, p. 364). Researchers worry, however, that the frigid temperatures may affect the behavior of their molecular actors. “You may not merely slow down the normal reaction,” says Moffat. “You may get abnormal reactions at that temperature.” But x-ray moviemakers have begun to solve that problem using a trio of new synchrotrons, such as the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, which are so bright that they can pack enough energy into pulses lasting just 150 picoseconds (1000 picoseconds equal 1 nanosecond). In just the past 6 months, for example, Moffat and Wulff, an ESRF physicist, along with other colleagues, have used the 150-picosecond x-ray pulses at ESRF to produce the first pair of real-time movies of proteins in action at room temperature. In the group's first such movie, the researchers unveiled the most detailed picture yet of how the iron-containing protein myoglobin—a common muscle protein—carries out its task of storing and releasing small molecules such as oxygen. In this case, the film tracked the release and rebinding of a single carbon monoxide molecule—a stand-in for oxygen—from myoglobin's central iron atom. For each frame the Chicago-ESRF team shot, the action was triggered by blasting a crystal of myoglobin molecules in a vacuum chamber with a short burst of laser light. The light was precisely tuned to be absorbed by the molecule's iron-containing heme group, breaking the iron atom's bond to CO. A fraction of a second later, the researchers opened a high-speed shutter, steering a 150-picosecond x-ray pulse into the chamber to produce a diffraction image of the crystal. 
By taking multiple snapshots, varying the length of time between the trigger pulse and the probing x-rays for each one, the researchers assembled a movie that shows how CO drifts away from the iron and later rebinds. It revealed such details of the action as how the iron atom and others close by in the protein withdraw slightly from the protein's center after the CO-iron bond breaks, allowing the CO to drift away (Science, 6 December 1996, p. 1726). The team's second feature shows the gyrations of a bacterial photoreceptor called photoactive yellow protein, as a small organic group in the protein absorbs a photon and drastically changes its shape in response. Together, these two releases show that high-speed x-ray movies have come of age for tracking real-time atomic changes in proteins, but there's still plenty of action that even these fast cameras can't capture. For that reason, other teams are working to come up with still faster x-ray pulses for tracking even speedier events. At last month's Quantum Electronics Laser Sciences Conference in Baltimore, for instance, researchers led by physicist Chris Barty at the University of California, San Diego, reported creating subpicosecond x-ray pulses by firing ultrafast laser pulses at a moving copper wire, which then sheds the excess energy as x-rays. They used those pulses to produce x-ray-diffraction movies that track the initial atomic motions involved in the melting of laser-heated, gallium arsenide semiconductor crystals. And last fall, researchers led by Charles Shank at the Lawrence Berkeley National Laboratory (LBNL) in California reported producing x-ray pulses lasting just 300 femtoseconds (or 3/10 of a picosecond) by firing near-infrared laser light across an accelerated beam of electrons; the energetic electrons essentially give the infrared photons a kick, boosting them to x-rays (Science, 11 October 1996, p. 236). 
The LBNL team has yet to produce images with its ultrashort, but as yet relatively low-flux, pulses. But “ultimately, we want to be able to look at chemical reactions as they occur,” says Shank. No matter how short, x-ray pulses will still miss plenty of action, such as reactions in gases rather than in solids like protein crystals. That's because x-rays interact only weakly with atoms: Generally, researchers need huge numbers of atoms lined up in crystals to deflect enough x-rays to provide high-quality images, explains Ahmed Zewail, a physicist at the California Institute of Technology (Caltech) in Pasadena. So Zewail and other moviemakers interested in tracking chemical reactions in gases have turned to short pulses of electrons, which interact with atoms more readily than x-rays do. While electron-diffraction experts have also been making movies for years—in this case of gas-phase chemical reactions—techniques here, too, have been advancing rapidly. In the 13 March issue of Nature, for example, Zewail and his Caltech colleagues reported making the fastest paced electron-diffraction movies to date by shortening their electron pulses about 1000-fold, from several nanoseconds to about 10 picoseconds, using a femtosecond light pulse to create short bursts of electrons that were then focused on their target. The Caltech team shot movies of molecules as they are torn apart by laser light. To do so, the researchers first used a laser to fire a pulse of photons into a vacuum chamber filled with a methane derivative containing two iodine atoms. The photons began breaking apart some of the methane molecules—essentially starting a reaction stopwatch. A second pulse, fired a fraction of a second later, hit a metal-coated cathode ray tube, stripping away the electrons and creating an ultrashort electron pulse, which was channeled into the vacuum chamber to produce a diffraction image of the dissociating methanes. 
In this case, those images don't reveal the exact position of each atom, because molecules in a gas are oriented randomly and are not lined up like those in a crystal. Nevertheless, because all the molecules have the same constituent atoms, the diffraction images are able to reveal the precise distance between atomic neighbors within the molecules. Buoyed by this success, the Caltech researchers hope to do even better. At this point, the movie only shows the methanes just before and after they break apart. To capture the bonds in the process of breaking will require shortening the pulses another 1000-fold, says Zewail's postdoc Jianming Cao. Nevertheless, says Carl Lineberger, a chemist at the University of Colorado, Boulder, “it's very exciting to see people taking steps toward seeing electron diffraction in real time.” That excitement will undoubtedly grow if, as expected, the number of new movie releases begins to take off.

23. MICROELECTRONICS

# Catching Speeding Electrons in a Circuit City

1. Alexander Hellemans

1. Alexander Hellemans is a science writer in Paris.

Take a long-exposure aerial photograph of a city at night, and you will see traffic patterns traced in the bright streams and dense clusters of car headlights. The image (below), obtained by a team of researchers at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York, is the equivalent shot of a functioning microprocessor, the 1997 S/390 used in the current generation of IBM mainframes. The “traffic” consists of electrons emitting light as they pass through the transistors, or crossroads, of this silicon city. By directly viewing such traffic patterns, circuit designers can look for weak spots and bottlenecks in the millions of components on a chip. “Researchers have known since the 1980s that electrons emit light as they pass through the field-effect transistors [FETs] at the heart of most modern microchips,” says Jeffrey Kash, of the IBM team that made the images.

The light, which is in the near infrared and is extremely weak, can be detected only with cooled charge-coupled devices or special photomultiplier imaging tubes. Kash and his colleague James Tsang investigated this particular microprocessor because it consumed two orders of magnitude more current than it should have when it was not performing any operations. Kash and his colleagues obtained images of the chip in this “quiescent” state that showed a series of spots indicating that this excessive current was confined to a small portion of the chip. They couldn't tell which of the 7.8 million transistors were at fault, however, because they couldn't identify individual transistors in their images. “The issue was, how do you know where you are, how do you navigate,” says Kash. The team solved the navigation problem by spying on the individual FETs as they shuttled electrons around the chip when it was operating normally. “The only time a current is flowing is when you have a change of logic state,” says Kash. The FETs produce picosecond light pulses as they switch on and off, so by photographing the chip in normal operation, Kash and his colleagues obtained a “road map” of the positions of the FETs. When the researchers superimposed this image on the photo showing the excess leak currents, they pinpointed exactly where the leaks occur. The technique is useful for more than troubleshooting. “We can look at hundreds of thousands of FETs on a chip,” says Kash, “and that is very helpful” in improving the design of subsequent chips. Indeed, Ingrid De Wolf of IMEC, Belgium's Interuniversity Microelectronics Center in Leuven, says optical-emission diagnosis of chips is beginning to spread throughout the microelectronics industry. Researchers are going beyond simple imaging, she adds: “We are now also trying to get more information from the spectrum of the emitted light, so we can measure the energy of the electrons.”

24. MULTIPHOTON IMAGING

# Biologists Get Up Close and Personal With Live Cells

1. Trisha Gura

1. Trisha Gura is a science writer in Cleveland.

When Dutch naturalist Anton van Leeuwenhoek constructed a simple microscope in 1683, he opened a window to the previously hidden universe of microorganisms. Although it took biologists 200 years to understand fully the true nature of the specks that van Leeuwenhoek saw for the first time, his invention forever changed scientists' perspectives on the natural world. Now, the naturalist's intellectual descendants are bringing into focus an even tinier biological world: the inner workings of single living cells. And their efforts are promising another fundamental shift in perspective as biologists view processes that, until now, have been seen only in the mind's eye. Armed with a battery of lasers, photodetectors, and computers, plus a novel crew of powerful and adaptable fluorescent reagents (see sidebar), microscopists are literally bringing to light live cells, tissues, and even whole organisms. With a technique called multiphoton imaging, which gently excites fluorescence from cellular components without killing the cells, researchers are following fertilized zebrafish egg cells as they mature to larvae, seeing nerve cells exchange signals deep in the brains of live rats, and viewing organelles communicating in hard-to-image plant cells. “Every time we turn around, we find a new application,” says multiphoton-imaging pioneer Watt Webb at Cornell University in Ithaca, New York. Others are pushing the frontiers of inner space with a version of a technique that astronomers are using to probe outer space: interferometry. Interference microscopes use two interacting beams of light to tease apart tiny cellular structures with a resolution of less than 100 nanometers—sharp enough to see cytoskeletal proteins pushing forward migrant cells and clusters of genes lined up on separate chromosomes.
And a few pioneers are now combining a variety of these cutting-edge technologies into a single instrument: a microscopist's dream machine that will allow researchers to pick and choose imaging modes, and even interact with their specimens, without moving their samples from under the instrument's lens. “There is a small revolution going on in light microscopy,” says Stefan W. Hell, a physicist at the Max Planck Institute for Biophysical Chemistry in Heidelberg, Germany, who recently patented and licensed his own version of an interference-based microscope called a 4Pi-confocal. “This will definitely change how biological imaging is being done.” Multiplier effect. The driving force behind this small revolution is molecular biology itself, says D. Lansing Taylor, director of the Center for Light Microscope Imaging and Biotechnology at Carnegie Mellon University in Pittsburgh. “Once you clone a gene, overexpress it, and knock it out, you still need data that can tell you when and where things are happening,” he says. Until recently, biologists generally had to infer what is happening from electron micrographs of thin, specially prepared specimens or by tagging molecules with fluorescent dyes and flooding specimens with light. But traditional fluorescence imaging, like electron microscopy, often limits observations to dead tissue. Many dyes and cellular proteins fluoresce only when they are zapped by short-wavelength, high-energy photons, which can be highly damaging to living cells. And because the entire specimen is illuminated, photons bouncing off other cellular components can greatly reduce the contrast of the image. Researchers have partially solved the contrast problems with the so-called confocal microscope, a device that illuminates only one section of the specimen and has a pinhole in front of the photodetectors to block out much of the stray light. But multiphoton microscopy tackles both the contrast and the photodamage problems. 
The key to this technique is the use of special pulsed lasers to fire precisely focused bursts of lower energy photons at the sample. If two or three photons strike the target molecule almost simultaneously, they produce the same effect as one photon with two or three times the energy. This double or triple punch lights up proteins that have been tagged with special dyes, and it can make some proteins fluoresce on their own. The lower energy of the individual photons cuts down collateral damage, allowing the cell to be kept alive for hours instead of minutes. And the tight focus of the laser beam—compared with bathing the entire specimen with light—greatly reduces effects of light scatter. “The technology is catching on like mad,” says Webb, who, with his colleagues Winfried Denk and James Strickler, reported the invention and first use of two-photon excitation in 1990 (Science, 6 April 1990, p. 73) and—along with several other groups—has since extended the technique to three-photon excitation (Science, 24 January, p. 530). Warren Zipfel, a biophysicist in Webb's lab, recently teamed up with plant biologist Maureen Hanson's group at Cornell to use multiphoton imaging to track communication signals between plant organelles called plastids (see sidebar on p. 1989). “This is an illustration of what you can do with multiphoton excitation,” says Webb. “It's enabling plant biologists to see processes in living cells in ways never possible before.” Neuroscientists also are using multiphoton imaging to look at previously unseen processes. “If you want to go even 500 micrometers into brain tissue, it is impossible with confocal [microscopy],” says Denk, a physicist-turned-biologist now at Bell Laboratories in Murray Hill, New Jersey. “You lose so much of the excitation energy because of light scattering.” What that means is fluorescence gets lost as the light bounces around inside tissues such as the brain. 
The situation is analogous to walking into a cave illuminated only by sunlight from the entrance—the farther inside you go, the darker and fuzzier the surroundings appear. Denk and his colleagues have gotten around the problem by using a multiphoton microscope to peer into the brains of live rats, homing in on the dendritic spines, spiky branchlike structures at the junctions between nerve cells. After filling individual neurons with a calcium-sensitive fluorescent dye and tickling the animals' whiskers, the researchers followed calcium ions as they traveled between the spines. The results provide information about the biochemistry of learning and memory formation that Denk says could not be obtained by studying neurons in petri dishes or in slices of brain mounted on slides. “There has been a big debate in neuroscience as to whether results in primary culture can really be generalized to live tissues,” Denk says. “People want to know, ‘How does a cell really work with other cells around it?’” While Denk's work and that of others, including Steve Potter at the California Institute of Technology (Caltech) in Pasadena, are beginning to answer that question for neurobiologists, developmental biologists are also hot on the multiphoton trail. Developing embryos are hard to image because egg yolk tends to scatter light, says cell biologist Victoria Centonze, deputy director of the Integrated Microscopy Resource Center at the University of Wisconsin, Madison. But multiphoton microscopy can cut through the scatter, and its relatively gentle probing can provide up to 20 hours of observation of a live embryo. This allows researchers to follow egg cells as they shape into embryos or track a gene that controls limb development. Interfering lasers. While multiphoton imaging is opening a window into cellular interiors, interference microscopy is beginning to sharpen biologists' view of individual structural components.
The technique involves splitting laser light into two beams and focusing them on a sample, so that their light waves interact with each other as they pass through the specimen. The resulting pattern of light-dark interference fringes, in essence, illuminates the sample in layers instead of lighting up the whole sample at once. “Where there are regions of brightness, it stimulates fluorescence, and where there is a null, there is no excitation,” says Taylor of Carnegie Mellon. That pattern also improves contrast, says Taylor, because neighboring zones will not fluoresce and blur the one being imaged. And the layers can be moved up and down to provide three-dimensional information by shifting the angle of the laser beams and thus the interference pattern.

Biophysicist Fred Lanni and his colleagues at Carnegie Mellon have used a simple version of an interference microscope to follow fibroblasts as they crawl into a healing wound. At the same time, several groups in Germany, including Hell's team at the Max Planck Institute, are modifying interference principles in other ways: to monitor cell-scaffolding proteins during mitotic division or to look at aberrations, tagged with fluorescent dyes, in DNA from patients with genetic illnesses such as Prader-Willi syndrome and certain types of leukemia.

Hell says that when computers are used to correct blurring of the image, the three-dimensional (3D) resolution of cellular components such as F-actin proteins can be improved up to 15 times over that of confocal or multiphoton imaging alone.

In another modification of interference microscopy, researchers at the Max Planck Institute for Psychiatry in Munich have devised a system to peer at the neuronal networks in rats. German physician Hans-Ulrich Dodt, who had an avid interest in astronomy, teamed up with Walter Zieglgänsberger, a pharmacologist and physiologist, and created an instrument that uses an infrared-imaging technique similar to one used by astronomers.
“We went from light-years to micrometers,” Zieglgänsberger says. Their instrument illuminates the sample with obliquely angled beams of infrared light whose wavelengths are out of phase. The image is captured directly by a special infrared-sensitive camera, without requiring damaging dyes or fluorescent markers. Dodt and Zieglgänsberger have used the technique to visualize living brain structures down to the resolution of single spines, which allows them to perform a variety of observations with far greater precision. “It's like fishing in a pond blindly and then having the lake clear so that I can see all the fish,” Zieglgänsberger says.

While infrared video microscopy has combined several existing techniques into one device, Taylor and his colleagues at Carnegie Mellon, along with others, are working on bringing together several existing microscopes into one instrument. “The goal is to allow researchers to interact with a dynamic system, like a developing embryo, in real time,” says Taylor, whose instrument, known by the acronym AIM—for automated interactive microscope—is currently in the late development stages.

By hooking up the instrument to a powerful computing system, researchers can piece together the 3D images—which otherwise take hours to sort through—very rapidly, Taylor says. That way, researchers could watch a fertilized egg divide, say with multiphoton imaging, then add a drug or reagent and switch to interference microscopy to see how the drug takes effect. “The whole purpose of the AIM is that the data are collected, processed, and displayed during the time of a biological event,” Taylor says. “That way a researcher can change the course of an experiment all in the time frame of a biological process such as cell division or locomotion.”

While interference and multimodal imaging devices are still in the developmental stages, multiphoton technology is now commercially available. But it comes at a high price.
The Cornell Research Foundation has patented the technology and licensed it exclusively to BioRad Microscience Ltd., in Hemel Hempstead, U.K., which is selling the complete instruments for anywhere from $300,000 to $450,000. About $100,000 of that stems from the cost of the laser, while the rest of the major costs are based on meeting European codes for the instrument, according to Webb. Even with that price tag and BioRad's decision not to sublicense, many researchers are optimistic that multiphoton imaging will become widespread. “Its advantages are so clear,” says biochemist Steve Potter, who has set up a multiphoton instrument in Scott Fraser's lab at Caltech. “Once the lasers become mass-produced, I predict every confocal [microscope] will become multiphoton.”

## RELATED WEB SITES:

Integrated Microscopy Resource, University of Wisconsin, Madison

Center for Light Microscope Imaging and Biotechnology, Carnegie Mellon University, Pittsburgh

Caltech Biological Imaging Center links to other imaging sites

Department of Biology, University of Utah, Gard Lab, time-lapse images of Xenopus development

25. MULTIPHOTON IMAGING

# Jellyfish Proteins Light Up Cells

1. Trisha Gura

The power of multiphoton imaging to illuminate components deep within living cells and tissues rests almost as much on improving fluorescent reagents as it does on high-tech lasers (see main text). Although some proteins, such as serotonin, fluoresce naturally when zapped with multiple photons, others must be tagged with fluorescent dyes in order to become visible.

The dyes themselves, or the procedures to get them into cells, tend to kill everything before a laser beam ever touches the sample. However, a new type of marker, based on a protein from a jellyfish that glows brilliantly when hit with 500-nanometer-wavelength light, is helping to light up living cells. Called green fluorescent protein (GFP), it was first cloned in 1992 by Doug Prasher's group at Woods Hole Oceanographic Institution in Massachusetts, and can be genetically engineered into cells—eliminating the need to apply toxic stains to the specimen.

The cDNA of this protein can be hooked up to any gene and expressed along with that gene's protein product. The only thing needed to make GFP fluoresce is a single dose of high-energy ultraviolet light or its lower energy multiphoton equivalent. “The crucial difference between GFP and [earlier, widely used dyes] is that it works [better] in live cells or animals,” says GFP pioneer Roger Tsien of the University of California, San Diego. What's more, mutant forms of the protein glow in different colors—from yellow-green to bright blue—which enables researchers to follow the workings of several molecules simultaneously.

Tsien and others are now working frantically to make mutants that glow brighter, fluoresce in more colors, or hook onto calcium ions and phosphate groups in cells and tissues. A recent map of the protein's physical structure is aiding the task (Science, 6 September 1996, p. 1392), and multiphoton-imaging pioneer Watt Webb's group at Cornell University has already found that one of the GFP mutants fluoresces more brightly under multiphoton excitation than rhodamine, the brightest synthetic dye. “It's becoming a sort of mutual synergy,” says Winfried Denk, of Bell Laboratories in Murray Hill, New Jersey. “Once you can see effectively, then there is the incentive to develop new [fluorescent markers].”

26. MEDICINE

# Spectral Technique Paints Cells in Vivid New Colors

1. Gary Taubes

You can think of image enhancement as the art of helping the eye do what it does naturally. Take the two images below. On the top is a conventional micrograph of cells from a pap smear. The cells have been stained to bring out the contrast between different types: Mature epithelial cells are pink-orange, while younger cells stain blue-green, as does the precancerous dysplastic cell in the middle. A pathologist would identify it by its abnormally large nucleus, but it would be easy to miss.

On the bottom is the same image, spectrally classified. Richard Levenson and Daniel Farkas of Carnegie Mellon University in Pittsburgh created individual spectra for each pixel in the image with the help of a microscope called the SpectraCube. The microscope divides light from each pixel into beams that travel along paths of varying lengths, then are recombined and allowed to interfere. Mathematical analysis of the resulting interference patterns yields a spectrum.

By comparing each pixel's spectrum to those of reference pixels (boxes on original micrograph), Levenson and Farkas's system identifies groups of pixels with similar spectra and assigns them distinctive colors, making them much easier to tell apart than they are in the original stained micrograph. The nucleus of the dysplastic cell, only subtly different in color from that of a normal cell in the traditional micrograph, is here colored a unique and fiery red, befitting its threatening nature.

The technique had already been applied to cytogenetics by Thomas Ried and Evelin Schröck of the National Center for Human Genome Research in Bethesda, Maryland. They color-coded and differentiated the 24 pairs of human chromosomes after labeling them with tracers that endowed each one with a slightly different spectrum (Science, 26 July 1996, p. 494). Having extended the technique to pathology, Levenson and Farkas say it could be used throughout biology to increase the differentiation power of stains, dyes, and fluorescent molecules. “It divides the spectrum into a whole slew of new colors that otherwise couldn't be appreciated by the eye,” says Levenson. “It's our belief that important information resides in those colors, and spectral classification can bring it out.”

27. MEDICINE

# Play of Light Opens a New Window Into the Body

1. Gary Taubes

A light bulb or a laser beam is not the first tool you'd think of using to get a look inside an opaque object such as a brain or a breast. That may be why the idea seems to attract researchers for unusual reasons. City University of New York physicist Robert Alfano, for instance, says he entered the field because one of his students sent a light beam through a glass of milk and saw the shadow of a bead suspended in the milk. Britton Chance, a biochemist at the University of Pennsylvania, got started because his synchrotron broke. With little to do, he had the curious thought that it might be possible to propagate laser light through his own brain, so he tried it. Enrico Gratton of the University of Illinois then took to the research because Chance came to him wondering why lasers took so much longer to go through his students' brains than they did his own. “As a physicist,” says Gratton, “I never thought a laser would go through a head. But it does, and when it goes, you can learn something about what is inside.”

The fact is you can learn a lot about what is inside, and so these unorthodox beginnings have created a field of research in which light at optical and near-infrared wavelengths is used to image and probe inside human tissue. Some techniques haven't made it out of the lab. But others, such as ways of mapping the oxygenation of the brain and other organs with light, are already in clinical tests. Ultimately, researchers hope light will complement x-rays in mammography and maybe even eliminate the surgeon's knife for biopsies, tagging a tumor as benign or malignant simply by its optical properties. While some of these goals may be distant ones, light's advantages for imaging and diagnosis make it worth pursuing, says Stanford University engineer and physician David Benaron.

“Most imaging modalities are not only expensive but they're potentially harmful,” says Benaron. “And those who need imaging are often critically ill people who can't easily be transported. They want the imaging modality to come to them. Optics is perfect: Light bulbs are small; they don't emit x-rays; and they're low power.” Perhaps light's greatest advantage is its colors. Other imaging techniques rely on contrast agents—such as chemical or radioactive dyes—to make them sensitive to different structures or tissues. With light, “every wavelength you use is a new contrast agent,” says Benaron. “You have the ability to gain contrast chemically, to see whether you're looking at hemoglobin, or bilirubin, or to analyze the water and fat content of tissues.”

The catch to imaging with optical wavelengths is the obvious one. Shine a light bulb or a laser pulse through a breast, and very little of it will go straight through. Most of the light, considerably more than 99.9999% of it, will be absorbed by molecules or will scatter off cells and cell organelles and at best may end up lighting up the tissue like a dim street light illuminating a dense fog. The challenge of optical imaging is either to eke a signal out of the few photons, known as coherent or ballistic photons, that make it straight through the tissue without scattering, or to reconstruct an image from the deluge of photons that scatter hundreds or thousands of times in the course of a few centimeters. Better yet, says General Electric physicist Deva Pattanayak, is combining the two, “and doing it at multiple wavelengths.”

First light. In the late 1980s, Alfano was among the first to try to capture ballistic photons when he sent light pulses through a highly scattering medium—in his case, a glass full of polystyrene beads—and looked for photons that sneaked through without scattering. Alfano assumed that any photons that get a clear shot would arrive earlier than the photons that scattered along the way. And photons that “snake their way through the matter” with only a little scattering, he says, will follow shortly thereafter. “If you capture the early portion of the light, then you get a clear image,” he says. The trick was to catch those early photons, which is what Alfano has spent the last 7 years doing.

“We have developed various ways of selecting the earliest portion of the light, the first couple hundred picoseconds [trillionths of a second],” he says. One is a blindingly fast shutter consisting of two pieces of oppositely polarized glass—a combination that ordinarily blocks any light—separated by what's known as a Kerr medium, which reacts to light by changing its optical properties.

To trip this shutter, Alfano sends two pulses of light through the material to be studied. The first is a trigger pulse: It passes through the first polarizer and then hits the Kerr medium, making it birefringent. The result is that for the next few picoseconds, the medium will flip the polarization of any light that passes through it, allowing the light to slip through the second polarizer and in effect opening the shutter. The first part of the next pulse makes it through, but by the time the bulk of the pulse gets to the Kerr medium, it will have stopped being birefringent and the second polarizer will block the light. “We carve out a section of the scatter profile by using this system,” says Alfano.

By converting the first few photons of the second pulse into an image, Alfano says he has been able to see droplets of water floating inside a glass of milk with millimeter resolution. To image actual tissue that is thicker than a centimeter or so, he will have to use scattered light as well, which is what he is working on now.
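Alfano's time gate can be mimicked numerically: generate arrival times for a handful of unscattered photons and a flood of scattered ones, then keep only the earliest arrivals. A toy Python sketch with invented timing parameters (a real Kerr shutter does this optically, in picoseconds, not in software):

```python
# Toy model of time-gating: ballistic photons arrive first, scattered photons
# straggle in over a much longer tail; the gate keeps only early arrivals.
import random

random.seed(1)
# Arrival times in picoseconds: ballistic photons cluster tightly near 100 ps,
# while scattered photons pick up extra path length with a long exponential tail.
ballistic = [100 + random.gauss(0, 2) for _ in range(10)]
scattered = [100 + random.expovariate(1 / 400) for _ in range(10_000)]
arrivals = ballistic + scattered

GATE_PS = 200  # keep only the first couple hundred picoseconds
gated = [t for t in arrivals if t <= GATE_PS]

print(f"{len(gated)} of {len(arrivals)} photons pass the gate")
```

Every ballistic photon survives the gate, while the bulk of the scattered light, which would only blur the image, is rejected.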

Irving Bigio and his collaborators at the Los Alamos National Laboratory in New Mexico, however, believe they can see deeper into tissues by replacing the pulsed light with a continuous beam, which delivers more photons. The Los Alamos technique sorts out the ballistic photons by splitting the light into two matching beams. One goes through the sample while the other, the reference beam, is routed around it; then the two are recombined and allowed to interfere. “We time the reference beam so that it has the same path length as the ballistic photons in the probe beam,” says Bigio. Only the ballistic photons interfere with it. “Then the scattered photons will have a longer path length, so they will no longer be in phase with the reference beam and therefore will not be able to produce an interference pattern.” The interference pattern produced is literally a hologram, which can be read out by a third laser and will show “a shadow image of whatever is inside that scattering medium.”

Bigio says that so far he and his colleagues have “generated fairly sharp images of cross hairs, 300 micrometers in diameter, embedded in a scattering medium that is 2 centimeters long and has half the scattering power of real tissue.” They hope to optimize the method until they can image a millimeter-sized object through 4 or 5 centimeters of real tissue.

Until then, however, the use of ballistic light is likely to be limited to surface or near-surface imaging. Indeed, one system based on roughly the same principle—but limited to extremely thin slices of tissue—is already in the clinic for biopsies of the retina. James Fujimoto of the Massachusetts Institute of Technology and his colleagues shine light on tissue with a fiber-optic source and gather the photons reflected from the first few hundred micrometers, looking for light that comes back without significant scattering. Like Bigio, he knows he has minimally scattered light when it interferes with a reference beam. The results, says Benaron, are “images of amazing resolution in living tissue, although the depth is limited to probably less than 2 millimeters.”

As Fujimoto and his colleagues describe in this issue of Science, they now have expanded the concept to general optical biopsy and have figured out a way to use the system in catheters, endoscopes, and surgical microscopes (see Report on p. 2037 and Perspective on p. 1999). “In principle,” he says, “you can image at micrometer-scale resolution any part of the body you can access optically through instruments.”

Fog light. Getting even coarse resolution at depths of more than a couple of centimeters, however, means exploiting photons that have been scattered so many times that they form a diffuse glow. But this diffusive light can supply surprising amounts of information. The proportion of light absorbed at different wavelengths reflects the chemical makeup of the tissue—fat, water, or blood content, for example—while the deflection or scattering of the light depends on how the cells are organized. For example, the regularly ordered cells in healthy tissues will scatter light differently than will the tumultuous explosion of cells in a tumor. “These different effects can be used to deduce the structure of the tissue you're looking at,” says Benaron. “That is the crux of it.”

One technique for exploiting diffusive light relies on the time it takes photons in a pulse of light to penetrate the tissue. Fire in a flash of light at a single wavelength, and “what you detect is a pulse of light with a delay over time,” explains Benaron. “It's like an echo. It peaks after a period of time and then decays. That curve allows you to separate out how much absorption there is in your sample and where it is, and how much scattering there is in the sample and where that is.” The higher the absorption, for example, the faster the signal decays, while the average time the light takes to diffuse through the tissue depends more heavily on scattering. Probing the tissue with multiple pulses at different wavelengths can reveal chemical composition. And by moving the sources and detectors around, the system can gather enough information to reconstruct a three-dimensional image.
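The link Benaron draws between absorption and decay can be caricatured with a simple exponential model. A hedged Python sketch with invented coefficients (a real analysis fits the measured curve to a photon-diffusion model, not a bare exponential):

```python
# Toy illustration: the tail of a detected light pulse decays roughly
# exponentially, and stronger absorption makes it decay faster.
import math

def pulse_tail(t_ps, absorption_per_ps):
    """Relative detected intensity at time t for a given absorption strength."""
    return math.exp(-absorption_per_ps * t_ps)

low, high = 0.005, 0.02   # invented absorption coefficients, per picosecond

print(f"weakly absorbing tissue at 200 ps:   {pulse_tail(200, low):.3f}")
print(f"strongly absorbing tissue at 200 ps: {pulse_tail(200, high):.3f}")
```

Comparing such decay curves at several wavelengths is what lets the technique separate absorption (chemistry) from scattering (structure).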

Benaron and his Stanford colleagues have created a fiber optic-based head band that allows light to be emitted and detected at 32 locations on the skull. This, he says, is enough to provide a “unique solution that allows you to solve for an image.” Because oxygen-starved brain tissue absorbs light at different wavelengths compared to healthy brain tissue, this kind of imaging could allow physicians to watch a patient for a possible stroke during surgery or monitor the effectiveness of drugs designed to restore blood flow after a stroke.

Another diffusion technique also uses light at one or a few frequencies, but modulated in a smooth sine wave. In this case, the key is to watch how the shape of the wave changes as it passes through tissue, explains Arjun Yodh, a Penn physicist who collaborates with Britton Chance: “None of the photons themselves travel more than about a millimeter before they get their direction completely randomized.” But the overall pattern is preserved, he says. “If you look at the photon number density, the number of photons per unit volume in this tissue, it is going to vary as a function of position and time.”

By measuring the amplitude and the phase of the photon-density wave as it reaches different points on the tissue surface, the researchers can figure out how much scattering and absorption the photons experienced. Mathematically, the analysis is no different from analyzing the scattering of any waves. It's analogous, says Yodh, to watching a wave on a lake scattering from piers and rocks, and then reconstructing the position and the size of the objects from the scattering pattern.

This technique, too, can map tissue oxygenation, based on differences in the absorption patterns of density waves at several different wavelengths. Chance says he and his collaborators can now “make pictures, with a resolution of about a half-centimeter, of regions of the brain, breast, or legs that don't have oxygen and therefore don't function well.” They can also detect hemorrhage, because the oxygenation of leaking blood is different from that of blood flowing normally through veins and arteries.

Chance is collaborating with researchers at Baylor University in Waco, Texas, in a clinical study that compares the diffusive-light method with x-ray and computerized tomography (CT) scans on subjects brought into the emergency room with potential brain damage. In Chance's technique, a light source and detector are placed against the skull and the transmission of light at two wavelengths is measured. The procedure, which takes a few seconds per measurement, is then repeated on the other side of the brain. “What they find is a huge change in differential absorption at those two wavelengths when there's a stroke, or bleeding,” says Yodh. The light-absorption signal, measured with a cheap and portable system, could serve as an early warning telling physicians when a CT scan is urgently needed.

The Holy Grail in this field, as Chance puts it, is developing light-based systems that could detect breast tumors and even determine whether a tumor is malignant or benign based solely on its response to light. Many researchers are skeptical that this will ever be done, if for no other reason than because of the near impossibility of getting sufficient resolution out of tissue more than a few centimeters thick.

Both Gratton and Bruce Tromberg of the University of California, Irvine, however, are working on systems to do just that. Gratton says they have demonstrated that absorbed and scattered light can reveal tumors, but “the real question is can we see all tumors?” As for the task of distinguishing malignant from benign tumors, he describes it as “another order of magnitude.” Blood oxygenation might be one basis for the distinction, he says; it may be lower in malignant tumors because the tissue is growing faster. The cells' mitochondria—which are more abundant in cancers—could also provide a clue, because the density of mitochondria should affect light scattering. “We're trying to understand fundamentally what it is about tissue that changes” in a cancer, says Tromberg, “and why it looks like it does.”

In spite of the technical hurdles, researchers persist because they believe optical imaging will be simplicity itself in practice. Benaron describes one vision: “If someone comes into an office and says ‘I have this lesion,’ you stick a light probe onto it and image the lesion. And the computer, using the absorption and scattering characteristics, can tell you whether this is normal or a cancer. That's more than just a pipe dream.”

## RELATED WEB SITES:

Site for diffusive imaging group at the University of Pennsylvania

Site for diffusive imaging group at the University of Illinois, Urbana-Champaign

28. BIOMEDICINE

# Firefly Gene Lights Up Lab Animals From Inside Out

1. Gary Taubes

What better way to look inside your resident lab animal than to put a light source inside it and detect the light seeping out? A team of researchers at Stanford University has done just that by genetic engineering.

In a proof of concept, physician and engineer David Benaron, virologist Christopher Contag, and microbiologist Pamela Contag spliced the gene for luciferase, the enzyme that puts the fire in fireflies, into a salmonella bacterium. The photos below, made with no more than a souped-up video camera, show mice infected with the glowing salmonella. Taken 5 hours apart, they trace the course of the infection when untreated (top pair) and when treated with antibiotics (bottom pair).

A transgenic mouse created by John Morrey of Utah State University represents the next step: an animal with the luciferase gene in every cell of its body. The photo below shows the glow that appears in the ears of this mouse when the gene is turned on. In this mouse, the luciferase gene is tied to a genetic switch that, in human cells, is activated when HIV, the AIDS virus, is replicating. Mice aren't susceptible to HIV infection, but the Stanford researchers simulated its effect with a chemical known as DMSO, which turns on the genetic switch and, with it, the light. “We can image in the intact animal where and when the gene is activated by watching the lights,” says Benaron. He adds that with the right animal model for HIV infection—which is still “a huge step,” says Contag—the scheme might be used, for instance, to test HIV drug treatments. “We would no longer have to wonder if the drug is effective in vivo; we could watch the virus replicate and see what happens when we give an antiviral,” says Benaron.

He adds that the spatial resolution of the technique is limited to roughly 10% of the depth—which means that a glowing cell 5 centimeters deep can be resolved to within a half-centimeter. Even so, Benaron sees unlimited potential. “You can use bioluminescent approaches to study processes in vivo which cannot otherwise be visualized at any resolution,” he says. “You could use it to study gene expression in real time. Want to know when a gene turns off and on during development? Add luciferase. Or evaluate genetic therapy. Right now we have no real-time information on genetic therapy. This would give you a way to track genetic therapies in vivo.”

29. ASTRONOMY

# Rethinking the Telescope for a Sharper View of the Stars

1. Andrew Watson
1. Andrew Watson is a science writer in Norwich, U.K.

Twinkling specks in the firmament might be good enough for poets and children's nursery rhymes, but astronomers need to do better. For decades, they have tried to transform these featureless points of light into distinct images—of galaxies, nebulae, binary stars, even features on single stars—by building larger and larger versions of the basic reflecting telescope pioneered by Isaac Newton. But they have run into limits set by the turbulent atmosphere, the effects of gravity and temperature on giant mirrors, and practical constraints on telescope size. Simply making bigger instruments is no longer enough: Astronomers have had to get smarter and rethink the telescope.

Inside the domes that are sprouting like high-tech mushrooms on mountaintops in Hawaii, Chile, and the American Southwest are telescopes that embody this new thinking. They combat the warping effects of gravity on their giant mirrors with computer-controlled “active optics” systems. They reclaim images from the ravages of the atmosphere with adaptive optics—what Masanori Iye of the National Astronomical Observatory in Japan calls “a miracle instrument” that precisely undoes the atmospheric distortions. And a few systems now transcend the limits on telescope size by combining light from three or more separate telescopes.

The goal of all this, says Roger Davies of the University of Durham in the United Kingdom, is “a sharper and deeper view of the universe.” Uncorrected images of, say, the core of our galaxy from the largest ground-based telescopes show mainly haze. But images made with the adaptive optics-equipped Canada-France-Hawaii Telescope (CFHT) on Mauna Kea, Hawaii, show distinct stars swarming around a source of intense gravity, probably a black hole. And by combining light from three small, widely spaced telescopes, Ken Johnston and his team at the U.S. Naval Observatory recently made the highest resolution picture ever created in optical astronomy, showing two stars that are such close companions no single optical telescope has ever been able to separate them.

The appeal of larger mirrors is as strong as ever: They gather more light, revealing fainter objects, and in principle they also allow a telescope to see finer detail. The biggest mirrors now scanning the heavens are those of the twin 10-meter Keck telescopes on Mauna Kea, each with a light-gathering area as large as a suburban backyard. Close behind are several others under construction. These include the Very Large Telescope (VLT) on the Paranal mountain in Chile—actually four separate telescopes, each with an 8.2-meter mirror—the Japanese Subaru 8.2-meter instrument on Mauna Kea, and the twin telescopes of the Gemini Project, one in Hawaii and one in Chile.

But the force of gravity can make building such large mirrors a self-defeating exercise, says master mirrormaker Roger Angel of the Steward Observatory in Arizona. “As mirrors get bigger, in the traditional ways of making them the only way to hold their shape was to make them thicker and heavier”—with the result that the telescope structure becomes overly massive, he notes. Angel cites the example of a Russian 6-meter telescope mirror, 60 centimeters thick, that weighed 50 tons. “You cannot build very large telescopes using that philosophy,” adds Davies, a member of the Gemini team.

Instead of relying on stiff mirrors, designers of the Gemini, VLT, and Subaru telescopes opted for mirrors about 20 centimeters thick, too thin to fight gravity on their own. These mirrors rely on active optics to compensate. As gravity distorts the mirror, a couple of hundred tiny pistons attached to the back push it to within a few tens of nanometers of a perfect shape. The system calculates the necessary adjustments by monitoring the image of a selected star, reshaping the mirror perhaps once an hour to remove distortion from the image. The Keck telescopes, the second of which became operational only a few months ago, rely on a variant of the idea. Each 10-meter mirror has 36 separate, individually controlled, hexagonal mirror segments.

Gravity is not the only enemy of large mirrors; temperature changes are another. Large mirrors cool slowly at night, and if a mirror is much warmer than the air layer above, it will drive convection—the process responsible for the shimmer seen over hot pavement—thus blurring the image. Most telescopes are equipped with coolers to help minimize the temperature difference, but Angel and his team at the Steward Observatory Mirror Laboratory, the world's largest research facility for optical fabrication, have an additional stratagem.

Their mirrors are molded with a honeycomb structure on the back, which supports the glass front face and allows it to be just a couple of centimeters thick. That speeds the response of these mirrors to changes in temperature, says Angel. These “hollow mirrors” are also rigid enough to need less active control than the thin mirrors of the VLT, Gemini, and Subaru. Currently, Angel and his team are casting their largest hollow mirror yet, 8.4 meters across, destined for the Large Binocular Telescope on Mount Graham in Arizona.

Undoing the atmosphere. Active optics and temperature control can mean that the last few meters of the starlight's journey are untrammeled. But what about the few kilometers before it gets to that point: the trip through the turbulent, convective atmosphere? The answer, adaptive optics, “has been a gleam in the eye for 40 years,” says Angel. Now, thanks in part to the U.S. “Star Wars” (SDI) program of the 1980s, that gleam is becoming sharper. Adaptive optics relies on monitoring the image of a bright star next to the object being observed to measure the atmosphere's blurring effects. The system measures the “wrinkles” in what should be a flat wave front coming from the guide star and manipulates a small deformable mirror off which the image bounces. By taking readings and reshaping the deformable mirror a thousand times a second, the system precisely undoes atmospheric distortion. In principle, this allows the telescope to produce images as sharp as the theoretical “diffraction limit” of its mirror permits, surpassing the clarity of even the Hubble Space Telescope.
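The diffraction limit mentioned above follows from the Rayleigh criterion, roughly 1.22λ/D for a mirror of diameter D. A quick comparison shows why a ground-based 8-meter mirror could in principle out-resolve Hubble's 2.4-meter one (the 550-nanometer observing wavelength is an assumption for visible light, not a figure from the article):

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # ~206,265 arc seconds per radian

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Rayleigh criterion: smallest resolvable angle for a circular aperture."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

lam = 550e-9  # visible light, ~550 nm (assumed)
print(diffraction_limit_arcsec(lam, 2.4))  # Hubble's 2.4 m mirror: ~0.058"
print(diffraction_limit_arcsec(lam, 8.0))  # an 8 m ground mirror: ~0.017"
```

The larger aperture wins by the ratio of the diameters, but only if adaptive optics can actually deliver the mirror's theoretical sharpness through the atmosphere.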

The first adaptive optics systems, developed for military purposes, were declassified and offered to the astronomy community in 1991 (Science, 28 June 1991, p. 1786). “Over the past 1 to 2 years, the first scientific results have now started to come out,” says Ray Sharples, a colleague of Davies's at Durham who helped build an adaptive optics system for the William Herschel 4.2-meter telescope in the Canary Islands. With it, Durham astronomers have been able to resolve stars in the core of the globular cluster M15 and spot the otherwise invisible dwarf companion of the star Gliese 105a. Another pioneering system is sharpening the vision of the CFHT. Called PUEO, after a sharp-eyed Hawaiian owl, it produces “very clean images” approaching the diffraction limit of the telescope, says François Roddier of the Institute for Astronomy in Hawaii, a leader in developing the system.

Adaptive optics is not a perfect solution, however, because it has a kind of tunnel vision, correcting only along a single line of sight. “Basically, a compensated image consists of a diffraction-limited core surrounded with a halo of uncompensated light,” says Roddier. This means that adaptive optics systems will never render crisp panoramic pictures; those will remain the preserve of the 2.4-meter Hubble Space Telescope in its privileged roost high above the distortions of the atmosphere. What's more, the object of interest has to lie close to a sufficiently bright guide star, and sometimes there's nothing suitable nearby.

Here astronomers have a technical fix, which will debut in adaptive optics systems planned for the Keck and Gemini telescopes: creating their own guide stars. In the upper atmosphere, some 90 kilometers up, is a layer of sodium atoms. A powerful laser beam reaching the sodium layer can make a tiny spot glow yellow, providing an artificial guide star that can be set aglow next to any object of interest. “Laser guide stars can increase the type of scientific observation you can do,” says Sharples.

Telescopes in tandem. With active and adaptive optics now firmly established, bringing ground-based telescopes to the peak of their performance, one last obstacle remains: the diffraction limit itself, which is set by the mirror's size. Mirrors are not likely to grow much larger than 8 meters, says Angel, because of the practical difficulties of getting larger mirrors down freeways, into boats, and up mountain tracks. But astronomers have figured out a way to trick any telescope into behaving as if it were bigger: Link it up with others by interferometry, using techniques masterminded by radio astronomers decades ago. The result matches the resolution of a single telescope having a mirror equal in size to the spacing of the separate telescopes.

Last year, a team at the University of Cambridge, U.K., led by John Baldwin, pooled signals from three small telescopes to make the first optical image ever to reveal the two components of the binary star Capella (Science, 16 February 1996, p. 907). Now Johnston and his group have used their Navy Prototype Optical Interferometer (NPOI) near Flagstaff, Arizona, to tease apart another binary star, Mizar A.

The NPOI consists of three telescopes, each with a mirror effectively just 10 centimeters in diameter, set as far as 38 meters apart in a Y configuration. Yet the image it delivered has a resolution of 3 milli-arc seconds, more than 10 times sharper than any existing single-mirror telescope image. The two stars it distinguished lie 80 light-years off but are separated by just a quarter of the distance from Earth to the sun. “It seems to me it's absolutely clear now that one can do this sort of stuff,” comments Baldwin.
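The numbers in this paragraph hang together: an interferometer's resolution is roughly λ/B for a baseline B, and the quoted 3 milli-arc seconds for NPOI's 38-meter baseline checks out at visible wavelengths (the 550-nanometer wavelength is an assumption; the article does not state one):

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # ~206,265 arc seconds per radian
AU_PER_LIGHT_YEAR = 63241.0  # astronomical units in one light-year

# Resolution of a 38 m baseline at ~550 nm (wavelength assumed).
resolution_mas = 550e-9 / 38.0 * ARCSEC_PER_RAD * 1000
print(round(resolution_mas, 1))  # ~3.0 milli-arc seconds

# Angular separation of Mizar A's components: a quarter of the Earth-sun
# distance (0.25 AU) seen from 80 light-years away.
separation_mas = 0.25 / (80 * AU_PER_LIGHT_YEAR) * ARCSEC_PER_RAD * 1000
print(round(separation_mas, 1))  # ~10 milli-arc seconds
```

At about 10 milli-arc seconds, the pair sits comfortably above the array's 3-milli-arc-second limit, which is why three 10-centimeter mirrors could split a binary no single-mirror telescope can.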

It's not easy, however; interferometry requires that the light from each telescope travel precisely the same distance, to within a wavelength of light, from mirror surface to combination point. That's a punishing requirement, given light's short wavelengths. What's more, Johnston points out, the small mirrors of these first optical interferometers don't gather much light, limiting them to bright sources. Although interferometry cancels out atmospheric distortions across the entire array, making the mirrors larger than about 10 centimeters reintroduces the atmospheric demon for each individual mirror. But astronomers have yet another ploy in their quest for sharper images of the faintest objects: fitting each of the telescopes in an interferometer with adaptive optics. “That will allow the use of mirrors larger than 10 centimeters as interferometer apertures, so that fainter objects can be detected,” says Johnston.

All three innovations should one day find themselves mountaintop neighbors. Plans are afoot to turn the Kecks into an interferometer by combining the two big telescopes with four smaller new ones. And, if all goes well, sometime early in the next century astronomers will link the four big mirrors of the VLT, each one equipped with active and adaptive optics systems, into an interferometer capable of producing images 50 times sharper than those of Hubble.

Beyond the mountaintops, the logical next step in the search for the sharpest images is putting interferometers in space. “It will happen in space, but it will take a long time,” says Baldwin. Angel and his colleague Neville Woolf have already made a proposal for a space-based interferometer, one of several now in play. Theirs, featuring four 4-meter dishes made of glass just 2 millimeters thick, spaced along an 80-meter beam, should be sharp-eyed and sensitive enough to separate the faint glow of an Earth-like planet from its parent star. After their successes at seeing detail in the stars, telescope builders think that goal is in reach. “The technology to build this thing is unquestionably with us,” says Angel.
