News this Week

Science  29 Aug 1997:
Vol. 277, Issue 5330, pp. 1192

    The Calculus of School Reform

    Gretchen Vogel

    Despite years of effort and hundreds of millions of dollars, there's scant evidence that the movement to reform U.S. mathematics and science education has significantly improved student learning. Why not?

    Seven years ago, elementary school teacher Allyson Glass was a self-described math phobic. A former physical education teacher in her second year of teaching third grade, she relied entirely on a standard math textbook and spent only the prescribed time—about 1 hour a day—teaching the subject. But after a two-summer fellowship studying how to improve math and science teaching, she says she was “totally transformed.” Even during history lessons, her students at Benjamin Franklin Elementary in Meriden, Connecticut, use math to study the geometric patterns in a quilt from colonial times. “The kids know they can't get away from a lesson without doing some math,” she says. But they don't seem to mind—most of her students say it's their favorite subject.

    What's happening in Glass's classroom is being repeated this fall in thousands of schools across the United States, as a movement to implement national educational standards in math and science takes hold. The movement dates back at least to 1989, when the National Council of Teachers of Mathematics (NCTM) issued guidelines on what students should know at various grade levels. In 1993, the American Association for the Advancement of Science (AAAS), which publishes Science, issued a set of benchmarks for science, math and technology education. Two years later, the National Research Council (NRC) followed suit with similar standards for science. Now, educators are revising teacher training, curricula, and assessment practices in an attempt to meet these or similar guidelines, which call for a new approach to teaching, with more hands-on learning leading to a deeper understanding of the subjects. The National Science Foundation (NSF) has also poured $25 million a year since 1990 into more than a dozen programs to develop standards-based curriculum materials in mathematics and science.

    The push is coming from the top: President Clinton has made a cornerstone of his second Administration the goal that U.S. students will be first in the world in math and science by 2000. And it's a rare Clinton speech that omits mention of his proposal for a voluntary national test—of reading in the fourth grade and of mathematics in grade eight—that will allow parents and teachers to chart their children's progress.

    Yet, for all this ferment, the effort to implement math and science standards has been a slow and, at times, frustrating experience. The results speak volumes about the difficulties of reforming an educational system run by thousands of independent school districts. Although scores on some tests have improved nationally, significant gains in student achievement remain, for the most part, elusive. One major hindrance is the vast number of teachers who took few math or science courses in college and, unlike Allyson Glass, have had no additional training in the new world of standards-based education. Progress is also slowed by textbook publishers reluctant to make real changes in their products and by the continued emphasis on standardized tests that measure only proficiency with basic facts.

    The standards-setting movement has also stirred up political passions rooted in the bedrock conviction that local communities should control what their kids learn. Indeed, in some places, battles between proponents and opponents of math reform have left the movement in disarray (see sidebar on p. 1194). A debate is also heating up at the national level as powerful critics attack the notion of national standards as an unwanted incursion into local control. For example, Representative William Goodling (R-PA), chair of the House Committee on Education and the Work Force, has introduced two bills to block the nationwide tests for fourth and eighth graders that Clinton advocates. In a recent Washington Post editorial, Goodling wrote that such testing “could lead to a national curriculum … which Americans don't want and don't need.”

    As a result, while anecdotal evidence of positive change abounds, no state—or even school district—can yet claim to have reached the promised land of standards-based learning. Yet many educators take some comfort from the fact that, difficult as it is, change is at least occurring throughout the vast and fragmented U.S. education enterprise. “Considering where American education was [in the late 1980s], there has been remarkable progress,” says education professor David Cohen of the University of Michigan, Ann Arbor. “But, considering where American education was, it would be really good if there had been more progress.”

    Whose standards?

    The NCTM, AAAS, and NRC guidelines are often billed as national standards, and educators say they have had widespread impact. “The NCTM standards have definitely been used as an important resource document in virtually every state,” says Gordon Ambach, executive director of the Council of Chief State School Officers (CCSSO) in Washington, D.C. But many states have adopted their own variations. For example, the NRC science standards for grades five through eight list six ideas that lead to an understanding of structure and function in living systems. However, the Illinois science framework simply states that by the end of eighth grade, students should be able to “identify similarities and differences among animals that fly, walk, or swim” and “compare structures of plant cells to animal cells.”

    It's hard enough for even well-trained teachers to keep up with this bewildering diversity of standards. So it may be no surprise that most say they really don't understand what the standards ask them to do. Despite several public relations campaigns—including 500,000 brochures and a video featuring jazz trumpeter Wynton Marsalis and other entertainers—45% of fourth grade teachers nationwide told a 1996 Department of Education survey that they have “little or no knowledge” of the NCTM standards, while only 6% said they were “very knowledgeable” about them. The numbers are somewhat better for eighth-grade teachers—only 16% claim little or no knowledge of the standards, and 17% are very knowledgeable.

    Even when teachers are well versed in the standards, they face a tough problem in applying them in the classroom. “You can't just sit down and look at the NCTM standards and say, ‘This is what I'm going to do tomorrow, and I'll be implementing the standards,’” says John Wheeler, a mathematics consultant for the Iowa Department of Education. Education professor Marcia Linn of the University of California, Berkeley, agrees. “It's like reading the Bible or the Talmud,” she says. “There's 102 million interpretations.”

    Researchers who have observed “standards-based” classrooms agree that much of what passes for standards-based teaching is a pale imitation of the real thing. Teachers are “making changes around the edges,” says Lorraine McDonnell, a political scientist at the University of California, Santa Barbara. There may be more hands-on activities and more small-group learning, she says, but rarely are students required to apply the concepts they learn to real-world problems—one of the central recommendations of the standards.

    Such sophisticated teaching is much harder, says McDonnell, and requires more teacher training and more time in the classroom. But training is often lacking. Although “how-to” workshops on implementing the standards are available in many states, the Department of Education survey found that 60% of U.S. students are taught by teachers who have never attended such a workshop, much less a comprehensive program like Allyson Glass's fellowship.

    Standards-based textbooks could help guide teachers, but publishers have been slow to catch on. “A lot of textbook publishers claim they meet NCTM standards, but it's a little like a Rorschach test,” says Senta Raizen of the National Center for Improving Science Education (NCISE) in Washington, D.C. “I don't think there is a textbook series you can buy off the shelf that really speaks to the standards.”

    Faced with that hurdle, Jo Ellen Roseman, curriculum director for Project 2061, an education reform project sponsored by AAAS, has helped to design a system for analyzing math and science textbooks. It tracks the NCTM and NRC standards as well as 2061's own set of standards, called Science for All Americans and Benchmarks for Science Literacy. Although she cautions that the project is not finished, preliminary results aren't wildly encouraging. “We've found some units that are outstanding,” she says, “but very few.” Even NSF-sponsored projects “don't necessarily come through with flying colors.”

    Most available materials, Roseman says, spend too little time on a given topic, introduce concepts earlier or later than the standards recommend, and rely on memorization rather than allowing students to draw their own conclusions. “Publishers are not yet convinced that the market is driving them to produce these kinds of materials,” she adds. “But we're beginning to work with developers” to revise and improve their texts.

    One frustration faced by educational reformers is the lack of good information on what students know and how the implementation of standards has influenced that knowledge. The meager evidence is decidedly mixed. On the plus side, mathematics scores on the National Assessment of Educational Progress (NAEP)—a national test given by the Department of Education—have risen since 1992. Progress on this test is especially heartening, say reformers, because it includes open-ended questions that measure the kinds of learning they are trying to encourage. And U.S. third and fourth graders scored above the international average in math on the highly publicized Third International Mathematics and Science Study (TIMSS), which compared students from 29 countries. Yet seventh and eighth graders scored below the international mean (Science, 22 November 1996, p. 1296; and 13 June, p. 1642). The TIMSS results were especially curious because the new standards have so far had a limited impact in the lower grades, at least judging by the proportion of teachers who say they are familiar with them.

    At the local level, reformers say that traditional tests, which tend to measure how good students are at remembering facts quickly, are not a fair gauge of their efforts. “So much of what is in the standards,” says Iowa's Wheeler, “is not adequately measured by traditional multiple-choice norm referenced tests like the Iowa Tests of Basic Skills,” which hundreds of thousands of students across the country take each year. “So if you're not testing that kind of thing, you don't have any baseline data to go on.” At the same time, the push for accountability has increased the use of such tests. And when scores have dipped, parents have taken to the streets—and school board offices—to protest standards-based reforms.

    Accordingly, the standards not only call for a new curriculum and new teaching strategies, but also new tools to measure students' learning. The NAEP mathematics and science tests are a step in the right direction, say reformers, with their open-ended problems that require students to explain their reasoning and give credit for correct reasoning as well as correct answers. In addition, the New Standards, a student assessment system that includes open-ended test questions and student portfolios, is gaining popularity.

    If Clinton has his way, by 1999 every eighth grader in the country may have the chance to take a standards-based test. Such a test could help shepherd teachers toward a more consistent interpretation of the standards, note former NCTM President John Dossey and current President Gail Burrill, who are chair and vice chair of the committee charged with writing the mathematics exam to be given to eighth graders nationwide.

    But some reformers wonder how much impact a voluntary test can have. It is not yet clear what rewards and punishments Clinton's proposed test would carry, but it may be a test without teeth. “I'm skeptical about it,” says NCISE's Raizen. “Does it count? No. There are a lot of people who think this kind of test is going to drive reform, but I don't think so.”

    There's also the question of coverage. So far, only six states and 15 urban school districts have volunteered to join the testing program (see map). Still, the White House remains committed to preparing such a test through the Department of Education, and officials claim that a dozen more states are ready to join in.

    Wide influence.

    Many of the states that have written or revised their curriculum guidelines or standards in the last 4 years have referenced national documents. Six states have signed on to the voluntary tests being promoted by President Clinton.


    Standard iterations

    Even as educators around the country are struggling to implement the 1989 standards, NCTM leaders are working on a new and improved version for 2000. The revision provides a “chance to see where we are and what we've done,” says Mary Lindquist, education professor at Columbus State University in Georgia, who heads the math council's Commission on the Future of the Standards. The main message will stay the same, but the revised document will “clarify” several areas—including basic skills and proofs—that critics fault in the current version. The new document, called Standards 2000, will also update recommendations for using calculators and computers in the classroom. The NCTM has received critiques and suggestions in sessions at its national and regional meetings and through its Web site, and it's sorting through thousands of responses before proposing revisions.

    But Glass says she doesn't need a national survey to glimpse the future of standards-based reform. “The teachers who have taught by the standards and who have invested in them believe in them,” she says. As a result, she adds, her students “become problem solvers. And I don't think that leaves a child.”


    The Special Needs of Science

    Gretchen Vogel

    While educators setting standards for science education face many of the same hurdles as those in mathematics, the diversity of the science community makes the job even tougher. Thanks in part to that diversity, there have been two separate efforts in the past 4 years to present a cohesive vision of what students should learn in science classes from kindergarten through grade 12. Combined with the relatively recent appearance of the two documents, these factors have caused science reform to lag behind math in many of the nation's schools.

    In 1993, Project 2061, an education reform effort sponsored by the American Association for the Advancement of Science (which publishes Science), issued Benchmarks for Science Literacy, which recommended what students should learn at various grade levels. (That document followed Science for All Americans, which in 1989 described what scientifically literate adults should know.) In 1995, at the request of the National Science Teachers Association and other scientific and educational societies, the National Academy of Sciences issued the National Science Education Standards. While the documents are compatible, say most educators, there are enough differences to confound well-intentioned teachers.

    Science reformers also labor under special conditions. Unlike in math, where there was a consensus in the community on what to cover, the science standards generated pitched battles within fields about content, and between disciplines about their relative importance. In addition, the emphasis on hands-on and lab activities puts increased pressure on an already crowded school day, not to mention tight budgets.

    Although the hard-won compromises in the Benchmarks and the Standards are a welcome step in the right direction, some educators say they both have flaws. “They still are trying to teach everything and therefore not teaching anything in depth,” says Senta Raizen of the National Center for Improving Science Education in Washington, D.C. “If we really take inquiry seriously, and it takes a month or two or three, what are we giving up?” she asks. George Nelson of Project 2061 says an upcoming book, Designs for Science Literacy, should help teachers decide what to take out of their lesson plans. That's a much harder question, everyone agrees, than deciding what to put in.


    California Spars Over Math Reform

    Gretchen Vogel

    The world happens first in California, the saying goes. If so, then many of the changes in primary and secondary school mathematics being advocated by professional societies and national education organizations could have a tough time finding a permanent home in U.S. classrooms.

    In the late 1980s and early 1990s, California was seen as a shining example of how to implement standards-based reform. And because the state buys 12% of the nation's textbooks, what was happening in California was expected to have widespread impact. But the efforts of reformers—a revised state curriculum framework, new textbooks, and a new statewide test—sparked a vigorous backlash from parents and politicians after their introduction in 1992. As a result, the next version, due out in 1998, promises to be quite different from the standards promoted by the National Council of Teachers of Mathematics (NCTM) in its 1989 document (see main text).

    The reformers in California had hoped to create “mathematically powerful” students who could reason and apply mathematics to problems in their daily lives. The primary vehicle for change was the state's curriculum framework, which guides local school districts and the state's textbook adoption committee. The 1992 framework, quoting generously from the NCTM standards, called for teachers to question more and explain less, to group higher and lower ability students together, and to assign more projects and fewer workbook drills. By 1994 the radically new textbooks started appearing in classrooms.

    The reaction was swift: Parent groups around the state organized to fight what they called “fuzzy math” and “new New Math.” They said the new curriculum used untried teaching methods and replaced basic skill drills, such as multiplication tables and long division, with projects such as writing and illustrating a “Problem of the Week.” “It transformed math problems into English essay writing” and sacrificed mathematical precision, says Bill Evers, a political scientist at Stanford University and a leader of a Palo Alto-based parent group on math reform called Honest Open Logical Debate (HOLD).

    In an effort to wipe the slate clean, HOLD and other grassroots organizations persuaded state officials to move up by 1 year, to 1998, the next revision of its frameworks. The state test implemented with the 1992 standards, which included open-ended and essay questions, fared even worse: Governor Pete Wilson scrapped it in 1994. The controversy also robbed the reform movement of whatever political clout it once possessed. Antireform activists constitute the majority on the panels that are drafting both the new content and performance standards and the 1998 framework.

    Supporters of the 1992 framework fear that the new version will abandon a cornerstone of reform: namely, that all students—not just those pegged as gifted—should be able to understand mathematical reasoning in addition to knowing the basic facts. “The attempts to be bold and visionary have been cast as the only thing we care about,” says Shelley Ferguson, an elementary school teacher in San Diego who has been involved in the reform process. “They've been taken to the nth degree in an effort to trivialize what the reform should be about.”

    Both sides agree that the latest drafts of the standards, released in mid-August, are a far cry from NCTM's document. What is driving the new standards, say Evers and other committee members, is a state requirement to enumerate what students need to know in each grade. In drawing up their plan, the group relied heavily on standards from Virginia and from the Charlotte-Mecklenburg, North Carolina, school district, which follow that format.

    Evers says the new draft contains standards as rigorous as those in high-performing Japan, and that it will prepare students to go on to algebra and geometry in eighth grade, 1 year earlier than had been the norm. But Ferguson disagrees. “It's back to a laundry list of topics to know,” she laments. “Conceptual understanding and problem solving are pretty absent.”

    Whatever the outcome, reformers elsewhere say that the California math wars have taught them the importance of educating parents and policy-makers as well as teachers. “We can't neglect the effort to educate the public about public education,” says Suzanne Wilson of Michigan State University in East Lansing, who has followed the California reforms for more than a decade. If they do, reformers could end up with a report card marked incomplete.

    1998 BUDGET

    Bipartisan Mood in Congress Opens Door for Pork

    Andrew Lawler

    The bipartisan flavor that has become so popular in Congress these days has brought with it the distinct aroma of pork. After falling into temporary disfavor with congressional budget cutters, legislative earmarks—also known as pork-barrel projects—no longer seem to be a lightning rod for criticism. That's good news for the institutions that stand to gain millions of dollars in R&D funding set aside by lawmakers in 1998 spending bills that Congress hopes to wrap up as it returns to work next week. But others worry that Congress is encouraging bad science by circumventing peer review.

    Adding money not requested by the Administration and targeted for specific districts or states is an ancient practice. However, it came under attack in recent years as part of a broader assault on wasteful government spending. But times have changed. The antigovernment ardor has cooled, key opponents of pork have retired, and Republicans and Democrats have set aside their differences in a plan to eliminate the budget deficit by 2002.

    NASA and the Environmental Protection Agency (EPA) appear to be the biggest recipients of proposed earmarks among R&D agencies. About $20 million of the $614 million that the House appropriated for EPA research in 1998 is for specific pork projects, for example. A typical earmark is the one offered by Representative Jerry Lewis (R-CA), who chairs the House panel with funding oversight of NASA and EPA. He's designated $2 million in NASA funding for a space radiation lab at Loma Linda University, a Seventh-Day Adventist school east of Los Angeles, which is in his district. Lewis has also arranged for the University of Redlands in California—also in his district—to get $1 million from a $6 million pot for EPA to study the rapidly disappearing Salton Sea.

    Space pork.

    Loma Linda University's new space radiation lab is a pet project of Representative Jerry Lewis (top photo).


    In the same bill, Representative Alan Mollohan (D-WV) won $1.9 million for the National Technology Transfer Center in Wheeling, West Virginia, while $2 million is headed to Houston's Mickey Leland National Urban Air Toxics Research Center, compliments of Texas legislators. Next door, the Louisiana delegation, which includes Appropriations Committee Chair Bob Livingston (R), wangled $2 million for research at the University of New Orleans into urban waste management and $1.3 million for oil spill remediation research at McNeese State University in Lake Charles, Louisiana.

    The Senate version of the bill has fewer earmarks, but they are individually more impressive. Senator Ted Stevens (R-AK), who chairs the Senate Appropriations Committee, succeeded in winning the largest NASA earmark of all—$2.5 million for a science learning center in the small town of Kenai, Alaska. And again, Democrats shared the spoils. Senator Daniel Inouye (D-HI), for example, inserted $2 million for work on a national space education curriculum by the Center for Space Education at the Bishop Museum in Honolulu.

    At this point, the projects appear to be mostly add-ons rather than substitutes for the agencies' scheduled research. The House version of EPA's overall budget is $104 million above this year's level and $41 million above the president's request, while at NASA, the House has included a little less than $10 million in specific earmarks in a $5.7 billion appropriation that is $50 million higher than the White House requested for science, aeronautics, and technology. The bills that include funding for the Energy and Defense departments also have a smattering of specific R&D earmarks, such as the Senate's offer of $3.9 million in DOE money for biological imaging at the University of California, Los Angeles.

    Some appropriators have resisted the trend. Congress has decreased the number of research-related earmarks in the bill funding the Department of Agriculture, in particular those for new buildings and facilities. In the past, such earmarks have totaled about $60 million in the agriculture bill. And the bills funding the National Science Foundation (NSF) and the National Institutes of Health, with their strong traditions of peer review, remain generally free of pork.

    But those agencies are the exceptions, say congressional and Administration officials, who note that there is less pressure now to avoid earmarks. “Members are trying to build a more bipartisan relationship to get the [appropriations] bills through,” says one Administration official. Earmarks, added as a favor to key members of Congress, “get the legislation through more smoothly.” And the class of 1994—part of the new Republican majority—no longer has the power it once did to make an appropriations chair think twice before adding pork to a bill.

    Antipork forces lost an ally when former Science Committee Chair Bob Walker (R-PA) retired. He and former chair George Brown (D-CA) battled many R&D earmarks during the past several years. “You have to go to the floor and fight them,” says Walker, now with the Wexler Group in Washington. He says the practice “makes for bad science” because it gives an advantage to research that “passes no test but a parochial and a political one.” He acknowledges that earmarks can benefit small universities that are not part of the “old-boys' network” of larger research institutions, but he says doling out pork is not the best way to overcome biases in the funding system.

    Beneficiaries of congressional largess see it differently. Money for the Leland Center goes for peer-reviewed grants to researchers largely outside Texas, says Ray Campion, the physical chemist who is the center's president. He argues that EPA refuses to request money for the center because it is independent “and they don't control our research agenda.” As a result, the center must turn to Congress.

    Specific earmarks don't tell the whole story. Many legislators now target specific research areas rather than individual institutions, with the expectation that the program will benefit particular institutes and universities. Senator Kit Bond (R-MO), for example, added $40 million for a plant genome initiative at NSF, knowing that such an effort could well involve such important constituents as farmers, the Monsanto Co., and Washington University in St. Louis.

    So while the practice of earmarking may change form, it is unlikely to go away. “It's like Whack-a-Mole,” one staffer says, referring to the carnival game. “You hit in one place, and it comes up in another.”

    KOREA

    In Land of Industrial Giants, Universities Nurture Start-Ups

    Dennis Normile

    Seoul, KOREA—Sunyoung Kim, a molecular biologist at Seoul National University (SNU) in Korea, never thought of himself as an entrepreneur. His impressive academic pedigree—a Ph.D. from Oxford University, followed by a postdoctoral stint at the Massachusetts Institute of Technology's Whitehead Institute and an assistant professorship at Harvard Medical School in Boston—marked him as a dyed-in-the-wool researcher. But after developing a new gene-therapy technique, the 41-year-old associate professor at SNU's Institute for Molecular Biology and Genetics found himself not only the founder of a company to commercialize the technology but also a role model for Korean entrepreneurs. In March, his relatively modest deal with a British pharmaceutical firm won him front-page newspaper and television coverage, admiring letters from high schoolers, and envious phone calls from friends. “Everyone was thinking I had become a billionaire,” Kim says.

    New breed.

    SNU's Sunyoung Kim, with Seung Shin Yu, standing, and Seon-Hee Kim, seated, in a corner of the institute that serves as his company's lab.


    His first billion is still a long way off, but the attention is real. Korean planners and economists believe that the nation's reliance on a few large conglomerates, the so-called chaebol that include such familiar names as Samsung, Hyundai, and Daewoo, has reached a dead end. The chaebol are staggering under mountains of debt and poor management, and the country's former ferocious economic growth has slowed to a crawl. “The only hope for the Korean economy is in venture businesses,” says Jang Woo Lee, a professor of management at Kyungpook National University in Taegu, who says academia is an ideal place to spawn them.

    Because venture business success stories are rare—Lee puts the number at 20 or so—university and government officials are taking several steps to boost that number. This spring faculty at the country's national universities earned the right to take a 3-year leave to start a new business. Those who fail may return to the classroom. More significantly, universities are expanding or setting up venture business incubators, which typically provide would-be entrepreneurs with free or low-cost space and access to university computers, test and measurement equipment, and machine shops. Supported by government and university funds and some private contributions, they offer management seminars, faculty advice, and some sort of networking scheme to connect entrepreneurs and potential “angels.” The model adopts many features of successful incubators in the United States and elsewhere.

    The Korea Advanced Institute of Science and Technology (KAIST) pioneered this approach in 1992, and this month it expects the government to sign off on a new facility that will allow it to quadruple the number of businesses under its wing. “We have far more applications than we have space for,” says Ho Gi Kim, director of the center, which now nurtures 23 companies.

    But KAIST is hardly alone. In June, SNU's College of Engineering inaugurated its University New Technology Network, which will open university facilities and provide other support to budding entrepreneurs. “At the beginning of this year we recognized the importance of venture [activities] for the college of engineering,” says Jang Moo Lee, dean of the college. Pohang University of Science and Technology (POSTECH) plans to join the crowd in November by opening a venture business center, and it has already expanded the incubator concept by creating POSTECH Venture Capital Corp., a $30 million fund to invest in selected venture businesses. “No one else is combining an incubator with funding,” says Jeon-Young Lee, who doubles as POSTECH's dean of research and as president of the new venture capital firm.

    Starting small.

    SNU's Jang Moo Lee says universities still have a lot to learn about incubators.

    The goal of these incubators is to produce successors to Korea's most famous start-up group, the Seoul-based Medison Co., which was founded in 1985 by a group of KAIST graduate students. Medison, which employs 300 people and had sales last year of $92 million, holds a third of the world market for small ultrasound imaging systems used in medicine.

    But the effort to lure researchers into business faces plenty of hurdles. Government policies traditionally have favored the chaebol, and financial regulations make it difficult for new businesses to raise capital. Korea's compulsory military service, a 26-month stint facing most university students upon graduation, can disrupt the career plans of budding entrepreneurs. And a still-standing government regulation prevents national university professors from serving on the boards of private corporations—a rule that keeps Sunyoung Kim off the board of the company he started.

    No one expects the incubators to produce success stories overnight. The average start-up may need 3 to 5 years to get on its feet, KAIST's Kim says. He admits that only one company, Mari Telecommunication Co., has “graduated” from its incubator, although he expects two or three more to take off shortly. Mari, with 26 employees, had sales last year of $1.2 million from two computer games. Ironically, the company is moving its head office to San Jose, California, to be closer to the U.S. market and its competitors, although it will keep an R&D center in Korea.

    The universities are also wrestling with such issues as intellectual property rights and whether successful ventures should return some portion of their earnings to academia. SNU's Lee says one possibility is for academic departments to ask for stock options in the companies they spawn. “We're in an incubation period ourselves,” he says.

    Despite those unresolved issues, Sunyoung Kim's venture has benefited from the warmer climate for entrepreneurs at universities. The company aims to commercialize a retroviral vector, a vehicle used in gene therapy to implant target genes in the host. Kim had modified the vector, a murine leukemia virus, to make it safer, more versatile, and easier to use than other vectors. A presentation at a conference at Cold Spring Harbor, New York, in September 1996 drew the attention of several pharmaceutical companies, which offered to support further development work.

    Kim was excited by the prospect of having an impact on real-world medicine. “You publish a paper and nobody reads it, because there are so many good papers,” he says. “I think curing one patient is better than publishing one good paper.”

    Acting on the advice of one firm, Sunyoung Kim and seven other investors gathered $250,000 in capital and established ViroMedica Pacific Ltd. Earlier this year, the company signed contracts with Seoul-based Korean Green Cross Corp. and the U.K.'s Oxford Biomedica for $1.6 million over 3 years to support further development work in return for rights to use the vector.

    Although not under the wing of any formal incubator, ViroMedica is treated like a member of the SNU molecular biology institute's family. It occupies a corner of the institute's lab, its four full-time employees are recent SNU graduates, and the company also employs several grad students on a part-time basis. The additional duties have been a strain, Sunyoung Kim admits, but they haven't diluted his love for science. “I am working even harder on regular university duties,” he says, “as I do not want my colleagues to think that I am neglecting [them] because of a money-making business.”

    Even so, Kim already talks like a veteran entrepreneur, describing plans to commercialize spin-off technologies and to raise money for his own building. Through it all, he exudes what may be the most important characteristic of a successful entrepreneur. “You have to be optimistic,” he says.


    Government Restores Funding Cuts

    Michael Balter

    Paris—Most of France traditionally shuts down for the month of August. But this year, French scientists returning to their labs next week will get a pleasant surprise. The Socialist government plans to restore about $43 million in higher education and research funding cuts that the previous conservative administration—which was turned out of office in last June's elections—had intended to enact this September. In addition, the new government will immediately create 220 new research positions and 300 new scholarships for doctoral students.

    The news is sure to be welcomed by French researchers, who in recent years have become accustomed to receiving bad tidings when they returned from vacation, usually in the form of temporary funding freezes that later became permanent. As a result, scientists have lived in constant fear of overspending their budgets. “We are told not to spend more than one-twelfth of our research budget each month,” says microbiologist Richard D'Ari of the Institut Jacques Monod, a major research unit in Paris of the giant CNRS public research agency. “And we never know for sure whether we will get all 12 installments.”

    Vincent Courtillot, chief adviser to education and research minister Claude Allègre, says the intended cuts were part of severe austerity measures the previous government had been planning before it lost the election. “Everyone would have found out about it when they went back to work in the fall,” he adds. The funds saved from the chopping block include $7.9 million for CNRS and $9 million for university research.

    The government added the 220 new research jobs—120 permanent posts and 100 temporary positions, mostly in public research organizations such as CNRS and the biomedical research agency INSERM—as a first step in a long-term plan to step up recruitment of young scientists, which has not kept up with the retirement of senior scientists. Courtillot says the government eventually hopes to boost the annual recruitment rate from the current figure of 2.3% to 4%. But some researchers caution that the desire to increase the total number of scientists must be balanced against the limited funds available for research. “It is necessary to have a fairly solid influx of young people each year,” says D'Ari, “but at some point you reach a ceiling.”

    The new government, which has scored points with French scientists for its positive attitude toward research, will have another opportunity to increase its popularity when it unveils its proposed budget for 1998, probably in September. Yet Courtillot says scientists should not expect a major rise in overall research spending: “Claude Allègre doesn't want to ask for more money.” Rather, Courtillot says, Allègre hopes to free up money for laboratories by redirecting scientific priorities and cutting administrative costs (Science, 18 July, p. 308). Ultimately, the government's long-range plans to overhaul French research will depend on the success of this effort. But as Courtillot points out, “it's easier to make reforms when you're increasing research spending than when you're decreasing it.”

    JAPAN

    Merger Plan for Rival Science Agencies

    Dennis Normile

    Tokyo—Most support for science in Japan has been divided for decades between two fierce rivals, the Science and Technology Agency (STA) and the Ministry of Education, Science, Sports, and Culture (Monbusho). Like competing warlords, the two primary sponsors of Japanese science have fought for a larger share of the budget and competed to extend their turf into emerging scientific fields. But now they may have to learn to get along. A governmental panel has decided to recommend that the two be merged into one new Ministry of Science, Technology, and Education.

    The decision to attempt a merger was made last week by the Administrative Reform Council, an ad hoc committee chaired by Prime Minister Ryutaro Hashimoto that is charged with producing recommendations for streamlining the entire Japanese government. For 4 days, the council holed up in a Tokyo hotel and mapped out a plan to reduce the number of ministries and agencies to 13, down from the current 22, to make government operations more efficient and reduce bureaucratic infighting. While the council succeeded in outlining a restructured bureaucracy, it did not work out all the details. A report due at the end of November will flesh out the agencies' specific roles and responsibilities.

    The decision to streamline the scientific agencies does not mean the government intends to reduce support for research, says Nobuaki Kawakami, who heads STA's administrative reform office. “The members [of the reform council] are very aware of the importance of scientific research for Japan's future,” Kawakami says. Japan's plans to increase its public science budget from 0.6% of gross domestic product—the figure that prevailed in the early 1990s—to 1% by the early 2000s will not be affected, he insists.

    But even if the merger doesn't reduce funding, it is likely to impact some projects in ways that won't be clear until the details are worked out. It will be a challenge, for example, to blend the operating styles of the two agencies, which now have very different cultures. Monbusho, the primary source of support for university-based research, typically disburses relatively small amounts to individual researchers in a variety of scientific, engineering, and cultural disciplines. STA typically concentrates its resources in large-scale efforts in select applied fields—fusion research, for example, and commercial space development.

    Makoto Fujiwara, head of Monbusho's reform office, says this means that even if the agencies merge at the governmental level, affiliated research institutes may well remain separate. For example, Monbusho's Institute of Space and Astronautical Science focuses on space science, while STA's National Space Development Agency emphasizes commercial applications, although recently it began to expand its efforts in remote Earth observation to support environmental research. Fujiwara says such institutes “are not likely to soon be merged into one.”

    Most scientists are reserving judgment on the merger, for now. “Rather than the overall framework, I think what's important is how the details are worked out,” says Wataru Mori, a pathologist who is a former president of the University of Tokyo and a member of Japan's Council for Science and Technology.

    The high-level reform plan may also affect research offices other than those directly funded by the science agencies. For example, the Reform Council recommended that the present Environment Agency be upgraded to ministry status and take over some functions of the Ministry of Health and Welfare. But it is not clear whether this change would boost or restrict funding of environmental research.

    After delivering its November report, the reform council will send its recommendations to the legislature, which is scheduled to take up the issue early next year. If the reform efforts go as planned, the government will complete the restructuring by early 2001.


    NASA Boosts Earth Science Grants

    Andrew Lawler

    NASA's plan to launch a flotilla of Earth-observation satellites starting next year has sparked fears that the cost of the expensive hardware will leave little funding for researchers wanting to interpret the data. But last week, space agency officials said that, as part of a package of changes in the program—called Mission to Planet Earth (MTPE)—they have come up with significant savings that will be pumped into grants. By the year 2000, the plan would increase the number of 3-year research projects by nearly a third.

    The pledge to boost science spending is a response to a 2-year review of the $6.5 billion effort, which has as its centerpiece a series of sophisticated satellites to gather myriad measurements of the planet's temperature, cloud cover, ice sheet mass, and atmospheric makeup. The review, by an outside group of scientists led by Pamela Matson, an ecologist at the University of California, Berkeley, stems from a 1995 study by the National Research Council that called for significant changes in the program (Science, 22 September 1995, p. 1665).

    Money machine.

    Next year's launch of EOS-AM will spawn thousands of research proposals.


    The Matson panel concluded last month that current funding for data analysis is “so low as to put science at risk.” It noted that in one recent solicitation for projects to analyze land-use data, NASA was able to fund only 8% of the more than 250 proposals it attracted, and that dozens of rejected proposals were rated very highly by external peer review. MTPE officials concur. “The criticism has been right on target,” acting MTPE chief William Townsend told reporters. “It's time to fix it.”

    By simplifying spacecraft and information systems, NASA will save enough money to boost annual funding for data analysis from $130 million to between $160 million and $165 million by the turn of the millennium, says Townsend. Initially, NASA also intends to limit the amount of space data processed so that the project's ground-based information system—plagued with budget, design, and managerial problems—will not be overwhelmed. The Matson panel strongly supports that move, but warns that the agency must come up with an “intelligent strategy for doing so … and very soon.”

    In another cost-saving move, NASA plans to launch a re-engineered laser-altimeter satellite to gather data on ice sheets and cloud heights in 2001, a year earlier than planned, for a savings of $25 million over the original $225 million price tag. But the agency rejected calls from some in Congress to break up the Chem-1 spacecraft—the third in the Earth Observing System (EOS), which will carry atmospheric chemistry instruments—into smaller satellites. “We couldn't come up with anything we could do less expensively,” Townsend says.


    Drug Delivery Takes a Deep Breath

    Robert F. Service

    Medicines of the future, the fruits of today's biotechnology industry, may not require patients to face a shot in the arm. Instead, they may need only to inhale a fine powder or spray.

    While biotechnology is proving highly successful at getting new drugs onto the market, getting them into patients is another matter. With the help of new technologies for gene splicing and expression, companies continue to push dozens of new drugs into clinical trials each year, according to Bioworld, an industry watchdog. But a large number of these compounds are peptides and proteins, which are easily broken down by enzymes in the stomach; if they were taken as a pill, they would never make it to their destination. As a result, patients must brave an injection to take advantage of most of these new high-tech medicines. For potentially life-threatening diseases, that is rarely an impediment, but drug company executives worry that for other treatments the complication and expense of injection—not to mention needle phobia—will stunt sales and hurt their bottom line.

    Special delivery.

    Inhalation devices (right) may free diabetics from a lifetime of insulin injections.


    Looking for another way, drug companies around the world are now exploring the possibility of having patients inhale their medicine. Their hope is that tiny drug particles inhaled deeply into the lung will cross through the thin tissue lining into the bloodstream and then make their way to their intended destination. Such inhalable drugs “would be a huge boon” to the drug industry, says W. Leigh Thompson, a pharmacologist and consultant who recently retired as chief scientist of drug giant Eli Lilly & Co. in Indianapolis. And the hope is more than just a wistful dream. Clinical trials are already under way with inhalable formulations of currently marketed drugs, including insulin, morphine, and drugs to fight osteoporosis. Work on many more compounds remains in an earlier preclinical stage.

    The new work is not the only alternative to injections being pursued. Industrial and academic researchers are also testing novel nasal sprays, skin patches, and even patches that dissolve when stuck to the gums. Nevertheless, many researchers seem particularly bullish about the new inhalable medications, in part because of their ease of use and steadily improving performance. “I think pulmonary delivery will emerge as the preferred route for some peptides and proteins,” says David Kabakoff, the executive vice president of Dura Pharmaceuticals, a San Diego-based biotech company that produces respiratory drugs. “After all, the purpose of the lung is to exchange substances with the bloodstream,” he says. Adds Thompson: “I think that it's the future.”

    But according to many industry observers, it is still too early for proponents of inhaled drug delivery, as well as hopeful patients, to breathe easily. It typically costs more to deliver the same dose of an inhaled drug than it does to inject it, as invariably only a fraction of inhaled drugs finds its way into the bloodstream as intended—the rest remains lodged in the delivery device or the patient's throat, where it is not absorbed. Any new scheme also must pass a host of safety and efficacy hurdles, so needles will not be disappearing anytime soon. Nevertheless, says Doris Wall, a drug-delivery expert with Bristol-Myers Squibb in New Brunswick, New Jersey, “People are watching [the field] with a whole lot of interest.” Over the past couple of years, adds Kabakoff, “the field has moved from ‘Can this work at all?' to ‘Where is this going to be attractive clinically, medically, and economically?’”

    A second wind

    Inhaled drugs themselves are, of course, nothing new. Tobacco, marijuana, and opium are just a few of the compounds, legal or otherwise, that are smoked and inhaled so that their active components pass into the bloodstream. Asthmatics have long used inhalers to administer airway-opening compounds such as albuterol. And in 1994, Genentech began marketing the first aerosol-delivered protein, a recombinant form of the natural human enzyme deoxyribonuclease, which breaks down unwanted DNA that builds up in the lungs of patients suffering from cystic fibrosis. Genentech and other companies are now working on delivering other peptides, proteins, and small-drug molecules into the lung to work locally in treating respiratory diseases, such as cystic fibrosis and asthma.

    But the broader and potentially more lucrative goal for many companies is to use the lungs simply as a route into the bloodstream. “It's a lot easier and more convenient to take a puff and get your drug” than to receive a shot, says Theresa Sweeney, a physiologist and pulmonary-delivery expert at Genentech in South San Francisco, California. The lung has an enormous surface area and an extremely thin tissue lining, both of which can make absorption of many drugs as fast as, or faster than, injecting them under the skin does. Lung tissue also seems to allow large molecules such as proteins to pass into the blood. And unlike in the gut, few proteases are present in the deep lung to break down the protein and peptide compounds.

    Researchers have long known about most of these advantages. But the current devices for sending drugs to the lung, used primarily for asthma medications, are too inefficient at delivering their cargo to make them economically viable for more than a handful of products. These devices, called nebulizers (which deliver drugs in a water-based mist) and metered-dose inhalers (in which the drug is suspended in a propellant) only manage to get about 5% to 10% of the drug from the inhaler into the lung.

    That is fine as long as the drugs are cheap to make, but not for expensive new peptide and protein drugs. One of the big problems with existing devices is that many rely on high-pressure propellants, which blast most of the drug directly into the back of a patient's throat, where it becomes lodged. Particle size is another challenge: Particles can either be too small—below 1 micrometer in diameter—which causes them to stick together and interferes with their activity, or too large—above 5 micrometers—which means they are normally poor fliers and get stuck in the upper airways or the back of the throat.

    In recent years, however, companies such as a trio of Californian firms—start-ups Inhale Therapeutic Systems of Palo Alto and Aradigm of Hayward, along with midsized Dura of San Diego—have come up with new delivery devices that they say better control particle size and increase the efficiency of delivery. Dura and Inhale both opted to deliver their drugs as fine powders rather than as tiny droplets of liquid. A premeasured dose is placed in a device that essentially blows the powder into a cloud of tiny particles, which is then inhaled with a slow breath deep into the lungs. Aradigm, meanwhile, has developed a new type of nebulizer that pushes a liquid formulation of a drug through an array of tiny holes, producing a mist of uniform, ultrafine droplets, which are then inhaled.

    Besides improving devices, in recent years researchers have made considerable headway in new drug-formulation techniques to stabilize proteins into dry powders of the right size particles, says Richard DiMarchi, Eli Lilly's vice president for research technologies and proteins. Here too, however, companies such as Inhale are loath to divulge their secrets. “The technology has progressed to the point where people can contemplate this as a practical alternative,” says Kabakoff. DiMarchi agrees: “I think the stars are appropriately aligned.”

    Many companies are betting that alignment will hold. Firms ranging from the likes of Inhale and Aradigm to giants such as Baxter, Boehringer Ingelheim, and Pfizer are either bankrolling or are themselves trying out lung delivery of everything from small molecules (such as morphine for cancer pain) to large proteins (such as antibodies to stem the allergic reactions that precede asthma attacks). But perhaps the biggest race is to develop inhalable insulin to treat diabetes. Today, more than 6.5 million people in the United States alone have been diagnosed with diabetes, about a quarter of whom must take daily injections of insulin, which allows the body to metabolize glucose. The current worldwide market for insulin, plus the needles and supplies needed to deliver it, is $2.9 billion and growing. And according to many, this area is ripe for a better alternative.

    Many diabetics inject themselves as often as four times daily, which can be difficult, especially for small children. And some insulin regimes rely on a slow-acting, long-lived formulation, so that meals must be taken at the right time and be of the right size—otherwise, patients risk dangerous fluctuations in their blood glucose levels.

    Bated breath

    Now teams at Inhale and Aradigm are racing to come out with an inhalable form of insulin, each using its own drug formulation and both claiming that their techniques produce the right-sized drug particles and high-efficiency delivery. Inhale combines the insulin with sugar molecules to make an ultrafine powder that works in the company's delivery device. Just prior to delivery, a blast of compressed air forces the powder through a nozzle, which breaks clumps apart into a trapped cloud of individual particles that the patient then breathes in. Aradigm officials say that the key to their rival liquid-based technology is a cheap way to make disposable arrays of tiny, 1-micrometer-sized nozzles. The disposable nozzles are needed because protein molecules can get trapped in the tiny nozzles and interfere with their ability to create uniform droplets roughly 3 micrometers across.

    As for the exact efficiency of delivering insulin to the lung, both companies say that information remains proprietary. But according to several industry watchers, the new inhaler systems can consistently deliver 20% to 50% of their medicine to the lung. And Inhale's research director John Patton says Inhale's early clinical trials have shown that the insulin that does make it to the deep lung is absorbed quickly and is released into the bloodstream in a manner that is “very close to the natural release profile from a healthy pancreas,” the organ that normally produces insulin. Aradigm's efforts remain in the preclinical stage in the United States, but the company has carried out early-stage human trials in Australia.

    Any insulin-delivery system also has to administer precisely the right dose, because diabetics must be able to know how much food they need to balance the insulin, notes DiMarchi of Eli Lilly, one of the world's largest marketers of the substance. Both Aradigm and Inhale tightly control how much insulin is delivered with their devices—Inhale, by placing the compound in premeasured doses that are loaded into the inhaler; and Aradigm with electronic controls that monitor delivered dosage and the patient's breathing. While full clinical trials remain to be done, Thompson says that from what he has seen of Aradigm's delivery system at least, the doses it delivers are no more variable than those from different subcutaneous injections.

    Inhalable insulin has other hurdles to jump, however: DiMarchi points out that, although insulin's market is large, the profit margins on selling the drug are rather small. Most diabetics, he says, currently pay only about $1 a day for their medication. That means an inhalable formulation will likely need to be priced similarly. That could be a problem, particularly if the devices deliver only 20% to 50% of their drug cargo to the lungs and then only a fraction of that is absorbed into the bloodstream. Finally, he adds, Inhale's current clinical trials involve only the form of insulin intended to be taken before meals. Many diabetics inject another, longer-lived form, known as basal insulin, before sleep to regulate glucose levels through the night.
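
    A rough calculation illustrates the economics DiMarchi describes. The sketch below is illustrative only: the $1-a-day figure comes from the article, but the delivery and absorption fractions are assumptions chosen from the ranges quoted above (not company data), it assumes injected insulin is essentially fully absorbed, and it assumes drug cost scales directly with the amount of insulin consumed.

        # Illustrative estimate of how delivery losses inflate the amount of insulin
        # (and hence drug cost) needed for inhalation versus injection.
        # All efficiency numbers are assumptions for the sake of the arithmetic.

        INJECTED_COST_PER_DAY = 1.00   # dollars per day, the article's ~$1 figure
        DEVICE_DELIVERY = 0.35         # assumed fraction of the dose reaching the lung (20%-50% range)
        LUNG_ABSORPTION = 0.30         # assumed fraction of lung-deposited insulin absorbed into blood

        def inhaled_cost_per_day(injected_cost, delivery, absorption):
            """Daily drug cost if the same systemic insulin dose is achieved by inhalation."""
            overall = delivery * absorption    # net fraction of loaded insulin that reaches the blood
            return injected_cost / overall     # more drug loaded means proportionally higher drug cost

        overall = DEVICE_DELIVERY * LUNG_ABSORPTION
        print(f"net efficiency: {overall:.0%}")                      # roughly 10% in this example
        print(f"insulin needed vs. injection: {1 / overall:.1f}x")   # roughly 10x
        print(f"implied drug cost per day: ${inhaled_cost_per_day(INJECTED_COST_PER_DAY, DEVICE_DELIVERY, LUNG_ABSORPTION):.2f}")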

    But recent research may provide a solution here as well. In the 20 June issue of Science (p. 1868), David Edwards and his colleagues at Pennsylvania State University, the Massachusetts Institute of Technology, and the Israel Institute of Technology reported creating time-release, inhalable formulations of insulin and other drugs that were effective over a period of days. Edwards and his colleagues incorporated insulin into large (8 to 20 micrometers across), porous, spherical particles made from a biodegradable polymer and then administered them to rats with an inhaler.

    Even though these particles are larger than those that prevailing wisdom says work best, Edwards and his colleagues found that the spheres' light, porous structure allowed them to fly deep into the lungs. Once there, the polymer's slow degradation released the insulin steadily over a period of 96 hours. The new work “was quite an interesting approach,” says Anthony J. Hickey, an aerosol-delivery expert at the University of North Carolina, Chapel Hill, and it underscores the fact that there is still plenty of room for improving current airway-delivery systems.
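
    The reason oversized but porous spheres can ride the airstream so far is captured by the standard aerodynamic-diameter relation; the density value below is an illustrative assumption, not a figure from the paper:

        d_{\mathrm{aero}} = d_{\mathrm{geom}} \sqrt{\frac{\rho}{\rho_0}}, \qquad \rho_0 = 1\ \mathrm{g\ cm^{-3}},

    so a 10-micrometer sphere with an assumed density of 0.1 gram per cubic centimeter behaves aerodynamically like a roughly 3-micrometer particle of unit density, squarely in the size range that reaches the deep lung.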

    That message is not lost on those trying to commercialize the technology. “I think it's really exciting, all the potential [inhalable] drugs that are on the horizon,” says Genentech's Sweeney. Chances are that most will not pan out, but if some of them do, she adds, “that will have a big impact on how we think about delivering drugs.”


    No Hidden Starstuff in Nearby Universe

    1. Govert Schilling
    1. Govert Schilling is a writer in Utrecht, the Netherlands.

    Many astronomers have suspected that there is more to the nearby universe than meets the eye. But, according to a team of Dutch and U.S. astronomers, perhaps it's not very much more.

    Gloomy galaxy.

    A low-surface-brightness galaxy (left) compared to a normal one.


    The team has been analyzing a radio survey of a large swathe of the nearby universe, carried out with the world's largest single-dish radio telescope, to make a comprehensive inventory of neutral hydrogen gas, the principal raw material for stars. Some astronomers had proposed that, besides the hydrogen in the “known” galaxies, large caches of hydrogen—perhaps many times more than in known galaxies—might be hidden in intergalactic clouds or in dim, “low-surface-brightness” galaxies (LSBs). But as the team will report in a November issue of the Astrophysical Journal, the extra hydrogen was nowhere to be found. “We're not saying that low-surface-brightness galaxies don't exist,” says team member Martin Zwaan of the University of Groningen in the Netherlands. “It's just that they contribute very little to the total mass of neutral hydrogen gas in the local universe.”

    Although much fainter than normal spiral galaxies, LSBs are not much smaller than our Milky Way—indeed, some are much larger. As a result, the feeble light from their dim stars is spread out over a relatively large area of the night sky, making them notoriously hard to detect using optical telescopes. The first of them was not spotted until 1986, by Gregory Bothun, now at the University of Oregon in Eugene. According to Zwaan, recent deep optical surveys have turned up many dim LSBs, suggesting that they might be as numerous as normal high-surface-brightness galaxies. If a majority of the LSBs are laden with neutral hydrogen gas, they might represent a major contribution to the universe's hydrogen inventory. Besides implying that the universe has the potential to form many more stars, the extra hydrogen could also make a small, but not insignificant, contribution to “dark matter”—the large quantity of invisible mass that cosmologists believe resides somewhere in the universe.

    To find these hypothetical hydrogen reserves, Zwaan and his colleagues Frank Briggs and David Sprayberry (both at Groningen), along with Ertu Sorar of the University of Pittsburgh, analyzed the results of a radio survey of a narrow strip of the sky, carried out a couple of years ago by Sorar with the 305-meter Arecibo radio dish in Puerto Rico. Neutral hydrogen gas emits radio waves at a characteristic frequency, enabling the sensitive Arecibo receiver to detect any significant amounts of hydrogen out to a distance of some 200 million light-years. The researchers then used the Very Large Array of radio telescopes in New Mexico to study in detail any reserves of hydrogen spotted at Arecibo. The radio sources all turned out to be associated with already-known galaxies.
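
    For reference, the characteristic emission being surveyed is the 21-centimeter hyperfine line of neutral hydrogen (standard radio astronomy, not a number specific to this survey); the observed frequency is shifted by the gas's recession, which is in effect how the survey places sources within its roughly 200-million-light-year horizon:

        \nu_0 \approx 1420.4\ \mathrm{MHz}\ (\lambda_0 \approx 21\ \mathrm{cm}), \qquad \nu_{\mathrm{obs}} = \frac{\nu_0}{1+z} \approx \nu_0 \left(1 - \frac{v}{c}\right) \ \ \mathrm{for}\ v \ll c .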

    Bothun is not surprised, saying that his belief “has always been that most LSBs have no gas at the present epoch.” In the distant, early universe, he explains, the faint galaxies that are probably the precursors of today's LSBs are blue in color, indicating that ample hydrogen reserves were forming hot young stars. But closer to the present, such galaxies appear dimmer and redder, suggesting that they have used up their star-forming gas.

    Nor did the survey bear out the belief of some astronomers that large, starless clouds of protogalactic mist have survived into the present universe. “Some people have suggested that many intergalactic [neutral hydrogen] clouds would be found,” says Rachel Webster of the University of Melbourne, who is currently participating in a large radio survey of the southern sky with the Parkes Radio Telescope in Australia. The Parkes survey, to be completed in 2001, is less sensitive than Sorar's effort at Arecibo, but it covers a much larger area. “I can't exclude the possibility that they will find some intergalactic hydrogen clouds,” says Zwaan. However, Webster says that “the numbers must be small if there are any at all. I have adjusted my expectations accordingly.”


    New Kind of Cancer Mutation Found

    1. Trisha Gura
    1. Trisha Gura is a writer in Cleveland, Ohio.

    Researchers studying the genes for colon cancer may have accounted for part of a puzzling shortfall. Somewhere between 15% and 50% of colon cancers and the benign polyps that are often their precursors seem to have some hereditary component. Yet the colon cancer genes found so far have been linked to less than 5% of the total cases. Now, through a meeting of chance and clever detective work, Bert Vogelstein and Kenneth Kinzler at Johns Hopkins University School of Medicine and their colleagues have tracked down a genetic culprit that might explain at least part of the discrepancy—and it works in a way never seen before for any cancer-causing mutation.

    Paving the way.

    A single base change—adenine (A) replaces thymine (T)—leads to further mutations that inactivate the APC gene and increase the risk of colon cancer.

    In the September issue of Nature Genetics, the researchers report that they have found an inherited mutation in a gene called APC, which normally holds cell growth in check and can cause colon cancers when mutated. But unlike previously identified mutations, the new one does not directly affect the function of the gene. Rather, the mutation may render the surrounding DNA susceptible to mistakes by the enzyme that copies genes when cells replicate, thereby creating new mutations that do lead to loss of gene function. “This could be a landmark study of a novel mechanism,” says molecular biologist Jeffrey Trent of the National Human Genome Research Institute (NHGRI) in Bethesda, Maryland.

    Indeed, Trent and others say that the same mechanism might be at work in genes linked to other cancers, such as breast and prostate cancer, which have been found to contain similar “harmless” variations. “Clearly, we should look at other tumor-suppressor genes for other such sequences that people might have walked right past,” says NHGRI director Francis Collins.

    The discovery may also have immediate applications for early detection of colon cancers, especially in Ashkenazi Jews. The Vogelstein-Kinzler team found that 6% of Ashkenazim carry the mutation, making it the most common cancer-predisposing mutation in a defined ethnic group. Screening for the novel mutation in that population would thus be a good way of identifying people who have a high risk of colon cancer, so that they could be watched carefully and treated early.

    Identification of the new APC mutation was serendipitous, the result of a social visit to Johns Hopkins by a man who mentioned that he had had several colorectal polyps and a slight family history of colon cancer. Vogelstein, whose lab had already uncovered a fistful of genes involved in the disease, offered to test the 39-year-old visitor for mutations.

    Vogelstein's group did find a change in the APC gene. But at first glance it appeared to be innocuous—a simple switch from a thymine (T) to an adenine (A) at position 1307 that didn't look like it would disrupt the gene's ability to function. Such gene changes, called polymorphisms, are common.

    What raised the researchers' suspicions was a strange phenomenon that occurred when they tested the patient's APC gene in a routine assay that allows it to make its protein product. They found that extra mutations began to crop up in and around the region that contains the T-to-A switch. That apparently happened, Kinzler says, because the mutation creates a stretch of eight consecutive adenines, which are often misread by the polymerase enzymes that transcribe genes into messenger RNAs (mRNAs)—the first step toward making proteins. “The DNA strands can get a little one-to-two base-pair bubble,” says Kinzler. “That allows the polymerase to put in an extra base without realizing there is a mistake.” Such “frameshift” mutations can totally garble the rest of the message, creating shortened forms of the protein or rendering it useless.
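
    A minimal sketch of the slippage mechanism Kinzler describes, using an invented stretch of sequence rather than the real APC gene: a single T-to-A change creates a run of eight adenines, and a one-base slip within that run throws every downstream codon out of frame.

        # Illustrative only: the sequences below are made up, not the actual APC sequence.
        def codons(seq):
            """Split a DNA string into triplet codons, dropping any incomplete codon at the end."""
            return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

        wild_type = "GCAAATAAAAGATTGGAACTAGGT"                    # hypothetical stretch containing ...AAATAAAA...
        mutant    = wild_type.replace("AAATAAAA", "AAAAAAAA", 1)  # T-to-A change creates an (A)8 tract
        slipped   = mutant.replace("AAAAAAAA", "AAAAAAAAA", 1)    # polymerase slips, adding one extra A

        print(codons(wild_type))  # GCA AAT AAA AGA TTG GAA CTA GGT
        print(codons(mutant))     # GCA AAA AAA AGA TTG GAA CTA GGT : frame intact, one codon changed
        print(codons(slipped))    # GCA AAA AAA AAG ATT GGA ACT AGG : frame shifted, downstream message garbled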

    If just mRNA synthesis were affected, the situation might not be harmful, because it wouldn't lead to permanent loss of all functional APC protein. But the same kind of error can also occur in the DNA itself during replication. The Johns Hopkins team found that this may in fact be happening in the colon cells of some patients who have an increased risk of colon cancer but don't carry the original APC mutations. When the investigators looked at the APC gene in tumor and blood samples of these patients, they found that all the tumors that carried the second type of inactivating mutation also had the T-to-A change. Blood cells from the same patients only had the T-to-A switch, however. These results suggest that the patients inherited the base change and developed the other mutations later, but only in the colon cells that became cancerous.

    The work so far has shown that the T-to-A mutation is present in about 6% of Ashkenazi Jews and in 28% of Ashkenazim with familial colon cancer. The Hopkins team and others are now expanding those studies and looking at how common the gene is in the general population. Another big question is whether other tumor-suppressor genes are prone to similar problems. “Everyone has seen polymorphisms in cancer genes,” says Vogelstein. “And all of us have assumed they are just harmless variants. This study suggests that those kinds of mutations may not really be harmless, but rather a kind of wolf in sheep's clothing.”


    Conjuring Matter From Light

    1. David Ehrenstein

    Turning matter into light, heat, and other forms of energy is nothing new, as nuclear bombs spectacularly demonstrate. Now a team of physicists at the Stanford Linear Accelerator Center (SLAC) has demonstrated the inverse process—what University of Rochester physicist Adrian Melissinos, a spokesperson for the group, calls “the first creation of matter out of light.” In the 1 September Physical Review Letters, the researchers describe how they collided large crowds of photons together so violently that the interactions spawned particles of matter and antimatter: electrons and positrons (antielectrons).

    Flash dance.

    An electron beam intersects a laser pulse, boosting photons to gamma energies and triggering an interaction that spawns particles.


    Physicists have long known that this kind of conjuring act is possible, but they have never observed it directly. The experiment is also a proof of principle for a technology, based on intense laser beams boosted to enormous energies with the help of SLAC's electron beam, for exploring a theory known as quantum electrodynamics (QED). QED describes electromagnetic fields, such as those of light, and their interactions with matter, and its predictions are famously accurate. But physicists are eager to study it at so-called “critical” electromagnetic fields—fields so strong that their energy can be converted directly into the creation of electrons and positrons.

    To create a field as close as possible to critical, the 20-physicist collaboration started with a short-pulse glass laser that packs a half-trillion watts of power into a beam measuring just 6 micrometers across at its narrowest point, resulting in extraordinary intensities. To increase the energy of the photons, the team collided the pulses with SLAC's 30-micrometer-wide pulsed beam of high-energy electrons—a feat that required precise alignment and synchronization. When laser photons collided head-on with the electrons, they got a huge energy boost, like ping-pong balls rebounding from a speeding Mack truck, that changed them from visible light into very high energy gamma rays. Because of the laser's intensity, these backscattered gamma photons sometimes encountered several incoming laser photons simultaneously; a collision with four of them concentrated enough energy in one place to produce electron-positron pairs.
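
    The energy bookkeeping behind that statement is textbook kinematics rather than anything specific to the SLAC analysis: two photons can materialize an electron-positron pair only if their combined center-of-mass energy exceeds the pair's rest energy,

        \sqrt{s} = \sqrt{2 E_1 E_2 \,(1 - \cos\theta)} \;\ge\; 2 m_e c^2 \approx 1.022\ \mathrm{MeV},

    which is why a single backscattered gamma photon had to meet several laser photons at once before the threshold could be crossed.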

    Melissinos views the result as the first direct demonstration of “sparking the vacuum,” a long-predicted phenomenon. In it, the energy of a very strong electromagnetic field promotes some of the fleeting, “virtual” particles that inhabit the vacuum, according to QED, to become pairs of real particles.

    Electron-positron pairs are often spawned in accelerator experiments that collide other particles at high energies, and photons produced in the collision are what actually generate the pairs. But at least one of the photons involved is virtual—produced only for a brief moment in the strong electric field near a charged particle. The SLAC experiment marks the first time matter has been created entirely from ordinary photons.

    Princeton University physicist Kirk McDonald, another spokesperson for the collaboration, which also includes the University of Tennessee and SLAC, thinks the high-field experiments could shed light on phenomena at the surface of a neutron star, where magnetic fields are very strong, and in other exotic astrophysical settings. On a more practical level, the conversion of light into matter could also give particle physicists a new source of positrons that are exceptionally uniform in energy and momentum.

    The result is also the first step toward using powerful lasers and electron beams to test high-field QED predictions, such as what McDonald calls “vacuum optics”—the behavior of light in a strong-field environment. “We're exploring new regimes and trying to map out the basic phenomena,” he says. Physicist Tom Erber of the Illinois Institute of Technology looks forward to the results: “Hopefully, this will open the door to future experiments which will approach [more probing] tests of QED.”


    Ice Age Communities May Be Earliest Known Net Hunters

    For decades, archaeologists have marveled over the famous Venus figurines of Ice Age Europe. With their blank faces, swollen breasts, and voluptuous hips, these tiny, stylized figurines rank among the most haunting art of the Upper Paleolithic era, which began some 40,000 years ago. Most of the Venus figurines emerged from archaeological layers strewn with streamlined bone spear points and the giant bones of mammoths and other large game. As a result, researchers often envisioned the makers of these objects as members of big-game hunting societies, in which masculine muscle and brawn were essential to survival.

    Made by netmakers?

    Venus figurines from eastern Gravettian site of Avdeevo, Russia.


    Compelling new discoveries from the Czech Republic indicate, however, that some of these assumptions were mistaken. By combining research on microscopic fiber impressions left on Ice Age clay fragments with accounts from studies of contemporary and historic cultures, an American-Czech research team has now compiled strong evidence that the Gravettian people—who lived from Spain to southern Russia some 29,000 to 22,000 years ago—used nets, rather than speed and might, to capture vast numbers of hares, foxes, and other mammals. That would make them the earliest known net hunters, and it may help explain the larger, more settled populations that are a hallmark of Gravettian times.

    Impressive techniques.

    Positive cast and original impression (image top left), probably of type of weave illustrated to the right. Positive cast (image lower left) of type of weave shown in lower right.


    The evidence comes from two of the most famous eastern Gravettian settlements, Pavlov and Dolni Vestonice. In research currently in press in the Czech journal Archaeologicke Rozhledy and in two forthcoming volumes, a team headed by Olga Soffer and James Adovasio concludes that the inhabitants of these settlements were expert weavers who likely produced capture nets for hunting. The findings, says Soffer, a professor of anthropology at the University of Illinois, Urbana-Champaign, fundamentally change the traditional picture of these Stone Age societies. “This is not the image we've had of Upper Paleolithic macho guys out killing animals up close and personal with spears and stone points,” she explains. “Net hunting is communal, and it involves the labor of children and women.” Other researchers are fascinated. “I think it's very exciting and important work,” says Margaret Conkey, a Paleolithic archaeologist at the University of California, Berkeley. “You get a larger sense of everyone being involved in productive life, so it just opens up all these possibilities.”

    The studies began in 1993, when Soffer showed Adovasio, who is at Mercyhurst College in Erie, Pennsylvania, slides she had taken of several enigmatic clay fragments excavated decades earlier from the Pavlov site by Bohuslav Klima. Unearthed from a zone that was radiocarbon dated to between 26,980 and 24,870 years ago, the fragments bore a series of mysterious impressions. Adovasio, one of the world's experts on prehistoric fiber technology, quickly recognized the imprints of basketry or textiles on four fragments; the team published these initial findings in Antiquity in September 1996. Almost certainly, says Adovasio, the impressions were created from fabrics woven of fibers from wild plants, such as nettle or wild hemp, that were preserved by accident. “It may be that a lot of these fabrics were lying around on clay-house floors,” he notes, and they left indentations in the clay when people walked on them. These impressions were then baked in when the houses burned.

    Intrigued, the pair set to work sifting through nearly 8400 clay fragments excavated from Pavlov I and nearby Dolni Vestonice II. The search yielded 49 impressions on 43 specimens—most of them smaller than a silver dollar. Adovasio made positive casts of each fragment, photographed them under a zoom stereomicroscope, and examined the images with Mercyhurst colleague David Hyland. The two anthropologists identified seven of the eight types of twining commonly employed for textiles or basketry. “A great deal of technological diversity and ability is represented in this assemblage,” notes Hyland.

    Hyland also discovered impressions of cordage ranging in diameter from 0.31 to 1.15 millimeters and bearing weaver's knots, a technique for joining two lengths of cord that is commonly used to make nets of secure mesh. “We're sure we've got nets,” says Soffer, “because we've got weaver's knots and we've got a series of them.”

    With an estimated mean mesh diameter of 4 millimeters, such netting could not have been used to capture large mammals. But the finding may explain a long-puzzling aspect of the animal remains unearthed at Pavlov and Dolni Vestonice. In a study published in 1994, R. Musil, a paleontologist at Masaryk University in the Czech Republic, noted the high number of hare, fox, and other small mammal bones at these sites and the nearby Gravettian settlement of Predmosti. Other analysts have noted a similar abundance at many central and Eastern European Gravettian sites. This prevalence, notes Adovasio, “has been previously explained away in every kind of manner, from ‘they clubbed them’ to ‘they threw little spears at them.’ But if you answer that these animals are the products of net hunting, they become perfectly explainable.” As for bigger animals, although the American-Czech team hasn't yet found any trace of large nets suitable for big-game hunting at Pavlov or Dolni Vestonice, they point out that such constructions were certainly well within the technical capability of the Gravettian weavers, given their expert skills in producing finer products.

    To gain some insight on how the net hunters might have deployed their nets, Soffer combed accounts in the ethnographic literature. Formerly mastered by such diverse groups as the aborigines of Australia, the Mbuti of Africa, and the indigenous inhabitants of North America's Great Basin, net hunting provided a safe way of snaring mammals as large as deer and mountain sheep. Men, women, and children all worked together as beaters, frightening the animals with loud noises and driving them in the direction of the nets. “Everybody and their mother could participate,” says Soffer. “Some people were beating the underbrush; others were screaming or holding the net.” Once the prey was caught in the mesh and safely immobilized, the hunters dispatched it with clubs and other weapons.

    Accounts also show that many historical hunters and gatherers mounted such drives to amass food for large seasonal and ceremonial gatherings. This fits closely with evidence from Dolni Vestonice and Pavlov. “As best we understand these sites now,” says Soffer, “[they] are aggregation sites where people are getting together in fairly large numbers.” Likely occupied throughout the cold season, the sites reveal many traces of Paleolithic ceremony, including clay Venus and animal figurines that appear to have been ritually destroyed.

    Adovasio, Soffer, and Hyland also suggest that net hunting may explain the most distinctive features of the Gravettian phase. Other researchers have long believed that the large populations, increasingly settled life, and complex technology of this phase were supported by extensive mammoth hunting. Soffer and her colleagues agree that Gravettian males sometimes hunted mammoths and other large game with spears, but they now argue that communal net hunting—capable of reaping huge windfalls of food regularly at very low risk of injury to human participants—may have been the key development.

    Other Paleolithic experts are intrigued by this new hypothesis. “I don't see why it shouldn't be,” says Desmond Clark, a professor of anthropology at the University of California, Berkeley. But the challenge, adds Clark, will be to find evidence of nets and textiles at other Gravettian sites.

    Soffer will be following these efforts with keen interest. After all, she explains, net hunting could hold the key to many puzzles, including the evolution of modern human anatomy: “When you look at the beginning of the Upper Paleolithic, the men are really hunky. But when you look at them after 20,000 years ago, they get smaller and weaker. The strong ones are not being selected for.” While many of her colleagues have related this change to the advent of spear throwers or bows and arrows, which reduced the need for physical strength, Soffer favors another hypothesis. “I think net hunting contributes to it,” she concludes. “You don't need brawn to do it.”

    Heather Pringle is a writer in Vancouver, Canada.


    Does Diversity Lure Invaders?

    1. Jocelyn Kaiser,
    2. Richard Gallagher

    Albuquerque, New Mexico—More than 3000 ecologists and conservation biologists met here from 10 through 14 August for a joint meeting of the Nature Conservancy and the Ecological Society of America on the theme of natural and human influences on ecosystems. Talks included discussions of tropical forest burning, endangered prairie chickens, and alien weeds.

    Exotic weeds like Japanese kudzu and European wild oats have swept over ecosystems from Florida to Australia. Indeed, many biologists consider continent-hopping alien species the second most important threat to biodiversity after habitat destruction. There has been at least one reason for optimism, however: Ecologists have long assumed that diverse landscapes should be more resistant to exotic plant invaders, as their array of species does a better job of using up all the available resources like nitrogen and sunlight. But new studies described at the meeting suggested that diversity isn't always a shield against invasions.

    Ripe for invasion.

    Alien weeds are turning up in the species-rich Rockies.


    In one provocative talk, researchers who examined several landscapes in the U.S. Midwest and the Rockies found that areas that are hot spots of plant biodiversity are sometimes magnets for invading weeds, perhaps because good growing conditions favor native species and exotics alike. Another talk, an analysis of global patterns of plant invasions, described similar results and pointed up the importance of external factors in invasions, such as how often seeds are introduced to a landscape by human visitors.

    The expectation that species-rich ecosystems should be resistant to invasions stems from a notion (see p. 1260) that diversity goes hand in hand with ecological productivity and stability. That idea has been controversial, and so has the question of whether diversity wards off exotic invaders. Thomas Stohlgren of the U.S. Geological Survey's (USGS's) Biological Resources Division and colleagues at Colorado State University in Fort Collins decided to revisit the problem by looking at the numbers of native and exotic species in two biomes—temperate grasslands and mountains. The researchers counted plant types and the amount of cover in 180 1-square-meter plots in four types of prairies in the western United States and also in 200 plots in two forest and two meadow types in the Colorado Rockies.

    In the prairies, the team found that the more diverse 1-square-meter plots did contain fewer exotic weeds, but the opposite was true in the 1-square-meter plots in the Rockies. When the researchers examined their data on a larger scale—a set of plots within a total area of 1000 square meters—they found that exotic invaders were more numerous in more diverse areas in both biomes. At that scale, Stohlgren says, one sees “patches of invasion and high diversity in a sea of low diversity.”

    Stohlgren, whose results are in press in Ecology, noted another feature of the more diverse areas that could explain why they have so many invaders: They also had higher levels of nutrients such as nitrogen and carbon and tended to support denser foliage. Stohlgren believes that resource availability—which sometimes correlates with diversity—may explain the exotics' success. “Native plants and exotic plants like the same things—light, soil, water—the good life,” Stohlgren says.

    Stanford's Peter Vitousek says he doesn't think Stohlgren's result rules out the possibility that in some systems, diverse areas are more resistant to invasion. He adds, however, that “the argument that the effect of resource availability can be more important than diversity is original and useful.” And David Tilman of the University of Minnesota, St. Paul, whose own experiments in grasslands have found that diverse plots are less susceptible to invasion, is intrigued that the relationship Stohlgren found held up across several kinds of meadows and forests in the Rockies. This suggests diversity doesn't protect against invasions “across biome types,” Tilman says, and that “other factors now determine the chance it will be invaded.”

    Additional support for Stohlgren's findings came from an analysis described by Mark Lonsdale of the Australian Commonwealth Scientific and Industrial Research Organisation at a separate symposium. Lonsdale explored the relationship between diversity and exotic species in data from 162 sites worldwide, representing many different biomes. Like Stohlgren, Lonsdale found that species-rich biomes tended to have more exotic species, with temperate zones being most invaded and savannahs and deserts least.

    Lonsdale also found that there were more exotic species in nature reserves that have a lot of visitors, suggesting that how often exotic seeds are introduced is as important as the growing conditions in the new territory. “It seems obvious, but it hasn't seemed to enter the scientific consciousness before now,” Lonsdale says. Another key factor appeared to be cultivation. In Australia, Lonsdale found that at least 46% of noxious weeds were intentionally introduced for produce or as ornamentals. Similarly, in tracing back the origins of about 2400 naturalized species in the United States, Richard Mack of Washington State University in Pullman estimated that at least two-thirds were deliberately introduced or were contaminants in batches of imported seeds.


    A Nasty Brew From Pasture Fires

    1. Jocelyn Kaiser

    Scientists studying global climate change are keeping a sharp eye on what's happening in the tropics. Slash-and-burn land use there pumps out an enormous amount of carbon dioxide and other greenhouse gases each year—adding up to an estimated 1.6 billion metric tons of carbon, or about 23% of the total produced by human activity worldwide. But one talk at the meeting made clear that, after they clear the primary forest, slash-and-burn farmers do far more burning than researchers had thought. The extra burning sends into the skies a witch's brew of noxious pollutants as well as more greenhouse gases.
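
    As a rough consistency check on those two figures (the division below simply restates the article's own numbers; it is not an independent estimate), the implied worldwide total is

        \frac{1.6\ \text{billion metric tons of carbon per year}}{0.23} \approx 7\ \text{billion metric tons of carbon per year}.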

    Ecologist Boone Kauffman of Oregon State University in Corvallis described some of the most precise studies yet of the carbon balance in a patch of Amazonian rainforest when it's burned to create a pasture. To do these studies, the Oregon State team first had to establish a baseline by assessing how much biomass a forest contains. Carbon cycle modelers typically rely on forest wood inventories conducted only to tally larger marketable trees. To find out how much biomass a forest really holds, the Oregon State researchers wrap tape measures around trees, weigh dried grasses, measure downed logs, and sample soil nutrients. “It's brute force science that needed to be done,” says David Schimel of the National Center for Atmospheric Research in Boulder, Colorado.

    As Kauffman reported at the meeting, the group's latest fieldwork has revealed an overlooked pollution source: repeated burning of Amazonian pastures that have already been subjected to one slash-and-burn episode. Modelers have assumed that once the primary forest had been burned for a pasture, any additional carbon released came mainly in the form of carbon dioxide from decomposing biomass. But when Kauffman studied more than 18 forest and pasture fires in Rondônia and Pará states in Brazil between 1991 and 1995, he found that farmers burn the fields again every couple of years to get rid of weeds and spur the growth of grasses. In a typical decade, almost as much vegetation and dead wood is burned off existing pastures as was originally burned to create them. The burning releases not only extra carbon dioxide but also soot, nitrogen oxides, and nonmethane hydrocarbons, among other harmful compounds. “We get a lot more pollutants that have deleterious effects on global change, human health, and ozone depletion,” Kauffman says. “That has not been considered in the [climate] models.”

    Schimel says it's too soon to say whether the additional burning actually boosts the warming effect of greenhouse gas emissions from the tropics. For one thing, he says, the new growth that takes place after each episode of reburning draws carbon out of the atmosphere again. Also, the soot forms aerosols that might cool the atmosphere.

    Answers will also come from the Large-Scale Biosphere-Atmosphere Experiment in Amazonia, a $50 million, 4-year international project led by Brazil that gets under way next year. It will team up hydrologists, atmospheric chemists, ecologists, and plant physiologists to get a better handle on carbon losses from land uses in the rainforest.


    Fingering a Genetic Bottleneck

    1. Jocelyn Kaiser

    Biologists hoping to spur the recovery of endangered animals in the wild may face a big obstacle: The remaining population may have lost genetic diversity. A meager gene pool after years of inbreeding could help explain the poor reproductive success of, for instance, the Puerto Rican parrot, African cheetahs, and the Florida panther.

    But biologists haven't always agreed on whether a population has truly lost genetic richness because they can't compare the DNA of surviving animals to that of their ancestors. In one talk, however, researchers at the University of Illinois, Urbana-Champaign, described how they managed to do just that. The group directly documented loss of genetic diversity in an imperiled species—greater prairie chickens in Illinois—by comparing DNA from decades-old museum specimens with that of prairie chickens today. The result is “potentially one of the clearest examples we have” of a genetic bottleneck in an endangered species, says population geneticist Bob Lacy of the Chicago Zoological Society.

    Prairie chickens numbered in the millions in Illinois in the 19th century, but, largely due to habitat loss, only 50 were left by 1993 on a state reserve. To find out what had happened to the gene pool of these birds, evolutionary ecologist Juan Bouzat and colleagues Harris Lewin and Ken Paige compared six DNA markers, called microsatellites, in genetic material taken from the feather roots of 15 Illinois birds collected by museums in the 1930s and 1960s with the same markers in today's Illinois prairie chickens. They also examined these markers in populations in Kansas, Minnesota, and Nebraska, where the birds still number in the thousands. The modern-day Illinois chickens had clearly gone through a bottleneck: Their DNA had roughly half as many variants of these markers as the DNA from the museum specimens, and some of the variants had been lost “forever,” Bouzat says. The populations in the other three states were in better shape: They had about the same amount of diversity as the museum specimens.
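
    A minimal sketch of the kind of tally behind that comparison, with invented genotypes standing in for the museum and modern samples (the locus names and allele sizes are hypothetical): count the distinct alleles seen at each microsatellite locus in each group.

        # Illustrative allele tally at microsatellite loci; all data below are invented.
        from collections import defaultdict

        def distinct_alleles(samples):
            """Number of distinct alleles observed at each locus across a set of genotypes."""
            seen = defaultdict(set)
            for genotype in samples:                  # genotype maps locus -> (allele1, allele2)
                for locus, pair in genotype.items():
                    seen[locus].update(pair)
            return {locus: len(alleles) for locus, alleles in seen.items()}

        museum = [  # hypothetical pre-bottleneck specimens
            {"locusA": (152, 158), "locusB": (101, 107)},
            {"locusA": (150, 160), "locusB": (103, 109)},
            {"locusA": (154, 158), "locusB": (101, 111)},
        ]
        modern = [  # hypothetical present-day birds
            {"locusA": (152, 152), "locusB": (101, 107)},
            {"locusA": (152, 158), "locusB": (107, 107)},
            {"locusA": (158, 158), "locusB": (101, 101)},
        ]

        print("museum:", distinct_alleles(museum))   # {'locusA': 5, 'locusB': 5}
        print("modern:", distinct_alleles(modern))   # {'locusA': 2, 'locusB': 2}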

    Lacy, who wasn't at the meeting but is familiar with the research, cautions that the study doesn't prove that lack of diversity is harming the Illinois birds, which have a low rate of reproduction. But Bouzat says other data comparing the birds' breeding success with that of chickens in the 1960s suggest that genetic factors are involved in the decline. Lacy adds that if similar studies can provide other clear-cut cases of genetic bottlenecks, they will influence controversies such as one involving the Florida panther. Biologists disagree over whether the panther's reproductive problems result from mercury poisoning or genetics. “They've come up with a highly instructive case,” Lacy says.

    Prions Up Close and Personal

    Misfolded proteins, called prions, have been implicated in “mad cow” disease and in Creutzfeldt-Jakob disease in humans. Now researchers at the Swiss Federal Institute of Technology (ETH) in Zurich have taken an important step toward explaining how prions can become pathogenic. They have determined the structure of the full-length prion protein—all 208 amino acids—in its normal, or healthy, shape.

    “It's wonderful stuff,” says Byron Caughey, a biochemist and prion expert at the National Institutes of Health's Rocky Mountain Laboratories in Hamilton, Montana. The new close-up, he says, should not only help researchers determine the as-yet unknown function of the normal protein but also provide clues about how it changes to cause disease.

    The new prion structure, reported in the 18 August FEBS Letters, follows up on work done over the last year by the ETH team and, independently, by a group at the University of California, San Francisco. In recent months, both teams unveiled partial structures of the molecule, because neither had been able to obtain sufficient quantities of the properly folded, full-length protein for study.

    The ETH researchers have now used chemical means to fold the full-length protein. They then determined the protein's structure with nuclear magnetic resonance, which identifies and locates the protein's atomic components by their specific magnetic fingerprint.

    The previous partial structures looked quite conventional—with a trio of helices that look like molecular telephone cords—but the new structure reveals, in addition, an unusual flexible tail, 97 amino acids long. “We were very surprised,” says ETH molecular biologist Rudi Glockshuber, who along with biophysicist Kurt Wüthrich led the Zurich team.

    Previous research suggests that in prion diseases, this floppy tail adopts a more regular sheetlike structure, which then resists normal breakdown by enzymes, instead forming aggregates. The ETH team's approach yields large amounts of the normal protein, which should make understanding that transformation easier, says Caughey.


    Shining a Bright Light on Materials

    1. Daniel Clery

    In each of the world's three major regions of scientific influence—North America, Europe, and Japan—a powerful light has come on. The light sources are synchrotrons, race-track-like particle accelerators hundreds of meters across, and the light they shed is “hard,” or high-energy, x-rays of unprecedented brilliance and purity. Researchers of all stripes who use x-rays to probe the structure of matter have come flocking to these “third-generation” light sources: the European Synchrotron Radiation Facility (ESRF) in Grenoble, France; the Advanced Photon Source at the Argonne National Laboratory in Argonne, Illinois; and Japan's Spring-8, which completed the trio this year. In this special news report, Science offers a tour of the ways in which these machines' unsurpassed beam quality and experimental conditions are opening new windows on everything from biomolecules and polymers to magnetic materials and Earth's core.

    The new facilities exploit a gift from nature that was long overlooked—even spurned. For decades following the realization early this century that x-rays could probe the structure of matter, researchers struggled with the feeble, poor-quality beams produced by x-ray tubes. Help eventually came from a quirk of particle accelerators first predicted by D. Ivanenko and I. Y. Pomeranchuk in 1944: As particles accelerate around a circle, they shed radiation.

    The first visible synchrotron radiation was detected at General Electric's 70 MeV synchrotron in 1947. At that time, it was viewed not as an opportunity, but as a nuisance. Synchrotrons were designed for particle physicists, and the photons thrown off by the particles were an annoying energy leak; physicists had to keep pumping in more energy just to keep the particles going. When researchers finally studied this discarded radiation, they found that it had very appealing properties: It spans a broad spectrum from the far infrared to hard x-rays; it is pulsed and naturally polarized; and it is three to four orders of magnitude more brilliant than the beams from x-ray tubes.

    In the mid-1960s, x-ray researchers began to camp out at high-energy physics labs. Like poor relations of the particle physicists, they would set up a few beamlines, parasitically extracting x-rays from the particle accelerators. By the middle of the 1970s, synchrotrons built specifically to provide x-rays began to appear, and the use of x-rays spread from physicists and chemists to materials scientists, geophysicists, and structural biologists. These synchrotron radiation sources—together with the neutron sources that were blossoming at the same time—were a new kind of facility aimed at multiple research communities consisting of thousands of users, who apply for beam time as they might apply for a grant.

    Still, these second-generation synchrotron sources were largely based on the technology of particle physics: To generate the x-rays, they relied on the bending magnets that kept the beams moving in a circle. Researchers soon realized, however, that if they created more bends, they could make more x-rays. They began fitting second-generation sources with insertion devices, which use rows of magnets with alternating polarity to make the electron beam swerve back and forth, emitting x-rays with each turn. Insertion devices up the brilliance of the x-ray beam by another few orders of magnitude.

    The third-generation sources, which began to come on line in the mid-1990s, rely almost entirely on insertion devices. The new sources come in two types: smaller ones that produce extreme ultraviolet and low-energy, or soft, x-rays; and giant, stadium-sized rings that generate high-energy, or hard, x-rays. The ESRF, for example, the first of the hard x-ray sources, is 844 meters in circumference and has 50 insertion devices positioned on straight sections of its electron storage ring. Not only do the devices increase the brilliance of the beam, but they also allow it to be tailored. Researchers can tune the wavelength of the x-rays, control their polarization, and choose between a single wavelength or a range of wavelengths. The high quality of the beam that emerges from insertion devices also makes x-ray optics—ingenious mirrors and “lenses” for focusing the hard-to-handle x-rays—much more effective. “You can play with the x-rays,” says geophysicist Denis Andrault of the University of Paris.

    With all these attributes, the new sources have taken researchers by storm. “The quality is spectacular,” says solid-state chemist Kosmas Prassides of the University of Sussex in the United Kingdom. At the new machines, synchrotron users finally feel at home, says geophysicist Ho-kwang Mao of the Geophysical Laboratory of the Carnegie Institution of Washington in Washington, D.C. “First- and second-generation sources developed slowly and were not very friendly for users. All the instruments were together in an open space. It was very noisy and there was not much room. The third-generation sources are more user friendly. There is virtually unlimited space to build equipment around the beamline,” says Mao. “Third-generation sources open up opportunities and open your eyes to new possibilities.”


    X-rays Find New Ways to Shine

    1. Alexander Hellemans
    1. Alexander Hellemans is a science writer in Paris.

    The brightness and beam quality of the new third-generation synchrotrons are helping researchers to probe matter more quickly and effectively, and to try out some new tricks.

    Materials researchers have long wanted to unravel the structure of spider silk. Its unique combination of strength and elasticity would find innumerable applications if it could be reproduced in industry. So far, the detailed molecular structure responsible for these properties has eluded researchers. But now one team is bringing a new sledgehammer to bear on this intractable nut: a stadium-sized machine called the European Synchrotron Radiation Facility (ESRF), in Grenoble, France. With the ESRF's brilliant, needlelike beams of x-rays, they hope to dissect single fibers of spider silk.

    ESRF is one of a new generation of synchrotron radiation sources with high-quality beams of high-energy, or hard, x-rays that are revolutionizing many areas of research—from materials to geophysics to molecular biology. Previous generations of synchrotrons, for example, produced beams about 100 micrometers across, far too coarse to reveal the structure of a 5- to 8-micrometer strand of spider silk. But a team led by Christian Riekel has taken the ESRF's beam—already finer and brighter than those of older synchrotrons—and focused it down to a width of 1 or 2 micrometers. Confident that this will reveal the spider's secret, the group has already learned, says Riekel, “that you can get enough signal—a diffraction pattern—from a single fiber, and this is big progress. … A few years ago we needed at least a few hundred fibers.”

    ESRF was the first of these third-generation synchrotrons producing hard x-rays to come online, in 1992, and it was opened to the x-ray community in 1994. In the United States, the Advanced Photon Source (APS) at Argonne National Laboratory near Chicago generated its first x-ray beams in March 1995 and opened to users in 1996. And Japan has just begun commissioning the third and most powerful of the sources, Spring-8. The new sources improve on their predecessors not only through raw power, but also by tailoring the x-ray beam to users' precise requirements with devices called undulators and wigglers. These “insertion devices” can tune the beam's frequency, make it coherent, polarize it, and focus it to exquisite sharpness.

    All of this enables these machines not only to outdo their predecessors in determining material structures by the tried-and-tested technique of x-ray diffraction, but also to exploit new techniques for probing materials. Going by names such as holography, phase-contrast imaging, tomography, scanning microscopy, microdiffraction, and photon correlation spectroscopy, among others, these techniques can resolve minute defects and structural interfaces in materials. Take phase-contrast imaging, which can detect tiny cracks in metals or interfaces between soft matter by tracking shifts in the phase of the x-rays passing through the object. The technique “is possible with second-generation sources,” says Friso van der Veen of the University of Amsterdam in the Netherlands, “but it has undergone an acceleration with the introduction of ESRF, because the beam has such a strong brilliance.”

    The principle of a synchrotron source is simple: When fast-moving charged particles, such as electrons, are forced into a circular path by magnets, they shed energy in the form of photons. With enough particles and sufficient speed, the result is powerful x-ray beams. Earlier synchrotrons relied simply on bending magnets to get the particles to emit photons, but third-generation sources use a different approach. In straight sections of the electron storage ring, they position insertion devices. One type, called an undulator, consists of rows of magnets of alternating polarity that force the electrons to follow a slalom path. At every turn the electrons emit photons, and the photons from all the turns add up to produce an intense, sharply focused beam just 50 micrometers wide. “With undulators, we obtain beams that are narrower than laser beams,” says Pascal Elleaume, who is responsible for undulator development at ESRF. The undulator radiation is also coherent: Photons in the beam that have the same wavelength travel in step. Other insertion devices called wigglers force electrons into a giant slalom—fewer and wider turns—to produce beams that are less well collimated but have a continuous spectrum of energies, akin to white light.
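
    The slalom picture boils down to a standard textbook relation (nothing here is specific to ESRF): an undulator with magnetic period lambda_u and deflection parameter K, traversed by electrons with relativistic factor gamma, emits on axis at

        \lambda = \frac{\lambda_u}{2\gamma^2}\left(1 + \frac{K^2}{2}\right),

    so with gamma of order 10^4 for a multi-GeV storage ring, a magnet period of a few centimeters is compressed into x-ray wavelengths of roughly an angstrom.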

    These beams can then be brought to an even sharper focus with the special x-ray optical devices that synchrotron researchers are developing. X-rays cannot be focused by conventional refractive lenses, but mirrors can deflect them at very low grazing angles, bringing them to a focus. Researchers are also trying out other types of optics. For example, ESRF's Anatoli Snigirev has developed a lens consisting of a series of holes drilled in pieces of aluminum or beryllium. To an x-ray, the holes have a slightly greater refractive index than the metal, so they act as an array of lenses. Another focusing system uses glass capillary tubes with a reflecting inner surface. And a collaboration of researchers from APS, ESRF, and Brookhaven National Laboratory has built a thin-film wave guide consisting of a layer of polyimide 1556 angstroms thick, sandwiched between totally reflecting layers of silicon and silicon dioxide. “With capillary tubes, you can get a beam down to a micron, but wave guides can get the beam down to the angstrom level, 100 to 1000 angstroms,” says Sunil Sinha, who is responsible for in-house experiments at APS.
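
    For the hole-array lens, the focusing power comes from the tiny amount delta by which the metal's refractive index for x-rays falls below 1; a standard estimate (not an ESRF specification) for N cylindrical holes of radius R drilled in a row along the beam is

        f \approx \frac{R}{2 N \delta},

    and because delta is only of order 10^{-6} to 10^{-5} for light metals at hard-x-ray energies, many holes must be stacked in series to bring the focal length down to a practical few meters.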

    Such small beam sizes are transforming diffraction, the standard technique for investigating crystal structure, in which a beam is fired at a crystal and the pattern of deflected photons provides details of its interior. Because the very fine beams provided by the new sources are, in most cases, smaller than the crystal, researchers can investigate different parts of a crystal separately. Says Riekel: “You can look at inhomogeneities inside the crystal, or inside a large sample, such as a polymer fiber”—or a strand of spider silk. And Simon Mochrie of the Massachusetts Institute of Technology explains that because x-rays have wavelengths much shorter than light, ranging from a fraction of 1 angstrom to a few angstroms, they can resolve much more detail. “Polymer molecules are about 100 to 1000 angstroms wide, and that is a convenient size you can reach with x-rays.”
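
    The resolution argument is simply the diffraction condition (a textbook relation, not something particular to the new sources): a repeat spacing d in the sample scatters x-rays of wavelength lambda into a sharp peak when

        n\lambda = 2 d \sin\theta, \qquad n = 1, 2, 3, \ldots,

    so wavelengths of an angstrom or so can resolve spacings from atomic dimensions up to the hundreds of angstroms typical of polymer molecules, the latter appearing at correspondingly small scattering angles.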

    Because the beams are brilliant as well as finely focused, researchers can gather data much more quickly than they could at previous sources. While speed is always welcome, it has opened up new opportunities. For example, Walter Lowe at Howard University in Washington, D.C., heads a team whose first experiment on an APS beamline used a very intense beam to record diffraction patterns of a semiconductor film at a rate of 30 frames per second. Besides probing the material, the beam also heats it. “The beam does some annealing in the film and we look at thermal changes,” says Lowe. While this first study simply demonstrated what was possible, Lowe's team plans to look at the dynamics of many different materials. “We look at everything from biology to semiconductor materials,” he says.

    Researchers at ESRF are doing similar work, says Director-General Yves Petroff: “We can now look—at a time scale of a few nanoseconds, with a repetition rate of 100 picoseconds—at what happens in a molecule” as it reacts with another molecule or changes shape. “We can make a ‘movie' of what happens. … This is something entirely new and has never been done before.”

    The coherence of the undulator beams is also allowing researchers to exploit new types of imaging. One, phase-contrast imaging, exploits two coherent beams that have slightly different phases. Superimposing the beams yields an “interference” pattern—the intensity of the combined beam will decrease where the phase difference becomes greater. This phenomenon allows researchers to image body tissues and minute cracks in metals, which do not block x-rays but simply shift their phase slightly. When x-rays that have passed through the region of interest are combined with x-rays that have bypassed it, phase shifts will show up as changes in intensity, forming an image.
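
    The intensity change being exploited is ordinary two-beam interference (a textbook relation rather than a formula from the ESRF work): superposing coherent beams of intensities I_1 and I_2 that differ in phase by Delta-phi gives a combined intensity

        I = I_1 + I_2 + 2\sqrt{I_1 I_2}\,\cos\Delta\phi,

    so a phase shift far too small to absorb any x-rays still registers as a measurable change in brightness.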

    In the past, researchers made such phase-contrast images by separating a beam into two components and sending one through the object while directing the other around it. More recently, researchers have simplified this process with a technique called “inline holography,” in which a small object is placed in the middle of a large coherent beam. The parts of the beam that pass through the object then combine with those that pass around it.

    Researchers are now enhancing phase-contrast imaging with other techniques. In x-ray microscopy, for example, a coherent beam is made to diverge by passing it through a thin-film wave guide. It then casts an enlarged image of the sample on a detector placed some distance away. “We have achieved a microscope that operates … with a resolution of 1200 angstroms,” says Petroff. And by taking multiple images of a sample, such as biological tissue or fibers, from different orientations and combining them in a computer, researchers can reconstruct three-dimensional images of the sample's interior, a technique called phase-contrast x-ray computed tomography.

    Another technique made possible by undulators' coherent x-ray beams is photon correlation spectroscopy, otherwise known as “speckle spectroscopy.” It can garner information from disordered materials, such as colloids, polymers, alloys, and synthetic multilayers, which have structures that are too chaotic for diffraction studies. A coherent beam scattered from such materials produces a set of speckles characteristic of the sample. The speckle pattern does not provide any insight into the material's structure, but researchers can follow how the intensity of specific speckles changes over time, gleaning information on phenomena such as phase transitions and sedimentation. “People have been doing this for many years with lasers,” says ESRF's Gerhard Grübel. “But now you can do it in the x-ray regime, and that means that your spatial resolution has become much finer” because of the much shorter wavelengths of x-rays. “You can now probe the disorder and dynamics of systems on an atomic-length scale,” he says.
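
    The quantity tracked in speckle spectroscopy is the normalized intensity autocorrelation of a chosen speckle, the same function long used in laser light scattering; it decays on the time scale of the material's internal rearrangements:

        g_2(\tau) = \frac{\langle I(t)\, I(t+\tau)\rangle}{\langle I(t)\rangle^{2}} .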

    Although researchers at the new sources, in particular at ESRF, are already turning out new results at a prodigious rate, they are also exploring and perfecting new approaches, such as combining two techniques in a single experiment. At the APS, for example, Sinha and his colleagues have plans to combine phase-contrast imaging and scattering: “You would like to use phase-contrast imaging to look at a crack propagating in a material. And around the crack you would use scattering measurements to measure the strain distributions.”

    In some cases, new techniques are emerging from this bustle of innovation faster than the demand for them. For example, Riekel cites a new technique in which the fluorescence of atoms induced by x-rays reveals the presence of trace compounds at a sensitivity 100 times greater than that of existing methods. Says Riekel, “We still have to look for applications. It is so sensitive that you have to go out and find the clients for this.”


    X-rays Go Where Neutrons Fear to Tread

    1. Andrew Watson
    1. Andrew Watson is a writer in Norwich, U.K.

    Move over, neutrons: The x-rays are in town. Ever since the German physicist Max von Laue's 1912 insight that x-rays could be used to unravel crystal structure, they have been an essential tool for the study of matter. Some neighborhoods, however, have been off limits to x-rays, such as materials' fine-scale magnetic structure and the fleeting molecular alliances within disordered materials such as liquids and glasses. Those have been the domains of neutron beams since the first research reactors were built in the 1950s. With the advent of third-generation synchrotron sources, however, x-ray scattering is making inroads into neutron territory.

    “The special points [of the new sources] are very high brightness, good-quality polarization, and very high energy x-rays,” says Hiroshi Kawata of the Photon Factory at the KEK high-energy physics lab in Tokyo. Because of these properties, “you can start thinking about experiments it would not have been possible to do a few years before,” says physicist Michael Krisch of the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, the first of the new machines. Already, the third-generation sources are teasing out information about magnetic properties and disordered materials that neutrons could not reveal. Because of the beams' exquisite polarization, says ESRF's Pekka Suortti, researchers are beginning to answer such questions as “What is the real nature of the magnetization: Is it local [to an atom] or is it itinerant?” And the x-rays' brightness and tightly controlled energy have opened the way to studies of disordered materials that capture, for example, a high-speed form of sound in water.

    Those results are only the first in what is expected to be a torrent, says David Laundy, an ESRF user from Britain's University of Warwick, because “x-rays give different information from [that of] neutrons.” Neutrons are sensitive to magnetism, for example, because they are scattered not only by collisions with atomic nuclei, but also by magnetic interactions with atoms as a whole. (Neutrons, although they lack charge, nevertheless have their own magnetic field.) But neutrons cannot distinguish the two contributions to an atom's magnetic field, which come from the inherent spin of its electrons and from the magnetic effect of those electrons orbiting the nucleus. “In the case of neutrons … you see the combined magnetic effect, which includes both the spin and orbital contributions,” says Suortti. Separating out the spin and orbital parts “really lies at the heart of understanding magnetic properties,” he says, because while the orbital component is always associated with the atom, the spin component can become “delocalized” and give rise to a “sea” of magnetism like that in metals such as iron.

    X-rays offer a way to untangle these two effects. Scattered by an atom's electrons, x-rays respond mainly to the electrons' electric charge, but the magnetic part of the photon's electromagnetic wave also interacts feebly with the electrons' magnetism when the beam's polarization is favorably aligned. This is possible with the new x-ray sources because they have beams whose polarization—the alignment of their electric and magnetic fields—can be controlled. “With x-rays … magnetic and charge-scattering amplitudes have a different polarization dependence,” explains ESRF's Christian Vettier. “If we measure this, we can tell whether the scattering is magnetic in origin or not.” With a weak x-ray beam, this magnetic scattering is little more than a curiosity, but with a bright beam, it can become a tool for studying magnetic structure.

    And it turns out that x-rays are also sensitive to the two different components of magnetism. “If you do two different experiments in two different geometries,” with the sample in different orientations relative to the incoming x-rays, “you get different proportions of spin scattering to orbital scattering,” says Laundy. In short, adds Suortti, “By [using] x-rays, you can separate the orbital and spin contributions.”
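
    The separation Laundy and Suortti describe amounts to solving two equations in two unknowns: each geometry mixes the spin and orbital contributions with different, calculable weights. The sketch below is purely schematic; the weights and measured amplitudes are invented numbers, not values from any experiment, but it shows the algebra involved.

        # Two measurements in two geometries, each a different mix of spin (S) and
        # orbital (L) scattering, solved as a 2x2 linear system. Numbers are invented.
        import numpy as np

        weights = np.array([[1.0, 2.0],    # geometry 1: hypothetical S and L weights
                            [1.0, 0.5]])   # geometry 2: hypothetical S and L weights
        measured = np.array([3.0, 1.5])    # hypothetical magnetic-scattering amplitudes

        spin, orbital = np.linalg.solve(weights, measured)
        print(f"spin contribution S = {spin:.2f}, orbital contribution L = {orbital:.2f}")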

    X-rays offer other advantages over neutrons. “X-rays don't penetrate very far into materials compared to neutrons, so they tend to be more sensitive to the surface of the material than neutrons,” according to Laundy, who adds that surface magnetism is of great interest in the world of magnetic recording devices. Laundy also sees a future for x-rays in studying exotic magnetic structures—for example, those in which the orientation of the magnetism spirals inside the material. “X-rays offer high resolution compared to neutrons for studying these materials,” he says.

    X-rays are also flexing their muscles in another domain that was once the preserve of neutrons—so-called inelastic scattering. Most structural studies carried out with neutrons and x-rays rely on elastic scattering, in which the scattering does not change the energy of the probe particle. Such studies yield a still image of the inner material world. Probing the dynamics of that world requires inelastic scattering, in which the probe particle provokes an excitation, such as a lattice vibration, as it strikes the sample, then continues on its way with reduced energy. Careful analysis of the energy and momentum of the particle before and after scattering allows researchers to find out what sort of excitations are possible in the material.
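
    The bookkeeping behind such an experiment is simple in outline: the energy handed to the excitation is the difference between the incident and scattered photon energies, and the momentum transfer follows from the two wavevectors and the scattering angle. The numbers below are illustrative choices, not parameters of any particular beamline.

        # Energy and momentum transfer in an inelastic x-ray scattering event (illustrative).
        import numpy as np

        hbar_c = 1973.27               # eV * angstrom
        E_in = 21000.0                 # incident photon energy, eV (illustrative)
        E_loss = 0.005                 # energy given to the excitation, eV (~5 meV phonon)
        two_theta = np.radians(5.0)    # scattering angle (illustrative)

        E_out = E_in - E_loss
        k_in, k_out = E_in / hbar_c, E_out / hbar_c        # photon wavevectors, 1/angstrom
        q = np.sqrt(k_in**2 + k_out**2 - 2 * k_in * k_out * np.cos(two_theta))

        print(f"energy transfer = {E_loss * 1e3:.1f} meV, momentum transfer = {q:.3f} 1/angstrom")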

    Inelastic scattering studies with neutrons have unraveled the dynamics of a wide range of systems, from liquids to metallic glasses. But neutron studies often require samples to be made from rare isotopes, rather than the common ones. Water, for example, has to contain deuterium rather than hydrogen for neutron studies. Neutron beams are also dim, and their energy range is limited. And, like police who can't catch up with villains in sports cars, neutrons can't trace excitations that propagate faster than about 1000 meters per second, roughly the speed of the neutron itself.
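
    That 1000-meter-per-second figure is easy to check. Assuming a cold neutron with a wavelength of about 4 angstroms (a typical value for such experiments, chosen here purely for illustration), its speed v = h/(m·λ) indeed comes out near 1000 meters per second, so any excitation moving faster simply outruns the probe.

        # Speed of a neutron with a 4-angstrom wavelength: v = h / (m * lambda).
        h = 6.626e-34          # Planck's constant, J*s
        m_neutron = 1.675e-27  # neutron mass, kg
        wavelength = 4e-10     # 4 angstroms, in meters

        v = h / (m_neutron * wavelength)
        print(f"neutron speed ~ {v:.0f} m/s")   # roughly 990 m/s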

    ESRF's x-rays, however, have a wide range of energies and momenta, enabling Francesco Sette and his colleagues to study just this type of fast excitation. “We are interested in determining the dynamical properties of disordered materials in a region that was not accessible before,” says Sette, who has a special interest in liquids. At large scales, liquids are shifting and chaotic; at the smallest scales they consist of individual particles. Somewhere in between, liquid molecules interact fleetingly with their neighbors, arranging themselves into structures that are gone again in a flash. For brief moments, a liquid may even appear to be a solid. These transitory get-togethers by molecules are called collective excitations, and they may shape a liquid's everyday behavior, such as its chemical reactivity, its thermal properties, and the way sound waves propagate through it.

    Collective excitations are also thought to be responsible for the phenomenon of “fast sound” in water. Sound usually travels at about 1400 meters per second in water, but about 4000 meters per second in ice. Computer simulations predict that collective excitations in water allow a second, faster form of sound to travel at 3200 meters per second. Paris-based physicist José Teixeira and his colleagues first glimpsed fast sound in 1985 using neutron scattering, although further neutron studies a decade later cast doubt on their findings.

    However, Sette and colleagues from ESRF and from the University of L'Aquila in Italy confirmed the effect by setting off fast sound waves in water with inelastically scattered x-rays. “We were able to show that you can have in a liquid a transition to properties characteristic of the solid when you consider time and length scales which are short and small [enough],” says Sette, who reported the result in the 1 July 1996 issue of Physical Review Letters.

    Solid but disordered materials such as glass are also coming under the gaze of inelastic x-ray scattering. Glass is remarkably good at absorbing heat, for example, and Sette led scattering experiments at ESRF showing that some of this ability “must come from high-frequency acoustic waves.” He notes that he and his colleagues “were able to observe and to measure and to characterize [these waves] in a glass, and this was not possible before.”

    Despite their newfound abilities, x-rays are not about to replace neutrons as a tool for studying matter. Laundy describes the two as complementary, a sentiment echoed by Sette. When it is possible to use them, “neutrons are by far … the best technique to look at the dynamics,” he says. And even though x-rays can probe magnetism in ways neutrons cannot in many situations, “neutrons are still the probe of choice to determine a magnetic structure,” says Vettier, because their magnetic scattering is so much stronger. But researchers at the new synchrotron sources are learning fast, and x-rays may not remain the underdog for long.


    Brightness Speeds Search for Structures Great and Small

    1. Robert F. Service

    To Eva Pebay-Peyroula and other x-ray crystallographers, bacteriorhodopsin has brought 20 years of frustration. Beginning in the mid-1970s, researchers managed to coax this large protein (which helps convert sunlight to chemical energy in bacteria) into crystals, the starting point for x-ray experiments that they hoped would determine the protein's three-dimensional (3D) atomic structure and reveal how it does its job. But while other experiments have unraveled a good deal of the molecule's structure, the x-ray studies never panned out. The crystals were either too small or too disorderly to be useful, making the atomic pictures come out fuzzy at best.

    Last winter, however, Pebay-Peyroula, a crystallographer at the University of Grenoble in France, finally triumphed over the elusive protein. Together with crystal growers Ehud Landau and Jurg Rosenbusch of the University of Basel in Switzerland, Pebay-Peyroula took a newly grown batch of pinhead-sized crystals—no bigger than the ones that had failed in earlier studies—to a new ultrafine x-ray beam at the European Synchrotron Radiation Facility (ESRF) in Grenoble. They walked away with the first high-resolution x-ray picture of the molecule, the details of which she presented at last month's European Biophysics Congress in Orleans, France. The new structure not only reveals new aspects of how the water molecules at the core of bacteriorhodopsin help pump protons across cell membranes to help generate chemical energy; it also highlights the new frontier of molecular biology being made possible at the latest generation of synchrotrons. “It's a really big success that could only have been done on this beamline,” says Stephen Cusack, a crystallographer at the European Molecular Biology Laboratory's facility in Grenoble.

    ESRF, which has been up and running since 1994, is one of three so-called “third-generation” synchrotron sources that turn out highly energetic, or hard, x-rays; the other two are just starting operations in the United States and Japan. The main advantage of these stadium-sized machines, which generate radiation by accelerating charged particles to high energies and sending them along tightly curving paths, is that “their beams are fantastically bright compared with other sources,” says Wayne Hendrickson, a biochemist at Columbia University in New York City. They are at least 100 times as bright, in fact, thanks to the high energies of the particles, as well as the addition of specialized instruments designed to enhance and focus the beams. That puts them “head and shoulders above other machines,” says Edwin Westbrook, a crystallographer who heads a structural biology collaboration at the Advanced Photon Source (APS), the United States' third-generation synchrotron, which officially opened for business in May 1996.

    That brightness, according to Westbrook, Hendrickson, and others, will allow researchers to study biomolecules as never before. For the first time, the ultrasmall crystals of proteins such as bacteriorhodopsin are yielding enough data for researchers to determine their structures. The intense beams are lighting up the atomic landscapes of protein complexes, such as viruses, that are too large to be studied with fainter beams. They are speeding discoveries by turning out, in just seconds, the amount of data that previous machines required minutes or hours to amass. That speed is also helping researchers make high-speed movies of proteins as they undergo shape changes (Science, 27 June, p. 1986). Moreover, because the beams are so bright, they can reveal details hidden in partially ordered samples, such as the molecular events responsible for muscle contractions.

    Starting small. While synchrotrons got their start as scientific toys for physicists, chemists, and materials researchers, biologists are now the fastest-growing group of users, up from 5% of the total just 10 years ago to 30% today. The reason: The machines' hair-thin x-ray beams are ideal for determining the 3D structure of proteins. While such structures can be determined with a number of techniques, the most popular is a method known as diffraction, in which an x-ray beam ricochets off innumerable copies of a protein lined up in a crystal. By analyzing the pattern of deflected x-rays recorded by a detector, researchers can reconstruct a precise 3D map of a protein's constituent atoms.
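
    The geometry behind that analysis is Bragg's law, n·λ = 2d·sin θ, which ties the spacing d between planes of atoms in the crystal to the angle at which x-rays of wavelength λ are deflected. The sketch below uses an illustrative 1-angstrom wavelength and a few hypothetical plane spacings; it is a reminder of the relation, not a description of any particular beamline setup.

        # Bragg's law: for first-order diffraction, lambda = 2 * d * sin(theta).
        import numpy as np

        wavelength = 1.0                   # x-ray wavelength, angstroms (illustrative)
        for d in (2.0, 3.5, 10.0):         # hypothetical lattice-plane spacings, angstroms
            theta = np.degrees(np.arcsin(wavelength / (2 * d)))
            print(f"d = {d:4.1f} A  ->  Bragg angle theta = {theta:5.2f} degrees")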

    At least, that's the theory. Reaching this goal is fraught with difficulties, including the fact that x-rays interact only weakly with the types of atoms in biological crystals. This means that only a tiny fraction of the x-rays are actually deflected from their flight path and provide information about the molecule's structure. Researchers have long tried to get around this by growing relatively large crystals—about the size of a sesame seed—with an enormous number of copies of the protein, in hopes that the added protein copies would deflect a greater percentage of a beam's x-rays. “But there are many cases where you cannot grow large crystals,” says Christian Riekel, a microfocus beamline expert at the ESRF. Many proteins have an irregular shape, preventing them from packing together easily, and some can fold up in more than one 3D shape, which again prevents a regular assembly.

    Bacteriorhodopsin is one such case. Other similar proteins that span cell membranes are just as finicky. While researchers have solved the structures of more than 2000 nonmembrane proteins, they have managed only about 10 membrane proteins. Yet the structures of membrane proteins are eagerly sought, as they include receptors, ion channels, and other vital cell components. With the new beams, such as the ESRF's microfocus beamline (which can focus 200 billion photons a second on a spot just 10 micrometers in diameter), researchers are hopeful that the trickle of membrane protein structures will soon swell to a stream.

    The reason for the optimism is that the increased x-ray intensity allows tiny crystals with fewer copies of a given protein to produce a meaningful diffraction pattern. “Smaller crystals suffice,” says Cusack. “That's a good thing, because it means that the structure of more proteins can be solved.” Ada Yonath, a crystallographer with a joint appointment at Israel's Weizmann Institute of Science and the Max Planck Research Unit for Ribosomal Structure in Hamburg, Germany, agrees. “In 1972, we got microcrystals of an antibody that we were interested in,” says Yonath. “But at the time, we just threw them away, whereas today we'd be able to use them.” Adds Tom Irving, a synchrotron expert at the Illinois Institute of Technology (IIT) in Chicago, “In the old days, you studied what you could study. Today you study what you want to study.”

    But the new tightly focused beams are not without their problems: With increased intensity comes radiation damage. Second-generation sources, such as the National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory in Upton, New York, typically focus their beams down to a spot about 100 micrometers across. Microfocus beams at the APS and ESRF, meanwhile, regularly pack an even greater flux of photons into an even smaller spot, about 30 micrometers and below. As a result, “you get intense radiation damage because you're putting so many photons in such a small area,” says Cusack. “It rapidly damages crystals.”
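
    The damage problem follows from simple arithmetic: for a given photon rate, the dose per unit area grows as the spot shrinks. The comparison below assumes, purely for illustration, the same photon rate delivered into each spot, using the 200-billion-photons-per-second figure quoted above for the microfocus beamline, so the absolute numbers are notional; the scaling with spot size is the point.

        # Photons per second per square micrometer for different spot diameters,
        # assuming the same photon rate in each (illustrative comparison only).
        import math

        photons_per_s = 2e11                    # ~200 billion photons per second
        for spot_um in (100.0, 30.0, 10.0):     # spot diameters, micrometers
            area_um2 = math.pi * (spot_um / 2) ** 2
            print(f"{spot_um:5.0f} um spot -> {photons_per_s / area_um2:.2e} photons/s/um^2")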

    In addition to damaging the proteins directly, the high-energy photons can break bonds in the solvent surrounding them, creating highly reactive “free radicals” that eat away at bonds in the protein. Over the past few years, researchers have taken to chilling their crystals to about 100 kelvin with liquid nitrogen in an effort to halt the diffusion of the free radicals, but this is less effective under the more intense beams of ESRF and APS. Westbrook and others have shown in preliminary experiments that cooling the samples to even lower temperatures with liquid helium seems to improve matters somewhat. But “the jury is still out” about whether the improvement will be enough, says Cusack.

    The big picture. Another area benefiting from the new high-power x-ray beams is the attempt to determine the atomic structure of heavyweight proteins and large complexes of proteins, DNA, and RNA, such as viruses and cellular organelles. While x-ray diffraction has long worked wonders for solving the structures of small proteins, measuring 75 or so angstroms in diameter, researchers have had a much harder time using it to map the atomic landscape of large structures, such as viruses, in which the smallest repeating unit in the crystal, known as the unit cell, can measure 1000 angstroms across or more.

    The huge number of atoms in viruses and other large structures is one problem. Because of it, these structures create more diffraction spots on the detectors, each one formed by x-rays glancing off a different plane of atoms in the crystal. If too many spots are created, they begin to overlap, making it hard for researchers to separate them out. A second problem has been intensity. Because the structures are larger, a crystal of a given size will contain fewer copies of the unit cell. And unless a larger number of photons are blasted at the crystal, the spots will be dimmer, making them harder to distinguish from background noise.
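
    The crowding problem has a simple geometric origin: in the small-angle approximation, neighboring diffraction spots on the detector are separated by roughly λ·D/a, where λ is the wavelength, D the crystal-to-detector distance, and a the unit-cell edge, so a tenfold larger cell packs the spots ten times closer together. The wavelength and detector distance in the sketch below are illustrative values only.

        # Approximate spot spacing on the detector for different unit-cell sizes.
        wavelength_A = 1.0                     # x-ray wavelength, angstroms (illustrative)
        detector_mm = 200.0                    # crystal-to-detector distance, mm (illustrative)
        for cell_A in (75.0, 300.0, 1100.0):   # small protein, large protein, virus-scale cell
            spacing_mm = (wavelength_A / cell_A) * detector_mm
            print(f"unit cell {cell_A:6.0f} A  ->  spot spacing ~ {spacing_mm:.2f} mm")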

    In the new generation of synchrotrons, better x-ray detectors have greatly eased the first problem, while higher-brilliance beams have dramatically reduced the second. “If you have more [x-ray] intensity, each spot will be stronger, and it makes it possible to go to larger assemblies,” says Michael Rossmann, a crystallographer at Purdue University who in 1985 solved the first crystal structure of a virus—one far smaller than the viruses being mapped today.

    While work on these macro structures has been progressing at second-generation sources for years, third-generation sources are only now beginning to weigh in. And the results, says Hendrickson, are “spectacular.” At last month's Protein Society meeting in Boston, for example, a British team led by Oxford University molecular biophysicist David Stuart and Peter Mertens, a virologist at the Institute of Animal Health at Pirbright, reported using ESRF to help them map the atomic pattern of the bluetongue virus, which infects sheep and cattle. This structure, the largest ever solved, is made up of about 1000 proteins and has a unit cell that measures about 1100 angstroms on a side and 1600 angstroms high. According to Stuart, “it was a bit of a stretch even at a third-generation source.”

    But Stuart adds that the bluetongue virus is not likely to remain at the top of the heap forever. “As time goes on, the things that are of biological interest will be different and have larger and larger structures and be made up of complexes of proteins,” says Stuart. “Having access to these machines makes looking at these complexes possible.”

    Tim Richmond and his colleagues at the Swiss Federal Institute of Technology (ETH) in Zurich have been hard at work on another heavyweight project. They recently used ESRF to collect diffraction data on the nucleosome, a protein-DNA complex that is the fundamental repeating unit in chromosomes, and they, too, expect to publish a high-resolution structure soon. Yonath and her colleagues are also completing work on a new high-resolution image of the ribosome, the cellular machine responsible for building proteins.

    Proteins go MAD. In addition to solving larger structures and probing smaller crystals, researchers at third-generation sources expect to solve structures more rapidly as well. Not only does the higher flux of photons from the machines speed data collection, but the improved ability of the new machines to tune the wavelength of their x-rays allows researchers to expand their use of a new high-speed technique for solving protein structures: multiwavelength anomalous diffraction. MAD, as it's called, cuts down on the number of crystals that researchers must image to piece together an atomic-scale map.

    The technique cuts to the heart of modern crystallography's greatest challenge—the fact that conventional lenses cannot focus x-rays. Robert Stroud, a crystallographer at the University of California, San Francisco, explains that a diffraction pattern is somewhat like the blurry pattern that hits a screen if light is projected through a slide without then being focused by a lens. Without a lens to reveal the image, researchers must use mathematical techniques to reconstruct the pattern of atoms that gave rise to the diffraction pattern.

    That requires knowing two properties of the x-rays. The first is the intensity of the diffraction spots; the second is the relative position of the wave forms themselves, known as their phase. By plugging these numbers into an equation, researchers can work out the 3D arrangement of atoms in the crystal.
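
    The equation in question is, in essence, a Fourier synthesis, and a one-dimensional toy version makes the point concrete: recovering the density requires both the amplitudes (which the spot intensities give) and the phases (which they do not). The sketch below is an illustration of that fact, not crystallographic software; the two-peak toy density is invented.

        # Recover a toy 1D "electron density" from its Fourier amplitudes and phases,
        # then show that throwing the phases away destroys the image (the phase problem).
        import numpy as np

        x = np.linspace(0, 1, 256, endpoint=False)
        density = np.exp(-((x - 0.3) / 0.01) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.01) ** 2)

        F = np.fft.fft(density)                       # the "structure factors"
        amplitudes, phases = np.abs(F), np.angle(F)

        with_phases = np.fft.ifft(amplitudes * np.exp(1j * phases)).real   # faithful recovery
        without_phases = np.fft.ifft(amplitudes).real                      # phases discarded

        print("max error with phases:   ", np.abs(with_phases - density).max())
        print("max error without phases:", np.abs(without_phases - density).max())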

    But while the intensity of the spots can be measured simply by counting the number of photons that hit the detector at each position, the phases of the scattered waves cannot be determined by looking at a single diffraction image. To find out the phase of the x-rays, researchers normally take multiple images of at least two crystals of the protein, one of which typically has a metal atom inserted into the protein's structure so that it produces subtly different diffraction patterns. Then, by comparing the two sets of patterns, the researchers can determine the precise position of the metal atom, much as surveyors can triangulate their own position by knowing the direction and distance to two separate points. This point of reference then helps the crystallographers infer the relative phases of the waves scattering from other atoms in the protein.

    This technique of analyzing multiple crystals of the same protein has a long track record of success, but in the late 1980s, Hendrickson and his colleagues at Columbia showed that they could achieve the same result with just a single metal-containing protein. They did so by taking multiple diffraction patterns with x-rays of slightly different wavelengths. The metal atom scatters photons differently if their frequency is tuned slightly above or below a characteristic “resonant” value, yielding the multiple diffraction images needed to determine the critical phase information. The technique has since swept through the crystallography community, because it means that researchers now need only crystallize one form of their protein.
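
    The principle can be reduced to a small numerical toy. In the sketch below, the measurable quantity at each wavelength is the amplitude of the combined scattering from the metal atom (whose contribution changes with wavelength in a known way) and from the rest of the protein (whose amplitude and phase are unknown); comparing the measurements pins down the unknowns. All of the numbers, including the metal's scattering factors, its phase, and the protein contribution, are invented for illustration, and the brute-force search stands in for the far more sophisticated methods crystallographers actually use.

        # A toy MAD-style phase determination. F_total(wavelength) = F_P + f_A * G_A, where
        # F_P (amplitude and phase unknown) is the protein contribution, G_A the known
        # phase factor of the located metal atom, and f_A its wavelength-dependent
        # scattering factor. Only the amplitude of F_total is measured. Numbers invented.
        import numpy as np

        phi_A = np.radians(40.0)                  # phase of the metal-atom contribution
        f_A = {                                   # f0 + f' + i*f'' near the resonance
            "below edge": 25.0 + (-6.0 + 0.5j),
            "at edge":    25.0 + (-9.0 + 2.0j),
            "above edge": 25.0 + (-3.0 + 4.0j),
        }

        F_P_true = 80.0 * np.exp(1j * np.radians(110.0))   # used only to fake the data
        measured = {k: abs(F_P_true + f * np.exp(1j * phi_A)) for k, f in f_A.items()}

        best, best_err = None, np.inf
        for amp in np.linspace(50.0, 110.0, 241):          # brute-force search for F_P
            for phase_deg in np.linspace(0.0, 360.0, 361):
                F_P = amp * np.exp(1j * np.radians(phase_deg))
                err = sum((abs(F_P + f * np.exp(1j * phi_A)) - measured[k]) ** 2
                          for k, f in f_A.items())
                if err < best_err:
                    best, best_err = (amp, phase_deg), err

        print("recovered |F_P| and phase:", best)          # close to 80 and 110 degrees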

    Results from initial MAD studies at third-generation machines are beginning to come in. Christopher Lima, a postdoctoral fellow in Hendrickson's lab, and colleagues at Columbia and at Argonne National Laboratory in Argonne, Illinois, report in the 15 June issue of Structure that they used MAD to determine the atomic map of a putative tumor-suppressor protein, which is thought to be disrupted in lung cancer. This result—the first biological structure announced from the APS—will undoubtedly be the first of many. Because third-generation sources can tune their x-rays over a wider range of wavelengths and because their high power increases the characteristic scattering from the metal atom, use of the technique “is likely to grow exponentially,” says Yonath.

    Looking beyond crystals. While solving protein structures is likely to be a mainstay for the new beamlines, researchers expect the machines to help shed light on other long-intractable biological questions as well. At the APS, for example, teams are gearing up to use the intense x-ray beams to explore everything from how muscles twitch to how plants absorb nutrients from the soil.

    Like protein crystallography studies, most of these projects had their start at earlier synchrotrons, and researchers are hoping that the new high-brilliance beams will help them improve their data. For the past several years, for example, University of Chicago geophysicist Stephen Sutton and his colleagues have been using Brookhaven's second-generation NSLS beams in the hope of determining just how a fungus manages to cause “take-all” disease, which kills up to 20% of all wheat crops.

    Researchers have long known that the key effect of the disease is to make the roots of wheat plants unable to absorb manganese, an essential nutrient—the lack of which in turn makes the plants more vulnerable to the fungus. One possibility is that in the soil surrounding the roots, the fungus alters the oxidation state of the metal from manganese(II), a form the plant absorbs easily, to manganese(IV), a form that the plant cannot absorb. As Sutton explains, “this makes the manganese unavailable to the plant, so it becomes manganese deficient and can't protect itself against the invading fungus.”

    Because the different oxidation states of manganese absorb x-rays at slightly different wavelengths, Sutton and his colleagues had attempted to use the NSLS to map the distribution of manganese(II) and manganese(IV) in wheat roots and surrounding soil. However, he says, “the resolution [from the NSLS] hasn't been sufficient to look at the details of this reaction.” They are hoping that the APS's tighter focus and higher photon flux will provide the boost they need. In a related set of experiments at the University of Georgia, environmental geochemist Paul Bertsch and his colleagues are mapping the way certain plants take up plutonium, uranium, and chromium from the soil, in an effort to learn how these plants manage to concentrate the metals without succumbing to their toxic effects.

    Slightly farther afield, IIT's Irving plans to train the APS beam on muscle fibers in hopes of witnessing the molecular events of a twitch close up. With each muscle contraction, Irving explains, interleaving assemblies of protein filaments known as actin and myosin exert force on one another to shorten the overall length of the muscle. As early as 1971, Gerold Rosenbaum, Kenneth Holmes, and their colleagues attempted to witness this supramolecular dance using x-ray scattering. But the effort was hampered, in part because earlier synchrotrons delivered too few photons to provide good resolution. Now Irving believes researchers are finally on the verge of a breakthrough.

    It is a sentiment that many of his synchrotron colleagues around the world share. “With the third-generation sources,” says Irving, “we finally have the flux to do what we wanted to do in the first place.”


    New Synchrotrons Light Up Microstructure of Earth

    1. Daniel Clery

    For geophysicists, “to be able to reproduce the conditions of Earth's interior, from the crust down to the core, [is] something of a Holy Grail,” says Ho-kwang Mao of the Geophysical Laboratory of the Carnegie Institution of Washington. In pursuit of it, researchers like Mao take tiny samples of material typical of Earth's interior—such as the silicates and oxides of the upper mantle or the iron of the core—squeeze them to huge pressures between the tips of two diamonds, and heat them to a few thousand degrees. But mimicking the deep Earth is of no use unless you can see how your sample is behaving. So geophysicists have been adding a new stop to their grail quest: a pilgrimage to a third-generation synchrotron.

    Work with second-generation machines has already shown that scattering x-rays off high-pressure samples can reveal how the crystal structure of minerals in the deep Earth changes under pressure. The new synchrotrons, with their higher intensities, are providing sharper pictures of structure and properties, and their finer beams are allowing researchers to probe the crumb-sized samples required for ultrahigh-pressure experiments. “It's a very exciting time at the moment. … The new facilities are very important,” says geochemist Surendra Saxena of Uppsala University in Sweden.

    Geoscientists are not the only ones noticing the possibilities. Other researchers are compressing high-temperature superconductors to try to gain some clues to how they work. Still others are putting the squeeze on the soccer-ball-shaped carbon molecules called fullerenes to see how their properties and structure change at high pressure. Work at the new machines is also illuminating the interiors of planets other than Earth by probing how hydrogen—which makes up the core of Jupiter and Saturn—behaves at enormous pressures, where it may become metallic, or even a superconductor.

    Perhaps the most contentious area at the moment is the study of iron. Earth's core is the driving force behind all the processes of the planet's interior, so exactly what form iron takes in the core will have a fundamental impact on geophysical models. “Iron will change the whole thing,” says Denis Andrault of the University of Paris. To study iron's properties under such extreme conditions, researchers squeeze a sample to enormous pressures between two flawless diamonds in a device called a diamond-anvil cell, then heat it with a laser. By watching how x-rays are scattered as they pass through the diamonds and the sample trapped between them, researchers can gather clues to how the atoms are arranged in the sample.

    In 1993, a team led by Saxena squeezed a sample of iron to about 38 gigapascals (GPa)—hundreds of thousands of times atmospheric pressure—while heating it to between 1200 and 1500 kelvin. They detected signs that under those conditions the iron is no longer in its normal high-pressure structure, known as hexagonal close-packed (hcp), in which atoms are arranged like racked billiard balls, with the balls in each layer lined up with those above and below. Since then, several teams have been probing iron at second-generation synchrotron sources in attempts to identify the new structure. Saxena believes it to be an arrangement called polytype double-layer hcp, or dhcp—in which each layer is slightly out of step with its neighbors. But he admits that “it's still an open question what happens at high temperature. Most agree there is a phase change, but some groups believe it is a different structure.”

    The third-generation machines could help settle the matter by sweeping aside some of the obstacles earlier studies faced. One stems from the fact that the laser spot that heats the sample is roughly the same size as the x-ray beam. To avoid contaminating the x-ray scattering data with sample areas at the edge of the laser spot that are not at the full temperature, researchers must narrow down the beam. Doing this at a second-generation source reduces the intensity so much that analysis becomes too slow to maintain temperature stability. “You can barely do [this technique] at second-generation sources,” says Mao. Third-generation machines speed up the analysis considerably.

    The new machines also allow researchers to get a more direct view of crystal structure. Traditional high-pressure studies have used a technique called energy-dispersive x-ray diffraction, in which a beam with a range of photon energies is fired at the sample. A detector positioned on the far side of the sample, at a fixed angle from the beam axis, collects a diffraction spectrum, showing how many photons are scattered to that angle at each energy. The spectrum provides key lattice parameters of the crystal, such as the size of its smallest repeating unit, but not its structure.
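
    The relation the energy-dispersive method exploits is Bragg's law rewritten in terms of photon energy: with the detector fixed at scattering angle 2θ, a set of lattice planes with spacing d sends photons of energy E = hc/(2d·sin θ) into the detector, so peaks in the energy spectrum translate into d-spacings. The angle and spacings in the sketch below are illustrative numbers only.

        # Energy-dispersive diffraction: photon energy selected by a fixed detector angle.
        import numpy as np

        hc = 12.398                       # h*c in keV * angstrom
        two_theta = np.radians(10.0)      # fixed detector angle (illustrative)
        for d in (2.0, 1.5, 1.0):         # candidate lattice spacings, angstroms
            E = hc / (2 * d * np.sin(two_theta / 2))
            print(f"d = {d:3.1f} A  ->  diffraction peak at E = {E:5.1f} keV")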

    The new synchrotrons can produce a beam of a single energy that is intense enough to do angle-dispersive diffraction, which produces an actual diffraction pattern on a flat image plate behind the sample. From that, researchers can discern structure as well as lattice parameters. “With energy-dispersive diffraction, hcp and dhcp are hard to distinguish,” says Saxena. “Angle-dispersive diffraction can resolve low-intensity peaks. It makes life so different.”

    The same goes for researchers doing high-pressure studies of hydrogen. To search for a metallic phase of hydrogen, researchers had to squeeze it to even higher pressures than those in the iron studies. “We needed to go to 100 GPa,” says Mao. The amount of force a diamond anvil can apply has a limit, but because pressure is force divided by area, researchers can approach the required pressures by making the sample smaller—on the order of a few tens of micrometers. That is too small to study with the comparatively coarse beams of second-generation machines. “At high pressure, the sample gets small, too small for ordinary synchrotrons,” says high-pressure specialist Michael Hanfland of the European Synchrotron Radiation Facility (ESRF). “The hydrogen stuff couldn't be done before at these pressures.”
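
    That scaling is easy to make concrete: pressure is force divided by area, so shrinking the anvil tips from 300 to 100 micrometers across raises the pressure ninefold for the same force. The force and tip sizes below are hypothetical round numbers chosen to illustrate the scaling, not specifications of any real cell, and in practice the achievable force also changes as the tips shrink.

        # Pressure = force / area for two hypothetical diamond-anvil tip diameters.
        import math

        force_N = 1000.0                   # hypothetical force on the anvils, newtons
        for tip_um in (300.0, 100.0):      # hypothetical tip (culet) diameters, micrometers
            area_m2 = math.pi * (tip_um * 1e-6 / 2) ** 2
            pressure_GPa = force_N / area_m2 / 1e9
            print(f"{tip_um:5.0f} um tip -> {pressure_GPa:6.1f} GPa")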

    The 30-micrometer beams of the ESRF have changed all that. When ESRF opened for business in 1994, hydrogen studies were a high priority, and the sense of urgency grew last year with reports that shock-wave experiments on liquid hydrogen had revealed hints of a metallic phase. A team of researchers from the Geophysical Laboratory and the University of Paris reported in Nature last October that they had succeeded in compressing solid hydrogen to more than 100 GPa. The results included the finding that hydrogen is much softer than predicted by theory, but “we did not see metallization,” Mao says.

    Mao, like many high-pressure researchers who spoke with Science, thinks the best is yet to come. “We're still learning with the third-generation sources,” he says. “It's all happening just now,” says Saxena. “We feel like explorers.”
