- HUMAN GENOME PROJECT
And the Gene Number Is ...?
Cold Spring Harbor, New York—Even though a draft sequence of the human genome is nearing completion, biologists still don't know how many genes it contains. Indeed, the range of estimates seems to be growing rather than shrinking. The question lies at the core of our understanding of genetic complexity. If genomes are the books of life, then genes are the words that tell the story of each organism. Biologists have long assumed that microorganisms are short stories and complex organisms such as humans, great tomes.
But last week at a meeting* here, the generally accepted human “word” count of 80,000 to 100,000 took a battering when researchers from Germany, the United States, and France offered revised estimates of the number of genes—all of them well below 50,000. The talks, some of which will be published in the June issue of Nature Genetics, sparked heated debates, with at least one genomicist countering with a new 100,000-plus estimate. The liveliest discussions focused on the terms and estimates for a $1-a-bet gene-count sweepstakes. Early entries ranged from less than 30,000 to more than 150,000, and updates and rules are posted at www.ensembl.org/genesweep.html. The winner will be picked in 2003 and will take home not only the pot but a leather-bound copy of The Double Helix, signed by Cold Spring Harbor Laboratory's James Watson, co-discoverer of the structure of DNA. By then, the number of genes should be clear, or at least clearer, says Ewan Birney, the computational biologist from the European Bioinformatics Institute who started the competition.
Gerald Rubin, head of the Berkeley Drosophila Genome Project, set the stage for the current debate when he pointed out that the fruit fly has some 5000 fewer genes than the supposedly simpler nematode worm. “Complexity is not in any simple way related to gene number,” he warned the audience.
Then the next day, André Rosenthal of the Institute of Molecular Biotechnology in Jena, Germany, concluded his talk about the recent completion of chromosome 21—published in the 18 May issue of Nature—with a reality check on the current gene estimate. Rosenthal posited that the gene distribution of chromosomes 21 and 22 (the latter finished last December) should be roughly representative of the entire genome, as 22 seems to be gene rich and 21 gene poor. If the human genome is 3.3 billion bases long, then these two chromosomes and their genes (about 770) account for 2% of the genome. By this logic, he said, “we arrive at less than 40,000.”
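Rosenthal's back-of-the-envelope extrapolation is easy to reproduce; a minimal sketch, using only the figures quoted in the talk and his assumption that the two chromosomes are representative:

```python
# Extrapolate a genome-wide gene count from chromosomes 21 and 22,
# assuming their combined gene density is typical of the whole genome.
genes_on_21_and_22 = 770      # genes found on the two finished chromosomes
fraction_of_genome = 0.02     # the pair spans about 2% of 3.3 billion bases

estimated_total = genes_on_21_and_22 / fraction_of_genome
print(f"Estimated human gene count: {estimated_total:,.0f}")  # 38,500 -- "less than 40,000"
```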
A few hours later, Hugues Roest Crollius, a molecular geneticist at Genoscope in Evry, France, came up with an even lower estimate, based on comparisons between the existing human sequence and the sequence of a freshwater puffer fish, Tetraodon nigroviridis. Roest Crollius, Jean Weissenbach, and their colleagues are using the puffer fish genome to identify evolutionarily conserved genes in the human genome. After sequencing about one-third of Tetraodon's compact genome, he matched that DNA up against the finished human chromosomes and 800 million bases of rough draft human sequence deposited in GenBank by the end of 1999. Preliminary analysis with a new gene prediction program that identifies shared coding sequences indicated that there are 2.58 to 3.18 so-called evolutionarily conserved regions per human gene—in part because genes consist of multiple coding regions, more than one of which is likely to be present in both genomes. Extrapolating from the number of these regions, he thinks the human genome has between 27,700 and 34,300 genes.
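The ratio-based estimate works the same way in reverse: divide the number of conserved regions by the regions-per-gene ratio. A sketch of the arithmetic, where the genome-wide total of roughly 88,000 conserved regions is an assumption back-calculated from the published range (the talk reported only the per-gene ratios and the final estimates):

```python
# Gene count from evolutionarily conserved regions (ECRs) shared with
# the puffer fish. The ECR total below is an illustrative assumption.
conserved_regions = 88_300           # assumed genome-wide ECR total
regions_per_gene = (2.58, 3.18)      # reported ECRs per human gene

high = conserved_regions / regions_per_gene[0]  # fewer ECRs per gene -> more genes
low = conserved_regions / regions_per_gene[1]
print(f"{low:,.0f} to {high:,.0f} genes")
```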
Then Phil Green, a genomicist at the University of Washington, Seattle, who has developed the widely used PHRED and PHRAP programs for assembling sequence data, said that he, too, had come up with a much lower gene count. Green's numbers are based on analyses of expressed sequence tags (ESTs), bits of genes identified during massive screenings of different tissues. Green, among others, has worked out computer programs that look for overlapping sequence between ESTs and connect those that belong to the same gene. Based on the EST representation of the genes in chromosome 22, he thinks the total number of genes is about 35,000. With Rosenthal's and Roest Crollius's data, “we now have three independent calculations that give these low numbers,” says Green. “I'm pretty confident that it's in that range.” Francis Collins, head of the National Human Genome Research Institute, came out on the low side too, placing his $1 bet on a mere 48,011 genes. Recent gene counts have been inflated, he said—particularly by genomics companies who boast that, “My list is bigger than yours.”
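The EST-clustering idea Green describes—connect overlapping tags, then count the resulting clusters as distinct genes—can be sketched in miniature. The sequences below are made up, and real programs use fast alignment rather than this naive shared-substring check:

```python
# Toy EST clustering: group ESTs that share a significant overlap, then
# count clusters as a proxy for distinct genes. Here "overlap" means a
# shared 8-base substring; real tools use proper alignment algorithms.
def overlaps(a, b, k=8):
    return any(a[i:i + k] in b for i in range(len(a) - k + 1))

def cluster_ests(ests):
    clusters = []
    for est in ests:
        # Find every existing cluster this EST overlaps, and merge them.
        hits = [c for c in clusters if any(overlaps(est, m) for m in c)]
        merged = [est] + [m for c in hits for m in c]
        clusters = [c for c in clusters if c not in hits] + [merged]
    return clusters

# First two reads share the 8-base stretch GGCCTTAA; the third is unrelated.
ests = ["ACGTACGTGGCCTTAA", "GGCCTTAACCGGTTAA", "TTTTCCCCTTTTCCCC"]
print(len(cluster_ests(ests)), "putative genes")  # 2
```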
But some researchers are holding out for an Anna Karenina-sized word count, arguing that human complexity cannot be explained any other way. John Quackenbush of The Institute for Genomic Research (TIGR) in Rockville, Maryland, put his dollar on 118,259 genes, an estimate based on the genes in TIGR's human gene index. He also bases his estimate on ESTs but uses a different gene-building program from the one Green developed. What's more, he said, in the microbial genomes that TIGR has sequenced over the past several years, extra genes have always popped up. Says Quackenbush, “We still have a lot to learn about what's in the human genome.” Nor was his sweepstakes entry the highest. A 153,478-gene entry came in from Sam LaBrie of Incyte Genomics, a California-based company that in September 1999 announced that the human genome had at least 140,000 genes.
But Rubin and other gene minimalists were undeterred, arguing that complexity comes from how genes are regulated or expressed—not from the number of genes themselves. “We don't need large numbers of genes to be an intelligent species,” Rosenthal explains. Weissenbach agrees, arguing that “once people go through the EST data, they will realize that many of the clusters [cover] the same gene.” His bet: 28,700. And yours?
* Genome Sequencing and Biology took place 10 to 14 May at the Cold Spring Harbor Laboratory in New York.
Fly's Eye Spies Highs in Cosmic Rays' Demise
- Charles Seife
Long Beach, California—On an October evening in 1991, an extraterrestrial intruder tore through the sky over North America. The interloper was a cosmic ray, a high-energy particle from deep space. Astronomers at the Fly's Eye observatory in Utah detected it as a flicker of light from the cascade of secondary particles it left in its wake. On analyzing the glow, they realized that the particle had packed a shocking 320 exa-electron volts (EeV) of energy, the same punch as an 88-kilometer-an-hour baseball pitch. Astrophysicists call it the “Oh-My-God” particle.
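The energy comparison can be checked with a few lines of arithmetic. The 145-gram regulation baseball below is an assumption, and the pitch speed you get depends on the ball mass chosen, so the result lands near rather than exactly on the quoted figure:

```python
import math

# Convert the particle's energy from exa-electron volts to joules.
EV_TO_JOULES = 1.602e-19
particle_energy_j = 320e18 * EV_TO_JOULES   # 320 EeV -> ~51 J

# Speed at which an assumed 145 g baseball carries the same kinetic energy.
baseball_mass_kg = 0.145
speed_ms = math.sqrt(2 * particle_energy_j / baseball_mass_kg)
print(f"{particle_energy_j:.0f} J, roughly a {speed_ms * 3.6:.0f} km/h pitch")
```

That a single subatomic particle carries the macroscopic energy of a thrown baseball is exactly what earned it the “Oh-My-God” nickname.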
Now, observations from Fly's Eye's replacement have shown that the 320-EeV monster was not a freak of nature. This month at an American Physical Society meeting here, astronomers from the High-Resolution Fly's Eye detector, or HiRes, announced that they had found a handful of ultrasuperhigh-energy cosmic rays that deepen the mystery about the origin of these strange particles.
Since it began operating in 1997, HiRes has nearly doubled the number of known ultrahigh-energy cosmic rays. It has recorded seven particles with energies greater than 100 EeV, including one that clocked in at an estimated 280 EeV, the second highest cosmic ray energy ever recorded. HiRes has also detected 13 other cosmic rays with energies over 60 EeV. “People are beginning to think these events are real,” says Charles Jui, a physicist at the University of Utah in Salt Lake City.
They had reason to be skeptical. In the 1960s, scientists realized that particles with more than about 60 EeV of energy would tend to smack into the ubiquitous microwave background photons left over from the big bang. Such energetic collisions would produce pions, destroying the cosmic rays in the process. “You shouldn't expect these particles to survive for more than 20 to 50 megaparsecs,” Jui says. As a result, physicists expected that there would be a sharp cutoff in cosmic ray energies at about 60 EeV, and that any particles with energies above that level must have come from nearby. Such energetic particles, they thought, should be easy to trace back to their sources. The HiRes data don't show an abrupt 60 EeV cutoff, however, and no obvious sources have turned up for the superenergetic particles. Indeed, astrophysicists have trouble even figuring out what might give them such a punch. Supernovae, for instance, can't accelerate particles to more than about 1/1000 of an EeV of energy. “This is a burning astrophysical problem that needs to be solved,” says University of Chicago astrophysicist Rene Ong.
Astrophysicists are hoping that HiRes, with its high sensitivity and ability to pinpoint a cosmic ray's direction and mass, will eventually provide the data that point to the answer. “Given that they've already doubled the sample, every few years they'll probably double it again,” Ong says. “HiRes could make a very important statement.”
- CONSERVATION BIOLOGY
Orangutans Face Extinction in the Wild
- Dan Ferber*
Lisle, Illinois—Orangutans, our third-closest living relative, are in crisis, reported several leading primatologists at a meeting here last week on apes.* Indeed, the plight of the orangutans, whose range is now restricted to the shrinking forests of Borneo and Sumatra, dominated the 4-day meeting and sparked two late-night sessions at which researchers and zookeepers hatched a conservation plan. Without urgent action, warns Biruté Galdikas, a biological anthropologist and conservationist who teaches at Simon Fraser University in Burnaby, British Columbia, the apes could be extinct in the wild within 20 years. The culprit is wholesale logging and other habitat destruction, which will be difficult to halt.
Exact numbers are hard to come by because orangutans, highly intelligent but mostly solitary apes, hang out high in the forest canopy where they are difficult to spot. But two recent surveys, one of which was presented at last week's meeting, show clearly that populations are crashing. Carel van Schaik of Duke University and his colleagues estimated the size of orangutan populations in a section of the 24,000-km2 Leuser region on Sumatra in 1993 and 1999. Relying on data from satellite imaging and aerial photos of the animals' habitat—combined with their own knowledge from 24 years of fieldwork—the researchers found that the population had dropped from 12,000 to 6500. The survey, in press at Oryx, is likely to be accurate, says primatologist David Chivers of Cambridge University, and “it's great reason to be horrified.” Indeed, the entire world population of the red apes was calculated at just 27,000 in 1998, down from an estimated 315,000 in 1900, according to a book published last year by primatologists Herman Rijksen and Erik Meijaard.
Confounding matters, taxonomists have recently agreed that the Bornean and Sumatran orangutans, which have been separated for thousands of years, differ enough in appearance, behavior, and genetic makeup to be classified as separate species. Each is critical to conserve, but the task will be even harder given the small sizes of both populations.
The Sumatran species is in critical danger because its last stronghold, Gunung Leuser National Park, is being actively and illegally logged, says van Schaik. The situation is not much better on Borneo, says conservation biologist Jatna Supriatna, who directs the Indonesia program of Conservation International. Although the orangutan population on Borneo is larger, habitat is disappearing fast. In the past 20 years, 4 million of the 13 million total hectares of forest were converted to palm-oil plantations, mostly run by friends of former president Suharto, Supriatna says. While still in power Suharto ordered the conversion of another 1 million hectares of peat swamp forest, prime orangutan habitat, to replace the disappearing rice fields on the overcrowded island of Java. The massive Indonesian forest fires of 1997 also took a huge toll, claiming 8 million hectares on Borneo, part of which had been previously logged. Poverty drives habitat destruction on both islands, as local people log the lucrative timber, Supriatna says.
As the forests go, so do the orangutan groups, and with them a unique opportunity to understand the dawn of human culture, van Schaik says. Researchers have learned only recently that different populations of orangutans, like chimpanzees, exhibit signs of what some primatologists call culture: behaviors unique to the population that are learned from other group members (Science, 25 June 1999, p. 2070). For example, all orangutans near the Suaq Balimbing research station in Gunung Leuser National Park make Bronx cheer-like sounds with their mouths as they build their nightly nest high in the canopy. None of the orangutans near the Ketambe station, 70 kilometers away, do. “It's irrelevant behavior that doesn't affect survival,” says van Schaik, but “it's culture in the making.”
Other orangutan cultural behaviors, however, could offer substantial benefits. For example, apes in a swampy forest on the east side of the Alas River harvest the oil-rich seeds of the woody neesia fruit by ripping open the fruit with their hands. But orangutans on the other side of the river use a peeled stick to pry open the fruit. Because those apes can spend up to 70% of their feeding time harvesting the fruit when it is in season, the more efficient stick users save lots of time and energy.
Alarmed by the new survey data, field researchers and zookeepers at last week's meeting hammered out an action plan, launching a group called the “Orangutan Network,” led by van Schaik, to conserve the species. They plan to scatter new study sites in the remaining areas of healthy forest, staffed by researchers and students, because a research presence is known to dramatically boost local conservation efforts. They hope to increase interest in orangutan conservation by enlisting individual zoos to “adopt” (and help fund) a research site.
In the field, Galdikas has expanded her focus from research to saving as many of the apes as she can. A conservation group she directs, the Orangutan Foundation International, operates a rescue and rehabilitation center in Tanjung Puting National Park in Borneo that saves young orangutans orphaned by logging, fires, or hunting. About 500 orangutans are housed at such centers. So far, about 800 apes have been successfully returned to the wild. But the apes need intact forest, so Galdikas uses money from supporters to pay 100 local Dayak and Melayu men—“left out of the economic pie,” she says—to patrol the forest, recycle, plant trees, and educate their neighbors. As a result, the illegal loggers have not touched a 50-km2 patch of forest near her research station.
Although the situation is indeed grim, these efforts, along with recent changes in Indonesia, offer glimmers of hope, says Supriatna. For example, Indonesia has a freer press than it used to, which is publicizing the crisis. In addition, the International Monetary Fund has put conditions on new loans to Indonesia, forbidding extension of oil-palm plantations and loans to loggers. International conservation groups are funneling money to the national parks and pressing the World Bank to tie debt forgiveness to forest protection. They have also established a trust fund to promote ecotourism and boost such forest crops as spices and rattan. “We're looking at a very serious problem,” van Schaik says, “but for the first time in decades, there's reason for optimism.”
* “The Apes: Challenges for the 21st Century,” 10 to 13 May, Lisle, Illinois, sponsored by the Brookfield Zoo.
- EUROPEAN SCIENCE
Spain Opens Coffers to Keep Talent at Home
- Michael Hagmann
Like many top Spanish scientists in recent years, Miguel Beato coped with his country's prolonged science funding funk by working abroad. “I'd given up ever going back to Spain,” he says. But now that Barcelona is building a $55 million biomedical research park and planning a biocenter at the University of Barcelona, Beato, a cell biologist at the University of Marburg in Germany, is contemplating the unthinkable: coming home.
Beato is not the only one impressed with Spain's sudden scientific resurgence. Late last month, the government announced a plan to boost science and technology spending over the next 4 years from $2.8 billion, or 0.9% of the gross national product in 1999, to the European average of 2% by 2003. Tasked with pushing this agenda is a new Ministry for Science and Technology, created last month by Prime Minister José Maria Aznar. “There is a real commitment by the government to give a major boost to research and development,” says the ministry's newly appointed science policy chief Ramón Marimón. That's sorely needed, says molecular biologist Margarita Salas of the Center for Molecular Biology in Madrid. “Without this step,” she says, “Spanish science is in serious danger of going downhill.”
Researchers have been waiting for good news for more than a decade: The last significant hike for science funding was in the mid-1980s, when a booming economy allowed the government to more than double the science budget. “The consequences were remarkable,” says Salas. “The number of scientists and the quality of the research shot up in a very short time.” Beginning in the early 1990s, however, Spain's science budget stagnated, the victim of a general belt-tightening aimed at shrinking Spain's budget deficit.
Marimón and his boss, Science Minister Anna Birulés, have revealed little about how they plan to dole out the budget increase. However, they do say that basic research will be the main beneficiary in next year's budget, and that they hope to create 25% more positions for scientists by 2003. The latter initiative would address the worst problem nationwide, scientists say. “We can't even offer positions to the many well-trained young scientists who want to come back to Spain” after completing postdocs abroad, says Salas. Some help is coming from the regional governments, particularly Catalonia. Besides sponsoring the two new biomedical initiatives and for the first time appointing its own regional science minister earlier this year, Spain's affluent northeastern state has freed up funding for about 30 life science faculty positions at the University Pompeu Fabra in Barcelona. “Catalonia is showing Spain the way,” says Beato, who now spends about 2 months a year at Pompeu Fabra as a visiting professor.
Private donors are also stepping up efforts to retain key scientists. In January, the Botin Foundation, run by the Bank of Santander, bestowed a windfall on cellular biologist José Jorcano of the National Research Center for Energy, Environment, and Technology—a grant for $1 million a year for up to 9 years to support his work on the etiology of cancer. And in February, the Juan March Foundation, known for endowing the arts, made a splashy entrance into Spanish science by announcing that it would award one $800,000-plus grant every year to a promising biomedical researcher under the age of 50. An international review panel is expected to select the first winner this fall.
Despite these promising signs, some observers are not convinced that the long drought is over. “There have been too many words and too little action in the past. We have to wait and see what really happens,” says neurobiologist José Lopez-Barneo of the University of Seville Medical School. However, he adds, “simply having the word ‘science’ back on the political agenda and a ministry devoted to it is a giant step forward.”
- CONSERVATION BIOLOGY
California Team to Map Rare Species' DNA
- Richard Stone
To help unravel genetic kinship among mammals, the San Diego Zoo and Amersham Pharmacia Biotech announced last week that they are launching the first systematic effort to decode the DNA of endangered species. Over the next year, the team plans to sequence key portions of the genetic code of one representative of each of the 146 mammalian families. Many of the animals are from a menagerie that most visitors to the San Diego Zoo never get to see—a “Frozen Zoo” of stockpiled DNA, from rare Przewalski's horses to western lowland gorillas.
Using Amersham's capillary sequencing machines, the project aims to generate complete sequences of the DNA in the mitochondria, tiny powerhouses outside the cell's nucleus that produce chemical energy. Because mitochondrial DNA mutates at a fairly reliable rate, scientists can judge how long ago species diverged according to differences in mtDNA sequences. These biochemical clocks will help sort out little-known relationships within families such as the insectivores, says geneticist Oliver Ryder of the San Diego Zoo's Center for Reproduction of Endangered Species (CRES): “I think we'll gain fantastic insights into molecular evolution.”
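The molecular-clock reasoning can be sketched as a simple rate calculation. The 2%-per-million-years divergence rate below is a commonly cited ballpark for mammalian mtDNA, not a figure from the project, and the sequences are hypothetical:

```python
# Estimate time since two lineages diverged from mtDNA differences,
# assuming mutations accumulate at a roughly constant, known rate.
def divergence_time_mya(seq_a, seq_b, rate_per_myr=0.02):
    """Pairwise difference fraction divided by the assumed divergence rate.

    rate_per_myr: fraction of sites expected to differ per million years.
    """
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    p_distance = diffs / len(seq_a)
    return p_distance / rate_per_myr

# Hypothetical 20-base mtDNA snippets differing at 2 of 20 sites (10%).
t = divergence_time_mya("ACGTACGTACGTACGTACGT",
                        "ACGTACGTACGAACGTACGA")
print(f"~{t:.0f} million years since divergence")
```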
Conservation experts welcome the project. “It's a positive step toward unlocking important new genetic data that eventually will be useful for managing endangered species,” says David Wildt of the Smithsonian Institution's Conservation and Research Center in Front Royal, Virginia. Indeed, such projects are already under way: Scientists are now using mtDNA sequences to distinguish between populations of southern and eastern black rhinos. But this appears to be the first time that a biotech firm has leaped into the field as a partner. “It's encouraging to see a major molecular biology company putting effort into conservation research,” says William Jordan of the Institute of Zoology in London.
The CRES-Amersham team will exploit what may be the world's biggest collection of DNA samples from endangered species: cell lines from more than 4300 individuals representing 370 species and subspecies. Since 1976, CRES staff have been snipping pea-sized patches of skin from animals in the zoo and extracting fibroblasts, tissue-repairing cells that happily divide in the test tube, even after being stored for years in liquid nitrogen. The research effort has helped solve some puzzles in captive breeding. For instance, CRES researchers, frustrated that a dwarf antelope called the dik-dik often produced sterile offspring, found after examining the animal's chromosomes in the late 1980s that two outwardly indistinguishable dik-dik species at the zoo were attempting to mate. Putting them in separate pens by chromosome type fixed the problem.
The new effort will specialize in the underdogs of the animal kingdom. “We'll choose rare species over common ones,” says Ryder. Obvious choices, he says, include the peccary, the okapi, and the three-banded armadillo. CRES's Frozen Zoo, with DNA from more than 100 mammalian families, will provide a strong foundation. “To start from scratch would take years and years,” says Ryder. And his group is forging collaborations with other centers to find DNA from mammals poorly represented in the Frozen Zoo. For instance, Robert Baker's laboratory at Texas Tech University in Lubbock has agreed to provide DNA samples from select rodents and bats.
The team expects it will take about a year to generate the mtDNA sequences, which run to about 16,000 base pairs each. All the data will be made freely available to the public. Shining a spotlight on rare animals could aid conservation efforts, says Wildt: “The project will help increase public awareness of the need for much more biomedical research directed at wildlife species.”
According to Ryder, the sequencing project points up the value of DNA banks, which he and several colleagues have urged the scientific community to expand through an ambitious effort to compile DNA samples of all endangered animal species (Science, 14 April, p. 275). He emphasizes, however, that gathering genetic data on endangered species must go hand-in-hand with measures to preserve habitats. “That is the only way to really save species,” he says.
Amersham declines to reveal how much it plans to invest in the project. But even though Amersham is giving away the data, says Robert Feldman, production sequencing and collaborations manager, the high-throughput DNA sequencing company does have something valuable to gain: experience. “We're looking to work on as many different kinds of DNA as we can get our hands on,” he says. “That will help us understand our customers' needs better”—not to mention the needs of peccaries, okapis, and three-banded armadillos.
Electromagnetic Tiles May Cut Turbulence
- Mark Sincell*
Turbulence is as expensive as it is inevitable. Whether it is a submarine sneaking around the bottom of the ocean, an airplane bouncing overhead, or oil bubbling through a pipeline, the turbulent eddies that form when a fluid streams over a fast-moving surface drag against the surface like sandpaper scraping over wood. Overcoming this drag force requires fuel, and fuel costs money—lots of it. By some estimates, a general method of reducing turbulent drag by 10% could save billions of dollars and eliminate tons of burnt-fuel pollutants.
With that kind of money at stake, many scientists are searching hard for such a method—so far, with little success. But on page 1230 of this issue, mechanical engineer Yiqing Du of the Massachusetts Institute of Technology and applied mathematician George Karniadakis of Brown University propose a novel approach that harnesses electromagnetic fields to nip the eddies in the bud. In computer simulations of turbulent ocean-water flows, their method cuts the drag force by almost 30%. If the simulations are borne out by upcoming experiments, says Richard Philips of the Naval Undersea Warfare Center in Newport, Rhode Island, it will be a tremendous advance in fluid dynamics.
Turbulence is more a process than a thing. As ocean water streams past a submarine's hull, for example, the flow splits up into pairs of fast- and slow-moving ribbons of current called “streaks.” Loops, or eddies, of current circulate between the two halves of each streak. These loops grow rapidly until they burst, exerting a force that vibrates the hull and slows down the submarine's progress.
Turbulence is tough to counteract largely because a turbulent flow is very stable. “The flow does not want to be changed,” explains Mohamed Gad-el-Hak, a mechanical and aerospace engineer at the University of Notre Dame in South Bend, Indiana. “Brute force does not work.” Instead, Gad-el-Hak has been exploring the use of a kind of “smart surface” that senses the presence of turbulent eddies. A control system then activates many tiny, pistonlike actuators that morph the surface, pressing it against the fluid in a way that inhibits the developing turbulence. Computer simulations and laboratory-scale experiments show that this approach could work in the real world, but the actuator power must be carefully rationed, or else “the amount of energy needed is more than you save by suppressing the turbulence,” says Gad-el-Hak.
Others have been trying to reduce drag by vibrating the surface at a specific frequency. This approach is called “predetermined control,” because the frequency of the vibration does not respond to changing conditions in the fluid. Although it reduces turbulence, Karniadakis thinks the cure may be as bad as the disease. “Imagine that you are flying over the Atlantic, and the pilot turns on the ‘shaker’ to damp turbulence,” he says. Either way will make nervous fliers jittery.
Instead of wiggling the walls to push the fluid around, Karniadakis and Du's method applies the force directly to the fluid. In their simulations, predetermined electromagnetic pulses from tiles on the surface of a submarine hull induce a force perpendicular to the direction of the streaming—and electrically conductive—salt water. They found that the additional force prevents “streaks” from forming along the hull, so the explosive current loops never have a chance to form. “We cut the legs off the turbulence,” says Karniadakis.
Du and Karniadakis have demonstrated electromagnetic tiles in the lab but have yet to measure their effect on turbulent drag. “The proof of the pudding is if they can demonstrate this effect experimentally,” says Philips. Karniadakis has recently received a grant to test both predetermined and reactive turbulence control. Gad-el-Hak, for one, is curious to see how it turns out. “In the past, predetermined control methods have not been very successful,” he says, “but George is the best in his field; it may be that he has hit the jackpot.”
- DNA COMPUTING
Hairpins Trigger an Automatic Solution
- Adrian Cho
DNA, the alphabet of life, can also spell out the solutions to tough computations. Not only can its jumbo molecules store huge amounts of information, but, when mixed into a chemical soup, they react in so many ways at the same time that they can perform many calculations in parallel. Spurred by such promise, computer scientists and molecular biologists have already performed DNA- and RNA-based “molecular computations” to solve math and logic problems (Science, 17 October 1997, p. 446; 18 February, p. 1182).
So far the experiments have required lots of old-fashioned, shake-the-test-tube lab work for each step in the calculation. Now, however, as reported on page 1223, biochemist Kensaku Sakamoto of the University of Tokyo and his team have made a DNA computation run more or less by itself by taking advantage of the molecule's penchant for twisting itself into knots. The new method is elegant, says Lloyd Smith, a chemist at the University of Wisconsin, Madison, because “it takes the idea of self-assembly and puts it under control to do something for you.”
Sakamoto and colleagues tackled a version of the satisfiability problem in Boolean logic, a form of reasoning in which “literals”—statements and their opposites—are linked together with or and and to form complicated formulas. In the type of problem they considered, two or more literals link together with or to form a clause, and the clause is true if any one literal is true. Two or more clauses then link together with and to make the complete formula, which is true only if every clause is true. The problem is to find a string of literals that makes the entire formula true.
Any string that makes each clause true is a potential solution. But there's a catch: The string cannot contain both a statement and its negation. For example, consider the formula “(I exist or I sleep) and (I do not exist or I dream).” Even glum, distracted Prince Hamlet knew it was possible to sleep and (perchance) to dream, and the phrase “I sleep and I dream” satisfies the formula. On the other hand, it's logically impossible to be and not to be simultaneously. To solve their problem, a whopping formula of 10 clauses of three literals each, Sakamoto and colleagues set things up so that, of the tens of thousands of potential solutions, all but two dozen tied themselves into just such logical knots.
To translate the problem into molecules, the team began as other researchers have done, with a large assortment of DNA strands, one for each possible solution of the puzzle. But whereas others mixed in one enzyme after another to cut up the strands representing the wrong answers, Sakamoto and colleagues designed their strands so that, when cooled, the wrong answers spontaneously folded over and stuck to themselves to form molecular “hairpins.”
DNA naturally comes in two matching strands, so every stretch of single-stranded DNA zips into a unique complementary strand. The researchers designed a 30-base segment of single-stranded DNA to represent each statement and used the complementary strand to represent its opposite. They then linked 10 segments, one representing a literal from each clause, to produce the more than 59,000 potential solutions to the formula. The problem was formulated so that each wrong-answer strand had to contain at least one statement and its negation, and hence one segment and its complement. Therefore, when the researchers lowered the temperature of their soup in just the right way, complementary sequences locked together, causing all the wrong-answer strands to fold over and form hairpins. The researchers then either cut all the hairpins with a single dose of enzyme, or used a standard technique for copying DNA to reproduce just the remaining unfolded strands, which represented the right answer.
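The selection step can be simulated in software. In this toy version a candidate strand is a tuple of literals, and “hairpin formation” corresponds to a strand containing both a statement and its negation; the clauses are the article's Hamlet example, not the team's actual 10-clause formula:

```python
from itertools import product

# Each clause is a tuple of literals; a negated literal is prefixed with "~".
# Hamlet's formula: (exist or sleep) and (~exist or dream).
clauses = [("exist", "sleep"), ("~exist", "dream")]

def contradictory(strand):
    """A strand 'forms a hairpin' if it carries a literal and its negation."""
    return any("~" + lit in strand for lit in strand if not lit.startswith("~"))

# Build one candidate strand per way of choosing a literal from each clause,
# then discard the self-complementary (contradictory) ones -- the software
# analog of cutting the hairpins with an enzyme.
candidates = list(product(*clauses))
solutions = [s for s in candidates if not contradictory(s)]
print(solutions)  # ('sleep', 'dream') is among the survivors
```

The full experiment works the same way at scale: ten three-literal clauses give 3¹⁰ = 59,049 candidate strands, all but a couple dozen of which fold into hairpins and are eliminated.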
The new method obviates several laboratory procedures by exploiting DNA's knack for forming complicated structures, says team member Masami Hagiya, a computer scientist from the University of Tokyo. But the researchers pay a price to avoid the extra chemistry, Smith says. The logic problem reduces to finding one correct solution out of the 64 possible combinations of six statements and their opposites. In restating the problem so that wrong-answer strands all have contradictory literals, however, the researchers make it much larger. As a consequence, they wade through thousands of redundant wrong answers. The new technique also lets through many more wrong solutions, notes Laura Landweber, a biologist at Princeton University, in Princeton, New Jersey. “I remain intrigued but skeptical,” she says, “until they can reduce the large proportion of errors.”
Exposure Levels Tracked Around Nuclear Accident
- Dennis Normile
Tokyo—When workers at a nuclear fuel processing plant inadvertently set off a nuclear chain reaction last fall, more than 6 hours passed before the Japanese government set up radiation monitoring equipment at the scene. The time lag left a critical gap in the record of the amounts and types of radiation released in the accident 110 kilometers northeast of the capital (Science, 8 October 1999, p. 207).
That gap has now been filled by a group of Japanese university researchers, whose results appear this week in a special issue of the Journal of Environmental Radioactivity (vol. 50, no. 1–2, May 2000). In 21 reports, the team has reconstructed the aftermath of the accident by collecting over 400 samples of irradiated table salt, sugar, stainless steel cutlery, coins, and gold and silver jewelry. This approach, although not new, builds on the cooperation of company officials to offer the most detailed picture ever of the spread of radiation from a nuclear accident.
Ohtsura Niwa, director of Kyoto University's Radiation Biology Center, says the results are particularly important given ongoing controversies over the effects of neutron radiation, the primary type of radiation in the Tokaimura accident. Previous studies have yielded inconsistent results on the relation between distance from the source and radiation dose, and the possible health effects of exposure to neutron radiation. These questions make “this kind of study very necessary,” he says.
The 30 September incident at a nuclear fuel processing facility in Tokaimura was Japan's worst-ever nuclear-related accident. Dozens of residents close to the plant were evacuated, and hundreds of people in the surrounding area were warned to stay indoors for 18 hours after the event. Two employees of the Tokyo-based JCO Company Ltd., the plant operator, eventually died from complications arising from high radiation doses. Kazuhisa Komura, who heads the university group and is director of the Low Level Radioactivity Laboratory at Kanazawa University, says the study provides an independent check of the official governmental investigation and extends its scope.
The researchers use the fact that neutron radiation makes many substances, particularly metals, radioactive. Gold, for example, captures neutrons to produce the isotope Au-198, in proportion to the amount of radiation (see graph). After examining household items loaned by area residents, the group concludes that the level of accumulated radiation at the edge of the JCO property was about 100 millisieverts. A sievert is a measure of total radiation dose, factoring in each type of radiation and its energy. Normal background radiation results in an annual dose of about 1 millisievert, and doses of more than 5 sieverts have typically been fatal. The stricken workers suffered doses of 17 and 10 sieverts, and 50 other people received up to 100 millisieverts.
Dose levels outside the plant were much lower, and the health implications for the general public are likely to be negligible, Komura says. Another group studying the biological effects of low-level neutron radiation has yet to publish its results.
The journal reports are consistent with previously released government studies, which stopped at the site boundaries. However, the university researchers also plan to study radiation levels in buildings and other objects beyond the accident site in hopes of understanding the shielding effects of various materials, natural and human-made. The results, says Murdoch Baxter, editor of the special issue of the journal and a former official with the International Atomic Energy Agency in Vienna, could even help scientists looking back at the atomic bombings of Hiroshima and Nagasaki.
- JOURNAL PUBLISHING
Harvard Researcher Named NEJM Editor
- Constance Holden
The New England Journal of Medicine (NEJM) has a new editor, its third in less than a year. Jeffrey Drazen, 53, a Harvard asthma researcher and associate chief for research in the Pulmonary Division at Boston's Children's Hospital, takes on the challenge of trying to set the 188-year-old journal on a smooth course following a year of controversy about both its internal policies and its outside activities.
Last summer, conflict over the journal's commercial activities led to the sacking of Editor-in-Chief Jerome Kassirer (Science, 30 July 1999, p. 648). Then early this year, NEJM confessed to violating its own conflict-of-interest policies (Science, 3 March, p. 1573). In its 24 February issue, the journal listed 19 papers in which one or more authors had accepted money from drug companies. Drazen was one of them: He co-authored a paper, “Treatment of asthma with drugs modifying the leukotriene pathway,” in which he named eight companies that he had advised and from which he had received research funds.
Asked how he would deal with matters involving potential conflicts of interest, Drazen said that he plans to be “as lily-white as possible,” keeping hands off all papers or editorials involving any company that he has had recent ties with. “What I'm planning to do is review each of the companies with whom I've worked and start a 2-year clock at my last interaction with them.” He says that policy could be reexamined in the future.
Drazen also says he's confident that he'll be able to run the NEJM as he sees fit. His predecessor was forced out after disagreements with the owners, the Massachusetts Medical Society, over the use of the journal's name and logo on other products. Marcia Angell, the magazine's longtime executive editor, who has been filling in since Kassirer's departure, says she declined to seek the job on a permanent basis after society officials refused to guarantee her control over the use of the journal's name as well as its content. Although the society says Drazen will have “complete authority” over both elements, Kassirer says he puts no stock in that pledge, because he had been given the same assurances at the start of his 8-year tenure. Angell is not quite so cynical, calling the society's statement “extremely encouraging.”
As editor, Drazen says that he hopes to make the journal more accessible to practicing physicians by shortening the articles and highlighting the practical use of findings. He also wants to upgrade the journal's online content—an electronic copy of the print version—which he calls “pretty 1995.”
- 2001 BUDGET
NIH Headed for Big Boost, Others Struggle
- Andrew Lawler
For R&D advocates, it's a case of the good, the bad, and the ugly. The National Institutes of Health (NIH) was an early, big winner as Congress last week began the long and bitter fight over funding for the 2001 fiscal year, which starts on 1 October. Military research also got off to a strong start. But the outlook is not so rosy for two other key agencies, NASA and the National Science Foundation (NSF), which in the short run can expect only a fraction of their requested increases.
Yet the ugly truth is that the ultimate decisions on the 2001 budget almost certainly won't be made until this fall, at closed-door meetings between Administration and congressional leaders. Those meetings will pit the president's ambitious list of new initiatives, from nanotechnology to education, against a pledge by Republican lawmakers to hold the line on government spending. “The numbers that we see now have no bearing on the final outcome,” says one bemused science agency official. “The whole situation has an unreal quality to it.”
The uncertainty, however, has worried research advocates and added urgency to their efforts. The problem, they say, is that while a newly estimated $40 billion budget surplus for next year should provide enough money for everyone, the House and Senate panels that appropriate funds are laboring under tough constraints imposed by the GOP leadership. Most of those panels have received about the same or even less funding than last year. And it is those levels, and unpredictable election-year politics, that are shaping the bills now moving through Congress. Research supporters fear that science spending could suffer from the squeeze. On 1 May a bipartisan group of lawmakers led by Senators Joe Lieberman (D-CT) and Bill Frist (R-TN) wrote to colleagues about their “responsibility to ensure our nation's continued prosperity through investment and research.” The letter urged members to back increased R&D funding across all disciplines. The senators also praised a 22 March letter from a high-powered group of technology executives to Senate Majority Leader Trent Lott (R-MS) urging greater federal R&D funding for the sake of economic competitiveness.
Those urgings are hardly needed in the case of NIH. The Senate Appropriations Committee last week recommended a whopping $2.7 billion boost to its $17.8 billion 2000 level—$1.7 billion more than Clinton requested for 2001 and the third straight 15% hike. The House subcommittee took a more modest approach, providing only the president's request for a 6% boost to $18.8 billion. Even so, aides to Representative John Porter (R-IL), who chairs the House panel, say he is still determined to match the Senate level and keep NIH on track for a doubled budget by 2003.
Both panels, however, ignored many of the president's priorities in other programs covered by the bill. For example, they made significant cuts to education, health care, and job training programs. As a result, Clinton immediately vowed to veto the bill unless those programs received additional funds.
The House subcommittee that handles the budgets for NASA and NSF, chaired by Representative James Walsh (R-NY), is slated to make its recommendations on 23 May, and the advance news is not good. House aides say that NSF will have to make do with a hike of approximately $150 million. That translates into less than a 4% increase for the $3.9 billion agency, a far cry from the 17% boost the Administration requested. NSF Director Rita Colwell argues that the requested increase is needed to ensure the health of the core disciplines at the same time the country invests in such hot new areas as nanotechnology, information technology, and biocomplexity.
NASA would fare even worse. The House subcommittee is expected to approve a boost in the neighborhood of $100 million for the entire $13.6 billion agency—about one-quarter of the increase the president requested. Most of the additional funding likely would go toward salaries and a space launch initiative, rather than to the series of proposed new space science initiatives, such as one to study the sun using multiple spacecraft. The House is not opposed to the president's request, explains one staffer. But simple arithmetic ties its hands.
“The Administration went hog wild” in its budget request, he says, seeking more than $85 billion for all the agencies funded by Walsh's panel. The subcommittee has been allotted only $76.9 billion—slightly less than last year. Given that situation, any increase is a victory for science, say congressional aides. “They are not going to get the Administration's request,” says the staffer adamantly.
Even so, committee members are clearly frustrated with their piece of the funding pie. Walsh's panel intends to write a bill containing no earmarks, or pork-barrel projects, say sources close to the committee. “It would be hard to take the bill to the floor with a straight face” if the legislation slashes programs while adding $200 million in NASA earmarks, says one aide about what would be an unprecedented step. However, resistance may prove futile: The panel has already received more than 2000 specific requests for pork-barrel spending by members of Congress, and election-year pressure is likely to drive that number higher.
Meanwhile, defense appropriators in the House have added to the president's requested increase for research, development, testing, and evaluation in a bill whose levels are not yet public. And both the House and Senate Armed Services panels, responsible for authorizing military spending, proposed boosts of $1.4 billion, lifting military R&D accounts by 3.7% over the president's 2001 request and by 2.6% over this year's level.
Learning the World's Languages—Before They Vanish
- Bernice Wuethrich
Linguists rush to study endangered languages, which may hold clues to whether grammar is innate or learned, and how speech influences thought
Pat Gabori loves to talk, and when he does, linguists listen. Gabori, who estimates his age at about 80, is the last native-born male speaker of Kayardild, an aboriginal language spoken only on Bentinck Island off north Australia. Once a skilled dugong hunter, now blind, Gabori has relayed a wealth of stories—and the grammar of his language—to linguist Nick Evans of the University of Melbourne in Victoria. Evans found Kayardild full of grammatical rarities. For example, whereas most languages change only the verb to indicate past or future tense, Kayardild marks tense on other words too, including nouns. Thus Gabori might say “The boy spear-ed the fish-ed.” Or rather, he would probably say “Spear-ed fish-ed the boy,” as Kayardild also allows speakers to explode the traditional structure of phrases.
Whatever the word order, only one other known language marks nouns with tense—a sister language to Kayardild called Lardil that has only one fully competent speaker. “If you didn't know about these two languages, you would say this phenomenon is impossible,” Evans says. And if every known language followed the rule that only verbs express tense, you might conclude, following the famous theory proposed by linguist Noam Chomsky almost 50 years ago, that the rule is an innate part of language, genetically programmed into the brain of every child. But to Evans and a vocal minority of other linguists, the possibilities seen in languages like Kayardild challenge the “universal” rules of grammar, and suggest that far more of grammar is learned and culturally variable.
Even the handful of linguists who study Kayardild and Lardil disagree on the significance of their idiosyncrasies, however. To Ken Hale of the Massachusetts Institute of Technology (MIT), who has studied Lardil for 30 years, the languages, far from undermining Chomsky's theory, actually subtly reinforce many of the “universals” of grammar. “Linguists rarely find things that challenge universals,” he says.
Although linguists spar about the deeper meaning of Kayardild grammar, they agree on one thing: The data they need to settle linguistic arguments are fast disappearing. In Australia alone, aboriginal people spoke about 260 languages at the time of European contact in the late 1700s; today about 160 of those tongues are extinct, and only about 20 have a reasonable number of speakers. The world's 6 billion people speak approximately 6000 to 7000 languages, and most experts expect that at least half—and perhaps up to 90%—will disappear in the 21st century. War or scattering can demolish a linguistic community in a generation or two, so that even a language with thousands of speakers today may be at risk tomorrow; most linguists consider a language “endangered” when fewer and fewer children learn it.
The loss of languages is not only a crisis for many communities, but also presents a major challenge for researchers intent on analyzing the structure of languages and how they convey meaning. Just as biologists study species to understand evolution, so linguists scrutinize grammars and vocabularies to understand what aspects of language are innate and what are learned. “We're losing our natural laboratory of variation, our Galápagos,” says Steve Levinson of the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands.
The causes of this global loss are many: wars, diasporas, education that emphasizes national languages, and assimilation into dominant cultures. With fewer than 1000 languages fully described to date, “every language lost is a loss of clues as to how the human language faculty works,” says Jerry Sadock, a linguist at the University of Chicago. This realization is prompting a small community of linguists to increase their fieldwork, compiling grammars and lexicons and putting spoken languages into writing, helping to preserve some languages as they learn more about them. But the work is slow going. “Describing a language is a 10- to 15-year job,” says Evans, who wrote a 600-page grammar of Kayardild.
Linguists argue passionately that each language is precious. The particulars of a language “encapsulate a long history of people in an ecology, a way of living and a way of thinking,” Levinson says. Different languages reflect peoples' perception of the world, capture their prehistory (see sidebar on p. 1158), and may subtly shape thought itself. “When these languages die, we'll lose these glimpses into the capacities of the human mind,” says linguist Marianne Mithun of the University of California, Santa Barbara.
Whether grammar—the structure of language—is innate or learned is the longest running battle in modern linguistics. It was fired up by Chomsky back in the 1950s and hasn't calmed down since. He proposed that universal grammar is a manifestation of linguistic ability that is hard-wired into the human brain, a genetic endowment that allows every child to master language with ease and also dramatically restricts the types of language that are possible. Ever since, linguists have sought to test his ideas in the world's languages. Today, most researchers agree that at least some grammar, such as rules for how questions may be framed, is universal and therefore probably innate. But they clash fiercely over just how extensive this “universal grammar” is.
For example, one of the most basic aspects of grammar concerns sentence structure—the order in which the subject (S), verb (V), and object (O) appear. Most languages are either SVO (like English), SOV (like Japanese), or VSO (like Irish Gaelic). Linguists had thought that other orders were prohibited. But OVS does occur—in fewer than 1% of the world's languages, all of them endangered, notes Norvin Richards of MIT. One such language is Hixkaryana, spoken by some 300 people on a tributary of the Amazon River in Brazil. “If linguists had waited another couple of decades, languages with this construction would all be dead, and we would say it is not possible,” says Richards.
Another recently appreciated realm of variability concerns affixes such as prefixes and suffixes. Affixes added to the beginning or the end of a word typically change its meaning in particular ways, as in prenatal or sturdiness. Some linguists have thought that all languages share a set of affix meanings that are part of universal grammar, and that there is always a clear distinction between stem words and affixes. But the Yup'ik language, spoken by about 10,000 people in Alaska, breaks with that presumption, according to work published by Mithun last year.
Mithun points out that Yup'ik has suffixes with meanings like “eat” and “hunt” that in English would seem like stems rather than add-ons. When speakers add the suffix “hunt” to the word for seal, it means “seal hunt”; added to “egg,” it means “egg hunt.” But in addition to a suffix for hunt, Yup'ik also has a stem for hunt, so that there are different ways to say “seal hunt,” each with its own shade of meaning. “These kinds of differences offer speakers tremendous rhetorical alternatives,” she says.
Whereas some Yup'ik suffixes relate to subsistence or the environment, others refer to ways of being or behavior. There are suffixes that mean “willfully” and “secretively.” One suffix means “to be inept at”: Added to the verb “sleep,” it yields “to have insomnia.” Another suffix means, “finally, after desiring to do so, but being prevented by circumstance,” and can be added to verbs such as “go.” Not only does this complex suffix system challenge the idea that affixes play a restricted role in carrying meaning, it preserves a record of the cultural transmission that creates grammars, says Mithun.
Another linchpin of universal grammar is the distinction between nouns, verbs, and adjectives. Some languages have thousands of verbs and some fewer than a dozen, but few wholly dispense with that distinction. But David Gil, a linguist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, says that Riau, a dialect of Indonesian, does just that, allowing nearly any word to function as a verb, a noun, or an adjective, depending on the context. The same word can be used to mean “eat” or “food,” and any word can appear anywhere in the sentence. “The key to meaning lies in the larger context of the sentence,” explains Bernard Comrie, director of the institute's linguistics department.
Other languages retain nouns and verbs, but bend the grammatical structures thought to govern their use, says Sadock. For example, in most languages the verb agrees with the subject. But in Aleut—now spoken by only about 100 people in the Aleutian Islands off the coast of Alaska and heard in daily conversation in just a single village, Atka—the verb can agree not only with the subject or the object, but also with the possessor of the subject or object. Thus instead of saying, “Their house is big,” an Aleut speaker says, “Their house are big”—something Sadock calls “virtually unprecedented.” Yet, he adds, “the fact that Aleut is so seriously endangered means that we might never know its true genius.”
Sadock, who chairs the Committee on Endangered Languages and Their Preservation for the Linguistic Society of America, says that such “wild differences” in grammar as seen in Aleut and Yup'ik “challenge our notions about what kind of language the human mind can construct.” He thinks that the realm of universal grammar has been exaggerated and that many grammatical rules are culturally variable. “Most grammar arises out of human cognition and interaction with people and objects, rather than being an innate hard-wired system that has nothing to do with daily life,” agrees Dan Slobin, a psycholinguist at the University of California, Berkeley.
But others argue that although various languages may be unusual, the differences are superficial. For instance, Hale and MIT colleague Richards think that Lardil and Kayardild actually conform to universal grammar rather than breaking the rules. Although these languages allow speakers to jumble the words of a sentence, the words inside a phrase are first modified by tense markers, notes Richards. Thus, he argues, traces of the phrase structure remain.
To take an even more basic example, linguist Mark Baker of Rutgers University in New Brunswick, New Jersey, makes the case that many languages reported to lack distinctions between parts of speech such as nouns and adjectives do indeed make such distinctions—if you know where to look. “I can show that even those languages have adjectives,” he says. For example, Nunggubuyu, an endangered aboriginal language in Australia, has been described as using nouns and adjectives interchangeably. But Baker reports in a monograph to be published by Cambridge University Press that the elusive distinction between these parts of speech becomes apparent in a particular way of forming compound words called noun incorporation.
Nunggubuyu speakers can say, “I bought meat” or “I meatbought,” incorporating the noun into the verb. They can also say “I bought big” (akin to “I bought the big one”), but they cannot say “I bigbought,” incorporating the adjective into the verb. “If big had the same status as meat, it should be able to incorporate the same way,” Baker says, arguing that this is indeed a distinction between the two word categories.
Baker recently applied this kind of analysis to a range of extremely different languages all thought to lack distinctions in parts of speech, including Mohawk and Salish, two endangered Native American languages; Edo, spoken in Nigeria; and Chichewa, spoken in Malawi. His results show that all these languages have noun, verb, and adjective categories, although they may be subtly expressed. “In any language where we can frame these questions precisely enough to measure, we can find a lot that's universal,” Baker says.
As the argument continues, both sides realize that their evidence is vanishing. “In a few years everyone will speak English, Mandarin, and Spanish, and the similarities among languages will exist by accident, not because they reflect limits of human cognition,” says Baker. Adds Mithun: “Without the variability, proposals about universals have been and will be hopelessly naïve.”
From words to thoughts
In addition to adding much-needed data to the question of universal grammar, the diversity of languages also offers insight into the relationship between language and thought—how speakers perceive and understand the world. “Put bluntly, does the language you speak affect the way you think?” Levinson asks.
He has approached this by studying how unwritten, often endangered languages express spatial concepts. Many cognitive scientists have presumed that all languages express spatial concepts in similar ways, because thinking spatially is a necessity for higher animals and therefore is likely to be hard-wired into the human brain. “This is the very last area in which we would expect to find significant cultural variation,” Levinson says. But find it he did.
Many languages, including English, express spatial concepts using relative coordinates established through the planes of the body, such as left-right and front-back. But Levinson found languages that use very different systems. Guugu Yimithirr, an endangered Australian language spoken by fewer than 800 people, uses a fixed environmental system of four named directions that resemble north, south, east, and west. Speakers modify the four words to yield some 50 terms that indicate such things as motion toward or away from a direction. They use the same terminology to describe both landscape and small-scale space, for example: “The school is to the west of the river” and “There's an ant on your eastern leg.”
The Guugu Yimithirr terminology reflects an entirely different way of conceptualizing a scene, says Levinson. It requires laying out all memories in terms of the four directions and continually running a mental compass and a positioning system.
Levinson has now investigated a similar phenomenon in the Mayan language Tzeltal, in work in press and co-authored with Penelope Brown, also of the Max Planck Institute for Psycholinguistics. Brown gathered 600 hours of videotape of 15 Tzeltal-speaking children performing various tasks and found that children as young as 4 years old have mastered the positioning system. Children were asked to describe the arrangement of toys on a table, then turn 180 degrees and describe an identical arrangement arrayed in front of them. English-speaking children rotate their coordinate system as they turn—left becomes right. But for Tzeltal-speaking children, north was always north and south remained south.
Levinson concludes that even spatial thinking is learned, not innate. Rather than starting from a biologically set concept of space, children quickly learn the system used in their culture. Concurs psycholinguist Slobin: “The results show flexibility for how we can organize spatial concepts for talking and probably thinking.”
The art of language
As linguists uncover such cultural differences, many find themselves fascinated with the diverse linguistic systems that humans are capable of creating. Each language preserves a society's history, culture, and knowledge and is itself something akin to art, they say. “Language is one of the most intimate parts of culture,” says Mithun.
Languages can gracefully encapsulate a society's values. The Australian aboriginal language Mayali, for example, has a special set of words that simultaneously indicates the relationship of both the speaker and the hearer to the person being discussed. A speaker uses a different term for mother when speaking to her grandmother, her brother, or her husband. The terms, which are so complicated that Mayali speakers don't learn them until adulthood, “represent the ability to simultaneously take two points of view. Using them forces you to pay attention to where everyone fits into the kinship system and to make incredibly complex calculations,” Evans says.
Mayali culture values proper use of the kinship terms, which indicate consideration for others by subtly factoring in their point of view. But this feature is being lost as younger speakers either adopt English or speak a simplified form of Mayali.
Although linguists delight in such examples, all this richness means that the task of understanding and preserving languages is huge. “Most of the currently receding languages will disappear without even being recorded,” says Matthias Brenzinger of the University of Cologne in Germany. Last February, he convened linguistics experts from all regions of the world to begin the difficult task of rating the degree of danger faced by the world's languages, to help linguists focus their efforts. Dozens of researchers are already working with native speakers to preserve languages. Hale and Richards compiled a dictionary with Lardil speakers, for example, and Richards plans to put together readers for teaching.
Such efforts can be successful: Back in the early 1970s, Mithun worked with the Mohawk community in Quebec, Canada, to codify Mohawk spelling conventions. That text and others later became part of a public school curriculum, and now a Quebec elementary school teaches Mohawk to hundreds of students; older students can even study Mohawk linguistics at McGill University.
Success stories are hard won, however. And as researchers contemplate the impending loss of the world's linguistic diversity, they see their opportunities for gaining insights into human linguistic development slipping away. “If you understand what the constraints of a possible language are,” says Levinson, “you would understand a fundamental part of what it is to be human.”
Peering Into the Past, With Words
- Bernice Wuethrich
Prehistorians typically rely on stones, bones, and DNA to piece together the past, but linguists argue that words preserve history too. Two new studies, both based on endangered languages, offer new insights into the identity of mysterious ancient peoples, from the first farmers to early inhabitants of the British Isles.
Archaeologists have long known that some 10,000 years ago, ancient people in Mesopotamia discovered farming, raising sheep, cattle, wheat, and barley. And researchers knew that by 8000 years ago agriculture had spread north to the Caucasus Mountains. But they had little inkling of whether traces of this first farming culture lived on in any particular culture today. People have migrated extensively through the region over the millennia, and there's no continuous archaeological record of any single culture. Linguistically, most languages in the region and in the Fertile Crescent itself are relatively recent arrivals from elsewhere.
Now, however, linguist Johanna Nichols of the University of California, Berkeley, has used language to connect modern people of the Caucasus region to the ancient farmers of the Fertile Crescent. She analyzed the Nakh-Daghestanian linguistic family, which today includes Chechen, Ingush, and Batsbi on the Nakh side and some 24 languages on the Daghestanian side; all are spoken in parts of Russia (such as Chechnya), Georgia, and Azerbaijan.
Nichols had previously established the family tree of Nakh-Daghestanian by analyzing similarities in the related languages much the way biologists create a phylogeny of species. She found that three languages converge at the very base of the tree. Today, speakers of all three live side by side in the southeastern foothills of the Caucasus Mountains, suggesting that this was the homeland of the ancestral language—on the very fringes of the Fertile Crescent. To get a rough estimate of when the language arose, Nichols used a linguistic method that assumes a semiregular rate of vocabulary loss per 1000 years, and she dated the ancestral language to about 8000 years ago.
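The dating technique Nichols used is in the spirit of glottochronology: assume each branch of a family retains a roughly constant fraction of its core vocabulary per millennium, then solve for the elapsed time. A minimal sketch of the classic formula follows; the retention rate and shared-vocabulary figure are illustrative assumptions, not Nichols's actual data or exact method.

```python
import math

def divergence_time_millennia(shared_fraction, retention_rate=0.86):
    """Classic glottochronology: two sister languages each retain
    `retention_rate` of their core vocabulary per millennium, so the
    fraction of cognates they still share after t millennia is
    c = r**(2*t).  Solving for t gives t = ln(c) / (2 * ln(r))."""
    return math.log(shared_fraction) / (2 * math.log(retention_rate))

# If two branches still share ~30% of their core vocabulary
# (an invented figure), the formula puts their split at roughly:
print(round(divergence_time_millennia(0.30), 1))  # 4.0 millennia
```

The 0.86-per-millennium retention rate is the conventional textbook constant; the method's "semiregular" caveat in the text reflects how contested that constant is among linguists.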
Nichols also found that the ancestral language contains a host of words for farming. The Chechen words muq (barley), stu (bull), and tkha (wool), for example, all have closely related forms in the earliest branches of Daghestanian, as do words for pear, apple, dairy product, and oxen yoke—all elements of the farming package developed in the Fertile Crescent. Thus location, time, and vocabulary all suggest that the farmers of the region were proto-Nakh-Daghestanians. “The Nakh-Daghestanian languages are the closest thing we have to a direct continuation of the cultural and linguistic community that gave rise to Western civilization,” Nichols says.
Population geneticist Henry Harpending of the University of Utah, Salt Lake City, has just begun the job of unraveling the genetic ancestry of Daghestanian speakers and is impressed with Nichols's work. “For years I wished linguists would get in the game. Nichols sure is.”
Nichols is now reconstructing the ancestral language, hoping for more clues to the culture of these early farmers. But she has to work fast, for the three Nakh languages are vanishing. Although there are still about 900,000 Chechen speakers left, the other two tongues have fewer speakers, and all three are being eroded by war, economic chaos, and Russian educational practices, Nichols says.
More than 3200 kilometers away, another linguist is mining Celtic languages—which are also all considered endangered—for clues to the early inhabitants of the British Isles. Artifacts show that the islands were occupied long before Celts from the European continent made landfall about 700 B.C., but the identity of those earlier occupants remains a mystery.
So Orin Gensler of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, analyzed Celtic languages, including Irish Gaelic, Scottish Gaelic, Welsh, and Breton. Once prevalent throughout Europe, these languages are now spoken only in the British Isles and Brittany in France. Linguists have noted surprising grammatical differences between Celtic languages and related languages such as French, while at the same time seeing striking resemblances between Celtic and Afro-Asiatic languages spoken for millennia across a swath of coastal Northern Africa and the Near East.
In a forthcoming monograph, Gensler studied 20 grammatical features found in both Celtic and Afro-Asiatic languages. He sought these linguistic traits in 85 unrelated languages from around the world, reasoning that if the features were widespread, their appearance in both Celtic and Afro-Asiatic languages might be mere coincidence. But if the shared features are rare, coincidence is unlikely. Overall, Gensler found that about half the shared features are rare elsewhere. “I think the case against coincidence is about as good as it could be,” he says.
And a closer look at a number of features, including word order, offers a provocative theory for just how the Celtic islanders acquired these linguistic traits. In Gaelic and Welsh—and many Afro-Asiatic languages—the standard sentence structure is verb-subject-object. But Celtic languages spoken in Continental Europe in antiquity have the verb in the final or middle position. The best explanation for the shift to verb-initial order, says Gensler, is that when Celtic speakers made landfall on the British Isles, Afro-Asiatic speakers were already there. As these people learned Celtic, they perpetuated aspects of their own grammar into the new language.
Although others are interested in Gensler's idea, so far “there is no significant northwest African genetic signature … in Celtic populations,” says Peter Underhill, a molecular geneticist at Stanford University in California. But in this instance, he adds, the linguists may be ahead of the geneticists, for researchers need more genetic markers before they can confirm or refute Gensler's idea.
Zebrafish Earns Its Stripes in Genetic Screens
Researchers are using the zebrafish to search for a variety of genes involved in everything from obesity to bone diseases
Cold Spring Harbor, New York—In biology, as in mechanics, one of the best ways to figure out how something works is to break it. For decades, biologists have followed that principle, randomly mutating genes in experimental animals to discover the roles of those altered genes and the proteins they encode. Until recently, most of these studies have been performed on fruit flies, nematode worms, or mice—each of which has its own staunch advocates. But in the last few years a new creature has joined the cadre of laboratory mutants: the zebrafish.
Originally from the tropical rivers of India, the zebrafish has long been a favorite of aquarium keepers. In the scientific arena, developmental biologists first took a shine to the animal because its clear embryos offer an unparalleled opportunity to watch the development of vertebrate tissues and organs not present in the simpler worm or fly. Now, it seems, biologists of all stripes are clamoring for this lab-friendly creature, as was evident at a recent meeting.* Using a variety of clever techniques to identify mutants, scientists are using the zebrafish to probe the genes involved in a wide variety of human maladies, from bone diseases to obesity.
Because it is a vertebrate, the zebrafish is genetically closer to humans than flies or worms are, and its small size, quick generation time, and inexpensive care make it possible to keep thousands of fish in a single lab, says Marnie Halpern of the Carnegie Institution of Washington in Baltimore, Maryland. Add to that the transparency of its young, and you have what some consider an ideal lab animal. “The only limit is the creativity and imagination of the scientists,” says Halpern. Hoping to inspire that creativity, officials at the U.S. National Institutes of Health are offering $4.5 million next year to fund efforts to devise new screens for mutants in zebrafish.
The slender fish might not seem like an obvious choice as a model for obesity research, but molecular neuroscientist Wolfgang Liedtke of The Rockefeller University in New York City believes it is. Liedtke is a postdoctoral fellow in the laboratory of geneticist Jeffrey Friedman, who with his colleagues originally identified the leptin protein in mutant mice. Leptin is a key part of the body's system of weight regulation. Although the mouse has been “a tremendous tool” for understanding weight regulation, Liedtke says, the fish is a welcome addition. Because an investigator can create so many mutants in a single laboratory, he or she is likely to turn up new genes involved in the complex pathway, he says, whereas for a similar effort in the mouse “you would need a factory.”
Mice missing the leptin protein or its receptor never seem to feel full. They continue eating indefinitely, growing obese. To find mutant fish with similar behavior, the researchers took advantage of the animals' clear bodies to identify abnormal feeding patterns. They created mutants by exposing zebrafish males to a chemical that causes mutations in sperm-producing cells. They placed the males' inbred offspring in petri dishes with an unlimited supply of brine shrimp, a favorite zebrafish food with a distinctive orange color. Next they transferred the well-fed fish to dishes containing algae, another zebrafish delicacy. Most fish ignored the algae, but a few continued to munch, as was evident from the green algae on top of the orange brine shrimp already visible in their stomachs. The Rockefeller team has identified three families of fish that seem insatiable. One of the mutants, dubbed jumbo, grows considerably larger than normal zebrafish. Another, called fressack (a German term for a hearty eater), is no larger than normal despite its gluttony.
Because mice missing leptin or its receptor also have problems regulating their body temperature, the scientists designed a screen for mutant zebrafish that might have similar defects. Fish are cold-blooded; they regulate their body temperature by swimming in warmer or cooler water. Liedtke and his colleagues rigged a tank with a temperature gradient. Normal zebrafish remain tightly clustered in one region of the tank—between 27° and 28°C. However, one mutant family, dubbed hot body, prefers water several degrees warmer. It also lacks satiety. Friedman cautions that it may be difficult to identify the mutated genes responsible for the abnormal behavior. He notes that in mouse mutants, appetite and satiety characteristics are notoriously variable and sometimes prove difficult to track down. Liedtke agrees, but he is optimistic that his zebrafish work will pay off.
When Shannon Fisher, a developmental biologist at The Johns Hopkins University School of Medicine in Baltimore, began looking for bone mutations in zebrafish, the standard screen for bone structure required killing each potential mutant—often valuable fish that had required careful tending to survive to adulthood. Her training as an M.D., however, gave her an alternative: x-rays. With some initial help from her dentist and Pudur Jagadeeswaran at the University of Texas Health Science Center in San Antonio, and a bit of trial and error, she has designed a screen in which she anesthetizes the fish, takes a quick x-ray, and then returns them to their tank, where they revive in a few minutes. “It's easy and convenient,” she says, and it has already paid off.
One of the first mutants she identified, called chihuahua for its rounded forehead and small jaw, has some of the same symptoms as humans with the disease osteogenesis imperfecta. The mutant initially caught her attention with its small size. When she x-rayed it, she found a mass of broken ribs. On closer inspection of the mutant strain, she found that it forms cartilage normally during early development, but its bones in adulthood are fragile. Although she has not identified the exact mutation at fault, it seems to be near a collagen-encoding gene fingered in human cases of osteogenesis imperfecta. Fisher suspects that this mutant strain could prove valuable for research into collagen's role in bone formation and maintenance.
Whereas broken bones are relatively easy to spot, problems in biochemistry can be harder to detect, even in the see-through zebrafish. But here, too, scientists are hoping the animal will help answer some difficult questions. Steven Farber of Thomas Jefferson University in Philadelphia and Michael Pack of the University of Pennsylvania School of Medicine in Philadelphia and their colleagues have devised a way to observe the biochemical reactions of digestion in living zebrafish. To identify genes that regulate one part of digestion, lipid processing—known to influence the development of colon cancer, heart disease, and other human ills—the team has designed lipid molecules that glow when they are digested by a key enzyme in the intestine. When the scientists feed this molecule to zebrafish larvae, they can see the molecule light up in the digestive tract and liver and then travel to the gallbladder. Although the screen is in its early stages and the scientists have only begun to identify potential mutants, developmental biologist Didier Stainier of the University of California, San Francisco, is impressed. “If you can actually ask questions about how these intestinal cells are processing a substrate, that's very powerful,” he says. The team hopes to design other molecules to probe the digestion of carbohydrates and other molecules. “We're visualizing biochemical processes in living vertebrates,” says Farber. “Zebrafish is the only game in town where you can do that.”
The most difficult part of the process is still tracking down the mutant gene itself, but scientists say that ongoing work in zebrafish genomics is making that task easier. And the likely launch of an effort to sequence the zebrafish genome (Science, 5 May, p. 787) will also ease that task. Genome projects in the mouse and human, says developmental biologist Nancy Hopkins of the Massachusetts Institute of Technology, will only make the zebrafish more important. Those projects will turn up thousands of unknown genes, she says, and it is likely to be easier to figure out what they do in the fish. Says Hopkins: “We've barely begun to tap” the potential of zebrafish.
* Zebrafish Development and Genetics, Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, 26 to 30 April.
- DIGITAL ENCRYPTION
Algorithmic Gladiators Vie for Digital Glory
- Charles Seife
As NIST zeroes in on a new cryptographic standard, the competitors scramble to face an unforeseen threat—from lawyers
And then there were five. For 2 years, glory-seeking cryptographers from across the globe have been cracking one another's ciphers, trying to establish their own algorithms as the new standard in encryption. In mid-April the five finalist teams faced off in New York. They subjected each other's algorithms to the withering fire of cryptographic attack after cryptographic attack, while judges observed the melee. But as the smoke cleared, the contestants found themselves facing a menace from a new and unexpected quarter: the realm of patent law.
The five finalist algorithms—MARS, Twofish, Rijndael, RC6, and Serpent—are vying to be the new standard in encryption, replacing the aging Data Encryption Standard (DES), endorsed by the National Bureau of Standards in the mid-'70s. Thanks to the government's stamp of approval, DES has become perhaps the most widely used encryption system in the world. The new algorithm, selected by the National Institute of Standards and Technology (NIST)—the Bureau of Standards' successor—will replace DES and should assume its mantle of preeminence. No money is at stake in the competition; under NIST's licensing terms, the inventor of the Advanced Encryption Standard (AES) will not benefit financially. “The big thing, personally, is the fun of doing it,” says John Kelsey of Counterpane Internet Security in San Jose, California. “If you're in block ciphers, it's the coolest thing you can do, as far as I can tell.”
Outside the arena, however, the stakes are serious indeed. If someone were to crack the AES a few years down the line, all the reams of data encrypted with NIST's standard could be compromised. Medical records, bank transactions, and other confidential information would potentially be wide open to anyone with the know-how, and it would take years for engineers to replace the cracked algorithm in smart cards, computers, and descrambler boxes.
Fears of such a breach are what drove officials to seek a replacement for DES in the first place. DES was designed to take a stream of digital data, split it into 64-bit chunks, and encipher it. In theory, an eavesdropper could not decipher the data without guessing the 56-bit cryptographic “key” that opens the cipher—a secret shared by only the sender and intended receiver.
By the early 1990s, fissures had begun to show in DES's security. Cryptographers such as Eli Biham and Adi Shamir of the Technion-Israel Institute of Technology in Haifa developed new attacks such as “differential” cryptanalysis, in which a cryptographer tries to crack an algorithm by feeding very slightly different data into it and comparing how the encrypted outputs differ. As a result, instead of having to guess 56 bits of a key (which requires a search through 2^56 possible keys), would-be crackers could decipher the message after trying only 2^46 keys or so—a 1000-fold improvement. More important, computers got faster. DES was being cracked by brute-force searches in which speedy computers simply tried every possible key. Last year, in response to a challenge by the San Jose-based cryptography company RSA Security, volunteers yoked nearly 100,000 PCs together via the Internet to decipher a DES-encrypted message. They succeeded in less than a day. To beef up security, wary DES users started running the algorithm three times with three different keys. NIST, however, decided that patches were not enough. In 1997, the institute called for a new standard; the AES contest was born.
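The arithmetic behind those figures is easy to check. A back-of-envelope sketch follows; the per-PC key-testing rate is an assumed round number, not a figure from the RSA challenge.

```python
full_search = 2 ** 56    # the DES brute-force keyspace
differential = 2 ** 46   # approximate work after differential cryptanalysis
print(full_search // differential)  # 1024 -- the "1000-fold" improvement

# ~100,000 networked PCs, each testing an assumed 10 million keys per
# second, expect to search half the keyspace before finding the key:
keys_per_second = 100_000 * 10_000_000
hours = (full_search / 2) / keys_per_second / 3600
print(round(hours))  # about 10 -- consistent with "less than a day"
```

The same arithmetic shows why the candidates' 128-bit minimum keys are out of brute-force reach: each added bit doubles the search.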
Cryptographers from all over the world submitted candidate algorithms, from which NIST selected 15. Two years and a lot of code-cracking and skirmishing later, NIST narrowed the field to five finalists—and the international cryptographic community turned its attention to testing, and breaking, them.
On 13 and 14 April, participants in the cryptanalytic demolition derby gathered in New York to present their results at the third AES conference, the final gathering before NIST chooses a standard late in the summer. “It's the last AES conference. I'm thankful,” sighed NIST's Jim Foti. An overhead projector aimed at a screen at the front of the room greeted participants with an apt image: King Kong perched atop the Empire State Building, under attack by a swarm of biplanes.
As cryptographic algorithms go, the five rival contestants are 500-pound gorillas themselves. Each employs a key 128, 192, or 256 bits long—potentially much more secure than the 56-bit key of DES. In broad terms, they all do the same thing, taking a 128-bit chunk of text and jumbling it up and changing the bits so that they become unreadable. Each scrambling algorithm is publicly available; even if an eavesdropper knows exactly how the encrypting machine works, the knowledge is useless without the key.
An important feature of a secure cipher is that a tiny change in the cryptographic key makes a huge difference in the scrambled output. As a result, guessing part of a key doesn't give any information about the text or about the rest of the key. Ideally, even if an eavesdropper guesses 127 of the 128 bits in the key, the message decrypted with this “key” should be just as unreadable as if he used a random key. This small-key-changes-yield-huge-output-changes feature is called “nonlinearity.”
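This avalanche behavior is easy to observe empirically. The sketch below uses SHA-256 from Python's standard library as a stand-in nonlinear function (none of the five candidates ships with Python): flip a single input bit and count how many output bits change.

```python
import hashlib

def bit_string(data: bytes) -> str:
    """Render bytes as a string of ones and zeros."""
    return ''.join(f'{byte:08b}' for byte in data)

def hamming(a: bytes, b: bytes) -> int:
    """Count the bit positions where two equal-length byte strings differ."""
    return sum(x != y for x, y in zip(bit_string(a), bit_string(b)))

msg = b'attack at dawn'
flipped = bytes([msg[0] ^ 0x01]) + msg[1:]  # flip one bit of the input

d1 = hashlib.sha256(msg).digest()
d2 = hashlib.sha256(flipped).digest()
print(hamming(d1, d2))  # typically close to 128, half of the 256 output bits
```

A linear function would pass the one-bit change straight through; a good cipher or hash scrambles it across roughly half the output.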
Four of the five algorithms create nonlinearity the way DES does, through devices called “S-boxes.” An S-box takes a string of ones and zeros and returns a different set; it's essentially a look-up table that converts one string to an unrelated one, turning small changes in input into large changes in output. IBM's entry, MARS, boasts an S-box-based cryptographic “core” surrounded by a “wrapper” of lighter weight scrambling subroutines that protect the core from direct cryptanalytic assault. Serpent, designed by Biham and partners in Britain and Norway, passes information through eight S-boxes over and over, cycling the text through many more “rounds” of manipulation than its competitors do before it spits out the encrypted data. Twofish, designed by Bruce Schneier of Counterpane and colleagues, changes the contents of the S-boxes depending on the cryptographic key, unlike the others, which have fixed S-boxes. Rijndael, the entry of two Belgian cryptographers, relies upon elegant mathematical manipulations of the data, arranged into a square; a single S-box adds nonlinearity.
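In code, an S-box really is just a table lookup. The 4-bit table below is the first row of DES's own S1 table, used here purely for illustration; the finalists' S-boxes are larger and differently constructed.

```python
# First row of DES's S1: maps each 4-bit input to a 4-bit output.
SBOX = [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7]

def substitute(nibble: int) -> int:
    """Look up a 4-bit value in the S-box table."""
    return SBOX[nibble & 0xF]

# Inputs 0b0100 and 0b0101 differ in a single bit, but their outputs
# differ in three -- the small-change-in, large-change-out behavior
# an S-box is built to provide.
print(f'{substitute(0b0100):04b}')  # 0010
print(f'{substitute(0b0101):04b}')  # 1111
```

Twofish's twist, as described above, is that the contents of such tables are themselves derived from the key rather than fixed in advance.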
The S-box-free algorithm is RC6, designed by RSA Security's Ron Rivest and other cryptographers in the United States and Britain. It takes slices of data and “rotates” them by cutting a chunk off one end and pasting it back on the other. The amount of rotation depends upon the data being rotated. Changing a single bit in a chunk of data tends to cause a change in rotation, altering the data quite a bit and changing how the data get rotated even more; small differences in data propagate and proliferate as the process repeats over and over, ensuring nonlinearity.
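The core operation is a data-dependent circular shift. A minimal sketch of the idea follows; it is not the actual RC6 round function, which also mixes in multiplication and key material.

```python
def rotl32(x: int, r: int) -> int:
    """Rotate a 32-bit word left by r bits: cut r bits off one end
    and paste them back on the other."""
    r &= 31
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

def mix(a: int, b: int) -> int:
    """Rotate word a by an amount taken from word b -- so changing b
    changes how far a moves, the RC6-style data-dependent rotation."""
    return rotl32(a, b & 31)

print(hex(mix(0x80000001, 1)))  # 0x3: the high bit wraps around the end
print(hex(mix(0x80000001, 5)))  # 0x30: a different b moves a much further
```

Because the rotation amounts are themselves data, small input differences cascade through successive rounds, which is how RC6 achieves nonlinearity without lookup tables.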
Choosing a winner among the finalists is no easy task. Even figuring out which algorithm runs fastest is almost an intractable problem, because of the huge number of different types of software and hardware the algorithm will run on. Rijndael appears to be the fastest overall, but most designers had to be content with a mixed showing, slow on some platforms and fast on others. Serpent, for example, is the quickest on field-programmable gate arrays—reconfigurable hardware devices—but the slowest on a Pentium. “Looking at all the performance requirements, we generally don't suck,” says Twofish designer Schneier.
More important than speed is security. None of the algorithms has been broken, but some have been bloodied. Rijndael performs its mixing and jumbling operation 14 times before spitting out an answer. Cryptographers have figured out mathematical tricks to crack an eight-round variant with a lot less effort than guessing all the possible keys. Nine of 16 rounds in MARS have been cracked, as have 15 of 20 in RC6. The attacks are still theoretical—no computer today could use them to crack a cipher in any reasonable amount of time—but they might be cause for worry in the future. “Attacks are improved all the time,” Biham says. “If an algorithm has a very small security margin, it will be attacked.” Biham's entry, Serpent, edges out Twofish for the distinction of most secure algorithm, although the difference is probably academic. “It's a bank vault versus a bank vault with a bit of kryptonite in case Superman walks by,” jokes Nicholas Weaver, a cryptographer at the University of California, Berkeley.
Some conference participants, meanwhile, worried about a different type of assault: patent attacks. Intellectual property was not an issue when cryptography was mostly a government-only preserve, but now that commercial companies are in the cryptography game, the playing field has changed. Days before the conference opened, the software division of Hitachi Ltd. wrote to NIST claiming that it holds a U.S. patent on techniques used in four of the finalist algorithms. Whether or not the claim holds up, NIST is entering a “quagmire” of potential intellectual property disputes, says Josh Benaloh of Microsoft Research. The more popular and widely distributed a NIST-sponsored standard becomes, the messier and more expensive the legal battles might be. “I think it's far more likely than the cryptographic attack,” Benaloh says—and potentially much harder for cryptographers to handle.
“We are not lawyers up here, you know,” Schneier says. “I'm at a complete loss at how to deal with various patent laws.” Others share his bewilderment. “This is a very real attack. This is a very significant attack,” says James Hughes of StorageTek in Minneapolis. Even a hint of patent trouble should disqualify a contender, Hughes says: “If it happens, I suggest that NIST withdraw its suggestion immediately and pick another.” For NIST, the possibility of patent troubles just serves to make a tough call even tougher. Any choice it makes will satisfy some parties and anger others. Shamir alone claims to have found an easy solution. “No one algorithm is head and shoulders above the rest,” he says. “I suggest having a fair coin flip.”
Lee's Special Status Fuels Academy's Rising Reputation
- Dennis Normile
Nobel Prize-winning chemist Lee Yuan-tseh is using his scientific skills and political savvy to raise the quality of research at the venerable Academia Sinica
Taipei, Taiwan—The teenaged girls stopped, stared, and squealed. Then they pulled out pens and notebooks for an autograph. Their idol wasn't a musician or movie star. Instead, he was a stoop-shouldered, 63-year-old chemist who happens to be Taiwan's only Nobel laureate. Even on a 1996 hike in Taiwan's rugged central mountains, Lee Yuan-tseh was recognized “by every single person we met,” recalls his friend and hiking companion Henry Schaefer, a chemist at the University of Georgia, Athens. And with those teenaged girls, “it was Leonardo DiCaprio treatment!”
It is hard to exaggerate Lee's stature in Taiwan. His 1986 Nobel Prize brought him respect. And since he returned home in 1994 after 3 decades in the United States, that respect has turned to admiration for his stellar public service—pushing educational reform, leading efforts to assist earthquake victims, and speaking out against governmental corruption. Indeed, Lee's reputation is so strong that his last-minute endorsement of Chen Shui-bian in the recent presidential election is seen as an important factor in Chen's victory. Chen subsequently offered Lee the premiership, which he turned down (Science, 7 April, p. 28).
And those are just his extracurricular activities. Lee's real job is running Academia Sinica, the island's premier collection of research institutes. Under his leadership, the academy is earning the sort of recognition in scientific circles that Lee gets on the hiking trails. “I don't know of any institute's scientific prestige going up so far in such a short period of time,” says Schaefer, remarking on a decade that has seen a sixfold jump, from 200 to 1220, in peer-reviewed papers in international journals. Adds P. C. Huang, a molecular biologist at Johns Hopkins University in Baltimore, “Academia Sinica is definitely a major force to watch.”
It hasn't always been so. Founded on the mainland in 1928 and sharing its birthday and Chinese name with the Chinese Academy of Sciences, Academia Sinica is both an honorary society of 177 “academicians” and a collection of 24 research institutes spanning the natural sciences, mathematics, the social sciences, and humanities. For decades it was starved of cash, as Taiwan had little to spend on science. That lack of resources, plus an oppressive military government, drove many promising scientists abroad to pursue advanced degrees and research opportunities.
By the 1980s, however, the scientific climate was improving. Taiwan was evolving into a democracy, its universities had started turning out Ph.D.s, and its fast-growing economy was providing sufficient wealth for the government to invest in research. Academia Sinica, then headed by physicist Wu Ta-you, responded by modernizing facilities and the research agenda, often with the advice of prominent Chinese-American scientists. When Wu was ready to retire, President Lee Teng-hui offered Lee the job. He accepted because the academy was, in his view, the only scientific organization on Taiwan “that would be able to catch up to a world-class level in a relatively short time.”
Making his move. The son of a well-known artist and a schoolteacher, Lee was born in Hsinchu in 1936 and left Taiwan in 1962 to pursue a Ph.D. in chemistry at the University of California, Berkeley. He returned briefly in 1972 as a visiting professor at Tsinghua University, but his hopes of strengthening Taiwan's scientific capabilities ran up against inadequate levels of support. Although his work on chemical reaction dynamics was getting international attention, Lee felt he was too young “to make things move.”
So he returned to the United States, where in little more than a decade his efforts were crowned with a Nobel Prize. By the early 1990s, he was ready to give his homeland another try. “We had a lot of exceptionally bright youngsters working here,” he says about the academy's workforce. “But they needed some guidance.”
His timing was good. Many senior Taiwanese scientists had made their marks abroad and were open to nurturing Taiwan's scientific efforts. Despite never having run anything larger than his 25-member team of postdocs and grad students, Lee was viewed as a vital catalyst in achieving scientific respectability. “I thought Lee was just what Academia Sinica needed,” says chemist Lin Sheng-Hsien, who left Arizona State University in Tempe to become head of the Institute of Atomic and Molecular Sciences once Lee accepted the job. Lee even appealed to scientists lacking homegrown ties. Sunney Chan, a chemist Lee lured from the California Institute of Technology in Pasadena to be vice president for academic affairs, was born in the United States of parents who hailed from the mainland. “I simply believe in Lee and what he's trying to do here,” Chan says.
That faith seems to stem from Lee's approachable nature and an air of concern that reflects his genuine interest in people. “He's a very plain, very ordinary person, without anything [resembling] arrogance,” says James Shen, who left the University of California, Davis, to become director of the Institute of Molecular Biology in 1994. These qualities have extended his influence far beyond Academia Sinica. As chair of an educational reform commission, Lee left the capital of Taipei to crisscross the island, visiting schools and listening to parents and teachers. That kind of dedication has led the Taiwanese media to tag Lee as the island's “conscience.”
Lee's explanation for his popularity is consistent with his nature. “I think people respect me because they know I'm a very honest, sincere person trying to do my best for society,” he says. “I feel tremendous responsibility to be a role model. After being under a repressive regime for such a long time, people are still looking for leaders.”
Lee turned down the premiership partly to maintain the purity of that role model. “You have to make a lot of compromises [as a] politician,” Lee says. Instead, he will serve as a special adviser on Taiwan's delicate relations with the mainland. But his real job will continue to be running Academia Sinica, which he hopes to make “one of the world's leading research organizations.”
People power. To succeed, Lee must keep the money flowing (see graph). And that means dealing with a legislature still controlled by the party whose presidential candidate he opposed. “There may be some resentment,” he admits, but he doesn't think it will be serious. His political foray also hasn't derailed his ability to garner private-sector support: The academy expects to announce soon an agreement under which five Taiwanese tycoons will put up $80 million over 5 years for a functional genomics program, giving the academy an important presence in an emerging field. And despite Lee's endorsement of Chen, who was fiercely opposed by Beijing, work continues on the largest ever collaboration between mainland and Taiwanese scientists. The teams hope to study neutrinos captured at a distance from one of Taiwan's nuclear reactors.
Lee must also find a way to keep people on their toes. Tenure, once almost automatic, is now reserved for principal investigators who clear a series of hurdles. Salaries and research budgets are set after rigorous performance reviews, and international panels pass judgment on the portfolios of programs and institutes. But there are still a number of older researchers who moved up the ladder before review standards were tightened.
Lee's biggest concern, however, is to attract and retain the best talent. One sticking point, says Kenneth Wu, a former director of the Institute of Biomedical Sciences, is “the difficulties that productive investigators [face in] expanding their laboratories.” There is deep-seated cultural resistance to giving one person too much support, a holdover from the postwar years when scarce resources had to be shared. This attitude, Wu says, was a factor in his decision last year to return to the United States and join the University of Texas at Houston Health Science Center.
Lee recognizes the problem. “Especially for those who are doing well, we have to give them more resources, more assistants,” he says. But he doesn't have a free hand. Salary scales have to be vetted by the president's office, which hasn't wanted Academia Sinica to get too far ahead of the national universities. And the academy's charter does not permit a career path for engineers and technicians, making it hard to retain good support personnel.
Lee says he is making gradual progress in addressing these issues. He recently won authority to appoint locally hired scientists to the higher paying “distinguished fellow” positions originally intended to lure researchers back from overseas. He also hopes to create an English-language Ph.D. program. The program would generate a ready supply of graduate students for larger research teams, while English instruction would enlarge the talent pool by attracting students from around the world. Lee hopes to win final government approval in time to enroll the first class next summer.
Such challenges are likely to fill the remaining 4 years of Lee's self-imposed 10-year tenure. In the past, academy presidents were appointed for life, but characteristically, Lee is changing the charter to limit a president to two 5-year terms. “I'll definitely be the youngest retiring Academia Sinica president,” he says. “But I don't want to be one [of those scientists] who becomes a stumbling block in the path of the younger generation.”
Researchers Say New Institutes Offer Them a Chance 'To Do Good Science'
- Dennis Normile
Taipei, Taiwan—Academia Sinica's 24 institutes have counted on Lee Yuan-tseh to raise standards and to keep the money flowing. But they are on their own when it comes to research agendas and strategies. That combination has been a potent force in creating world-class science at two of Academia Sinica's newer and more productive institutes.
The Institute for Molecular Biology (IMB), set up in 1986, is the child of a dedicated team of Chinese-American scientists. Several of these scientists—including James Wang, a molecular biologist at Harvard University, and Ray Wu, a plant scientist at Cornell University—also served as part-time directors during the institute's early years. The big attraction, says Chien Cheng-Ting, who joined IMB after earning his Ph.D. at the State University of New York, Stony Brook, and completing a postdoc at the University of California, San Francisco, is the ability “to do good science” without leaving Taiwan. Director James Shen confesses that he planned to stay for 3 years and then return to the University of California, Davis. “But I'm still here [after 6 years], because it's very exciting scientifically,” he says. And Stanley Cohen, a molecular biologist at Stanford University, says that several teams are doing work “on a par with first-rate laboratories in the U.S., Europe, and Japan.”
Chien is one of IMB's up-and-coming researchers, earning attention for his work on Drosophila neural development. Others include Henry Sun, who has identified genes regulating Drosophila eye development, and Li Shiu-min, whose work elucidating how proteins are imported into plant chloroplasts earned her a special “Frontier of Science Grant,” Taiwan's most prestigious research grant.
Those planning the Institute for Astronomy and Astrophysics (IAA), established in 1993, faced a different problem. Top astronomers are attracted to leading-edge instruments, yet funding agencies want to see a core group of qualified astronomers in place before underwriting big-ticket observatories. The solution for IAA, says director (Fred) K. Y. Lo, is to focus on “unique forefront projects,” where a modest investment promises significant scientific returns.
The institute's first major achievement was to join the submillimeter-wave interferometric array (SMA) planned by Harvard University's Smithsonian Astrophysical Observatory atop Mauna Kea in Hawaii. The SMA, to start up later this year, will be the first to use interferometry at these wavelengths, which penetrate the interstellar gas and dust and promise a glimpse into the formation of stars and planetary systems. IAA made a timely offer to add two antennas to the originally planned six. “It is a very shrewd investment,” says David Jewitt, an astronomer at the University of Hawaii, Manoa. He says IAA's contribution will “significantly expand the scientific capabilities” of the array and give institute scientists a role in running the facility.
Closer to home, the institute is building an array of three automated optical telescopes to count the smaller objects in the Kuiper belt, the area of the solar system beyond Neptune where many comets originate. The data should shed light on the mass of matter that evolved into the sun and the planetary system and sharpen our knowledge of the threat to Earth from comet collisions. In addition, the institute has just won government approval to build the Array for Microwave Background Anisotropy. This compact array of millimeter-wave antennas will put the IAA “at the forefront of a global race to get more precise indications of the age of the universe,” says Masato Ishiguro, a radio astronomer at Japan's National Astronomical Observatory.
While waiting for these instruments to come on line, the institute's 35-person staff has borrowed time on other observatories and published more than 150 papers in international journals. But the real attraction is the chance to plow fresh ground with a new instrument. “I wouldn't have come here if [the SMA] wasn't going ahead,” says Lo, who came to the institute from the University of Illinois, Urbana-Champaign, despite the lack of ties to Taiwan.
Ishiguro says that the SMA is just the beginning. Once IAA scientists get their own instruments up and running, he predicts, “we can really look forward to some exciting results.”
Everglades Restoration Plan Hits Rough Waters
- Keith Kloor*
An 11th-hour proposal to boost water flow into the Everglades has a Native American tribe—and some scientists—up in arms
Miccosukee Reservation, Central Everglades—Zigzagging by airboat through a maze of saw grass, Ron Jones spies a scene of ecological carnage. He cuts the propeller's engine to drift in for a closer look. A teardrop-shaped island, once covered with gumbo-limbo and other hardwoods, is a tangle of rotting snags. Battered by years of high waters, this and other tree islands in the central Everglades have become shadows of their former selves. That's bad news, says Jones, a wetlands ecologist at Florida International University in Miami, as the islands are havens for several endangered species, including snail kites, snakes, and panthers.
If tree islands are like canaries in a coal mine, gauging the vitality of the Everglades, then these canaries are drowning. The culprit is the biggest water engineering project in U.S. history: a network of levees, pumps, and canals built since 1948 to protect cities from flooding and to ensure that central Florida's sugarcane farmers get enough water for irrigation. Now the same agency that installed the plumbing, the U.S. Army Corps of Engineers, wants to rip much of it out and restore the so-called River of Grass to something resembling its natural state. After years of bickering over how to carry out the unprecedented project, the Clinton Administration last month sent to Congress a proposal to spend $7.8 billion over 20 years on a Comprehensive Everglades Restoration Plan.
Threatening to derail the plan, however, are concerns over funding and the science driving the massive project. One issue now complicating an already difficult congressional debate over the plan is an 11th-hour proposal to allow 20% more water—245,000 acre-feet—to flow into Everglades National Park (ENP) each year to nourish the park's ailing marshes and provide better habitat for wading birds and fish. Some scientists have pushed hard for this extra water. “In many years, the park is absolutely bone dry and the River of Grass dries up,” says Stuart Pimm, an ecologist at Columbia University. “If you don't have any water, you don't have any water birds.”
But the extra water may come at a cost to the central Everglades, which will have to bear the increased flow. A few years ago, scientists who helped draft the plan had mulled delivering this extra water to ENP. But they vetoed the idea because the ecological trade-off seemed too costly. “We didn't solve the problem of how we would get that extra water to the park without damaging the central Everglades” and its fragile tree islands, says John Ogden, an ecologist with the South Florida Water Management District (SFWMD), which is paying half the tab for the restoration and spearheading the state's Everglades research effort.
The idea's sudden resurrection has sparked a rebellion among a few erstwhile restoration backers whose support is deemed vital to the plan's political acceptance. Leading the charge is the Miccosukee Indian Tribe, a major player in Everglades politics. The Miccosukee say they use tree islands for hunting and for ceremonies. At a Senate hearing last week, a tribe lawyer accused the federal government of giving the central Everglades “second-class status” and thereby jeopardizing the restoration.
For years, the Everglades—which stretches from Lake Okeechobee to the Florida Keys—has been edging toward ecological collapse. Fragmentation of the wetland has parched some areas and flooded others, disrupting seasonal water flow. Invasive Brazilian pepper and melaleuca have supplanted native plants, while agricultural runoff has diminished water quality. Wildlife are swooning: Since the turn of the century, populations of wading birds such as wood storks, herons, and egrets have plummeted 90%.
Alleviating the Everglades's ecological ills is only one of several goals of the federal-state Everglades plan, known as the “restudy.” It calls for the removal of 400 kilometers of dikes and levees over 20 years and the construction of new filtering marshes, canals, and underground reservoirs. That would allow water to flow more naturally through the central Everglades and into the park. To compensate for the increased flow through the park, the plan would shore up flood controls and funnel more water to South Florida's booming cities and surrounding farms. Although the plan is essentially an overhaul of the existing flood control system, most conservationists have rallied behind it. “It's our last hope to rescue the nation's most endangered ecosystem,” says Stuart Strahl, executive director of Audubon of Florida.
But the new wrinkle—a proposal by the Army Corps to provide even more water to the ENP—has upset the delicate consensus among the stakeholders and threatens to undermine support for the restudy in Congress. At the heart of the dispute is how to heal the park without degrading the central Everglades to the north. For decades, the central Everglades has been treated as little more than a holding pen for water diverted from the agricultural fields and South Florida's East Coast cities. It's divided by levees into five pools, together nearly the size of Rhode Island, from which water is shunted south into the park. The plan would tear down many of the levees while increasing flow through the central Everglades.
Nobody has objected more loudly to that idea than the Miccosukee, whose 100,000-hectare land holdings in the central Everglades include a large swath of remnant saw grass marshes, wet prairies, and tree islands, as well as a housing expansion and a $50 million casino that opened last summer. In a lawsuit and in congressional testimony last week, the tribe asserts that the extra water would result in continued flooding and destruction of their property. The Miccosukee also contend that the additional water, which will come from urban runoff, will pollute their marshes. They adamantly oppose any more water being figured into the restoration. “We're willing to kill the restudy if we have to,” says Gene Duncan, the tribe's water resources manager. “We cannot allow the central part of the Everglades to be sacrificed to the park.”
Some scientists are siding with the Miccosukee—not to defend the tribe's gambling interests, but to support its contention that a central Everglades brimming with water will aggravate the plight of tree islands. “The tribe has legitimate concerns,” says Lorraine Hisler, a U.S. Fish and Wildlife Service biologist who studies tree islands. She also thinks that the urban runoff would pollute the marshes. The bottom line, says Bradley Hartman, environmental services director for the Florida Game and Fish Commission, is that “if you have to jam an extra 245,000 acre-feet of water through the system, that would result in higher waters and more damage to the tree islands.” Others disagree, arguing that with the levees gone, water won't get stacked up in the central Everglades.
If tree islands truly are in danger, that would be bad news, as their ecological stock is clearly on the rise. Hisler and others have identified tree islands—rock or peat mounds that make up about 8% of the Everglades—as keystone habitat that nurtures more plant and animal life than any other habitat in the central Everglades. Tree islands provide the only refuge for land animals in the central Everglades and crucial roosting grounds for endangered snail kites, wood storks, and other wading birds. According to a recent SFWMD report, “tree islands may provide the best and surest measure of the overall health of the Everglades.” To protect this fragile habitat, state biologists are about to initiate a major restoration effort to shore up the most degraded islands in the central Everglades. The extra water flow could wipe out this effort, says SFWMD avian ecologist Dale Galwik.
While acknowledging the ecological importance of tree islands, ENP officials say the extra water is necessary to heal the park's ailing marshes. The Army Corps proposal translates to boosting water flow from 70% to 90% of historical levels, which park superintendent Dick Ring says would allow key habitats such as marl prairies to rebound. Without that 20% increase in flow, animals that rely on the prairies, such as the endangered Cape Sable seaside sparrow, and their predators would not recover. Critics, meanwhile, contend that ENP is ignoring findings suggesting that severe soil erosion in the central Everglades over the last half-century lowers the threshold for ecological harm from the extra water. “It would be too deep for too long if you put this water in there,” says hydrologist Thomas MacVicar, a consultant to the Florida Department of Agriculture.
Army Corps officials say the extra water is not a fait accompli. Although the proposal is part of the Administration's request for initial funding for the restudy, set to begin next year, the Corps will put the kibosh on the extra flow if a feasibility study over the next 4 years shows that it can't be pumped into the park without harming the central Everglades. “We made a commitment to consider, not deliver, the water,” says Stu Applebaum, ecosystem restoration chief for the Army Corps in Jacksonville.
Although the restoration plan won strong bipartisan support in Congress when it was unveiled last fall—even after the extra water provision was tacked onto the plan—it's now hitting some unexpected rapids. In a hearing before the Senate Committee on Environment and Public Works last week, panel member George Voinovich (R-OH) argued that the restudy “was rushed to this Congress for its consideration.” His feelings are echoed by powerful colleagues on the House side. “We need to make sure that everything we do is based on good science,” says Representative Ralph Regula (R-OH), chair of the House Appropriations Committee. “I'm not sure that's the case.”
Congressional staffers say their committees plan to apply pressure on the Army Corps and the state of Florida to justify the science behind the plan and its decisions about which ecological trade-offs to make. The challenge for ecologists, predicts SFWMD ecologist Fred Sklar, will be to avoid repeating the pattern of the Everglades's last 50 years, in which “we save one area and destroy another.”