News this Week

Science  23 Jan 1998:
Vol. 279, Issue 5350, pp. 470

    Green Light for Long-Awaited Facility

    Andrew Lawler

    The Spallation Neutron Source will be the largest new science project in the U.S. budget. It will give neutron-scattering scientists a welcome boost after the demise of the Advanced Neutron Source.

    It's been a tough decade for U.S. researchers who rely on powerful neutron beams for diverse work in physics, chemistry, biology, and materials science. They sat frustrated on the sidelines in the late 1980s, unable to win support for an up-to-date facility. And when they were on the verge of securing approval for the $3 billion Advanced Neutron Source (ANS) in 1995, the Administration withdrew its support for the giant reactor, citing rising costs and nuclear proliferation concerns. Yet another blow came last year, when a tritium leak shuttered—perhaps permanently—an important reactor at Brookhaven National Laboratory. U.S. scientists have been forced to grit their teeth and board planes to work at more modern neutron sources in Europe. “The community is kind of numb,” says one leading researcher.

    Now those same scientists are feeling a rush of adrenaline. Vice President Al Gore was expected to announce this week that the White House will back plans to build a $1.3 billion Spallation Neutron Source (SNS) at Oak Ridge National Laboratory in his home state of Tennessee. The facility, which would generate neutrons with an accelerator instead of a reactor (see sidebar), will be the biggest new science project in the budget to be submitted to Congress next month. Prospects that construction could start by 2000 appear good, say those familiar with the effort. A bipartisan group of lawmakers supports it, the federal budget crunch is easing, and the diverse neutron community appears solidly behind the new project. And unlike previous Department of Energy (DOE) facilities, which were primarily the domain of individual labs, this one will be built as an ambitious collaborative effort of five labs. This approach not only makes technical sense but will also create a political juggernaut, its supporters say. Skeptics, however, worry that the management structure—which a few complain smells more of pork than practicality—could prove too cumbersome and expensive. And they caution that federal funding remains tight.


    If it is built, the SNS will be the most powerful accelerator-driven neutron source in the world. Its neutron beams will be used to probe the structure of liquids or solids in ways that other particles cannot, and the data it generates will allow materials scientists to design new materials such as superconductors, chemists to develop new polymers, and biologists to trace the flow of proteins in blood. Despite these concrete applications, however, “it's been a long and exhausting task to generate enthusiasm in politicians” for such a facility, says one senior neutron researcher.

    Nuclear fizzle

    When the ax fell on the ANS in early 1995, it devastated supporters. “I'm still not over it,” says Frank Bates, a chemical engineer at the University of Minnesota, Minneapolis, and president of the Neutron Scattering Society of America. “Eight years of work for nothing.” Oak Ridge officials, however, quickly shifted gears. Within days of the ANS cancellation, DOE ordered a team of labs led by Oak Ridge to come up with plans for an alternative facility that would cost $1 billion or less. DOE and lab officials knew that a neutron source based on a nuclear reactor, like ANS, would be out of the question, so the new facility would have to be an accelerator source. “I live in the real political world, and nothing we can do will get us a reactor,” says Oak Ridge director Al Trivelpiece.

    DOE officials said in a statement that Oak Ridge was the “preferred site” for the project. It was, as DOE and lab officials frequently say, Oak Ridge's turn to build a major new scientific facility. But the decision rankled some, because Los Alamos National Laboratory in New Mexico, Brookhaven in Upton, New York, and Argonne National Laboratory outside Chicago all have greater experience in accelerators than the reactor-oriented Oak Ridge has. “Oak Ridge was a Johnny-come-lately” to the spallation field, sniffs one lab official. While cooperating with Oak Ridge, those labs quietly worked on their own alternative projects.

    The odds of any spallation source being built, however, looked slim 3 years ago, given the push to balance the federal budget and the low priority assigned the project within DOE after the ANS debacle (Science, 9 August 1996, p. 728). But Trivelpiece and Bill Appleton, former ANS chief and now SNS associate director at the lab, set out to create a single project that could draw on the expertise and political firepower of five labs. “Our problem was we didn't have the technical know-how, so we went to each lab and asked about their capability,” recalls Appleton. “Some labs held out and said they could do it cheaper, but the agreement formed quite naturally.” Ultimately, other lab officials acquiesced when it became clear that infighting would kill the project in the cradle. “There was no hard sell—they saw the future as well as we did,” says Appleton.

    The collaboration that emerged is a technical winner, Appleton maintains, because it allows each lab to contribute its expertise. Lawrence Berkeley National Laboratory in Berkeley, California, is responsible for the ion source, while Los Alamos will oversee the linear accelerator. Brookhaven will design the accumulator ring, while Argonne will handle instruments and experimental facilities. Oak Ridge officials will oversee the entire effort and provide the target apparatus.

    While Appleton pieced together the design team, Trivelpiece worked on the politics. He set up a meeting between Tennessee Governor Don Sundquist (R), staffers for Gore, and Jack Gibbons, President Bill Clinton's science adviser. Sundquist and Gibbons were old friends, because Gibbons had worked for years at Oak Ridge and Sundquist had served on the board of Gibbons's Office of Technology Assessment during Sundquist's tenure in Washington as a congressman. To emphasize Tennessee's commitment to the spallation source, Sundquist pledged $8 million in state funds for a Joint Institute for Neutron Sciences at Oak Ridge. Meanwhile, both Tennessee senators, along with Representative Zach Wamp (R-TN), who represents the Oak Ridge district, agreed to back the effort.

    While Trivelpiece emphasizes that the five-lab SNS collaboration is the best way to develop the technology, he also allows that “it's a great way to deal with the politics.” Los Alamos's involvement, for example, has won over Senator Pete Domenici (R-NM)—who chairs the DOE funding panel—to the project. “Senator Domenici has to be a key player,” says Appleton. “The fact that Los Alamos is on the critical path certainly helps.”

    Crossed fingers

    The SNS's management structure worries some, however. “There are a lot of positive aspects to it—such as the politics—but there are complexities, too. You have to rely on five labs to get it built,” says Jack Rush, who manages the materials science and engineering lab at the National Institute of Standards and Technology in Gaithersburg, Maryland. “We're all keeping our fingers crossed.”

    Jim Decker, DOE deputy energy research director, admits that at first he was skeptical of the multilab structure because “this is more difficult to manage” than a typical one-lab program. But he's convinced that the new approach—allowing designers to work largely at their home labs—makes more fiscal sense than moving accelerator experts from other labs to Oak Ridge. “That's very expensive,” he says. New communication technology will allow widely separated team members to share design drawings and teleconference in ways impossible just a decade ago, he notes. The key to success, he adds, will be to ensure that Oak Ridge maintains full responsibility for the overall effort.

    In putting together the project, says Decker, the lab is doing “a terrific job.” He cites two independent SNS cost reviews that came in with numbers nearly matching those of Oak Ridge. The lab estimated that the facility would cost $1.26 billion over 6 years; both cost reviews suggested adding at least a year to construction, and one recommended adding some instrumentation as well, bringing the total to $1.33 billion. The original $1 billion DOE figure, say Oak Ridge officials, did not take into account the added year, additional instruments, design changes that will allow the facility to be upgraded cheaply, and inflation.

    Supporters say the cost—far less than that of the ANS—should make it more palatable to members of Congress. But one congressional aide warns that “any big new project is in question,” despite talk of a rosier budget picture. “DOE energy programs like this are pretty low on the food chain” of appropriations, the staffer adds. In addition, the House Science Committee was rankled last year when DOE officials refused to consider possibly cheaper alternatives to the Oak Ridge site. DOE managers explained privately that any attempt to yank the SNS out of Tennessee would anger the White House and state delegation, jeopardizing the project.

    In the meantime, neutron-scattering researchers are cautiously optimistic that the promise of an SNS will assuage the pain of the past few years. “There are still a lot of feelings of depression and anxiety, but things have brightened a bit,” says Rush. Adds James Jorgenson, who heads Argonne's neutron-scattering team: “We're still alive—and that's really fantastic.”


    An Accelerator to Boost Neutron Science

    James Glanz

    Neutrons are packed inside the nucleus of every atom heavier than hydrogen. It's getting the darn things out, so they can be used to probe materials and molecules, that has been the problem for frustrated American neutron scientists. Not that they don't know how, but they have lacked any prospect of a cutting-edge neutron facility since the demise, 3 years ago, of a proposed reactor in which nuclear fission would have provided a copious, steady flood of neutrons. Most of those scientists agree, however, that the intense bursts of neutrons from the accelerator-driven Spallation Neutron Source (SNS), which may be built at Oak Ridge National Laboratory (see main text), would be every bit as good a remedy for their frustration—and, all in all, might be better.

    Reactor partisans hold that their technique of choice is still superior for some studies. But “the world is going in the direction of spallation sources,” says Frank Fradin of Argonne National Laboratory outside Chicago. Lower cost is one reason, and there's a key technical advantage as well. The energy range of the neutrons scattered from a sample is a crucial clue to its structure. In a reactor, researchers generally must sieve one neutron energy at a time from the mixture that continually streams from the reactor. But in a spallation source, the energies of all the neutrons scattered from a microsecond neutron pulse can be gauged at once, by their different “times of flight” to a detector.
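    The time-of-flight idea can be sketched as simple arithmetic: a slower neutron arrives at the detector later, so for a known flight path the arrival time fixes the neutron's velocity and hence its kinetic energy. The 10-meter flight path and the arrival times below are illustrative, not SNS design figures.

    ```python
    # Time-of-flight energy sketch: neutrons from one short pulse sort
    # themselves by energy along a flight path of known length.
    M_N = 1.674927e-27   # neutron mass, kg
    EV = 1.602177e-19    # joules per electron volt

    def energy_mev(path_m, time_s):
        """Kinetic energy (in milli-electron volts) of a neutron that
        covers path_m meters in time_s seconds."""
        v = path_m / time_s            # velocity from time of flight
        e_joules = 0.5 * M_N * v**2    # classical kinetic energy
        return e_joules / EV * 1000.0  # convert J -> meV

    # Arrivals at a detector 10 m from the source, after one pulse:
    for t_ms in (2.0, 4.5, 9.0):
        print(f"arrival {t_ms} ms -> {energy_mev(10.0, t_ms / 1000.0):.1f} meV")
    ```

    The 4.5-millisecond arrival works out to about 26 meV, a typical "thermal" neutron energy; earlier arrivals are hotter, later ones colder, which is exactly the sorting a pulsed source exploits.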

    SNS would produce its neutron pulses by first collecting bunches of protons in an “accumulator ring” after they had been accelerated to an energy of a billion electron volts. Sixty times a second, bunches would be extracted to blast flowing, liquid mercury, shattering and boiling its nuclei to release bursts of neutrons. The neutrons would pass through one of several “moderators,” depending on the experiment: water to produce hot, or high-energy, neutrons for a wide range of studies in materials science, structural biology, and polymer science, and liquid hydrogen for the cold neutrons used in studying especially large spatial scales or the fine vibrations of molecules.

    Thanks to its 1-megawatt accelerator and assorted technical advances, SNS would have a neutron flux at least 10 times larger than that of the best comparable spallation source—ISIS in the United Kingdom. And its data-collection rate should be more than 100 times higher than that of ISIS, which is already ahead of the best reactors, says Gabriel Aeppli of the NEC Research Institute in Princeton, New Jersey. The high data rate would aid studies of small, complex chunks of material, which require long “exposure” times today. Fradin says it would also open the way to mapping out three-dimensional pictures of stresses in large volumes of material—say, in a turbine rotor made of advanced ceramics.
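    Taken together, the figures quoted in the article — a 1-megawatt beam of billion-electron-volt protons, delivered in 60 pulses per second — imply on the order of 10^14 protons per pulse. This back-of-envelope sketch only does the bookkeeping; none of the machine parameters beyond those quoted are assumed.

    ```python
    # Bookkeeping for the quoted SNS beam figures: 1 MW of 1-GeV
    # protons delivered as 60 pulses per second.
    EV = 1.602177e-19              # joules per electron volt

    beam_power_w = 1.0e6           # 1 megawatt
    proton_energy_j = 1.0e9 * EV   # 1 GeV per proton
    pulse_rate_hz = 60.0

    protons_per_second = beam_power_w / proton_energy_j
    protons_per_pulse = protons_per_second / pulse_rate_hz
    avg_current_ma = protons_per_second * EV * 1000.0  # average beam current

    print(f"{protons_per_pulse:.1e} protons per pulse")
    print(f"{avg_current_ma:.1f} mA average current")
    ```

    The arithmetic gives roughly 1e14 protons per pulse and an average current of about 1 milliampere.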

    Researchers hope to bring the high neutron flux and the time-of-flight technique to bear on what Aeppli calls “the single most exciting problem of condensed-matter physics”: how electrons pair to produce superconductivity at relatively high temperatures in some materials. Researchers at existing neutron facilities are struggling to knock the electrons apart and chart their binding energies, but “we need a bigger hammer in order to succeed,” says Aeppli. For him and many other researchers, that hammer is the SNS.


    'Fountain of Youth' Lifts Biotech Stock

    Robert F. Service

    The biotechnology world received a reminder last week of Wall Street's fickle love of new research results, and researchers got a lesson in how that passion can play havoc with efforts to orchestrate the release of scientific information. On Tuesday, 13 January, the price of shares in Geron Corp., a California-based biotech company, rose 44% after news leaked that scientists there and at the University of Texas Southwestern Medical Center (UTSW) had managed to stem the aging process in cultured human cells. By week's end, however, as it became clear that income-generating products of the research are distant, Geron's share price had glided back down.

    The research, which confirmed that the enzyme telomerase can affect cellular aging, was scheduled for publication in the 16 January issue of Science. Following what has become standard practice for many journals, Science had sent information on the upcoming paper to reporters on 9 January, under a strict news embargo until 4 p.m. on Thursday, 15 January. A group called the Alliance for Aging Research (AAR), which promotes research on age-related diseases, also scheduled a news conference with some of the authors and other experts on aging to help explain the findings. The news conference was originally set for 1:30 p.m. on Thursday, with the information also embargoed until 4 p.m. But Geron's lawyers asked that the event be pushed back to 4 p.m. so that company officials couldn't be accused of trying to hype the company's stock price before publication.

    This careful choreography began to fall apart on Monday, however. AAR sent an embargoed media advisory offering some details of the study to a newswire service that distributed copies not only to news outlets but also to investors. The entire advisory was published online by a database service late Monday afternoon. And on Tuesday, the same advisory was reportedly published on America Online's popular “Motley Fool” investment chat page. Reports that telomerase had been identified as a possible “fountain of cellular youth” were soon all over the Internet, and Geron's stock price took off on Tuesday morning. At that point, Science lifted the embargo on the paper, and stories appeared in most major media on Tuesday evening and Wednesday morning.

    David Molowa, a biotechnology stock analyst with Bear Stearns in New York City, notes that biotech stocks are particularly prone to wide price swings because most companies don't have any products or make money, so their stocks trade largely on hopes of future earnings. “Investors get really excited and don't realize [any product] is decades away,” he says. Indeed, investors caused a similar spike in Geron's stock price last August following publication of another paper in Science that identified key segments of the human telomerase gene (Science, 15 August 1997, p. 955).

    But even scientists at Geron and UTSW downplayed immediate commercial implications of the company's research. The idea that the research will lead to new drugs “is clearly going out on a limb,” says Woodring E. Wright, a cell biologist at UTSW, who helped lead the new study. “What [the latest] study shows is that we can control the process of cellular aging, not in the body but in tissue culture.” It's a long way from there to affecting the body, he says. Even so, for a short time, it managed to add some youthful vigor to Geron's stock.


    U.K. Cooks Up Food Standards Agency

    Nigel Williams

    LONDON—Britain's Ministry of Agriculture, Fisheries, and Food (MAFF) has always suffered from a conflict of interest, catering to the needs of both food producers and consumers. But criticism that it puts farmers first has grown in recent years in the wake of several damaging episodes, including the epidemic of bovine spongiform encephalopathy (BSE) and outbreaks of food poisoning caused by Salmonella and the often fatal Escherichia coli O157. The number of such outbreaks has tripled over the past 10 years. Last week, the British government outlined plans to end MAFF's split personality by creating an independent agency to take control of food safety surveillance and research.

    The proposed Food Standards Agency will have an initial research budget of about $40 million, transferred from other government departments, and it will champion consumers. “It is time to see a shift in the way science sees the food chain, in a way that benefits consumers,” says nutritional researcher Philip James, director of the Rowett Institute in Aberdeen, who authored a report that led to the new proposals. The new agency will advise the government on food safety, conduct research and surveillance, and set and monitor standards for enforcement of food safety laws. It will also carry out public education and information campaigns.

    When the Labour Party, then in opposition, asked James to look into food safety policy last year, “the first thing I found … was the gap between the research interests of vets, farmers, meat processors, and consumers,” he says. “There was no mechanism for looking at questions along the whole food chain … [yet] the potential risks from fragmented decisions on the evidence from one part of the food chain can be amplified hugely in later stages,” he says. “E. coli O157 thrives without symptoms in some animals, but as few as 10 microbes can kill a human. We need very broad-ranging research to tackle problems like this.”

    James also hopes the new agency will help shift the focus of research on novel foods developed from genetically modified organisms. “Industry is researching and developing these products for its consumers—farmers—on the basis of maximum yield and profitability, and [members of] the public are never remotely consulted on these priorities,” he says. “I believe there will be a shift to a new research agenda based on public interests rather than industrial interests over the next 5 to 10 years, and I hope the agency can help achieve that.”

    Plans for the new agency have drawn cautious support from researchers. “I welcome the fact that it puts food standards on the political agenda,” says nutritionist Tom Sanders at King's College London. “But a key to its success will be to see just what research it pursues and how it goes about commissioning it.” Public interest lobby groups have also given the idea a preliminary thumbs up. After weighing comments, the government is expected to draw up legislation and implement the proposals by 1999.


    Medline Searches Turn Up Cases of Suspected Plagiarism

    Eliot Marshall

    When he began collecting data last year for a book about scientific misconduct, cancer researcher Marek Wroñski had no idea that he would set off a bomb in the scientific enclaves of his native Poland. But in the past few weeks, Wroñski's queries about an obscure misconduct notice in a Danish journal have exposed what Wroñski claims is a widespread case of plagiarism. He has also raised questions about the Polish scientific establishment's ability to investigate itself. These allegations have shaken two major universities, made headlines in Polish newspapers, and aroused the concern of Poland's science funding chief, Andrzej Wiszniewski.

    Wroñski claims that Andrzej Jendryczko, a chemical engineer and former professor at the Medical University of Silesia (MUS) in Katowice, Poland—along with a dozen or so co-authors who may or may not have understood what the main author was doing—published at least 30 biomedical research papers that repeat verbatim passages from other authors without giving credit. Five Polish researchers listed as co-authors on some of the papers hold coveted full professorships or are heads of university departments, according to Wroñski. One striking example he discovered is a paper on cancer of the cervix, published in English in 1991 in Zentralblatt für Gynäkologie, a German journal, that duplicates whole passages—with a few critical changes—of a 1979 paper in English on cancer of the larynx published in the Journal of Maxillofacial Surgery, also German.

    Jendryczko has not responded to letters Science sent by fax to his home and office, and Science could not reach him by telephone. However, on Science's behalf, Jan Latus, an editor at the New York City Polish language paper Nowy Dziennik, interviewed Jendryczko and his wife Barbara by phone at home. Jendryczko maintained his innocence and said he was not able to respond to specific charges because they had not been provided to him in written detail. Speaking for her husband, attorney Barbara Jendryczko said that Jendryczko intends to fight the allegations and media reports in court if necessary. On 14 January, Jendryczko also published a letter in the newspaper Rzeczpospolita of Katowice, denying Wroñski's charges and suggesting that Wroñski's attack was motivated by a private grudge. He also criticized Wroñski for not seeking an explanation directly from the accused before going public, adding that even the most dangerous criminals are allowed to defend themselves before being judged.

    Wroñski—a Polish-educated M.D.-Ph.D. who studies cancer therapy outcomes at the Staten Island University Hospital in New York—says he had never heard of Jendryczko before he read his papers last year. His inquiry began last June, he says, after he came across a cryptic note in the Danish Medical Bulletin of September 1996. Under the heading, “Work originating from Denmark, translated into Polish and published in a Polish journal. Four Polish scientists guilty of scientific dishonesty,” the Danish Committee on Scientific Dishonesty reported that it had confirmed a case of plagiarism. The incident had come to light when Danish authors of a paper published in 1989 found a duplicate of their abstract on Medline under other names. The abstract related to a paper published in 1992 in the Krakow-based journal Przeglad Lekarski. The committee added that the “principal Polish ‘author’ … admitted the plagiarism and apologized,” while “the head of the principal author's department, who co-authored the article, also apologized for the act and felt that his trust had been abused.” No names were given.

    Science has obtained copies of letters to the Danish committee signed by Jendryczko and his former department chair, Marian Drózdz—two of the four co-authors on the Polish paper. Drózdz's 13 February 1995 note thanks the committee for its work and says that, “The act of Professor Andrzej Jendryczko who, as an independent scientist, enjoyed a great extent of trust at that time is hurting even more because of having abused the entrusted confidence.” Drózdz wrote that he was turning the case over to the university ethics committee. In an undated note that month, Jendryczko wrote to the Danish panel, “Although incontestably I am to blame, I would like you to accept my excuse. …” He explained that he admired the work of the Danish group, had translated it, used its techniques to explore a similar topic, and because of “disorder and one of the author's neglect” [not his own] had published the translation. He said it wouldn't happen again.

    Wroñski was most upset by what he views as the mild initial reaction of Polish authorities to the Danish findings. After a closed, 13-month inquiry during which Jendryczko spent 6 months on paid leave, Polish university authorities concluded that the statute of limitations had run out and that Jendryczko could not be punished. Although Jendryczko resigned from MUS in early 1997, he was appointed a professor at the Polytechnic Institute of Czestochowa, where he now works.

    Wroñski says many people knew about this case, “but nobody said a word.” Misconduct, he claims, was “protected by the old guys' network.” When Wroñski began asking questions about it last summer, he adds, “many people told me to be quiet—they said I was going to destroy Polish science. … But during my 8 years' stay in America, I learned a completely different way of behaving.” In the United States, Wroñski says, “people are disciplined in the light of all their colleagues and the public.” Polish scientists, he says, are just as capable as Americans and should hold themselves to the same standards.

    After looking into the Danish case, Wroñski began examining the rest of Jendryczko's oeuvre. He began searching Medline for Jendryczko's work and says he was amazed to find that about 125 medical papers by the engineer were indexed. “I found that he had published 125 papers in Medline in 13 years—60% of them original work,” says Wroñski. “And in one year—1993—he published 16 original papers.” Moreover, these papers cover a wide range of medical specialties, reporting new findings on mitochondrial DNA and aging, estrogen and myocardial infarction, neonatal growth, zinc and copper in cancer tissue, cholesterol and hypertension, antioxidant enzymes in the placenta, intracellular responses to cancer, menopause, the effects of selenium, the effects of ionizing radiation, and many others. In one year, says Wroñski, Jendryczko—who is not an M.D.—published two papers reporting data from 300 patients in one case and 1000 in another, without crediting the numerous physicians who must have treated them.

    Wroñski says he wrote first to Zbigniew Religa, a personal acquaintance and a renowned cardiologist who is president of MUS, in August, urging that the investigation of Jendryczko be opened to the public. In the meantime, he asked friends in Poland to obtain photocopies of Jendryczko's papers, and in less than a week, he says he received a package of 90 papers. In September, Wroñski compared the texts of Jendryczko's published papers with suspected source papers, which he had identified through the “find related articles” search function of Medline (see sidebar). By mid-September, Wroñski says, he had found 20 papers that he views as clear instances of plagiarism; by November, he had found nine more. In one case, Wroñski says, a section of a paper published in 1989 in the British Medical Journal (BMJ) was combined with part of a 1992 paper from The New England Journal of Medicine (NEJM) to create a composite article published in 1993 in Zentralblatt für Gynäkologie. Editors at both BMJ and NEJM say they agree with Wroñski's interpretation and are awaiting word from Zentralblatt. The editor of Zentralblatt, H.P.G. Schneider, has not responded to Science's queries sent by fax and e-mail.

    Wroñski claims that his initial letter to Religa—and more specific allegations of plagiarism he sent to Religa and the university's vice president for scientific affairs, Tadeusz Wilczok, in September—were ignored until mid-November. Religa could not be reached for comment, but Wilczok responded on 15 January with a note denying any hesitation. Wilczok says that Religa “acted immediately and ordered the main library to produce the originals” of the suspect publications. Wilczok says the university has appointed three investigative panels to clear up this case. Although these investigations have not yet been completed, on 17 December, the MUS senate passed a resolution—which has been obtained by Science—stating that charges of plagiarism against Jendryczko “and various co-authors” are “fully substantiated.” It voted to “most severely condemn” the alleged misconduct. The Polytechnic Institute of Czestochowa, meanwhile, has also appointed an investigative committee to look into alleged plagiarism by Jendryczko, according to its president, Janusz Szopa.

    While these investigations run their course, a more immediate result of Wroñski's sleuthing could be the establishment of a formal mechanism for investigating allegations of misconduct in Poland. In a telephone interview with Science, Wiszniewski, president of the state committee for scientific research, said this experience has convinced him that Poland needs a national committee of “respected names” to review such allegations.


    The Internet: A Powerful Tool for Plagiarism Sleuths

    Eliot Marshall

    It's a safe bet that Polish chemical engineer Andrzej Jendryczko could have retired quietly from a long research career without facing charges of plagiarism had it not been for the Internet. It was thanks to the Net's remarkable power to link scholars and libraries across continents and to serve up instantaneous comparisons of texts that Jendryczko's accuser, cancer researcher Marek Wroñski of Staten Island University Hospital in New York, was able to unearth a trove of 30 allegedly plagiarized medical papers last year (see main text).

    The Internet delivered the first clue in 1994, when a Danish researcher named Jan Fallingborg looked up articles on selenium on Medline, the U.S. National Library of Medicine's (NLM's) computer service that searches and retrieves medical abstracts for a fee. He was surprised to find that, along with his own 1989 abstract on this topic, the computer coughed up a nearly exact duplicate version published in 1992 by four Polish authors. Danish officials investigated and concluded in 1995 that the Fallingborg paper had been plagiarized. They published the finding in 1996, but with no names.

    Last June, Wroñski saw the Danish note, obtained Jendryczko's name, and began surfing the Net for evidence of other potential instances of plagiarism. After finding what he considered to be a startling number of articles by Jendryczko—125 over a 13-year career—he set out to find source articles from which their texts might have been borrowed.

    By chance, Wroñski's investigation received a boost from Vice President Albert Gore in June. Gore persuaded the NLM to open its Medline service to the public, free of charge, through an easy-access gate known as PubMed. One of PubMed's most valuable features, designed by the National Center for Biotechnology Information (NCBI), is a push-button function labeled “find related articles.” NCBI director David Lipman explains that this “neighboring” function was developed by John Wilbur, an M.D. with a Ph.D. in mathematics. It uses statistical algorithms to identify root words in a selected article and scans the entire Medline database for other records that use the same words and are likely to cover the same topic. After its first pass through the database, it concentrates the search by giving extra weight to root words that appear more than once in the initial batch of candidate records. It's a powerful tool if you're hunting for suspected plagiarism. After poking around in PubMed during the evening and on weekends, Wroñski identified an additional 29 suspect papers.
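    The two-pass scheme described above — score records by shared root words, then re-rank after up-weighting words that recur in the first batch of candidates — can be illustrated with a toy sketch. This is emphatically not NCBI's actual algorithm, and the miniature "abstracts" and scoring rules below are invented for illustration only.

    ```python
    # Toy sketch of a two-pass "related articles" search: rank by word
    # overlap with the query, then re-rank with extra weight on words
    # that recur in the initial batch of candidates.
    from collections import Counter

    def words(text):
        # Crude stand-in for "root words": longer, lowercased tokens.
        return [w for w in text.lower().split() if len(w) > 3]

    def related(query, corpus, top=2):
        q = set(words(query))
        # First pass: score every record by raw overlap with the query.
        first = sorted(corpus, key=lambda d: len(q & set(words(d))), reverse=True)
        # Second pass: count how often each query word appears across
        # the initial candidate batch, then re-rank with those weights.
        batch_counts = Counter(
            w for d in first[:top] for w in set(words(d)) if w in q
        )
        def score(d):
            return sum(batch_counts.get(w, 0) + 1 for w in q & set(words(d)))
        return sorted(corpus, key=score, reverse=True)[:top]

    corpus = [
        "selenium levels in serum of patients with chronic disease",
        "zinc and copper concentrations in cancer tissue samples",
        "serum selenium and glutathione in patients with ulcerative colitis",
    ]
    query = "selenium and glutathione peroxidase in serum of patients"
    for hit in related(query, corpus):
        print(hit)
    ```

    On this invented corpus, the abstract sharing the most reweighted terms with the query ranks first — the same sort of statistical "neighboring" that let a duplicate abstract surface next to its source.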

    Lipman says he does not know of anyone else who has used PubMed to hunt for plagiarism this way. He recalls, however, that fraud hunter Walter Stewart, a staffer at the National Institutes of Health, once approached him asking for help in devising algorithms that would compare texts and give a numerical culpability rating for plagiarism. Lipman declined. But in PubMed, now accessed by 39,000 individuals a day, NCBI has handed a weapon to would-be fraud police like Wroñski.


    Study Suggests New Way to Gauge Prostate Cancer Risk

    1. Marcia Barinaga

    Prostate cancer is many men's worst fear. The most common cancer to strike nonsmoking men, it has felled a long list of public figures, including rock star Frank Zappa and 1960s guru Timothy Leary. Other aging baby boomers are now demanding blood tests to see if their prostate specific antigen (PSA) levels are high, a sign that cancer may already lurk in their prostates, and many men can rattle off their PSA levels with as much familiarity as their cholesterol counts.

    The PSA test detects the signal of cancers already under way, but it can't identify men at high risk of getting prostate cancer before they develop the disease, in the way cholesterol serves as a heads-up for heart disease. But now, a team of researchers from Harvard and McGill universities has come up with a molecule that may provide just such an early warning. They report on page 563 of this issue that men whose blood contained high levels of a protein called insulin-like growth factor-I (IGF-I) were four times more likely to develop prostate cancer than were men with the lowest IGF levels. “Of all the associations [with prostate cancer] we have looked at, this is one of the strongest,” says Ann Hsing, an epidemiologist who studies prostate cancer at the National Cancer Institute (NCI). Assuming that these results are confirmed, they “have implications for prevention, detection, and treatment,” says NCI cancer epidemiologist Joseph Fraumeni. An IGF-I test might be used to identify high-risk men who need close monitoring or to recognize potentially aggressive tumors while they're still small, and it could also lead to ways of lowering men's risk.

    The authors stress that such potential applications are still years away. “We are not suggesting that this report allows us to determine clinical practice,” says senior author Michael Pollak, a clinical oncologist at McGill University in Montreal, “but there are lines of investigation that it opens up.”

    Any new approaches that might help manage this slippery disease would be welcome. Just last week, the American Cancer Society called for increased research and education on prostate cancer, especially for African-American men, who are twice as likely to develop the disease as whites are. As awareness grows, experts are struggling with thorny issues of whom to screen and whom to treat, because some argue that mass screening is not cost-effective. They note that the PSA test may help spot some cancers early, but because many cancers don't advance to life-threatening disease even if left untreated, the test probably leads to many unnecessary surgeries.

    Seeking a new angle on these screening and treatment dilemmas, the researchers turned to IGF-I. They already knew from laboratory studies that the molecule is a powerful growth factor that stimulates the growth of both cancerous and normal prostate cells. To see if it was linked to disease in people, the team studied nearly 15,000 men enrolled in the Physicians' Health Study at Harvard. Those men gave blood samples in 1982 and then were monitored for a wide variety of diseases and conditions. By 1992, 520 had been diagnosed with prostate cancer. Of these, 152 had blood samples large enough to be assayed for IGF-I; Harvard School of Public Health graduate student June Chan compared these 1982 IGF-I levels to those of 152 men without cancer who were matched for age and other factors. “The basic question we addressed,” says team member Edward Giovannucci of Harvard Medical School, “is if you look at the normal range of [IGF-I] variation in men, [does] having relatively high levels render a man at higher risk for prostate cancer?” The answer was a resounding yes. When the men were sorted by their 1982 IGF-I levels, the 25% who fell at the high end of the IGF-I range were 4.3 times more likely to have developed prostate cancer than were the men at the low end.
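    The 4.3-fold figure is the kind of relative odds estimate a matched case-control comparison like this one produces. The sketch below shows the arithmetic; the counts are hypothetical, chosen only to illustrate how an odds ratio near 4.3 arises, and are not the study's actual data.

```python
def odds_ratio(cases_hi, controls_hi, cases_lo, controls_lo):
    """Odds ratio comparing subjects in the top exposure quartile
    (here, high IGF-I) with those in the bottom quartile. Each
    argument is a count of cases or matched controls."""
    return (cases_hi / controls_hi) / (cases_lo / controls_lo)

# Hypothetical counts for illustration only (not the study's data):
# if 60 cases and 30 controls fell in the top IGF-I quartile, and
# 20 cases and 43 controls in the bottom, the odds ratio would be 4.3.
estimate = odds_ratio(60, 30, 20, 43)
```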

    In most cases, it appears that the men didn't have cancer in 1982, because their PSA levels were normal and their cancer wasn't diagnosed until an average of 7 years later. That means “IGF is telling you something before the disease occurs,” says Giovannucci, “just like if you have a cholesterol [level] of 300, you are more worried [about heart disease] than if you have a [level] of 180.” Drugs can lower IGF-I levels, so if the molecule does indeed help cause cancer rather than merely being associated with it, then it might be possible to lower a man's risk of prostate cancer.

    The team's findings also offer some hope that IGF levels may help physicians determine which tumors are most likely to grow large enough to be invasive and life-threatening. The researchers tested all the 1982 blood samples for PSA and found 60 men with high levels, suggesting that they already had undetected early-stage cancer. As in the group as a whole, these men showed a range of IGF-I levels, but those with the highest levels were four times more likely to develop full-blown cancer than were those with low levels, suggesting that high IGF-I increases the chance that prostate cancer will grow.

    Because the numbers are small, that finding is far less certain than the paper's main conclusions, says Pollak. But it has already sparked interest in further studies. A Stanford research team has spent years sectioning surgically removed prostates to define tumor traits that predict aggressive disease, notes Donna Peehl, a tumor biologist at Stanford Medical School who studies prostate cancer. She's eager to see whether IGF-I levels correlate with those traits, and so might be an indicator of a tumor's aggressiveness.

    Indeed, the biggest effect of this paper may be to stir up new research. “A paper like this, which is so provocative, raises a whole spectrum of questions,” says cancer epidemiologist David Schottenfeld of the University of Michigan School of Public Health. For example, the men in the Physicians' Health Study were overwhelmingly white, and he'd like to examine IGF-I levels in African-American men, given their higher risk of prostate cancer. And because IGF is a growth factor with influence on many tissue types, the researchers are also looking at other cancers. In as-yet-unpublished work, Pollak and colleagues have found an equally strong association between IGF-I levels and the risk of breast cancer, and they are now seeking links to colon cancer. This next round of research may help determine whether people will someday watch their IGF-I levels as carefully as they plot PSA and cholesterol today.


    Bose Credited With Key Role in Marconi's Radio Breakthrough

    1. Jeffrey Mervis,
    2. Pallava Bagla
    1. Pallava Bagla is based in New Delhi.

    DELHI, INDIA—The Italian physicist Guglielmo Marconi holds a secure place in the history books for decoding the first wireless message sent across the Atlantic Ocean. That achievement, on 12 December 1901, ushered in the modern era of electronic communications. But it also triggered a century-long debate over who should get the credit for developing the receiving device that captured the famous message, sent from England to Newfoundland via Morse code.

    This month, an article in a special issue of The Proceedings of the IEEE, marking the 100th anniversary of the diode and the 50th anniversary of the transistor,* makes a definitive case for Jagadis Chandra Bose, an Indian biologist and physicist. Bose announced the invention in an 1899 paper presented at the Royal Society in London, writes Probir Bondyopadhyay, a satellite and communications engineer at Johnson Space Center in Houston and also an amateur historian. In contrast, says Bondyopadhyay, Marconi “was like a honeybee collecting honey from different flowers” to improve his wireless transmitter. “And he never gave credit to those who deserved it.”

    The device, called a self-recovering coherer, contained a sequence of iron-mercury-iron in a vacuum tube that was able to receive a long-distance message by continually resetting itself before each pulse. Bondyopadhyay says Marconi may have deliberately tried to divert attention from Bose's contribution by leaving the impression that it came from others, including an Italian naval officer.

    Ironically, Bondyopadhyay was drawn into the dispute more than a decade ago at the request of Marconi's daughter, G. Marconi Braga, who was upset about media reports (including a 1984 article in The New York Times) stating that Marconi should share credit with Nikola Tesla and others for inventing wireless radio. Braga, who died last year, “asked me to look into the matter,” says Bondyopadhyay. But instead of buttressing Marconi's claims, his investigations led him to Bose's role in advancing the technology. “I'm a historian. I find the facts and publish the facts. … By clarifying this thing, all I am trying to do is to set the record straight.”

    Amplifiers were not available in the early days of radio telegraphy, so the reception of messages depended on receiver sensitivity. Although Marconi and Bose succeeded in communicating across a few kilometers in separate experiments during 1895, a better version was needed for long-distance signals.

    Questions about the coherer's true origin arose shortly after Marconi announced his results. The editor of a prominent Italian technical magazine, L'Elettricista, made the case for an Italian navy signalman, P. Castelli. In response, Marconi said the receiving device he used was a gift from the Royal Italian Navy through his childhood friend, Luigi Solari, a Navy lieutenant. But in a July 1902 letter to the editor of The Times of London, Solari wrote that the idea came to him “in some English publication which I found myself unable to trace.” One year later, in the same newspaper, he declared that he “did not invent the coherer.” This sequence of events was first pointed out by a British historian, Vivian Phillips, in a 1993 paper in IEEE Transactions. But Phillips didn't mention Bose or speculate about the identity of the real inventor—the author of the mysterious publication to which Solari referred.

    The solution, however, was readily available in the literature. Bose, a maverick scientist working out of a one-room laboratory in Calcutta, offered it in a paper that appeared in the April 1899 issue of the Proceedings of the Royal Society. Titled “On a Self-Recovering Coherer and the Study of the Cohering Action of Different Metals,” the paper described the use of an iron-mercury coherer for detecting radio waves, then called electric radiation.

    “For very delicate adjustments of pressure,” Bose wrote, “I used in some of the following experiments an U-tube filled with mercury, with a plunger in one of the limbs; various substances were adjusted to touch barely the mercury in the other limb. … I then interposed a telephone in the circuit; each time a flash of radiation fell on the receiver the telephone sounded.” After a series of experiments, Bose concluded that “there can be no doubt that the action was entirely due to electric radiation.”

    In his IEEE Proceedings paper, Bondyopadhyay describes how Marconi, in the years after the experiment, “shifted attention” away from Bose's contribution through a “careful choice of words … and clear diversionary tactics.” And he suggests that the obfuscation was deliberate. “Marconi didn't disclose immediately what he used in receiving his message,” says Bondyopadhyay, noting the inventor's vagueness about the device in a New York speech 1 month after his landmark experiment and later that spring in London. “There was a bad motive involved, I suspect, but I don't come down too hard on him for that,” the engineer adds.

    Bondyopadhyay also explains why the controversy wasn't nipped in the bud, pinning some of the blame on Bose's scientific colleagues. “It is embarrassingly obvious that the British learned men of the day … never discovered Bose's work, [despite its being] so prominently displayed in the most prestigious publication of the British empire. It is clear that they never read this esteemed publication [or] did not connect Bose's work with Marconi's use of the device.”

    Prasanta Kumar Ray, a biochemist and current director of the Bose Institute in Calcutta, applauds Bondyopadhyay for correcting “a grave historical injustice” that robbed Bose of a share of Marconi's 1909 Nobel Prize. “No one can deny that it was Marconi who used and utilized this discovery for the larger benefit to mankind, but Bose made the actual scientific discovery,” says Ray. As to why Bose himself didn't clear up the mystery, Ray notes that “Bose was in a search for true knowledge, and he shunned crass commercialization of inventions.”

    Even Italy's former science minister, Umberto Colombo, says he's glad for the new information. “I am not surprised about this revelation against Marconi,” he told Science. “But it will not undermine Marconi's solid position in the history of science and in commercializing wireless telegraphy.”

    • * January 1998, Vol. 86, No. 1.


    Cell Division Gatekeepers Identified

    1. Elizabeth Pennisi

    The stretches of DNA known as kinetochores not only link chromosome pairs to the fibers that separate them in dividing cells but also regulate the timing of that separation

    This News report accompanies a special issue on the cytoskeleton that begins on page 509.

    Just as a group of young schoolchildren must all get in line and be counted before they can get on the bus and head home from a field trip, all the chromosomes inside the cell's nucleus must also line up before the cell can divide. The object, of course, is to make sure that no chromosome gets left behind when the daughter cells separate. Otherwise, one daughter cell could wind up with too much genetic material, and the other with too little.

    Checkpoint on. This chromosome pair can't split because MAD2 proteins leaving the kinetochores link to p55CDC/cdc20 and inactivate the cyclosome/APC, which in turn can't break down the kinase/cyclin complex that blocks cell cycle progression. But when microtubules attach to the chromosome, CENP-E changes its interaction with BUB1, thus altering MAD2 and preventing it from inhibiting the APC. It takes just one unaligned chromosome (blue, in inset) to delay the cell cycle.


    For decades, cell biologists have wondered just how the dividing cell keeps track of its genetic charges. Now a convergence of research in yeast genetics, cell biology, and biochemistry suggests an answer: Molecular nannies called kinetochores keep a sharp eye on the chromosomes' status. These specialized bits of DNA and protein were already known to have a mechanical role in cell division. They attach the duplicated chromosomes to the fibers of the mitotic spindle, which eventually pulls the chromosomes apart. But the new work, some of which was presented last month at the annual meeting of the American Society for Cell Biology in Washington, suggests that the kinetochores are far more than just anchors for the spindle fibers. “The kinetochore is very active in both the mechanics and the control of chromosome movement,” says Richard McIntosh, a cell biologist at the University of Colorado, Boulder.

    Acting in conjunction with a handful of proteins that tie them to the spindle fibers, the kinetochores make sure that the chromosomes are not pulled apart until every one of them is lined up and attached to the spindle. Kinetochores that have not yet hooked up to the spindle fibers apparently release a protein signal that works with other proteins to put the brakes on cell division. But when the spindle attaches, a protein involved in that linkage somehow disables the wait signal, thereby releasing the brakes on mitosis.

    The work is intriguing, says Don Cleveland, a cell biologist at the University of California, San Diego (UCSD), because it adds to recent evidence that the cytoskeleton—the cables, struts, and anchor points inside the cell—doesn't just carry out orders issued elsewhere in the cell. Instead, it is an integral part of the cell's command structure, generating signals of its own. In this case, says Cleveland, “[the work] links the cytoskeleton to the continuation of the cell cycle.”

    Understanding that connection can in turn reveal how the cell cycle goes awry. Errors in chromosome separation can lead to aneuploidy—daughter cells with too many or too few chromosomes—which underlies disorders such as Down syndrome and may play a role in aggressive cancer by causing cells to “go genetically crazy,” says McIntosh. “Aneuploidy is a fairly important problem.”

    The current view of kinetochore action began to solidify about 4 years ago when cell biologist Conly Rieder of the New York State Department of Health's Wadsworth Center in Albany decided to test a hypothesis that McIntosh had suggested a few years before. McIntosh wanted to explain why cell division does not occur until every one of the duplicated chromosomes lines up along the cell's midline and attaches to the spindle, which consists of filaments called microtubules that grow into the cell interior from the opposite ends of the cell. “The failure of a single chromosome to attach is sufficient to keep the cell cycle from going forward,” Cleveland says.

    Some researchers thought that this “checkpoint” in the cell cycle couldn't be released until each chromosome had attached to the spindle fibers and individually given its OK. But McIntosh suggested that it would be simpler if unconnected kinetochores simply gave off a “wait” signal that was somehow silenced by spindle attachment. Although the idea was well received, Rieder worried that no one had demonstrated that there really was such a delay or that it was tied to the kinetochores.

    To try to find out, Rieder and his colleagues filmed more than 100 cells as they divided. They found the predicted delay. While the time it took for the chromosomes to line up varied, they always separated 20 minutes after the last one was in position. What's more, Rieder's results indicated that unattached kinetochores are the source of the wait signal: If he destroyed the last unattached kinetochore with a laser, the cell proceeded to divide as if that chromosome pair were already in line. At Duke University in Durham, North Carolina, Bruce Nicklas, by mechanically manipulating chromosome pairs, also showed that waiting is an essential part of the cell cycle. “That led to the universal idea that the action is at the kinetochore,” says Andrew Murray, a cell biologist at the University of California, San Francisco.

    Another experiment conducted at about the same time strengthened that conviction. Working together, cell biologist Gary Gorbsky of the University of Virginia, Charlottesville, and Nicklas demonstrated that some proteins in the kinetochore change their character, losing phosphate groups, once the spindle fibers attach. “They were the first to correlate a change in kinetochore chemistry with a change in attachment [dynamics],” says Rieder. “The field entered warp speed.”

    To find the proteins causing these changes, researchers decided to take a hard look at two sets of genes that yeast geneticists had identified in 1991 as essential for making the yeast's chromosomes wait until they were all lined up before separating. M. Andrew Hoyt and his colleagues at Johns Hopkins University found one set of three genes, which they named bub1, -2, and -3. And Murray found a different set of three, dubbed mad1, -2, and -3. Yeast cells and their chromosomes are too small, however, for researchers to identify where the proteins produced by these genes are concentrated.

    Researchers had better luck in the much larger cells of mammals and frogs, where they were able to find and track the equivalents of the yeast proteins. The kinetochore proved to be the center of the proteins' activities. Murray's team found MAD2 on unattached kinetochores of frogs, and Yong Li and Robert Benezra at the Memorial Sloan-Kettering Cancer Center in New York City identified the human MAD2 (Science, 11 October 1996, pp. 242 and 246). The human protein, Li and Benezra later found, concentrates at the kinetochore before the chromosomes become attached to the spindle but leaves by the time they are aligned. And Frank McKeon at Harvard Medical School, who identified BUB1 in mice last year, found that that protein also accumulates at the kinetochore during mitosis.

    Now researchers are unraveling the molecular choreography—staged at the kinetochore—that enables these proteins and some newly discovered ones to regulate chromosome separation. “All the proteins that have been identified interact in one way or another at the kinetochore,” says Gorbsky. “And many of the components are very dynamic, coming on and off the kinetochore.”

    One of those dynamos is MAD2, which is involved in the wait signal. At first, the high concentration of MAD2 found on the kinetochores until microtubules attach seemed to be necessary for arresting cell division. When Gorbsky injected rat kangaroo kidney cells growing in culture with antibodies that bind MAD2 and prevent it from functioning, “we had premature separation, when some of the chromosomes were not well connected, or tangled up,” he said at the cell biology meeting.

    But now it seems that MAD2 does more than simply accumulate on unattached kinetochores. Copies of this protein are also continuously migrating into the cytoplasm, where they somehow broadcast the wait signal throughout the cell. Gorbsky has evidence that in mammals, MAD2 transmits its signal by associating with a protein known as p55CDC, while Murray sees it link up with the equivalent protein in yeast, which is called cdc20.

    These duos then apparently head for a large protein complex called the cyclosome or APC (anaphase promoting complex). When active, the complex helps the cell initiate anaphase—the stage of the cell cycle when chromosomes separate and move toward opposite ends of the cell—by catalyzing the breakdown of cyclins and other proteins that otherwise put the brakes on mitosis. For example, the APC likely gets rid of the proteins that glue chromosome pairs together. But the presence of MAD2 and its partners keeps the APC in check, stalling out cell division. “As long as you have free kinetochores, the [wait] signal is continuously generated,” says Gorbsky.

    Then, as each kinetochore connects to microtubules, the signal from that kinetochore is silenced, until eventually no more wait signals are being generated in the cell. For this to happen, each kinetochore must have some way of recognizing when it has linked up with the mitotic spindle. At the cell biology meeting, Tim Yen of the Fox Chase Cancer Center in Philadelphia described how this recognition and silencing could occur. It seems that the BUB proteins, together with a protein called CENP-E that helps tether the microtubule to the kinetochore, trigger a series of changes along the checkpoint pathway that ultimately shuts down transmission of the “wait” command, at least in human cells.

    Seven years ago, while working together, Yen and UCSD's Cleveland had found that CENP-E, a molecular motor that helps transport cellular components along microtubules, associates with kinetochores. In test tube experiments and in live mammalian cells, they also found evidence that CENP-E plays a role in achieving accurate chromosome alignment during mitosis. For example, when the researchers removed CENP-E from the system, mitosis occurred before the chromosomes had a chance to line up.

    Both Yen's and Cleveland's labs have now shown that CENP-E is important for linking kinetochores to microtubules. In addition, Yen's team has tied this protein to human BUB proteins, which were discovered recently in his lab by Gordon Chan and Sandra Jablonski. As chromosomes begin to pair off, the human BUB proteins move to the kinetochore. There, a BUB1 protein, in conjunction with BUB3, links up with CENP-E. “This kinetochore association puts [these proteins] in the right place” for monitoring spindle assembly, says Hoyt.

    Yen thinks that when the microtubule links up with the CENP-E, it somehow changes the structure of the molecular motor protein. This change may in turn alter BUB1's activity, possibly by removing phosphate groups from it. Whatever the nature of the change, it apparently leads to the disabling of MAD2, making this protein incapable of binding to p55CDC and shutting down the APC. As a result, mitosis can proceed.

    This scenario jibes with new findings in yeast as well. At the meeting, Hoyt's team reported that BUB1 is also an early component of the checkpoint pathway in yeast. That group also finds that BUB1 works with BUB3 while the wait signal is being generated. “Not only is the kinetochore a structural part of the chromosome that contains molecular motors, but you have regulatory components [there], all in the same place,” Yen explains.

    Of course, McIntosh and others emphasize that there are still gaps in the picture they are developing. They don't know all the molecules involved. Nor have they traced the connections among all the ones they do have in hand. It is unclear, for example, what changes occur in MAD2 as the kinetochore attaches to the spindle or what the links are between MAD2 and the BUB proteins. Moreover, cell biologist Douglas Koshland from the Carnegie Institution of Washington in Baltimore, Maryland, worries that too much emphasis is being placed on the kinetochore. “There is a fair amount of evidence that everything is going through the kinetochore,” he says. “But it's not necessarily hard fact” that it is the sole source of the “wait” signal.

    Yet kinetochore researchers are encouraged by their glimpses of how some of these molecules do interact. “When you know the players, you can construct hypothetical pathways to test,” McIntosh explains. “It's all falling into place.”


    Did Galaxies Bloom in Clumps?

    1. Govert Schilling
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    WASHINGTON, D.C.—The beginnings of the great clusters and walls of galaxies seen in today's universe may date back practically to the big bang. By searching the neighborhood of distant quasars—galaxylike objects so bright they can be seen shining from a time when the universe was less than a billion years old, or 10% of its current age—astronomers have found that nearly every one has a fuzzy companion galaxy or two. These small gatherings in the infant universe, says team leader George Djorgovski of the California Institute of Technology in Pasadena, are “the possible cores of future rich clusters of galaxies.”

    They are also a challenge to the notion that the clumpiness of today's universe emerged fairly recently. If the universe contains as much mass as some theorists believe, the formation of dense clusters would have been retarded by the gravity of the surrounding universe. But the belief in a dense universe has already taken a blow from the discovery of great walls of galaxies when the universe was just 2 billion years old (Science, 4 April 1997, p. 36). Now, Djorgovski thinks he can discern large-scale structures even earlier in cosmic history.

    Like all astronomers wanting to probe the farthest reaches of the universe, the team had to rely on quasars, because they are so much brighter than ordinary galaxies. At great distances (also referred to as high redshifts because light originating there is drastically reddened by the expansion of the universe), observers have cataloged dozens of quasars. But because only a small fraction of galaxies flare up as quasars, these objects by themselves can't reveal clustering in the early universe. That would be like trying to learn where cities are concentrated by mapping only the ones that have a Q in their name. So Djorgovski's team used the light-gathering power of the 10-meter Keck telescope at Mauna Kea, Hawaii, to search the surroundings of the most distant quasars, at redshifts of between 4 and 5, for neighboring faint galaxies at the same distance.
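    The article's figure of “less than a billion years old, or 10% of its current age” follows from the redshift: in a simple flat, matter-dominated cosmology, the age of the universe scales as (1 + z)^(-3/2). The sketch below uses that model purely as a rough guide; the exact fraction depends on the cosmological parameters adopted, and the function name is invented.

```python
def age_fraction(z):
    """Fraction of the universe's present age at redshift z under a
    flat, matter-dominated (Einstein-de Sitter) cosmology, where
    t(z) / t0 = (1 + z) ** -1.5. An order-of-magnitude guide only."""
    return (1.0 + z) ** -1.5

# A quasar at redshift 4 is seen when the universe was under a tenth
# of its present age, consistent with the figures quoted above.
frac_z4 = age_fraction(4.0)
```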

    “This is very much work in progress,” says Djorgovski, who presented the preliminary results early this month at a meeting of the American Astronomical Society here. “Only some 10 quasar fields have been studied so far, but in nearly every case, we found at least one companion galaxy at the same redshift as the quasar. This is the first clear detection of primordial large-scale structure at redshifts larger than 4.” Djorgovski points out that the quasar companions found by his team are not yet full-fledged galaxies. “There hasn't been enough time [since the big bang] for these things to be anything else than primordial protogalaxies,” he says.

    Charles Steidel of Caltech, who identified galaxy groupings at redshifts of about 3, when the universe had reached the 2-billion-year mark, isn't sure that Djorgovski's team really has discovered the precursors of the structures he sees. “They're using a different approach, observing only very small fields,” he says. And Neta Bahcall of Princeton University is troubled by Djorgovski's finding that the distant quasars seem to have more companion galaxies than quasars at lower redshifts. “This is not what you expect, since further clustering [over time] would only increase their numbers,” she says. But Bahcall, who advocates a low-density universe, agrees that Djorgovski has found strong evidence for very early clustering.

    The observations suggest that only scattered regions of the early universe were dense enough for galaxies to form, so the first galaxies naturally appeared in clumps. “These are very special places in the universe,” says Djorgovski. “Chances are that we miss most of them when we observe random spots on the sky.” If so, astronomers looking for action in the early universe need to follow the bright lights of quasars.


    Mimicking an Enzyme in Look and Deed

    1. Robert F. Service

    Making model airplanes and ships look like the real thing requires a delicate touch. But getting those models to actually fly or sail requires another level of sophistication entirely. So it is with efforts to create durable small molecules that look and act like enzymes, the biological catalysts that carry out a multitude of chemical reactions in living things—and, increasingly, in industry. Over the past few decades, numerous research teams have come up with small molecules that resemble the heart of one enzyme or another. But even when these models catalyze the same chemical reactions as the enzymes, they rarely do so in the same way. And those models that faithfully duplicate an enzyme's function rarely resemble their mentor.

    Now, a new model catalyst, reported on page 537 of this issue, represents “one of the first good examples to bring together both the structural and reactivity aspects,” says Tom Sorrell, an organic chemist and enzyme modelmaker at the University of North Carolina, Chapel Hill. This model, the work of a five-member team at Stanford University led by inorganic chemist Dan Stack and structural expert Keith Hodgson, mimics the active site of an enzyme known as galactose oxidase. Like the enzyme, it works at ordinary temperatures and pressures to transform one set of organic compounds—alcohols—into other compounds called aldehydes, which serve as intermediates for making still other essential compounds. It “is quite an impressive piece of work,” says Harvard University inorganic chemist Richard Holm.

    The new galactose oxidase mimic works too slowly to be useful in industrial processes, where the alcohol-to-aldehyde conversion is a first step in making everything from drugs to perfumes. But Holm and others say it may point the way to other enzyme mimics that could reduce the complexity and cost of many industrial reactions—as well as lower their output of unwanted, polluting byproducts. Properly designed model catalysts could work on a wider range of starting materials and in harsher conditions than enzymes can. And models that have the same basic structure as the corresponding enzymes are most likely to be efficient enzyme mimics, says Stack. “Nature has already solved the problem of how to run certain reactions. All we need to do is copy her.”

    Key to the function of both galactose oxidase and the new model is a single copper atom at the core of each compound. Copper, like other metals, is good at snatching electrons and giving them up to other atoms. But it excels at doing so only when positioned just right. The metal ordinarily hitches itself to four other chemical groups arrayed in a flat square with the copper atom at the center—a geometry that restricts copper's ability to interact with other compounds, because they can't get close enough. Galactose oxidase gets around this by forcing copper to bind to five chemical groups in a pyramid-shaped arrangement—with copper at the center of the pyramid's base—that keeps it a bit unsatisfied, looking for more action.

    In this case, the copper is linked to four separate amino acids and a water molecule, which takes up one of the prime reactive spots on the pyramid's base. Alcohols have a strong preference for these baseline positions as well. They kick out the water and take its spot. The copper then wrests electrons and protons—hydrogen nuclei—from the alcohol and transfers them to molecular oxygen, creating a molecule of hydrogen peroxide along with the aldehyde.

    To create the same reactive geometry in their model catalyst, Stack and graduate student Yadong Wang designed a set of organic arms that would bind to the copper atom and mimic the role of galactose oxidase's key amino acids. One group, called binaphthol, takes up two of copper's binding sites and warps its bonds into a pyramidal arrangement. Another two arms contain groups known as phenols. Finally, in the activated form of the model, a water molecule inserts itself in the pyramid's base, just as in the enzyme.

    The result is a compound that binds alcohols and then goes to work on them, much like the enzyme. In a series of spectroscopic experiments, Stack and Wang found that the compound duplicates the enzyme's reaction steps: After an alcohol molecule binds to the copper, the phenol arms help the copper swipe electrons and protons from the alcohol, producing the aldehyde, which drops off the catalyst. A molecule of oxygen then jumps into the free spot and snags the electrons and protons, forming hydrogen peroxide and regenerating the active catalyst in the process. X-ray structure studies of the model compound frozen in the initial step of this reaction—done by Hodgson, Jennifer DuBois, and Britt Hedman—helped support this picture.

    While the new model is one of the first small molecules to truly mimic an enzyme, Stack says it's likely that many will soon follow. X-ray imaging experts are getting ever sharper pictures of the heart of other metal-containing enzymes, giving the modelers crucial guidance. Nitrogenase, which takes nitrogen in the air and converts it to a biologically useful form by tacking on hydrogens, is among the modelers' most hotly pursued prizes. Concludes Stack: “The time is right for rapid progress in this area.”


    Irish Bridge Sheds Light on Dark Ages

    1. Sean Duke
    1. Sean Duke is a science writer in Dublin.

    The vaunted engineering skills that the Romans spread across Europe are supposed to have vanished during the “Dark Ages”—from the collapse of the Roman empire in the fifth century until about A.D. 1000. But a new find in the west of Ireland is challenging that assumption. A pair of underwater archaeologists has discovered the remains of a huge wooden bridge across the river Shannon. At 160 meters long, it may be the largest wooden structure from the early medieval period ever found in Europe, and its technical complexity has surprised archaeologists. Researchers now believe that the bridge, dated at A.D. 804, was the work of monks from the nearby town of Clonmacnoise, who kept Roman expertise alive over the centuries.

    “The Clonmacnoise bridge fills an important gap,” says archaeologist John Bradley of the National University in Maynooth. “There was no evidence of large bridges in Europe between the Roman era and about A.D. 1000.” It is unlikely to be the last such discovery, adds Mogens Schou Jorgensen of the National Museum of Denmark, an expert on the large wooden bridges built by the Vikings several centuries later. “I think that other similar bridges will now be found in Ireland, as happened in Denmark after the first Viking long bridge was uncovered in 1932,” says Jorgensen. If so, the finding could mean that a sophisticated land communications network may have been in place across Ireland in the 9th century.

    Donal Boland and Mattie Graham, divers who specialize in underwater archaeology, had begun their survey of the river Shannon after coming across an intriguing reference to a bridge in the Annals of Clonmacnoise, written in 1158. They concentrated on a 500-meter stretch of the river near the remains of the monastery. In 1994, with archaeological guidance from Fionnbarr Moore of the National Monuments Service of Ireland, they found what they were looking for: an ancient oak post sticking out of the muddy riverbed. By last fall, Boland and Graham had discovered a total of 130 timbers, all neatly arranged in pairs 5 meters apart, spanning the entire 160-meter width of the Shannon. They also found nine oak dugout canoes, from which workers may have driven the pilings deep into the riverbed, and the remains of an elaborate horizontal cross-bracing system that once supported a roadway.

    The line of the posts ran directly into the ruins of a 13th century Norman castle, leading the researchers to suspect at first that the bridge was also a Norman construction. But this theory was ruled out after they sent samples of the oak timbers to Queen's University Belfast for dating by tree-ring analysis. The Belfast researchers, led by Mike Baillie, said the timbers were felled in 804, a full 365 years before Norman invaders arrived from France.

    The focus of archaeologists then turned to the thriving 9th century monastic settlement at Clonmacnoise. The town of several thousand inhabitants straddled the point where an east-west route across Ireland known as the Eiscir Riada, or Esker Road, crossed the Shannon. “The bridge was built to attract commerce,” says Aidan O'Sullivan, the archaeological director in charge of Clonmacnoise, “and the leadership for the project was probably provided by the monks.”

    The discovery of the Clonmacnoise bridge has led archaeologists such as Bradley to question whether knowledge was really lost in the aftermath of the fall of Rome, at least in distant parts of Europe that were spared the chaos of the Dark Ages. “We know the Irish preserved Roman texts, and this find suggests that they may also have preserved Roman technology and bridge-building skills,” says Bradley. “Perhaps the Dark Ages were not so dark after all.”


    Getting a Handle on the Molecules That Guide Axons

    1. Evelyn Strauss
    1. Evelyn Strauss is a free-lance writer in San Francisco, California.

    Some nerve cells in a growing embryo act like people with an all-consuming crush. They zero in on the object of their attraction, the midline of the embryo, with seemingly single-minded focus. But, like many lovers, the neurons are fickle. Once their axons, the long projections they send out to contact the midline, have achieved intimacy with it, many ignore—or even spurn—the object of their obsession. Instead of remaining at the midline, the axons continue to grow, searching for their ultimate destiny elsewhere in the nervous system, where they will make specific connections with other cells.

    This switch in affections is critical because it allows axons to cross the midline so that the two sides of the nervous system can talk to each other, but it has mystified scientists. Now, work from several labs is revealing some of the molecular logic that enables axons to change their conduct so abruptly.

    Developmental biologists have discovered in the last several years how growing axons find their way to the midline in the first place: They are drawn in by attractive molecules, among them the proteins called netrins, released by midline cells. But realizing this only deepened the mystery of how the axons manage to continue on their journey. For example, certain kinds of axons grow across the midline and then turn sharply to run along the length of the body on the other side—and they never cross back. “These cells never think about the midline again,” says neurobiologist Tom Kidd, a postdoctoral fellow in Corey Goodman's laboratory at the University of California, Berkeley. “They forever ignore the signal that most strongly attracted them early in life.”


    Axons that have not crossed the midline (FP) of the rat-brain strip (green) turn toward a second midline tissue (eFP), while those that have crossed the midline (red) ignore the extra piece.


    The new results, which appear in five papers published this month and last in Cell, Neuron, and Science, are now revealing how axons complete their journeys through and beyond the midline. While many pieces of the axonal guidance puzzle remain to be found, together the papers show that a dynamic interplay of both attractive and repellent signals between the midline and the nerve cells themselves directs axon movements. For example, axons carry a surface protein called Robo that can prevent them from crossing the midline, apparently because it is the receptor for an as-yet-unidentified repellent molecule. But the midline itself makes another protein, known as Comm, which reduces Robo's concentration, allowing the axons to traverse the midline. Afterward, Robo's concentration shoots up, preventing the axons from retracing their steps.

    The midline also makes axons lose their ability to respond to the alluring signals that brought them there. Indeed, there are even indications that axons can completely change the nature of their responses—being repulsed instead of attracted by signals such as netrin-1. “These papers have shown very nicely that even a simple decision—whether or not to cross the midline—is actually very complex,” says developmental neurobiologist Lynn Landmesser of Case Western Reserve University in Cleveland. “The axon has to integrate multiple kinds of guidance cues, some positive and some negative.”

    Researchers hope that what they are learning about the molecules that guide axon behavior at the midline will eventually lead to a better understanding of how nerve cells know where to go as they set up the entire nervous system, including the brain and neuronal connections in the periphery. It could also lead to ways to tackle important medical problems, such as why damaged spinal cords don't regenerate. “If we knew more about what makes axons grow or inhibits them, we could understand why axons in the nervous system of a fully developed human being won't grow,” says developmental neurobiologist Tom Jessell, of Columbia University's College of Physicians and Surgeons in Manhattan.

    Researchers have focused on the midline because axonal behavior there is so easy to follow. “Axons behave in very dramatic and very stereotyped ways at the midline. It's easy to see when something is wrong,” says Marc Tessier-Lavigne of the University of California, San Francisco. Indeed, in the early 1990s, Guy Tear and Mark Seeger, who were both postdoctoral fellows in Goodman's lab at the time, took advantage of that stereotypical behavior to identify the genes that feature in the current work.

    They screened tens of thousands of mutant fruit fly embryos, looking for those in which either too few or too many axons crossed the midline. Two superstar mutants emerged from those experiments: roundabout (robo), so named because the axons in mutant animals meander back and forth across the midline, flagrantly disregarding the barrier that normally separates the animal into its two halves, and commissureless (comm), which displays the exact opposite behavior. Instead of crossing the midline and creating bridges, or commissures, between the halves, axons in comm mutants run straight up and down on both sides.

    By 1996, Seeger and Goodman's teams had cloned the comm gene and shown that its protein product resides on the surface of midline cells. They had also found that while Comm's presence there is normally essential for axons to cross the midline, it is not required when the robo gene is defective as well. In that case, axons in fruit fly embryos weave back and forth just as they do when only robo is faulty. That observation suggested that Comm and Robo somehow collaborate to keep axons on the right track. But how the two proteins might cooperate wasn't clear until now.

    In work described in the 23 January issue of Cell, Kidd, Tear (who is now at Imperial College in London), Goodman, and their colleagues have cloned the fruit fly's robo gene. Sequence analysis suggests that the protein encoded by the gene is a receptor, translating signals from the environment into decisions about how the axon should move. The researchers have not yet identified that signal, however.

    To get a better indication of what the protein does, Goodman and Tear's groups stained axons with antibodies that either detect Robo or bind to proteins specific to subpopulations of axons that have distinctive trajectories. That procedure enabled them to see that the rare axons that normally never cross the midline carry large amounts of Robo on their surfaces. Those that do cross display low levels of the receptor until after they have reached the other side. At that point, the axons dramatically turn up Robo production. These results, along with the observation that axons freely cross and recross the midline in mutants lacking Robo, suggest that the protein detects a chemical that deflects the axon tip from the midline.

    Comm enters the picture because it apparently turns down Robo's concentration so that axons can cross the midline, the researchers report in the January issue of Neuron. When they genetically engineered embryos to produce Comm in large quantities, they found reduced levels of Robo on axons, which crossed the midline unrestrained, just as they do in embryos missing the robo gene itself. “This is very exciting, because it shows that cells at the midline can influence how an axon can change its whole personality as it crosses,” says neurobiologist David Van Vactor of Harvard University. Those axons destined to remain on one side of the midline presumably have so much Robo from the outset that Comm can't overcome the large quantities present.

    The researchers are now trying to figure out exactly how Comm works. They don't know, for example, whether it interacts with other factors that help turn down the amount of Robo on the axon surface. Also, the predicted chemical, presumably made at the midline, to which Robo responds is still at large.

    But whatever its identity, other work indicates that the Robo-Comm axonal guidance system is widespread in the animal kingdom, suggesting that evolution developed this system for allowing axons to cross the midline and then exploited it over and over again. In an independent investigation, also described in the 23 January Cell, Jennifer Zallen, a graduate student in Cori Bargmann's lab at the University of California, San Francisco, studied nematodes—tiny roundworms—to find mutant strains in which axons stray from their normal trajectories during development. One such strain also turned out to resemble robo fruit fly mutants, in that some axons zigzagged back and forth across the midline. When Bargmann and her colleagues isolated and sequenced the gene at fault, which they call Sax-3 because it normally helps keep sensory axons on track, they found that the sequence of the protein it encodes closely resembles that of the fruit fly's Robo.

    “It's very encouraging that work from two very different kinds of mutant screens in two different organisms came up with the same key gene, especially given that flies and worms have nervous systems that superficially look quite different,” says Goodman. His team and graduate student Katja Brose in Tessier-Lavigne's lab have now also cloned the corresponding genes in the rat and human, which encode proteins that look very much like Robo. “It's not a big leap of faith to imagine that the homologous genes will function in the same way in mammals,” Goodman adds.

    How neurons can move through the midline at all, given the continued presence of the attractive chemicals that drew them there in the first place, has also been a puzzle. A possible answer comes from Fujio Murakami's group at Osaka University in Japan: The midline itself somehow causes rat axons to become unresponsive to the enticing properties of these attractants as the nerves cross over.

    In work described in the 2 January issue of Science, postdoc Ryuichi Shirasaki placed strips of brain with the midline removed near a second piece of brain containing the midline. He found that fluorescently labeled axons from the first strips grew toward the second, indicating they were being attracted by a chemical there. But when he left the midline in the first piece, the axons that crossed it completely ignored the second piece of midline tissue, even though they were close enough to sense the chemicals emanating from it. One of the chemicals that the axons no longer sensed seems to be netrin-1, because when Shirasaki repeated the experiment using netrin-1-producing cells in place of the second midline, the axons ignored those cells just as they had ignored the midline tissue itself.

    Neurons may do more than just lose their responsiveness to netrin-1 when they cross the midline; they may even become repelled by it. When Mu-ming Poo's and Christine Holt's groups at the University of California, San Diego, in collaboration with Tessier-Lavigne, subjected isolated nerve cells in culture to a gradient of netrin-1, they found that the axons normally migrate toward higher concentrations of the signaling molecule. But when the researchers inhibited an enzyme called cAMP-dependent protein kinase A, which plays an important role in the cell's internal signaling pathways, the axons changed their behavior, veering away from netrin-1 instead of moving toward it. At this point, the researchers don't know whether the same thing happens in the developing nervous system, but the result, which appeared in the December issue of Neuron, shows how flexible axons' responses to guiding molecules can be. “The same axon can find a cue attractive or repulsive, or it can ignore it,” Tessier-Lavigne says.

    Researchers are still a long way from having the complete list of the molecular interactions that choreograph developing axons. But they say that the progress they've made so far has brought them to the edge of the next big frontier: sorting out how individual nerve cells synthesize the many signals they receive. This will require delving into the inner workings of the axon tip to understand how receptors such as Robo influence the filaments of the neuron's internal skeleton (cytoskeleton) to cause an axon to turn. “Once we have a handle on the signaling machinery that links receptors to the cytoskeleton, we will be able to understand where and how the effects of these signals are integrated,” Tessier-Lavigne says. In the meantime, unlike most fickle humans, nerve cells are revealing at least some good explanations for their quirky behavior.


    The Mediterranean Beckons to Europe's Oceanographers

    1. Nigel Williams

    The Mediterranean Sea is critical to the economies and environments of the 20 countries bordering it, but studies of its circulation and biology have been patchy and uncoordinated. Four years ago, the European Union launched a Mediterranean Targeted Program of joint research projects, and researchers met in Rome recently to discuss what they've learned in the program's first phase. They outlined a picture of an ocean that may be acutely sensitive to environmental change.

    Too Much Salt in the Sea

    Small and nearly isolated from the stabilizing influences of the waters beyond the Straits of Gibraltar, the Mediterranean is among the most vulnerable of the oceans to human activity. As results discussed at the Rome meeting showed, environmental change is affecting not only the Mediterranean's temperature and composition but even its grand circulation pattern—a change that could have global impacts.

    Although many researchers suspect that the temperature of the world's oceans is increasing, only in the Mediterranean has a warming of deep waters, perhaps in response to the overall warming of the globe, been clearly detected. In measurements taken over the past 40 years at depths of 2000 to 2600 meters in the northwest Mediterranean, Jean-Pierre Béthoux and his colleague Bernard Gentili of the Oceanography Observatory at Villefranche-sur-Mer, France, detected a 0.13-degree rise. They also found a signal that reflects environmental changes nearer to home: a small increase in salinity.

    To account for the increase in salt levels, the two researchers analyzed changes in the sea's water budget in recent years. They found that the construction of key dams, notably the Aswan High Dam on the Nile in Egypt and a dam on the Ebro in Spain, has drastically reduced the flow of fresh water into the sea since the 1940s. Sea level might have dropped by 10 centimeters—if additional inflow of salty water from the Atlantic and from the even saltier Red Sea had not made up the deficit. That has led to a general increase in the salinity of Mediterranean seawater at most levels. And because salt boosts water density, a change in salinity can alter circulation patterns, says Béthoux.

    Wolfgang Roether of the University of Bremen in Germany and his colleagues may have detected signs of those changes in the eastern Mediterranean. They found hints that rising salinity has altered a vertical circulation pattern in which salty, dense surface water sinks into the depths. Previous studies had suggested that the process takes place mainly in the Adriatic Sea. But earlier this decade, the team detected a huge input of water from the Aegean region into the depths, probably because of the salinity increase.

    Other researchers think that rising salt levels in a middle layer of Mediterranean water formed south of Greece pose a threat of another, more worrisome circulation change—this one in the Atlantic. This middle layer, called the Levantine Intermediate Water, spreads uniformly throughout the Mediterranean and forms around 80% of the water that leaves the sea through the lower levels at the Straits of Gibraltar. Saltier than the Atlantic, the former Mediterranean water flows west and influences the circulation of the North Atlantic. The salty plume helps shape the course of the Gulf Stream, which carries heat northward to Europe.

    The increasing saltiness of the Mediterranean plume, some researchers fear, could somehow affect this interaction. Robert Johnson of the University of Minnesota believes that the saltier Mediterranean water might deflect the Gulf Stream westward toward the Labrador Sea, drastically cooling northern Europe. Eelco Rohling of the University of Southampton in the U.K. thinks the Mediterranean water could have the opposite effect, pushing the Gulf Stream farther toward Europe and turning up the heat there. Either way, says Béthoux, “the potentially hemisphere-wide effects highlight the worryingly broad impact that changes in the Mediterranean may have on climate.”

    Well-Watered Desert

    The eastern Mediterranean may not look like a desert, but to a marine biologist's eye, it is one of the most impoverished regions in the world ocean. At the Rome meeting, researchers reported that the scarcity of nutrients in the sea's eastern end has skewed its ecology, favoring bacteria, which are more efficient than larger organisms at exploiting the microscopic green algae—phytoplankton—at the base of the food chain.

    Rising indicators. Temperature and salinity at depths of 2000 to 2600 meters in the northwestern Mediterranean.


    That could help explain why fishing grounds in the eastern Mediterranean are notoriously sparse. “Dominance of the lower part of the food web in the east [by bacteria] is of particular economic significance,” says Carol Turley of the Plymouth Marine Laboratory in the United Kingdom, leader of one group carrying out the studies.

    Nutrients are scarce in the Mediterranean, compared with the rest of the world ocean, because the sea's main input comes from the surface waters in the Atlantic, which flow in through the Straits of Gibraltar. Atlantic plankton have already depleted these waters of nutrients, and the nutrient supply in the open sea declines further as the Atlantic water moves east, in spite of the nutrient-rich pollutants discharged by major rivers. Turley and her colleagues have found that the phytoplankton are on average only one-third as abundant in the Mediterranean's eastern basin as in the west. And bacteria consume a far greater proportion of this plant material in the eastern Mediterranean than in the west.

    When Turley and colleagues analyzed phytoplankton and bacterial growth in shipboard experiments, they found that 55% of phytoplankton production flows to the microbial food web in the west; in the east, this figure rises to 85%. “The fierce competition for scarce nutrients favors the smaller organisms, which are able to utilize them most rapidly,” explains Frede Thingstad of the University of Bergen in Norway.

    Because bacteria hog the phytoplankton, other organisms are disproportionately scarce. Fish production is just a third of that in the western basin. And because the bacteria also degrade organic detritus, the rain of organic particles into deep waters from the sunlit layers above is nine times lower than in the western basin, starving bottom-dwelling organisms. Their biomass is 46 times lower than in the western Mediterranean, according to Turley's analysis of experimental results.

    At the National Center for Marine Research in Athens, Efstathios Balopoulos and his colleagues have traced a similar picture in the Aegean Sea, part of the eastern Mediterranean, finding that bacteria account for more than 56% of the organic particles in shallow layers. And because the bacteria consume nearly all the sinking waste matter, he found that the deep Aegean is one of the most meager habitats anywhere in the world ocean.


    The Transistor With a Heart of Gold

    1. Alexander Hellemans
    1. Alexander Hellemans is a writer in Naples, Italy.

    The dream of superconducting circuits has never quite died. More than 10 years ago, most researchers abandoned hope for one kind of superconducting transistor, based on structures called Josephson junctions. But their disappointing performance didn't end the allure of circuits that would operate without electrical resistance—and hence might run much faster than conventional circuits and fit into a smaller space without overheating. Now, a team of researchers at Groningen University in the Netherlands has tried to revive the dream with a new design for a superconducting circuit.

    Golden gate. A thin layer of gold regulates the flow of supercurrent in the new circuit.

    While Josephson junctions consist of two layers of superconductor sandwiching an insulating layer, the new transistor replaces the insulator with a thin layer of gold. Its speed, like that of the Josephson junctions of the 1970s and '80s, still falls short of the best conventional devices. But the novel design, which the researchers describe in a paper to be published in Applied Physics Letters, gives it a key advantage over Josephson junctions: It can not only act as an on-off switch, but can also perform the other function of normal transistors—amplifying an incoming current. “The combination of the known physics and the potential technical application is new,” says Gerd Schön of Karlsruhe University in Germany. “It's nice work,” adds Michel Devoret of France's Atomic Energy Commission at Saclay, noting that any transistor that can function at extremely low temperatures also has the advantage of low inherent noise.

    Josephson junctions allow electrons to “tunnel” through the insulating layer from one superconductor—usually a metallic, low-temperature superconducting material—to the other. The electrons, which are bound together in pairs in the superconductor, can tunnel through the insulator as a weak zero-voltage supercurrent and as a single-electron current. The single-electron current, however, flows only when a voltage is applied across the junction that is strong enough to break apart the electron pairs for their passage through the insulator. When the voltage is reduced below this critical level, the single-electron current is switched off.

    In the Groningen device, the insulating layer is replaced by a thin gold layer 0.1 micrometer wide. Electrons do not need to tunnel in the new device; they are simply conducted through the gold layer. The electron pairs are still split up, but the electrons remain “correlated,” says Teun Klapwijk of Groningen University: “They are separated, but they ‘remember’ each other sufficiently to keep the correlation active.” As a result, the supercurrent is resurrected at the far side of the gold barrier.

    What controls the supercurrent across the device is a conventional current that flows perpendicularly through the gold layer. Because it is so thin, the gold layer behaves as a structure called a “quantum well.” In a quantum well, electrons are confined in a layer so narrow that it affects their quantum-mechanical properties, forcing them to reside only in specific energy levels. When a small current flows along the gold layer, it “heats” the electrons, which fill up many of the available energy levels and impede the current through the superconductors. “This is why we call it a ‘hot-electron’ tunable supercurrent,” says Klapwijk. The effect can shut off the supercurrent entirely, allowing the circuit to act as a switch. But the supercurrent can also be modulated by regulating the current flow through the gold, says Klapwijk, allowing the device to act as an amplifier. So far, the team has achieved a modest voltage gain, of about 2.

    To make the device usable as a transistor, says team member Alberto Morpurgo, “the circuit has to be optimized and studied in detail.” Even so, Konstantin Likharev of the State University of New York, Stony Brook, thinks the device is unlikely to be practical. Likharev, who is pursuing his own approach to superconducting electronics based on Josephson junctions, says that in order to make these circuits competitive, “you should provide enormous speed advantages. I don't see it here.” The Groningen researchers estimate that the switching speed of their superconducting device is about 10 picoseconds. They hope to improve that figure, but so far, says Likharev, the device is slower than the fastest semiconductor devices.

    Although the new junctions may not see use as transistors anytime soon, their tunability could increase the versatility of ultrasensitive magnetic detectors called SQUIDs, which consist of Josephson junctions incorporated in loops of superconductor. The group is also studying the possibility of using the junctions as amplifiers in superconducting infrared detectors for astronomical telescopes. Explains Klapwijk: “These devices may have a higher sensitivity and speed compared to the currently used gallium-arsenide amplifiers.”
