Analyzing the Erdös Star Cluster
Just as actors have their Kevin Bacon, mathematicians have their Paul Erdös. People in each field love to calculate their “degree of separation” from the stars. Now two scientists have shown that all the mathematicians who have won the prestigious Fields medal have a finite “Erdös number.” And so do a lot of Nobel laureates.
What's your Erdös number? If you've written a joint paper with the prolific Hungarian mathematician, it's 1; if you've collaborated with an Erdös-1 mathematician, you've earned a 2; and so on. The lower your number, the more closely you're associated with an elite group of extremely well-connected mathematicians (507 Erdös-1 types, at last count).
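The "and so on" above is just shortest-path distance in the collaboration graph: your Erdös number is the length of the shortest chain of coauthorships linking you to Erdös. A minimal sketch of the computation, using breadth-first search over a toy graph with invented author names:

```python
from collections import deque

def erdos_numbers(coauthors, source="Erdos"):
    """Breadth-first search over a collaboration graph. Each author's
    Erdos number is the length of the shortest coauthorship chain
    connecting them to Erdos; unreachable authors get no number."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        author = queue.popleft()
        for coauthor in coauthors.get(author, ()):
            if coauthor not in dist:
                dist[coauthor] = dist[author] + 1
                queue.append(coauthor)
    return dist

# Toy graph with invented names; an edge means "wrote a joint paper".
graph = {
    "Erdos": ["Alice", "Bob"],
    "Alice": ["Erdos", "Bob", "Carol"],
    "Bob": ["Erdos", "Alice"],
    "Carol": ["Alice", "Dan"],
    "Dan": ["Carol"],
}
numbers = erdos_numbers(graph)  # Alice and Bob are 1, Carol 2, Dan 3
```

The real database Grossman maintains works the same way, only over hundreds of thousands of authors rather than five.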
Mathematicians, being what they are, are fond of analyzing Erdös numbers. The latest effort, appearing in the December issue of the journal of the Colombian Academy of Sciences, shows that winners of the Fields medal—math's Nobel equivalent—“all have a 5 or less, and only one or two have a 5,” says co-author Jerrold Grossman of Oakland University in Michigan. In addition, more than 60 Nobel laureates, many in fields far removed from mathematics, can brag of single-digit Erdös numbers. Watson and Crick, for example, have numbers of 7 and 8, says Grossman.
Erdös died in 1996 after publishing close to 1500 papers. But his nonstop travels left a legacy of unpublished work, so his name keeps appearing on papers to this day. According to Grossman, 25 new Erdös papers entered his database this year, promoting 12 new mathematicians to the exalted Erdös-1 status.
Vive la Femme
“Half the girls born today [in the developed world] will live through the entire 21st century.”
—James Vaupel, executive director of the Max Planck Institute for Demographic Research in Rostock, Germany, at a 25 January conference in Washington, D.C., on “The Graying of the Industrial World.”
A Taste for MSG
Scientists have for the first time found the molecular hook on the tongue that grabs a taste-generating substance. The finding, reported in the February Nature Neuroscience, means that a fifth taste, glutamate, can be added to the four major tastes of sweet, salty, sour, and bitter.
In 1908, Japanese physiologist Kikunae Ikeda, working with seaweed, isolated a chemical that gives many protein-rich foods—such as soy sauce and Marmite, the powerful yeast paste beloved of Australians—their hearty flavor. He called the taste umami and identified the chemical as glutamate, an amino acid that is also a major brain neurotransmitter.
It has taken another 90 years, however, for scientists to find a receptor protein for any taste on the tongue. About 4 years ago, neuroscientist Nirupa Chaudhari and her team at the University of Miami School of Medicine found hints of a receptor for glutamate in rat taste buds. But the tongue cells required far more glutamate to be activated than brain cells, which raised doubts about whether the protein was indeed an umami receptor.
On closer analysis, though, Chaudhari's team determined that the two types of receptors were the same, differing only in that glutamate receptors in taste buds are much less sensitive because they lack some of the glutamate-binding structure that exists in the brain receptor. With the discrepancy explained, the pruned glutamate receptor is “the most likely candidate for the umami receptor,” and glutamate now looks like one of the basic tastes, says Chaudhari.
The finding is “sensational,” says Bernd Lindemann, a physiologist at Saar University in Homburg, Germany, because it is the first instance in which taste can be seen operating on a molecular level.
Algorithms for the Ages
“Great algorithms are the poetry of computation,” says Francis Sullivan of the Institute for Defense Analyses' Center for Computing Sciences in Bowie, Maryland. He and Jack Dongarra of the University of Tennessee and Oak Ridge National Laboratory have put together a sampling that might have made Robert Frost beam with pride—had the poet been a computer jock. Their list of 10 algorithms having “the greatest influence on the development and practice of science and engineering in the 20th century” appears in the January/February issue of Computing in Science & Engineering. If you use a computer, some of these algorithms are no doubt crunching your data as you read this. The drum roll, please:
1946: The Metropolis Algorithm for Monte Carlo. Through the use of random processes, this algorithm offers an efficient way to stumble toward answers to problems that are too complicated to solve exactly.
1947: Simplex Method for Linear Programming. An elegant solution to a common problem in planning and decision-making.
1950: Krylov Subspace Iteration Method. A technique for rapidly solving the linear equations that abound in scientific computation.
1951: The Decompositional Approach to Matrix Computations. A suite of techniques for numerical linear algebra.
1957: The Fortran Optimizing Compiler. Turns high-level code into efficient computer-readable code.
1959: QR Algorithm for Computing Eigenvalues. Another crucial matrix operation made swift and practical.
1962: Quicksort Algorithms for Sorting. For the efficient handling of large databases.
1965: Fast Fourier Transform. Perhaps the most ubiquitous algorithm in use today, it breaks down waveforms (like sound) into periodic components.
1977: Integer Relation Detection. A fast method for spotting simple equations satisfied by collections of seemingly unrelated numbers.
1987: Fast Multipole Method. A breakthrough in dealing with the complexity of n-body calculations, applied in problems ranging from celestial mechanics to protein folding.
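The spirit of item 1 can be seen in miniature by estimating π with random darts: throw points at the unit square and count how many land inside the quarter circle. This is plain Monte Carlo sampling, not the full Metropolis accept-reject scheme, but it shows how random processes "stumble toward" an answer:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi. The fraction of uniform random
    points in the unit square falling inside the quarter circle
    x^2 + y^2 <= 1 approaches pi/4 as the sample count grows."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    hits = sum(
        1
        for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / n_samples

print(estimate_pi(100_000))  # close to 3.14159; error shrinks like 1/sqrt(n)
```

The estimate is crude for small samples but, crucially, the method's cost does not explode with the dimension of the problem, which is why Monte Carlo thrives on integrals too complicated to solve exactly.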