The Runners-Up

Science  21 Dec 2007:
Vol. 318, Issue 5858, pp. 1844-1849
DOI: 10.1126/science.318.5858.1844a

The runners-up for 2007's Breakthrough of the Year include advances in cellular and structural biology, astrophysics, physics, immunology, synthetic chemistry, neuroscience, and computer science.

The News Staff


See Web links on reprogramming cells

The riddle of Dolly the Sheep has puzzled biologists for more than a decade: What is it about the oocyte that rejuvenates the nucleus of a differentiated cell, prompting the genome to return to the embryonic state and form a new individual? This year, scientists came closer to solving that riddle. In a series of papers, researchers showed that by adding just a handful of genes to skin cells, they could reprogram those cells to look and act like embryonic stem (ES) cells. ES cells are famous for their potential to become any kind of cell in the body. But because researchers derive them from early embryos, they are also infamous for the political and ethical debates that they have sparked.

New program.

With the addition of four genes, human skin cells are prompted to act like embryonic stem cells.


The new work is both a scientific and a political breakthrough, shedding light on the molecular basis of reprogramming and, perhaps, promising a way out of the political storm that has surrounded the stem cell field.

The work grows out of a breakthrough a decade ago. In 1997, Dolly, the first mammal cloned from an adult cell, demonstrated that unknown factors in the oocyte can turn back the developmental clock in a differentiated cell, allowing the genome to go back to its embryonic state.

Various experiments have shown how readily this talent is evoked. A few years ago, researchers discovered that fusing ES cells with differentiated cells could also reprogram the nucleus, producing ES-like cells but with twice the normal number of chromosomes. Recently, they showed that a fertilized mouse egg, or zygote, with its nucleus removed could likewise reprogram a somatic cell.

Meanwhile, the identity of the reprogramming factors continued to puzzle and tantalize biologists. In 2006, Japanese researchers announced that they were close to at least part of the answer. By adding just four genes to mouse tail cells, they produced what they call induced pluripotent stem (iPS) cells: cells that looked and acted like ES cells.

This year, in two announcements that electrified the stem cell field, scientists closed the deal. In a series of papers in June, the same Japanese group, along with two American groups, showed that the iPS cells made from mouse skin could, like ES cells, contribute to chimeric embryos and produce all the body's cells, including eggs and sperm. The work convinced most observers that iPS cells were indeed equivalent to ES cells, at least in mice.

Then in November came a triumph no one had expected this soon: Not one, but two teams repeated the feat in human cells. The Japanese team showed that their mouse recipe could work in human cells, and an American team found that a slightly different recipe would do the job as well.

The advance seems set to transform both the science and the politics of stem cell research. Scientists say the work demonstrates that the riddle of Dolly may be simpler than they had dared to hope: Just four genes can make all the difference. Now they can get down to the business of understanding how to guide the development of these high-potential cells in the laboratory. In December, scientists reported that they had already used mouse iPS cells to successfully treat a mouse model of sickle cell anemia. The next big challenge will be finding a way to reprogram human cells without using possible cancer-causing viruses to insert the genes.

Politicians and ethicists on both sides of the debate about embryo research are jubilant. Supporters hope the new technique will enable them to conduct research without political restrictions, and opponents hope it will eventually render embryo research unnecessary. Indeed, several scientists said the new work prompted them to abandon their plans for further research on human cloning.

Officials at the National Institutes of Health said there was no reason work with iPS cells would not be eligible for federal funding, enabling scientists in the United States to sidestep restrictions imposed by the Bush Administration. And President George W. Bush himself greeted the announcement by saying that he welcomed the scientific solution to the ethical problem.

But it's much too early to predict an end to the political controversies about stem cell research. Some researchers say they still need to be able to do research cloning to find out just what proteins the egg uses for its reprogramming magic. And now that science has come a step closer to the long-term goal of stem cell therapy, mouse models won't be adequate for animal studies. Rather, researchers will need to test cell transplantation approaches with primates, a move that will inevitably stir up resistance from animal-rights activists.



See Web links on high-energy cosmic rays

What's smaller than an atom but crashes into Earth with as much energy as a golf ball hitting a fairway? Since the 1960s, that riddle has tantalized physicists studying the highest energy cosmic rays, particles from space that strike the atmosphere with energies 100 million times higher than particle accelerators have reached. This year, the Pierre Auger Observatory in Argentina supplied key clues to determine where in space the interlopers come from.

Debris trail.

High-energy cosmic rays streaking into Earth's atmosphere shed clues to their source.


Many physicists had assumed the extremely rare rays were protons from distant galaxies. That notion took a hit in the 1990s, when researchers with the Akeno Giant Air Shower Array (AGASA) near Tokyo reported 11 rays with energies above 100 exa-electron volts (EeV)—about 10 times more than expected. The abundance was tantalizing. On their long trips, protons ought to interact with radiation lingering from the big bang in a way that saps their energy and leaves few with more than 60 EeV. So the excess suggested that the rays might be born in our galactic neighborhood, perhaps in the decays of super-massive particles forged in the big bang. But researchers with the Hi-Res detector in Dugway, Utah, saw only two 100-EeV rays, about as many as expected from far-off sources.
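The energy-sapping interaction with big bang radiation mentioned above is, in the standard picture, photopion production on the cosmic microwave background, the Greisen-Zatsepin-Kuzmin (GZK) process. As a reader's note not spelled out in the article, it proceeds chiefly through the Δ resonance:

```latex
p + \gamma_{\mathrm{CMB}} \;\longrightarrow\; \Delta^{+}(1232) \;\longrightarrow\; p + \pi^{0} \quad\text{or}\quad n + \pi^{+}
```

The kinematic threshold for a proton striking a typical CMB photon lies near $6\times10^{19}$ eV, which is why few protons from distant sources should arrive with more than about 60 EeV.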

The Auger team set out to beat AGASA and Hi-Res at their own games. When a cosmic ray strikes the atmosphere, it sets off an avalanche of particles. AGASA used 111 detectors spread over 100 square kilometers to sample the particles and infer the ray's energy and direction; Auger comprises nearly 1500 detectors spread over 3000 square kilometers. The avalanche also causes the air to fluoresce. Hi-Res used two batteries of telescopes to see the light; Auger boasts four. In July, the Auger team reported its first big result: no excess of rays above 60 EeV.

Auger still sees a couple of dozen rays above that level, however. Last month, the team reported that they seem to emanate from active galactic nuclei (AGNs): enormous black holes in the middles of some galaxies. The AGNs lie within 250 million light-years of Earth, close enough that cosmic radiation would not have drained the particles' energy en route. Auger researchers haven't yet proved that AGNs are the sources of the rays, and no one knows how an AGN might accelerate a proton to such stupendous energies.

Expect the controversy to continue. Hi-Res researchers say that they see no correlation with AGNs. With Japanese colleagues, they are completing the 740-square-kilometer Telescope Array in Millard County, Utah, which has 512 detectors and three telescope batteries. But with a much bigger array, the Auger team will surely be first to test its own claims.



See Web links on receptor visions

Just when some crystallographers were fretting that the task was impossible, researchers nabbed a close-up of adrenaline's target, the β2-adrenergic receptor. Its structure has long been on the to-do list, but the feat also got pulses racing because of the molecule's family connections. The receptor is one of roughly 1000 membrane-spanning molecules called G protein-coupled receptors (GPCRs). By detecting light, odors, and tastes, the receptors clue us in to our surroundings. GPCRs also help manage our internal conditions by relaying messages from hormones, the neurotransmitter serotonin, and myriad other molecules. From antihistamines to beta blockers, the pharmacopoeia brims with medicines aimed at GPCRs—all of which researchers discovered without the benefit of high-resolution structures. A clear picture of, say, a receptor's binding site might spur development of more potent, safer drugs. But scientists had cracked only one “easy” GPCR structure, for the visual pigment rhodopsin.


Researchers have worked out the architecture of the adrenaline receptor.


Getting a look at the β2-adrenergic receptor took the leaders of two overlapping crystallographic teams almost 2 decades. The effort paid off this fall with four papers published in the journals Science, Nature, and Nature Methods. The lab ingenuity that other experts call a technical tour de force shows in the way the teams restrained the molecule's flexible third loop. They either replaced it with the stolid enzyme lysozyme or tacked it down with an antibody.

But this snapshot of the receptor is just the beginning. Before researchers can design compounds to jam the molecule, they need to picture it in its different “on” states. And the other GPCRs awaiting analysis mean that for crystallographers, it's two down and 1000 to go.



See Web links on oxide interfaces

Sixty years ago, semiconductors were a scientific curiosity. Then researchers tried putting one type of semiconductor up against another, and suddenly we had diodes, transistors, microprocessors, and the whole electronic age. Startling results this year may herald a similar burst of discoveries at the interfaces of a different class of materials: transition metal oxides.

Transition metal oxides first made headlines in 1986 with the Nobel Prize-winning discovery of high-temperature superconductors. Since then, solid-state physicists have kept finding unexpected properties in these materials—including colossal magnetoresistance, in which small changes in applied magnetic fields cause huge changes in electrical resistance. But the fun should really start when one oxide rubs shoulders with another.

If different oxide crystals are grown in layers with sharp interfaces, the effect of one crystal structure on another can shift the positions of atoms at the interface, alter the population of electrons, and even change how electrons' charges are distributed around an atom. Teams have grown together two insulating oxides to produce an interface that conducts like a metal or, in another example, a superconductor. Other combinations have shown magnetic properties more familiar in metals, as well as the quantum Hall effect, in which conductance becomes quantized into discrete values in a magnetic field. Researchers are optimistic that they may be able to make combinations of oxides that outperform semiconductor structures.

Tunable sandwich.

In lanthanum aluminate sandwiched between layers of strontium titanate, a thick middle layer (right) produces conduction at the lower interface; a thin one does not.


With almost limitless variation in these complex oxides, properties not yet dreamed of may be found where they meet.



See Web links on quantum spin Hall effect

Chalk one up for the theorists. Theoretical physicists in California recently predicted that semiconductor sandwiches with thin layers of mercury telluride (HgTe) in the middle should exhibit an unusual behavior of their electrons called the quantum spin Hall effect (QSHE). This year, they teamed up with experimental physicists in Germany and found just what they were looking for.

The effect is the latest in a series of oddball ways electrons behave when placed in external electric and magnetic fields. In 1980, researchers in Germany and the U.K. discovered one of these anomalies, called the quantum Hall effect. When they changed the strength of a magnetic field applied perpendicular to charges moving through thin layers of metals or semiconductors, they found that the conductance changed in a stepwise, or quantized, manner. One upshot was that charges flowed in tiny channels along the edges of the materials with essentially no energy loss. The finding triggered hopes of new families of computer chip devices. But because the effect required high magnetic fields and low temperatures, such devices remained pipe dreams.
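The stepwise conductance described above takes a universal form. As a standard textbook relation (not given in the article), the Hall conductance is quantized as

```latex
\sigma_{xy} = \nu \,\frac{e^{2}}{h}, \qquad \nu = 1, 2, 3, \ldots
```

where $e$ is the electron charge and $h$ is Planck's constant. The steps are so precise—$h/e^{2} \approx 25{,}812.8$ ohms—that they now anchor the practical standard of electrical resistance.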


Electrons with spins oriented in opposite directions flow along different paths.


Luckily for physicists, electrons harbor not only electric charge but also another property known as spin. In recent years, theorists have predicted that materials with the right electronic structure should interact with electric fields to result in the QSHE—and a spin-driven version of near-lossless conduction. Such materials would also do away with the need for high magnetic fields and perhaps even for low temperatures. This year, one of them—the HgTe sandwich—showed telltale (although not ironclad) signs of the effect at temperatures below 10 kelvin. If researchers can do the same trick at room temperature, the discovery could open the door to new low-power “spintronic” computing devices that manipulate electrons by both charge and spin.



See Web links on T cell division

Fresh evidence illuminating how immune cells specialize for immediate or long-term protection had researchers a little feverish this year. When a pathogen attacks, some CD8 T cells become short-lived soldiers, while others morph into memory cells that loiter for decades in case the same interloper tries again. The new work demonstrates how one cell can spawn both cell types.

A T cell remains passive until it meets a dendritic cell carrying specific pathogen molecules. The liaison between the two lasts for hours. As the cells dally, receptors and other molecules congregate at each end of the T cell. A U.S.-based team tested the proposal that if the T cell then divided, its progeny would inherit different molecules that might steer them onto distinct paths. Such asymmetric divisions are a common method for cell diversification during development.

Separate and unequal.

As a T cell divides, the upper and lower cells sport distinct molecules.


In March, the team reported experiments showing that different specialization-controlling proteins amassed at each pole of a T cell during its dance with a dendritic cell. When the researchers nabbed newly divided T cells, they found that progeny that had been adjacent to the dendritic cell carried receptors typical of soldiers, whereas their counterparts showed the molecular signature of memory cells.

Unequal divisions could also help generate diversity among CD4 T cells, immune regulators that differentiate into three types. Practical applications of the discovery will have to wait until researchers know more about memory-cell specialization, but eventually they might be able to tweak the process to give vaccines more kick.



See Web links on direct chemistry

Society may finally be embracing energy efficiency and waste reduction, but these attributes have always been prized among synthetic chemists. Extra plaudits and stature go to chemists who carry out desired reactions in the simplest and most elegant ways. One reason: Fewer synthetic steps almost always save cash. And although such economizing is a perennial goal, this year an impressive array of synthetic successes showed that chemists are gaining a new level of control over the molecules they make and how they make them.

Achieving this control has not been easy. Many desired molecules, such as pharmaceutical and electronic compounds, consist of a backbone of carbon atoms with hydrogen atoms or other more complex functional groups dangling off the sides. When chemists convert a starting compound into one they really want, they typically aim to modify just one of those appendages but not the others. They normally do so either by adorning the starting material with chemical “activators” that prompt the molecule to react only at the tagged site or by slapping “protecting” groups on the sites they want left untouched.

This year, researchers around the globe made major strides in doing away with these accessories. One group in Israel used a ruthenium-based catalyst to convert starting compounds called amines and alcohols directly into another class of widely useful compounds called amides. A related approach enabled researchers in Canada to link pairs of ring-shaped compounds together. Another minimized the use of protecting groups to make large druglike organic compounds. Yet another did much the same in mimicking the way microbes synthesize large ladder-shaped toxins. And those are just a few examples. For chemists, it was an efficient year.



See Web links on memory and imagination

In Greek mythology, the goddess of memory, Mnemosyne, gave birth to the Muses, spirits who inspire imagination. Some modern scientists have seen the kinship as both literal and practical. Remembering the past, they propose, helps us picture—and prepare for—the future. The notion got a boost this year from several studies hinting at common neural mechanisms for memory and imagination.

Something to Muse on.

In the brain as in Greek mythology, memory and imagination may be related.


In January, researchers in the United Kingdom reported that five people with amnesia caused by damage to the hippocampus, a crucial memory center in the brain, were less adept than healthy volunteers at envisioning hypothetical situations such as a day at the beach or a shopping trip. Whereas healthy subjects described such imagined events vividly, the amnesic patients could muster only a few loosely connected details, suggesting that their hippocampal damage had impaired imagination as well as memory.

In April, a brain-imaging study with healthy young volunteers found that recalling past life experiences and imagining future experiences activated a similar network of brain regions, including the hippocampus. Even studies with rats suggested that the hippocampus may have a role in envisioning the future: One team reported in November that when a rat faces a fork in a familiar maze, neurons in the hippocampus that encode specific locations fire in sequence as if the rat were weighing its options by mentally running down one path and then the other.

On the basis of such findings, some researchers propose that the brain's memory systems may splice together remembered fragments of past events to construct possible futures. The idea is far from proven, but if future experiments bear it out, memory may indeed turn out to be the mother of imagination.



See Web links on solving checkers

Computer scientists finally took some of the fun out of the game of checkers. After 18 years of trying, a Canadian team proved that if neither player makes a mistake, a game of checkers will inevitably end in a draw. The proof makes checkers—also known as draughts—the most complicated game ever “solved.” It marks another victory for machines over humans: A mistake-prone person will surely lose to the team's computer program.

Proving that flawless checkers will end in a stalemate was hardly child's play. In the United States, the game is played on an eight-by-eight grid of red and black squares. The 12 red and 12 black checkers slide diagonally from black square to black square, and one player can capture the other's checker by hopping over it into an empty space just beyond. All told, there are about 500 billion billion arrangements of the pieces, enough to overwhelm even today's best computers.

So the researchers compiled a database of the mere 39,000 billion arrangements of 10 or fewer pieces and determined which ones led to a win for red, a win for black, or a draw. They then considered a specific opening move and used a search algorithm to show that players with perfect foresight would invariably guide the game to a configuration that yields a draw.
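The two-part strategy—exhaustive backward knowledge of endgame positions plus forward search from the opening—can be illustrated at toy scale. The Python sketch below uses a deliberately trivial take-away game as a stand-in for checkers (the game, rules, and function names here are illustrative assumptions, not the team's actual program): a memoized search labels every position as a win or loss under perfect play, the same game-theoretic idea the checkers proof applied at vastly larger scale by pairing a forward search with its precomputed 10-piece database.

```python
from functools import lru_cache

# Toy stand-in for checkers: players alternately remove 1-3 stones;
# whoever takes the last stone wins. "Solving" the game means labeling
# every position under perfect play, as the checkers proof did for
# win/loss/draw across its database and search.

@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # no stones left: the previous player already won
    # A position is winning iff some legal move leaves the opponent
    # in a losing position.
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)
```

With moves of 1 to 3 stones, every multiple of 4 is a loss for the player to move; perfect play steers the game through those positions, much as the checkers search steers play into configurations the database has already labeled as drawn.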


Reported in July, the advance exemplifies an emerging trend in artificial intelligence. Human thinking relies on a modest amount of memory and a larger capacity to process information. In contrast, the checkers program employs relatively less processing and a whole lot of memory—the 39,000-billion-configuration database. The algorithms the team developed could find broad applications, others say, such as deciphering the information encoded in DNA.


