News this Week

Science  23 Mar 2012:
Vol. 335, Issue 6075, pp. 1422
  1. Around the World

    1 - Antwerp and Brussels, Belgium
    Schmallenberg Vectors Found
    2 - Gran Sasso D'Italia, Italy
    Adieu, Superluminal Neutrinos
    3 - Paris
    NASA Out, Russia in for ESA's ExoMars Missions
    4 - Washington, D.C.
    Highway Bill Puts Brakes On Some Research
    5 - London
    Gene Therapy Trial for Cystic Fibrosis Saved
    6 - New Delhi
    India's Budget Plan Disappoints Scientists

    Antwerp and Brussels, Belgium

    Schmallenberg Vectors Found

    Tiny pest.

    A biting midge (bottom) is dwarfed by a mosquito.

    CREDIT: © ITG

    Researchers in Belgium have identified three species of biting midges as the vectors for the newly recognized Schmallenberg virus, which has been causing birth defects in livestock across western Europe (Science, 2 March, p. 1028). The midges, commonly called no-see-ums, also transmit bluetongue, another livestock virus that emerged in Europe a few years ago. Scientists at the Institute of Tropical Medicine (ITG) in Antwerp and the Belgian Veterinary and Agrochemical Research Centre (VAR) analyzed the heads of midges caught in September and October as part of a bluetongue surveillance project. (The researchers looked only at the heads because the virus must reach the salivary glands to be transmitted.) They detected the virus in Culicoides obsoletus, C. dewulfi, and C. pulicaris, three of the five species that have been shown to transmit bluetongue. The researchers say they are continuing their survey.

    Gran Sasso D'Italia, Italy

    Adieu, Superluminal Neutrinos

    A new measurement appears to torpedo the claim that subatomic particles called neutrinos travel faster than light (Science, 30 September 2011, p. 1809).

    That spectacular claim came from physicists working with a particle detector called OPERA in Italy's subterranean Gran Sasso National Laboratory. They found that neutrinos fired from CERN, the European particle physics laboratory 730 kilometers away in Switzerland, arrived about 60 nanoseconds sooner than they would have if traveling at light speed. However, the OPERA team reported last month that a loose cable could have produced a spurious timing shift (Science, 2 March, p. 1027). Now, researchers using ICARUS, another detector at Gran Sasso, have timed neutrinos from CERN and announced last week that they travel at exactly light speed.

    “There is no superluminal effect, what else can I say?” says CERN's Carlo Rubbia, spokesperson for the ICARUS team. Antonio Ereditato of the University of Bern, a spokesperson for the OPERA team, says, “The result from ICARUS is well in line with our recent findings about a possible malfunctioning of some components of the experimental apparatus.”

    Paris

    NASA Out, Russia in for ESA's ExoMars Missions

    Up in the air?

    ESA's Trace Gas Orbiter remains on track, but funding is uncertain.

    CREDIT: ESA

    The European Space Agency's (ESA's) governing council decided last week to press ahead with two Mars exploration missions, known as ExoMars. The agency had planned to collaborate with NASA, but NASA pulled out last month after President Barack Obama proposed steep cuts in the agency's Mars funding for next year (Science, 24 February, p. 900). The Russian Federal Space Agency will replace NASA in providing the launchers and a number of experiments, according to ESA spokesperson Franco Bonacina.

    The ExoMars project will send two missions to Mars. In 2016, the Trace Gas Orbiter will study the planet's atmosphere, and an environmental-science lander will test entry, descent, and landing technologies. The 2018 mission will carry a rover, originally expected to be delivered by NASA's Sky Crane craft, to look for signatures of past or present life.

    Revamping the missions will likely push their cost beyond the €1 billion that ESA has allocated for ExoMars. The governing council instructed the agency to see whether it could shift funds from other accounts or raise the additional money from member states or from Russia.

    Washington, D.C.

    Highway Bill Puts Brakes On Some Research

    Less money, more competition. That's what archeologists and environmental scientists who want funding from a little-known transportation program will face if a massive highway bill approved by the U.S. Senate last week becomes law. The measure, passed on a 74-22 vote, authorizes the government to spend $109 billion on road and transit projects over 2 years. It includes up to $400 million per year for mainstream transportation research, but reorganizes and reduces funds for an “enhancements” program, which has pumped more than $50 million into hundreds of archeology and environmental projects since 1992 (Science, 18 November 2011, p. 884). Overall, funding for those efforts would fall by some 20%, to about $833 million. But much of that money is expected to go to walking and biking trails, leaving less for researchers. The action now moves to the House of Representatives, which has so far failed to agree on its version of the bill. http://scim.ag/highwaybill

    London

    Gene Therapy Trial for Cystic Fibrosis Saved

    A gene therapy trial for cystic fibrosis (CF) has secured funding from the U.K. government and will go ahead this spring. The phase II trial faced cancellation last summer after the researchers conducting it ran out of funding. But the U.K. Medical Research Council (MRC) and National Institute for Health Research are now providing £3.1 million to support the trial, with an additional £1.2 million from MRC for basic research on using a viral vector to deliver the gene. CF causes a buildup of mucus in the lungs and digestive tract that leads to life-threatening health problems. Scientists will use lipids to deliver a working copy of the gene that is defective in CF into the lungs of 130 patients, who will inhale the therapy once a month for a year.

    New Delhi

    India's Budget Plan Disappoints Scientists

    The announcement of India's maiden mission to Mars has done little to cheer Indian scientists, who are disappointed by the proposed spending increases for research in a new government budget plan. The annual budget proposal presented to India's parliament on 16 March by Finance Minister Pranab Mukherjee calls for the operating budgets for science to rise, on average, by about 5% in 2012–2013—less than many scientists expected.

    Prime Minister Manmohan Singh had raised expectations in parts of India's scientific community when he said earlier this year that India needed to double the share of its gross domestic product spent on research to 2% over the next 5 years.

    But the new budget “is not good news” because the increases don't keep pace with inflation, which has been running at about 10%, says physicist Ajay K. Sood, president of the Indian Academy of Sciences in Bangalore.

    “Things may look up in the coming years,” says space scientist Krishnaswamy Kasturirangan, a member of a government planning commission that is preparing a 5-year science spending plan for the government. But he warns that “some serious prioritization needs to be undertaken by the scientific departments.” http://scim.ag/Indiabudget

  2. Random Sample

    Grazed Grasslands Biodiverse, Too

    CREDIT: JÜRGEN DENGLER, FROM J. B. WILSON ET AL., JOURNAL OF VEGETATION SCIENCE (MARCH) © 2012 INTERNATIONAL ASSOCIATION FOR VEGETATION SCIENCE

    Tropical rainforests may boast the highest number of plant species per hectare, but at smaller scales, the grasslands of Eastern Europe (above) and Argentina top the biodiversity list. A mountain grassland in central Argentina packs 89 species into a single square meter, and several meadows in Romania and the Czech Republic are on par, researchers reported last week in the Journal of Vegetation Science. The researchers scanned millions of published and unpublished plant surveys of plots of different sizes to learn how maximum diversity changes across spatial scales.

    Species-rich grasslands tend to occur on limestone, and such grasslands were once quite common in Europe. However, changes in land-use practices have made these meadows quite rare. Unlike in rainforests, where preservation requires minimizing human activity, protecting these grasslands involves ongoing management: mowing or grazing—in some cases for centuries—is part of the secret to the grasslands' richness.

    Charting the Culture of Science

    What's hot?

    Bookworm reveals the popularity of topics in a research area, such as high-energy physics (hep). Graphene is booming, but faster-than-light neutrinos were a flash in the pan.

    CREDIT: ARXIV.CULTUROMICS.ORG

    Graphene research scooped the 2010 Nobel Prize in physics—but when did it really begin to take hold in the scientific community? What about carbon nanotubes? Or string theory? A team of scientists based at Harvard University has created a tool, called Bookworm, which can reveal such historical trends in the language of research with the click of a button.

    The research papers were provided by Paul Ginsparg, a physicist at Cornell University, and the founder of arXiv, the online archive of preprint articles in physics, mathematics, and related quantitative fields. arXiv is the perfect data set for this tool because its articles are open access, which “implies being able to treat articles as computable objects, to ingest them into a database and number-crunch them,” Ginsparg says. And as of today, Bookworm for arXiv is available for anyone to use at arxiv.culturomics.org.

    Converting the 7 billion words from 730,000 arXiv research articles into usable data isn't trivial. But the team is led by Jean-Baptiste Michel and Erez Lieberman Aiden, the Harvard scientists who converted millions of books digitized by Google into usable data, spawning a field called culturomics (Science, 17 December 2010, p. 1600). So the Bookworm team—including Benjamin Schmidt, Neva Cherniavsky, and Martin Camacho—was able to build on the same underlying code.

    The term “graphene,” it turns out, was no more common than the more general term “fullerene” until 2006. Then it exploded onto the scene, sucking attention away from earlier trendsetting fullerenes such as “carbon nanotubes.”

    By the Numbers

    $10 million — Price tag to name Max Planck Florida Institute's new medical research lab, part of its campaign to raise $50 million in private donations over the next 5 years.

    21 km — Height above Earth from which Austrian skydiver Felix Baumgartner jumped on 15 March. The jump was a test run for a 37-kilometer, record-breaking jump this summer—of interest to NASA engineers working on astronaut escape systems.

    10 km³ — The volume of sand—enough to bury Manhattan 160 meters deep—spewed by undersea geysers in the North Sea hundreds of thousands of years ago, according to a study reported online 19 March in Geology.

    ScienceLIVE

    Join us on Thursday, 29 March, at 3 p.m. EDT for a live chat on the future of personalized genomics. Talk with experts about how full genome scans could revolutionize personal medicine. http://scim.ag/science-live

  3. Newsmakers

    Three Q's

    Birgeneau

    CREDIT: JOHN BLAUSTEIN/UC NEWS SERVICE

    Physicist Robert Birgeneau announced last week that he will step down at the end of 2012 after 8 years as chancellor of the University of California, Berkeley. His tenure coincided with an economic crisis in California that resulted in drastically reduced state funding for the university.

    Q: How well has Berkeley managed to maintain its strength in the sciences in the face of these cuts?

    The funding we get from the state to run Berkeley has dropped by more than a factor of two, but our research funding has gone up by 40%. They've almost cancelled each other out. … But I would say most importantly, as measured by Sloan Research Fellowships [which provide funding for early-career scientists], we've hired really, really talented young faculty. So I don't think I'm being Pollyannaish to say that we have largely weathered this storm.

    Q: But I've heard faculty say it's getting harder to attract and retain top talent. Isn't that true?

    That's one of these urban myths. In the past year our faculty retention rate is the highest it's been in the past decade.

    Q: What are you most looking forward to after you step down?

    Doing physics! I have a small research group and our focus has been on these new iron chalcogenide and iron pnictide superconductors, which are fascinating materials. It's an unexpectedly rich new area of solid state physics.

  4. Avionics

    A Flapping of Wings

    1. Dana Mackenzie*

    Robot aircraft that fly like birds could open new vistas in maneuverability, if designers can forge a productive partnership with an old enemy: unsteady airflow.

    CREDIT: FESTO AG & CO. KG

    In a packed conference hall, all heads are turned to the back of the room. The crowd murmurs as a man holds up what appears to be an enormous model of a seagull with its wings outspread. The murmurs turn to silence as the wings begin to flap while it is still in his hands. Then he tosses the bird forward … and with a whir, SmartBird takes off.

    The silence gives way to applause as the bird flies once, twice, three times around the auditorium. As it glides gently to a stop, the audience gets up and gives the SmartBird a standing ovation.

    More than a million and a half people have watched the video of the SmartBird flying at a TED conference in Edinburgh last July (www.youtube.com/watch?v=Fg_JcKSHUtQ). And humans aren't the only ones who find it fascinating. At the same conference, when the robotic bird flew outside, it attracted a mob of curious seagulls.

    The last couple of years have been an exciting time for flapping-wing flight. Another bird-inspired aircraft, AeroVironment's Nano Hummingbird, made Time magazine's list of 50 top inventions of 2011 (see sidebar, p. 1433). The U.S. Defense Advanced Research Projects Agency and the Office of Naval Research are investing millions of dollars in so-called micro air vehicles and nano air vehicles, as well as in basic research into how birds and insects fly.

    Soaring.

    Festo's SmartBird (above) and MIT's Phoenix (left) take robotic flapping-wing flight to new levels of grace and precision.

    CREDIT: JASON DORFMAN

    A century after the Wright brothers, fixed-wing aircraft have become a routine part of our lives. But flapping-wing aircraft, or ornithopters, still elicit wonder. “Just about everybody gets a thrill out of seeing one for the first time,” says Nathan Chronister of Rochester, New York, who makes ornithopter kits for hobbyists and science classes. At the same time, there is serious science behind them. While the theory of airflow over a flapping wing remains surprisingly rudimentary, humans are now making significant progress in understanding how to fly, control, and land flapping-wing aircraft. “It's not the physics that's the problem any more,” says aeronautical engineer Wolfgang Send, the mastermind behind the SmartBird.

    Dead end?

    From Daedalus to Leonardo da Vinci to Otto Lilienthal, early investigators of flight emphasized flapping wings. And in fact, the first rubber band-powered ornithopters, made by Alphonse Pénaud of France in 1874, predate motorized airplanes.

    But after the spectacular success of the Wright brothers, flapping-wing aircraft began to look like a technological dead end. Even now, engineers struggle to understand unsteady airflow. “Not even for the simplest flight situation—level cruising flight—is there a global, recognized theory, accepted by most of the experts,” says Horst Räbiger of Nuremberg, Germany, a longtime designer of ornithopters. Yet, he adds, such a theory “is necessary to compute the best lift distribution along the wingspan at every moment, the ideal airfoils at every part of the wing, the best flapping angle, best flapping frequency, and much more. Today, every expert makes his own theory—including myself.”

    Ornithopters also languished for many years because they are simply inferior in aerodynamic efficiency to airplanes with rotating parts. “The limit of efficiency for a flapping vehicle for thrust is when it works like a bad propeller,” says Russ Tedrake of the Massachusetts Institute of Technology (MIT) in Cambridge. “For hovering, the limit of efficiency is when it approximates a helicopter.”

    However, ornithopters should have advantages, too, if they can be built. They should be more maneuverable than fixed-wing airplanes. They should be able both to hover and to fly forward. Birds, for example, direct their thrust upward at takeoff, flapping their wings at a large angle of attack to push themselves away from the ground. In cruising flight, the wings level out to a lower angle of attack, minimizing drag, generating mostly lift and only a little bit of thrust. When landing, the bird shifts once again to a high angle of attack with a lot of drag, in effect stalling out and using the wings as a parachute.

    Conventional airplanes are not so versatile. Pilots scrupulously avoid high angles of attack and stalling. To take off or land, an airplane must reproduce the conditions of cruising flight—high speed, low drag, low angle of attack—while on the ground. That is why airplanes require runways. Such aerodynamic conservatism was understandable in the early days of flight, when unsteady airflow killed many pilots. (Lilienthal, for instance, died in 1896 when his glider stalled and crashed.) But does it still make sense today, with modern sensors and computers and mathematical techniques at our disposal—and no pilot on board?

    An elegant pitch

    For decades, a few dedicated amateurs and professional engineers doing research in their spare time have kept the dream of flapping-wing flight alive. One of the latter is Send, an engineer at the German Aerospace Center (DLR) until his retirement in 2009.

    For years, Send has believed that the secret of high-efficiency flapping flight lies in two papers written by Theodore Theodorsen, an American engineer, and Hans Georg Küssner, a German engineer, in 1935 and 1936. Both researchers viewed flapping as something to be avoided; they were trying to understand the causes of wing flutter in fixed-wing aircraft. They found mathematical solutions for the aerodynamic forces on a flat plate that is both plunging (going up and down) and pitching (making an S-shaped motion). In bird flight, these two motions are typically about 90° out of phase, with the wing's highest angle of attack occurring when its vertical displacement is zero, and zero angle of attack when the vertical displacement is highest.
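
    In symbols, that motion can be written as a minimal sketch (the sinusoidal form is the standard idealization; the amplitudes h_0 and α_0 and the phase lag φ are illustrative, not values from either paper), with plunge h and pitch α of a wing section:

        h(t) = h_0 \sin(\omega t), \qquad \alpha(t) = \alpha_0 \sin(\omega t + \varphi), \qquad \varphi \approx 90^\circ

    With φ = 90°, α(t) reduces to α_0 cos(ωt): the angle of attack peaks just as the wing sweeps through mid-stroke (h = 0) and vanishes at the top and bottom of the stroke, exactly the phase relation described above.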

    Send viewed the phase lag as a control variable, together with the ratio of plunging amplitude to pitching amplitude (see figure, below). In the two-dimensional control space, flutter occurs spontaneously in the blue region on the middle graph of the figure, where the pitching amplitude is low and the phase lag is about 90°. In this region, the pitching motion extracts energy from the airflow. Most ornithopters employ this “passive torsion,” allowing the wind to twist the wing to a positive or negative angle of attack. However, the aerodynamic efficiency (a measure of thrust as a percentage of power input) is quite low, only about 30%, as the bottom graph shows.

    Sweet spot.

    SmartBird's designers used active torsion to maximize aerodynamic efficiency. In passive torsion (center), a wing twists spontaneously and extracts energy from the airflow. In active torsion, the twisting must be powered by a motor. Maximum efficiency occurs in a tiny “sweet spot” (bottom). Figures show theoretical solutions for a flat rectangular wing; SmartBird achieved an efficiency of 80% to 90%.

    CREDIT: WOLFGANG SEND

    Send realized that a wing with active torsion—using engine power to twist the wing more than the airflow can do alone—could achieve an aerodynamic efficiency of more than 80%. This efficiency occurs in the small green “sweet spot” seen in the bottom graph, the region in which the SmartBird operates.

    Send's views were definitely not shared by most of the ornithopter community. Some ornithopters had used active torsion—and Lilienthal had observed the twisting and the phase lag in bird wings in 1889—but other researchers did not consider it essential. “My colleagues were interested, but there was a skepticism,” Send says.

    In 2007, Send approached Festo, a company based in Esslingen, Germany, with the idea of turning his ideas on active torsion into a model. Festo was the perfect fit: a company that specializes in biomimetic automation. Its projects include robotic grippers inspired by an elephant's trunk and a helium-filled dirigible inspired by stingrays. “I asked them who is the aerodynamicist, who is the theoretician?” Send says. “They just smiled and looked at me.” After retiring from DLR in October 2009, Send plunged into the SmartBird project at Festo.

    Although Festo did not have an aerodynamicist, it did have a gifted team. “It was a very rare occasion in which the right people came together, from my point of view,” Send says. “Rainer Mugrauer is the Mozart of model builders. Without him, that bird wouldn't ever have been constructed.” Agalya Jebens and Kristof Jebens provided the control systems, which are essential to maintain the delicately controlled choreography of plunging and pitching that keeps the SmartBird in the aerodynamic “sweet spot.”

    SmartBird made its debut at the Hanover Trade Fair in 2011, where it drew more than 20,000 visitors. Festo will not reveal the cost of the project, which Send estimates at a couple of hundred thousand euros. At present, Festo is not planning to sell SmartBirds and denies any interest in military applications. Theme parks might be a possibility, Tedrake says: “Disney would like Tinker Bell to fly in and land on a lantern in Disney World.”

    SmartBird's technological tour de force has impressed other ornithopterists. “I was totally gobsmacked by the SmartBird,” says James DeLaurier of the University of Toronto in Canada, who built the first engine-powered remote-controlled ornithopter to be recognized by the Fédération Aéronautique Internationale, in 1991. Räbiger is more cautious. “Before the SmartBird, I didn't believe that a servo-controlled wing twisting is a good solution,” he wrote in an e-mail. “[Now] I must say: maybe.”

    Finding a perch

    There is one thing SmartBird still doesn't do: land like a bird. It needs conventional landing gear or a human to catch it in midair. If Tinker Bell wants to land on a lantern, she will have to learn how to perch. That is the focus of one of the newest entrants into birdlike flight, Tedrake's Robot Locomotion Group at MIT.

    In 2010, Tedrake and his former Ph.D. student, Rick Cory, built a fixed-wing glider, launched from a crossbow, that successfully perched on a wire 19 times out of 20. They have also built an ornithopter, called the Phoenix, that can perch successfully but is not yet as well understood theoretically.

    Surprisingly, the most novel technology behind the perching glider was mathematics. When approaching a perch, the glider has a desired trajectory that will bring it into a safe landing. However, the complicated aerodynamics of stall mean that it cannot necessarily hit that trajectory, and the actual trajectory cannot be fully anticipated. A beautiful mathematical device called a Lyapunov function makes it possible to identify a “funnel” within which all trajectories are attracted toward the desired one.

    Engineers have long known of Lyapunov functions, but they were hard to compute. The difficulty lies in proving that a polynomial function of several variables is always positive, except at one point (the center of the “funnel”). That changed in 2000, when Pablo Parrilo of the California Institute of Technology (now at MIT) showed that such functions can be found in a quick and dirty way by limiting the search to functions that are sums of squares: expressions that can never be negative.
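
    A one-line example conveys the idea (the polynomial here is illustrative, not from Parrilo's work):

        V(x_1, x_2) = 2x_1^2 + 2x_1 x_2 + x_2^2 = x_1^2 + (x_1 + x_2)^2 \ge 0

    Exhibiting a polynomial as a sum of squares certifies its nonnegativity at a glance. Parrilo's insight was that such a decomposition, writing V(x) = z(x)^T Q z(x) for a vector of monomials z(x) and a positive semidefinite matrix Q, can be searched for efficiently by semidefinite programming.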

    With the sums of squares method, “we are now in the game of applying Lyapunov functions to very complicated tasks,” Tedrake says. “A planned trajectory can be much more aggressive, like a bird darting through the forest.”

    Unlike Festo, Tedrake's group has funding from the military; he is the principal investigator for a $7.5 million multi-university research initiative from the Office of Naval Research. The military interest is understandable. Maneuverable robotic birds would be useful for flying through cluttered urban settings; robots that resemble actual birds would be easier to disguise; and perching robots could conduct surveillance for long periods of time without consuming energy. Such applications, however, are far off; the research is still in a conceptual phase.

    “Our goal is to do maneuvering flight,” Tedrake says. “I'm not convinced that flapping wings are strictly necessary for that, but it's plausible. It's just a primal belief on my part that there will be a benefit from flapping. There are some remarkable success stories in nature where they're doing things that airplanes cannot do.”

    * Dana Mackenzie is a writer in Santa Cruz, California.

  5. Avionics

    It's a Bird, It's a Plane, It's a … Spy?

    1. Dana Mackenzie

    A new 19-gram crewless aircraft called the Nano Hummingbird with a wingspan of 17 centimeters can hover in place and fly in any direction (including backward) as fast as 18 kilometers per hour.

    CREDIT: AEROVIRONMENT INC.—NANO AIR VEHICLE

    If a hummingbird follows you into a building, one of two things is going on: Either your perfume is too strong, or the world's smallest spy plane is on your tail.

    In 2011, AeroVironment, a company founded by Paul MacCready, the inventor of the first human-powered aircraft to cross the English Channel, unveiled a new crewless aircraft called the Nano Hummingbird. With a wingspan of 17 centimeters and a weight of 19 grams, the robot is hefty for a hummingbird. But it can hover in place and fly in any direction (including backward) as fast as 18 kilometers per hour. It can fly through doorways and can be steered by a remote pilot using only video from an onboard camera. All of these abilities met or exceeded the targets for a second-generation “nano air vehicle” funded by the U.S. Defense Advanced Research Projects Agency.

    Very small ornithopters like the Nano Hummingbird face different challenges from their larger kin. As a flyer (animal or robot) gets smaller, flying becomes more and more like swimming. Less energy goes to lift and more to thrust. Instead of sculpted airfoils, the wings can and should be simple, rapidly beating membranes. For robots, miniaturization of components and power sources may impose the biggest constraint. A real hummingbird can fly across the Gulf of Mexico without stopping; the Nano Hummingbird can go for only 11 minutes.
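
    The standard way to quantify that shift is the Reynolds number, the ratio of inertial to viscous forces in a flow (the numbers below are rough orders of magnitude, offered for scale):

        \mathrm{Re} = \frac{\rho v L}{\mu}

    Because Re shrinks with a flyer's size L and speed v, a hummingbird-scale wing operates near Re ≈ 10^4, orders of magnitude below an airliner's Re ≈ 10^8 and much closer to the regime of swimming animals, where viscous drag rather than sculpted airfoils dominates the design problem.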

    The main role envisioned for nano air vehicles is military surveillance and reconnaissance. But they could also be used for civilian applications such as search and rescue or environmental monitoring (for example, inside crippled nuclear reactors).

    Miniature devices have taken off in the past decade. In 2002, the dragonfly-like Mentor, developed at the University of Toronto and SRI International, demonstrated hovering flight. In 2006, the (also dragonfly-like) DelFly, developed at Delft University of Technology in the Netherlands, added a camera and forward flight; later versions have shrunk to 3 grams and a 10-centimeter wingspan.

    Nano flyers aren't yet fit for duty, however. “The Hummingbird is way cool. I can't say enough good things about it,” says Ephrahim Garcia, an engineer at Cornell University. “But it has a limited endurance.” Eleven minutes is not much time to scan a building for insurgents or earthquake survivors. Even so, Garcia adds, “Everything doesn't have to be practical. We can learn a lot about flow-structure interaction from these devices.”

  6. Computational Science

    Materials Scientists Look to a Data-Intensive Future

    1. Robert F. Service

    Supercomputing power now makes it possible to compute the properties of thousands of crystalline materials in a flash and is expected to guide experimentalists toward where to search for the next best things.

    When the Human Genome Project took off in 1990, the goal of sequencing all 3 billion letters of DNA in a single genome in 15 years seemed a daunting task. Ultimately, advances in sequencing technology enabled researchers to finish the job early. And today, sequencing a single genome seems almost pedestrian. Now, materials researchers are hoping that a similar technological ramp-up will help them with a Herculean task of their own: using computers to calculate the properties of a near-infinite variety of solids to identify potential materials breakthroughs for batteries, solar cells, and many other applications.

    Today's machines aren't yet able to simulate all types of materials. But a broad array of researchers think steady progress in technology has now made supercomputers powerful and available enough to make the task worth starting. “The scaling of computing is really making this possible,” says Gerbrand Ceder, a materials scientist at the Massachusetts Institute of Technology in Cambridge. Richard Hennig, a computational materials scientist at Cornell University, agrees. “The field of computational materials science is ready to take off,” Hennig says.

    To help launch it, the White House announced a new federal effort last June to promote the use of high-performance computing to cut in half the time it takes to develop a new material. Known as the Materials Genome Initiative (MGI), the effort promised $100 million just this year to drastically accelerate materials discovery—particularly in alternative energy, a field heavily reliant on advanced materials. Since then, the U.S. National Science Foundation and the Department of Energy have offered new grant programs in the area.

    That initiative has been a long time coming. Ever since the advent of quantum mechanics in the early 1900s, researchers have known how to compute the properties of materials, at least in principle. But most materials are so complex that only those with a small number of electrons can be fully analyzed at the quantum level. Because complex materials have on the order of 10²³ electrons, researchers must rely heavily on computational methods to simplify the problem. These include a wide array of simulation techniques, such as density functional theory calculations, finite-element methods, and Monte Carlo simulations, all of which have been around for years. “The codes really haven't changed that much” over time, says Sadasivan Shankar, a computational materials scientist at Intel Corp. in Santa Clara, California. “What has changed is that the computing power has caught up.”

    Power structure.

    Lithium atoms (green) nestle into a simulated crystalline framework of a vanadium-containing cathode material for advanced batteries.

    CREDIT: A. JAIN ET AL., JOURNAL OF THE ELECTROCHEMICAL SOCIETY 159 (5 MARCH 2012) © THE ELECTROCHEMICAL SOCIETY

    That progress opens the door to computing the properties of a wide range of materials that once seemed unapproachably complex, Ceder says. Among the more tractable problems should be advances in catalysts, battery materials, and thermoelectrics, which convert heat to electricity. And it should be relatively straightforward to make a big impact on materials research quickly. There are between 50,000 and 100,000 known inorganic compounds, depending on whose figures you believe, Ceder notes. Crunching the numbers for all the computable properties of a single known material—including crystal structure, stability, and ionic mobility—takes the equivalent of 1 day for a standard computer chip, known as a CPU. To make that calculation for all known inorganic compounds would take between 2 million and 3 million CPU hours, a job one of the most advanced supercomputers could carry out in just a day and a half. Examining a good swath of the possible unknown materials out there would still take only half a billion CPU hours, Ceder predicts. “That's just a drop in the bucket” of the computing power available, Ceder says. “We don't know most things about most materials,” he says. “Materials scientists are hungry for this data.”
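
    The arithmetic behind those figures is easy to check. A minimal sketch in Python, assuming the quoted 1 CPU-day per material and a hypothetical 65,000-core supercomputer:

        # Screening cost for all known inorganic compounds (upper estimate).
        CPU_HOURS_PER_MATERIAL = 24          # ~1 day on one CPU core, as quoted
        N_KNOWN_COMPOUNDS = 100_000          # upper end of the quoted range

        total_cpu_hours = N_KNOWN_COMPOUNDS * CPU_HOURS_PER_MATERIAL
        print(f"{total_cpu_hours / 1e6:.1f} million CPU hours")   # 2.4 million

        CORES = 65_000                       # hypothetical machine size
        days = total_cpu_hours / (CORES * 24)
        print(f"{days:.1f} days on a {CORES:,}-core machine")     # ~1.5 days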

    Computational methods won't supplant experimental synthesis of materials anytime soon, Ceder and others say. Rather, they will help focus the work. Because it's not possible to synthesize and test all possible combinations of elements, computational tools are helping experimentalists decide where best to try cooking up new winners.

    That's the strategy Ceder has taken in his own lab. In 2005, Ceder's team launched an effort to compute the performance of would-be battery materials, which at the time they called the Materials Genome Project. (When the Obama Administration adopted the moniker for its own initiative, Ceder's team renamed its endeavor the Materials Project.) Since 2005, Ceder and his colleagues have calculated properties of some 80,000 battery materials and have gone on to conduct experimental screens of about 25,000. In the 5 March Journal of the Electrochemical Society, Ceder and his colleagues reported that a computational analysis suggested that a new vanadium-containing cathode material for lithium ion batteries had the potential to beat the energy-storage capacity of conventional lithium iron phosphate cathodes by some 10%. The material had never been synthesized before. But in February at the meeting of the American Physical Society (APS) in Boston, Ceder reported that since submitting its theoretical paper, his group had synthesized the compound and found that it behaved as predicted. In other efforts, Ceder and colleagues at Lawrence Berkeley National Laboratory in California are pushing ahead with similar studies to find improved thermoelectrics and catalysts designed to absorb sunlight and use the energy to split water into oxygen and hydrogen, a potential fuel source.

    Ceder's group isn't alone. In February, at the APS meeting, Shankar reported that computational efforts are becoming increasingly important in making advances in computer chips. Over the past 50 years, chip designers have doubled the number of transistors on chips roughly every 18 months, largely by improving chip-patterning techniques to make devices with ever-smaller features. Today, Intel sells chips with 22-nanometer features and will begin manufacturing chips with 14-nanometer features in 2013. With each new generation of chips, Intel and other companies are being forced to incorporate novel materials that can perform as better conductors, insulators, and semiconductors even with less material present. That trend has put pressure on company researchers to test ever more alternatives. And one path forward has been to improve computational efforts to narrow down the range of options. In the latest generation of chip design, for example, Intel researchers used advanced computer models to simulate and predict the behavior of different materials that wound up being incorporated into the chips, Shankar says (see graph below).

    Despite such progress, Ceder and others say, some materials challenges remain out of computational reach—at least so far. Ceder cites the effort to develop new lightweight, high-strength metal alloys for construction materials. The success of such materials depends in part on the crystalline structure of their components and in part on their resistance to fracturing and ductile stretching—properties governed by how they behave on length scales far larger than the repeating unit of their crystalline lattice. “We don't even know the underlying science of how these materials work,” Ceder says. In battery materials, by contrast, about 75% of the performance of the material comes from properties that can be computed. “So you've got to pick the right problems,” Ceder says.

    That said, Krishna Rajan, a computational materials scientist at Iowa State University, Ames, says he and colleagues are making headway on challenging materials problems with an approach that doesn't require them to compute properties from first principles. The researchers have created a machine-learning program for designing novel materials as different as piezoelectrics and drug-delivery compounds. Piezoelectrics are materials that change shape when zapped with an electric field, or generate electricity when squeezed. They can also be complex, so Rajan decided that improving them would pose a good challenge for his machine-learning approach.

    Rising tide.

    As feature size in computer chips has shrunk in each generation, chips have incorporated increasing numbers of materials that had initially been simulated.

    SOURCE CREDIT: S. SHANKAR/INTEL CORP.

    Let's compare.

    Computed properties of hundreds of battery materials reveal that all chemical families of battery materials (different colors) tend to be less safe when operating at higher voltages. But within each family some recipes are safer than others.

    SOURCE CREDIT: G. CEDER/MIT

    Rajan's team started by creating a database of 30 observations on different materials variables, such as bond distances between pairs of atoms in a would-be crystal and the affinity of different elements for electrons. They proposed a list of relationships between different variables. They then used a program called a genetic algorithm to search through the relationships, see how well each predicted the properties of known piezoelectrics, and refine the search to zero in on the most important ones. They used the results to create a set of design rules that predict a piezoelectric material's properties, then turned their computational effort loose to scan the universe of unknown compounds for better piezoelectrics.
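
    A toy version of that search loop, in Python, shows the shape of the method; the descriptors, target weights, and fitness function are all stand-ins, not anything from Rajan's actual code:

        import random

        N_DESCRIPTORS = 4        # e.g., bond distance, electron affinity, ...
        POP_SIZE, GENERATIONS = 30, 50

        def fitness(weights):
            # Stand-in objective: a real run would score each candidate rule
            # against a database of measured piezoelectric properties.
            target = [0.6, -0.2, 0.8, 0.1]
            return -sum((w - t) ** 2 for w, t in zip(weights, target))

        def crossover(a, b):
            return [random.choice(pair) for pair in zip(a, b)]

        def mutate(weights, sigma=0.1):
            return [w + random.gauss(0, sigma) for w in weights]

        # Each candidate "rule" is a weighting over the descriptors.
        population = [[random.uniform(-1, 1) for _ in range(N_DESCRIPTORS)]
                      for _ in range(POP_SIZE)]

        for _ in range(GENERATIONS):
            population.sort(key=fitness, reverse=True)
            parents = population[: POP_SIZE // 2]      # keep the fittest rules
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(POP_SIZE - len(parents))]
            population = parents + children

        best = max(population, key=fitness)
        print("best descriptor weights:", [round(w, 2) for w in best])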

    Using this approach, Rajan and his colleagues reported in the 2 March 2011 issue of the Proceedings of the Royal Society A that they had discovered a pair of novel piezoelectrics that outperform others at high temperatures. And in the 16 December 2011 issue of Nature, Rajan's team reported using much the same strategy to design novel polymers capable of encapsulating a vaccine against Yersinia pestis, the bacterium that causes bubonic plague, and helping to activate the vaccine inside the body. “There is no theory for what makes a good vaccine-delivery material,” Rajan says. But with the informatics approach, “we were able to tease that out.” Rajan says he's now adapting the same tools to predict the toxicity of nanomaterials.

    Rajan readily acknowledges that computational materials design still has its limits. Understanding complex interfaces between multiple materials, for example, remains a major challenge. “If we try to compute everything, it's still too complex,” Rajan says. However, he adds, “there are enough discoveries to be made that it will make a huge impact.”

  7. Seismology

    Learning How to NOT Make Your Own Earthquakes

    1. Richard A. Kerr

    As fluid injections into Earth's crust trigger quakes across the United States, researchers are scrambling to learn how to avoid making more.

    Ohio rumblings.

    Wastewater injected at this site in Youngstown triggered jolting earthquakes that prompted injection-well shutdowns and strong new regulations.

    CREDIT: AMY SANCETTA/AP PHOTO

    First off, fracking for shale gas is not touching off the earthquakes that have been shaking previously calm regions from New Mexico to Texas, Ohio, and Arkansas. But all manner of other energy-related fluid injection—including deep disposal of fracking's wastewater, extraction of methane from coal beds, and creation of geothermal energy reservoirs—is in fact setting off disturbingly strong earthquakes. These quakes of magnitude 4 and 5 are rattling the local populace, shutting down clean energy projects, and prompting a flurry of new regulations.

    Researchers have known for decades that deep, high-pressure fluid injection can trigger sizable earthquakes. But after a decades-long lull in triggered quake studies, researchers are playing catch-up with the latest round of temblors. When triggered quakes surprise drillers, “we're often in the position of ambulance chasers without the necessary science done ahead of time,” says seismologist William Ellsworth of the U.S. Geological Survey (USGS) in Menlo Park, California.

    As researchers link cause and effect in recent cases of triggered seismicity, they are beginning to see a way ahead: learn as you go. Thorough preinjection studies followed by close monitoring of cautiously increasing injection can lower, though never eliminate, the risk of triggering intolerable earthquakes.

    An injection too deep

    “I'm told it feels like a car running into the house,” says Stephen Horton, speaking of the magnitude-4 triggered quakes he saw coming a couple of years ago in north-central Arkansas. In the current March/April issue of Seismological Research Letters, the University of Memphis seismologist recounts his learn-as-you-go experience with injection-triggered quakes strong enough to seriously shake up the locals.

    Quake masters.

    USGS geophysicists Barry Raleigh (left) and Jack Healy are poised to open a valve and pressurize deep rock to turn on earthquakes. They could also turn them off in this 1970s study.

    CREDIT: C. BARRY RALEIGH

    Fracking for natural gas, formally known as hydraulic fracturing, had come to Arkansas around 2009. Not that a seismologist in Memphis would have noticed. Injecting water into gas-bearing shale at high pressures does break the rock to free the gas—that's the point, after all. But the resulting tiny quakes rarely get above magnitude 0 (the logarithmic scale includes negative numbers), never mind the magnitude-3 level that people might feel.
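
    Magnitude is logarithmic in ground-motion amplitude, so zero and negative values are perfectly meaningful. In energy terms, each unit of magnitude corresponds to roughly a factor of 32:

        \frac{E_2}{E_1} = 10^{\,1.5\,(M_2 - M_1)} \approx 32 \quad \text{for } M_2 - M_1 = 1

    So a magnitude-0 fracking pop releases on the order of 10^{4.5} (about 30,000) times less energy than a barely felt magnitude-3 quake.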

    But shale gas drillers need to dispose of the millions of liters of water laden with natural brines and added chemicals that flow back up after a shale gas well has been fracked (Science, 25 June 2010, p. 1624). Injecting fracking wastewater into deep rock is a common solution, so starting in April 2009, 1- to 3-kilometer-deep disposal wells were sunk in the vicinity of Guy (population 706) and Greenbrier (population 4706), Arkansas.

    That's when Horton and Scott Ausbrooks of the Arkansas Geological Survey took note of a curious cluster of earthquakes near Greenbrier. The Guy-Greenbrier area had had only one quake of magnitude 2.5 or greater in 2007 and two in 2008. But there were 10 in 2009, the first year of deep disposal, and 54 in 2010. The suspicious timing of the quake cluster—hundreds of small quakes, including one of magnitude 3.0—and its location near the first disposal well got their attention.

    Once alerted to the suspicious quakes, Horton and Ausbrooks cast a network of seismometers around two new wells that would start injecting in July and August 2010. On 1 October of that year, Horton warned the director of the Arkansas Oil and Gas Commission, the state agency that regulates deep injection, to “watch out” for more earthquakes. Ten days later, a magnitude-4.0 quake struck about a kilometer northeast of the deeper of the two new wells. On 20 November, a magnitude-3.9 quake struck 2 kilometers farther to the northeast, toward Guy. Then, in February 2011, magnitude-4.1 and -4.7 quakes struck to the southwest of the deeper well, toward Greenbrier.

    By spring, nearly 1000 recorded quakes had struck the area since the wells had started up. “People were feeling a lot of earthquakes,” Horton says. By 4 March, the public, the Oil and Gas Commission, and the governor agreed that it was all a bad idea, and the wells were shut down. The quakes tapered away.

    “I have no problem convincing scientific audiences these earthquakes were induced” by the deep wastewater disposal, Horton says. Their timing and location were certainly strongly suggestive. The quakes began only after injection began, surged when the rate of injection surged, were limited to the vicinity of the wells, and trailed off after injection was stopped.

    But the data from the seismometer network also painted a detailed picture of exactly how the injected wastewater triggered the quakes. It was injected into an aquifer 3 kilometers down, where it increased the pressure of groundwater in the rock's pores and fractures. From there the increased pressure due to injection spread through a previously unknown buried fault into the underlying rock, triggering quakes on the fault as it went.

    Bad leak.

    Wastewater injected into the Ozark aquifer of Arkansas leaked into a deeper unknown fault (roughly outlined by the rectangle in this side view). The heightened water pressure in the fault relieved just enough of the squeeze on the fault to allow earthquakes (gray and orange circles).

    CREDIT: ADAPTED FROM S. HORTON, SEISMOLOGICAL RESEARCH LETTERS 83, 2 (MARCH/APRIL 2012) © SEISMOLOGICAL SOCIETY OF AMERICA

    Those elevated pressures could spread far and wide. Tens of thousands of cubic meters of wastewater were injected each month, month after month; fracking usually involves far smaller volumes pressurized for hours or days. Only in the deeper rock could the added pressure of the injection trigger magnitude 4s. Burdened by far more overlying rock, the deep rock is already carrying stress that will make for larger earthquakes.

    The deep rock of the Guy-Greenbrier fault is also innately stronger than the overlying aquifer's sedimentary rock. The stronger rock could therefore store the stress that plate-tectonic forces load onto the North American continent. In this setting, the widespread fluid pressures of injection could pry apart the two sides of the fault just enough to let them suddenly slip by each other and release long-stored tectonic stress as a sizable earthquake.
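
    That “prying apart” is the standard effective-stress argument: a fault slips when the shear stress on it overcomes friction, and raising pore pressure weakens friction's grip without adding any shear load:

        \tau \ge \mu\,(\sigma_n - p)

    Here τ is the shear stress stored on the fault, σ_n the normal stress clamping it shut, μ the friction coefficient, and p the pore fluid pressure. Injected water raises p, lowers the effective normal stress σ_n − p, and lets long-accumulated tectonic stress release as an earthquake.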

    A trigger here, a trigger there

    North-central Arkansas is not the only place where fiddling with Mother Nature has lately set off earthquakes. On 9 March, the Ohio Department of Natural Resources announced that it had evidence “strongly indicating” that wastewater injection—at least part of it from fracking—had triggered 12 magnitude-2.0-to-4.0 quakes in Youngstown (population 66,982) since March 2011. The indications were strong enough to prompt the state to order the shutdown of four injection wells in the area and issue strong new regulations. And injection of fracking wastewater under the Dallas/Fort Worth International Airport in Texas triggered a sequence of more than 180 earthquakes ranging up to magnitude 3.3 in 2008–2009. The quakes tapered off once the injection was stopped.

    Other pursuits of cleaner energy can apparently also trigger earthquakes. Ellsworth and his Menlo Park USGS colleagues will report at next month's annual meeting of the Seismological Society of America that a “remarkable increase” in magnitude-3 and larger earthquakes since 2000 in the central United States is “almost certainly manmade.” In addition to the Arkansas cluster, seismic activity surged along the Colorado-New Mexico border beginning in 2001. That's where drillers were injecting water to extract methane from still-buried coal beds. In central and southern Oklahoma, seismicity abruptly increased in 2009 by a factor of 20 over the rate of the previous half-century, exclusive of November's magnitude 5.6 and its aftershocks. Exactly what is causing the Oklahoma surge is still unclear, but “we're suspicious industrial activities are at the heart of what's going on” all across the central United States, Ellsworth says.

    Drillers are running short of ideal waste injection sites, says geophysicist Mark Zoback of Stanford University in Palo Alto, California. There are already 144,000 wastewater injection wells in the country, he notes, but almost none trigger quakes. That's probably because they were drilled into weak, porous rock well suited to accommodating injected fluids. But some areas new to drilling are far from suitable rock formations. Deep, brittle, low-permeability rock “doesn't have a lot of capacity for taking any of these fluids,” says geophysicist Barry Raleigh, who ran the Rangely, Colorado, earthquake-control experiment in the 1970s (see photo, left). “As a storage medium, they're pretty crappy.”

    Red light, green light

    Wastewater injection “is not a mysterious process,” Zoback says. “These are manageable problems. We simply have to be more careful.” In his article in the April issue of Earth, Zoback lays out a learn-as-you-go approach to locating injection operations similar to one that geophysical modeler Jonny Rutqvist of Lawrence Berkeley National Laboratory suggested in a January Geotechnical and Geological Engineering article. Learn-as-you-go would likely also apply to fluid injections to create geothermal energy reservoirs (injections beneath Basel, Switzerland, touched off a project-ending magnitude-3.4 quake in 2006) or to keep the greenhouse gas carbon dioxide out of the atmosphere.

    Zoback's first recommendation is to look before you leap. He believes that the seismic imaging techniques used in oil and gas exploration should easily find buried faults capable of producing damaging quakes—those above, say, magnitude 6.0.

    When injection begins—and it should begin cautiously, Zoback says—seismometers should be monitoring the area. At a minimum, their data could paint a subsurface picture in real time, as in Arkansas, that could reveal smaller faults capable of magnitude 4s and 5s. But seismic data and well observations could also be fed into models of crustal rock behavior that could, with considerable uncertainties, project the potential for sizable earthquakes. If actual or projected quakes emerged, injection could be throttled back or even stopped, Zoback says. (Stopping injection has stopped significant earthquakes within days to a year.)
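
    In code, such a rule could be as simple as a threshold ladder. A minimal sketch in Python; the magnitude thresholds and responses are illustrative inventions, not values from Zoback's or Rutqvist's articles:

        # Hypothetical "learn as you go" decision rule for an injection operator.
        def injection_decision(max_magnitude_observed: float) -> str:
            if max_magnitude_observed < 2.0:
                return "green: continue injection, keep monitoring"
            if max_magnitude_observed < 3.0:
                return "amber: throttle back injection, densify the seismic network"
            return "red: stop injection and reassess the site"

        for magnitude in (1.2, 2.4, 3.6):
            print(f"M{magnitude}: {injection_decision(magnitude)}")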

    The new regulations in Ohio and Arkansas at least move in the direction of such a learn-as-you-go approach. Studies by the U.S. Environmental Protection Agency and by the National Research Council on the injection triggering of quakes are due out in the coming months.
