News Focus: Seismology

Seismic Crystal Ball Proving Mostly Cloudy Around the World


Science  20 May 2011:
Vol. 332, Issue 6032, pp. 912-913
DOI: 10.1126/science.332.6032.912

Having failed at quake prediction, seismologists turned to fuzzier forecasts, but Japan's megaquake is only the latest reminder of the method's shortcomings.

Out of the blue.

Students at California State University, Northridge, ponder the destruction wrought by a quake on an unrecognized fault.

CREDIT: MARK J. TERRILL/AP IMAGES

When a devastating megaquake rocked the region north of Tokyo in March, nobody saw such a huge quake coming. “Japanese scientists are among the world's best, and they have the best monitoring networks,” notes geophysicist Ross Stein of the U.S. Geological Survey (USGS) in Menlo Park, California. “It's hard to imagine others are going to do forecasting better. No one group is doing it in a way we'd all be satisfied with.”

In China, New Zealand, and California as well, recent earthquakes have underscored scientists' problems forecasting the future. A surprisingly big quake arrives where smaller ones were expected, as in Japan; an unseen fault breaks far from obviously dangerous faults, as in New Zealand. And, most disconcerting, after more than 2 decades of official forecasting, geoscientists still don't know how much confidence to place in their own warnings. “We can't really tell good forecasts from the bad ones” until the surprises arrive, Stein says. But improvements are in the works.

Simple beginnings

The current focus of earthquake prognostication research represents a retreat from its ambitious aims of a few decades ago. In the 1960s and '70s, seismologists worked on prediction: specifying the precise time, place, and magnitude of a coming quake. To do that, scientists needed to identify reliable signals that a fault was about to fail: a distinctive flurry of small quakes, a whiff of radon gas oozing from the ground, some oddly perturbed wildlife. Unfortunately, no one has yet found a bona fide earthquake precursor. By the time the 2004 magnitude-6.0 Parkfield earthquake—the most closely monitored quake of all time—struck the central San Andreas fault without so much as a hint of a precursor (Science, 8 October 2004, p. 206), most researchers had abandoned attempts at precise prediction.

Off the mark.

A magnitude-9 quake (star and blue band) struck far north of the zone considered to have the greatest seismic hazard (red).

CREDIT: ADAPTED FROM THE EARTHQUAKE RESEARCH COMMITTEE, NATIONAL SEISMIC HAZARD MAPS FOR JAPAN (2010), HEADQUARTERS FOR EARTHQUAKE RESEARCH PROMOTION

Parkfield did mark an early success of a new strategy: quake forecasting. Rather than waiting for a warning sign, forecasters look to the past behavior of a fault to gauge its future behavior. They assume that strain on a fault builds steadily and that the segment that broke in the past will break in much the same way again once it returns to the same breaking point. Instead of naming the year when the next quake will strike a particular fault segment, forecasters state the probability that it will strike within a given span of years.
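To make the arithmetic concrete, here is a minimal sketch of how such a recurrence-based probability can be computed, assuming a lognormal distribution of intervals between quakes; the choice of distribution, the aperiodicity value, and the numbers plugged in are illustrative only, not the working group's actual method or parameters.

```python
# Illustrative sketch only -- not the 1988 working group's actual model
# or parameters. A simple lognormal renewal model: given a mean recurrence
# interval and the time elapsed since the last rupture, compute the
# conditional probability that the segment fails within the next 30 years.
from math import erf, log, sqrt

def lognormal_cdf(t, mean, cov):
    """CDF of a lognormal recurrence-time distribution with the given
    mean and coefficient of variation (aperiodicity)."""
    sigma2 = log(1.0 + cov**2)          # variance of log recurrence time
    mu = log(mean) - 0.5 * sigma2       # mean of log recurrence time
    return 0.5 * (1.0 + erf((log(t) - mu) / sqrt(2.0 * sigma2)))

def conditional_prob(elapsed, window, mean, cov=0.5):
    """P(rupture within `window` years | quiet for `elapsed` years)."""
    num = lognormal_cdf(elapsed + window, mean, cov) - lognormal_cdf(elapsed, mean, cov)
    den = 1.0 - lognormal_cdf(elapsed, mean, cov)
    return num / den

# Hypothetical numbers in the spirit of the Parkfield case: ~22-year mean
# recurrence, 22 years elapsed since the 1966 quake, a 30-year window.
print(f"{conditional_prob(elapsed=22, window=30, mean=22):.2f}")
```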

USGS issued its first official earthquake forecast for the San Andreas in 1988 (Science, 22 July 1988, p. 413). Parkfield, with its long record of similar quakes at roughly 22-year intervals, rated a 99% probability of repeating within 30 years; the 2004 quake made good on that call. And the southern Santa Cruz Mountains segment, which had last slipped in the 1906 San Francisco quake, was given a 30% chance of failing again within 30 years, which it did in 1989.

Since then, the 1988 San Andreas forecast has had no more hits, but it has missed plenty of serious California seismic activity. The reason: lack of data. Quake forecasting “is highly conditioned on the information available,” says William Ellsworth, a seismologist at USGS's office in Menlo Park. Ellsworth served on USGS's Working Group on California Earthquake Probabilities (WGCEP) that issued the 1988 forecast. Given the sorely limited knowledge of California fault history, WGCEP limited itself to the best-known fault segments, mainly on the San Andreas fault. It ignored not only a plethora of unrelated faults but also most branches of the San Andreas that slice through populated regions such as the San Francisco Bay area.

The limitations of that narrow focus became clear while the group was still reaching its conclusions. At its 23 November 1987 meeting, members decided that the particularly skimpy information on one fault, the Superstition Hills fault in far Southern California, prevented them from making a forecast. Within hours, a magnitude-6.5 quake struck that fault, followed 11 hours later by a magnitude-6.7 quake on a previously unknown, intersecting fault. There would be other surprises on little-known faults: the 1992 magnitude-7.3 Landers quake off the southern San Andreas (1 killed, $92 million in damage); the 1994 magnitude-6.7 Northridge earthquake on a previously unknown, buried fault (60 killed, $20 billion in damage); and the 1999 Hector Mine quake, magnitude 7.1, in the remoteness of the Mojave Desert.

Modern prognostication

But earthquake forecasting “has hardly stood still over 20-plus years,” Ellsworth notes. Scientists now have much more information about faults both major and minor; just as significant, they have developed a much keener sense of how to proceed when data are lacking. “To do seismic hazard right, we have to acknowledge we don't know how Earth works,” says seismologist Edward Field of USGS in Golden, Colorado, who is leading the next official forecast for California, due in 2012.

In the past, Field notes, official forecasters would take their best guess at how a fault works—for example, what length of fault will break and how fast the fault slips over geologic time—and feed it into a single forecast model, which would spit out a probability that a particular quake would occur in the next 30 years. WGCEP followed that approach for its 1988 and 1995 forecasts.

In the late 1990s, however, it became clear that a single model wasn't enough. “There's no agreement on how to build one model,” Field says, “so we have to build more than one to allow for the range of possible models.” Forecasters merged 480 different models for the 2007 California forecast to produce a single forecast with more clearly defined uncertainties. They aren't done yet. Current models cannot produce some sorts of quakes that have actually occurred, Field says, including the sorts that struck New Zealand and Japan.
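The combination amounts to a weighted average over a "logic tree" of alternative models, with the spread across branches capturing the uncertainty. The sketch below illustrates the idea with invented probabilities and weights; it is not the 2007 forecast's actual logic tree or branch weights.

```python
# Sketch of merging alternative forecast models into one forecast with an
# uncertainty range. Branch probabilities and weights here are invented;
# the 2007 forecast combined hundreds of such branches, each weighted by
# the forecasters' judgment of its credibility.

# Each entry: (30-year rupture probability for one fault segment, branch weight)
branches = [
    (0.18, 0.40),  # e.g., one slip-rate and segmentation assumption
    (0.25, 0.35),  # an alternative recurrence model
    (0.40, 0.25),  # a model that allows multi-segment ruptures
]

total_weight = sum(w for _, w in branches)
combined = sum(p * w for p, w in branches) / total_weight

# The spread across branches is one way to express the model uncertainty.
low = min(p for p, _ in branches)
high = max(p for p, _ in branches)
print(f"combined 30-year probability: {combined:.2f} (branch range {low:.2f}-{high:.2f})")
```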

Those two quakes are “perfect examples of what we need to fix,” Field says. In New Zealand, last September's magnitude-7.1 Darfield quake struck a previously unknown fault that probably hadn't ruptured for the past 15,000 years. Scientists were aware that a quake of that size was possible in the region: New Zealand's official forecast estimated that “background” seismicity would produce one somewhere roughly once in 10,000 years. The New Zealand forecast “is doing its job,” says seismologist Mark Stirling of New Zealand's GNS Science in Lower Hutt—but in this case, the information was too vague to be of much use. (Once the quake had occurred, statistical forecasting based on the size of the main shock did anticipate the possibility of its largest aftershock: a magnitude-6.3 quake in February that heavily damaged older structures in Christchurch.)

Mind the red.

The southern San Andreas bears the highest probability (red) of a big California quake.

CREDIT: ADAPTED FROM THE SOUTHERN CALIFORNIA EARTHQUAKE CENTER/NSF/USGS

In the case of the Tohoku earthquake, the culprit was an “unknown unknown.” Japanese seismologists preparing the official forecasts had assumed that the offshore fault running the length of the main island north of Tokyo had largely revealed its true nature. “I thought we really understood the Tohoku area,” says seismologist James Mori of Kyoto University in Japan. “Five hundred years seemed to be a long enough quake history given the [large] number of earthquakes.”

Drawing on a centuries-long history of quakes of magnitude 7 to 8 rupturing various parts of the fault, members of the official Earthquake Research Committee had divided the offshore fault into six segments, each roughly 150 kilometers long, that they expected to rupture again. They assigned each segment a probability of rupturing again in the next 30 years; the probabilities ranged from a few percent to 99% in recent official forecasts.

Official forecasters had not included the possibility of more than two segments rupturing at once. They knew that two adjacent segments seemed to have broken together in 1793 to produce a magnitude-8.0 quake. And at their meeting in February, they also considered geologic evidence that a big tsunami in 869 C.E. had swept several kilometers inland across the same coastal plain inundated in March. But they concluded they had too little data to consider what sort of earthquake would result if more than two segments failed at once, says seismologist Kenji Satake of the University of Tokyo, who served on the committee. “I don't think anyone anticipated a 9.0,” Satake says.

In the event, five fault segments failed in one magnitude-9.0 quake (see p. 911). (The disastrous 2008 Wenchuan quake in China also grew to an unimagined size by breaking multiple fault segments.) In Japan, “what happened was a very improbable, 1000-year event,” Ellsworth says. The most dangerous earthquakes tend to be rare, and their very rarity, unfortunately, makes them hard to study.

In spite of the obstacles, improvements are on the way. Previous official forecasts for California considered each fault segment in isolation and ignored aftershocks such as those that caused so much damage in New Zealand. By contrast, Field says, the next forecast, due in 2012, will allow ruptures to break through onto adjacent segments, such as a rupture on a side fault continuing onto the San Andreas itself, and it will accommodate aftershocks.

Seismologists are also beginning to test their models against new earthquakes as they occur. Because larger quakes are rarer than smaller ones, their forecasts take longer to test. (Testing on California quakes for the past 5 years, for example, gets you only up to magnitude 5.) So it pays to cast as wide a net as possible. “The best strategy is testing around the globe,” Stein says. He is chair of the scientific board of the GEM Foundation, a nonprofit based in Pavia, Italy, which is developing a global earthquake model to make worldwide forecasts. By including the whole world, Stein says, the model should enable scientists to test forecasts of big, damaging earthquakes in a practical amount of time.
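One common way to score such prospective tests is to compare the number of quakes each model expected in each region and magnitude bin against the quakes that actually occurred, for instance with a Poisson likelihood. The toy sketch below, with made-up bins and counts, shows the idea; it is not GEM's actual test procedure.

```python
# Toy prospective test: compare the quakes each model expected in a set of
# region/magnitude bins against the observed counts using a Poisson
# log-likelihood. All bins and numbers are invented for illustration.
from math import lgamma, log

def poisson_log_likelihood(expected, observed):
    """Sum over bins of log P(observed | Poisson rate = expected)."""
    return sum(n * log(lam) - lam - lgamma(n + 1)
               for lam, n in zip(expected, observed))

# Expected counts of magnitude >= 5 quakes per bin over the test period.
model_a = [0.5, 1.2, 0.1, 2.0]
model_b = [0.8, 0.9, 0.4, 1.5]
observed = [1, 1, 0, 2]       # what actually happened

# The higher the log-likelihood, the better the model fit what occurred.
print("model A:", round(poisson_log_likelihood(model_a, observed), 2))
print("model B:", round(poisson_log_likelihood(model_b, observed), 2))
```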
