Technical Comments

Cannot Earthquakes Be Predicted?

Science  17 Oct 1997:
Vol. 278, Issue 5337, pp. 487-490
DOI: 10.1126/science.278.5337.487

Robert J. Geller et al. are on shaky ground when they state, in the title of their Perspective (1), that earthquakes cannot be predicted. In spite of their advice, we should not stop studying the physics of preparation for catastrophic rupture in the field, in the laboratory, and in theory; nor should we stop measuring crustal parameters that might furnish constraints for physical models; and we should continue to develop statistical methods to evaluate prediction claims and to test hypotheses quantitatively.

Some of the arguments Geller et al. put forward are incorrect. For example, the “slip on geological faults” is not always as “sudden” as they state. In 10% to 30% of large earthquakes, foreshocks occur days (2) to months (3) before the main shock. Seismologists agree that foreshocks are a symptom of some preparatory process leading to the main rupture. Thus, foreshocks are precursors. If that process could be detected and understood by measuring the several physical parameters of Earth's crust that probably change during it, then prediction would be possible, even if foreshocks themselves can be identified only with low probability (4).

When Geller et al. state that “[t]here are no objective definitions of ‘anomalies’” and that “statistical evidence for a correlation is lacking,” they appear to be referring to specific papers that have been criticized (5). However, there are examples of clearly formulated, even tested, hypotheses. Evison and Rhoades (6) formulated a rigorous statistical test and applied it in real time to their well-defined hypothesis of precursory earthquake swarms. The algorithm M8 (7) has been tested in real time, and critically evaluated by others (8). The hypothesis of precursory quiescence also has been clearly stated (9), and specific predictions have been made to test it (10). On the basis of a mathematical model of failure of earth materials, the hypothesis of increasing moment release has been formulated (11) and tested by predictions (12).
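The kind of quantitative test these hypotheses require can be sketched in miniature. The example below is purely hypothetical and is not the procedure of Evison and Rhoades or of the M8 evaluations; the binomial null hypothesis (under random chance, each target earthquake falls inside an alarm window with probability equal to the fraction of time covered by alarms) and all numbers are illustrative assumptions.

```python
# Hypothetical sketch: scoring an alarm-based prediction scheme against
# a chance null. Under the null, each of n_events earthquakes lands in an
# alarm window independently with probability alarm_fraction (the share
# of the test period covered by alarms), so hits follow a binomial law.
from math import comb

def p_value(n_events, n_hits, alarm_fraction):
    """One-sided binomial p-value: probability of observing at least
    n_hits successes by chance alone in n_events trials, each with
    success probability alarm_fraction."""
    p = alarm_fraction
    return sum(comb(n_events, k) * p**k * (1 - p)**(n_events - k)
               for k in range(n_hits, n_events + 1))

# Illustrative numbers (not from the text): 20 target earthquakes,
# alarms covering 10% of the test period, 6 events falling in alarms.
print(round(p_value(20, 6, 0.10), 4))  # → 0.0113, i.e., unlikely by chance
```

A scheme passes such a test only if its hit count is improbable under the chance null; the alarm fraction must be counted against it, since covering more time trivially yields more hits.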

Geller et al. are also incorrect in stating that “no quantitative physical mechanism links the alleged precursors to earthquakes.” Laboratory rock fracture experiments have shown that dilatancy occurs in rocks under high deviatoric stresses and that rock properties are drastically altered by this phenomenon (13). Dilatancy could explain many precursors, as proposed by Scholz et al. (14). An alternative mechanism to explain precursors is a reduction in ambient stress level that results from strain softening (15) during days to years before catastrophic failure in a major earthquake. This phenomenon is routinely observed in the laboratory in stiff rock presses, and it has been modeled quantitatively by modern friction laws (16), with the result that years before large subduction shocks occur, a reduction of stress is expected near the source volume (17). Thus, several types of measurements could furnish observable precursors. Finally, the pore pressure of underground fluids, which is known to play an important role in rupture initiation along many faults (18), can be altered in a number of ways, which also could lead to precursors. Some of these models are quantitative; others are not yet, because too few constraints exist at this time.

Geller et al. are again incorrect when they say that “the leading seismological authorities of each era have generally concluded that earthquake prediction is not feasible.” They should have added “with the current knowledge” to this sentence. I remember Richter's frustration when he was asked sensationalist questions about unfounded predictions instead of about the science of earthquakes. However, as cited by Geller et al., Richter did not hold the opinion that earthquake prediction was inherently impossible.

Geller et al. state that they believe that earthquakes occur at random. Randomness would require the assumptions that the tectonic stress is near failure everywhere and at all times, and that the stress drops are small, depleting the local elastic energy available for further ruptures only to an insignificant degree. But the accumulation of strain measured leading up to earthquakes, and its release during them, suggests otherwise. For example, the M7.2 Kalapana, Hawaii, earthquake of 1975 was anticipated on the basis of the observed accumulated strain (19). Over decades, compressive strain of 4 × 10⁻⁴ accumulated, and during the earthquake the same amount of strain was released (20). This conclusion was also reached by Reid et al. (21) for the 1906 San Francisco earthquake. Precise leveling along the coast of Japan (22) and geological records of recent sedimentation along beaches (23) show that strain is released in great earthquakes only after it has been accumulated over centuries.
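As a rough consistency check on these numbers (the rigidity value below is an assumed typical crustal figure, not taken from the text), a released elastic strain of this size corresponds to a stress change of

```latex
\Delta\sigma \;\approx\; \mu\,\Delta\varepsilon
\;\approx\; (3\times 10^{10}\ \mathrm{Pa})\times(4\times 10^{-4})
\;\approx\; 12\ \mathrm{MPa},
```

which is of the same order as typical earthquake stress drops, so the measured strain budget is at least internally plausible.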

The figure in the Perspective by Geller et al. (p. 1617) does not support the idea that earthquakes are unpredictable. It shows, instead, that the sizes of cracks available for failure in earthquakes are fractally distributed. In volumes where no strain energy has been built up by tectonic processes, however, large earthquakes cannot occur, regardless of the nature of the crack distribution. Along the plate margins, where the vast majority of all earthquakes occur, the stresses are likely to be low, and the stress release may be nearly complete (24).

At the time of Columbus, most experts asserted that one could not reach India by sailing west from Europe and that funds should not be wasted on such a folly. Geller et al. make a similar mistake, but I doubt that human curiosity and ingenuity can be prevented, in the long run, from fully exploring the extent to which at least some earthquakes are predictable, difficult though that is. Such discoveries will be made in Japan, Europe, or China if the current lack of funding for earthquake prediction research continues in the United States.

REFERENCES AND NOTES

Geller et al. (1) present an unduly negative view of research in a difficult field. At present, no mechanism for the cause of electromagnetic precursors is known well enough for these precursors to be used reliably for earthquake prediction (2). Within the framework of the scientific method, however, refinement of a hypothesis in earthquake prediction becomes a multiyear process with infrequent experiments (that is, observations associated with earthquakes). This is why most reports of precursors are written after the earthquake has occurred. It is time for earthquake prediction research to be more honestly identified as earthquake monitoring. Part of the problem is of our own making; some of us in the field have been overly quick to promise viable prediction techniques and equally quick to declare prediction experiments failures. Had we been more patient in the 1980s, monitoring such experiments as that in Hollister, California, we might have recorded the 1989 Loma Prieta earthquake, and experiments in Palmdale, California, might have caught the 1994 Northridge earthquake.

The length of an experiment should not be an argument against the potential value of the eventual results. Not only do efforts to detect these precursory changes continue in both the United States (3) and internationally (4), but also, contrary to the statements by Geller et al., the VAN experiment (5) is still being actively debated and considered a viable prediction tool. The key to assuring that these experiments are valuable is to design them to objectively define anomalies, differentiate between natural signals and noise, elucidate physical mechanisms, and provide a data set amenable to statistical analysis. The answer to whether earthquakes can eventually be predicted depends on how one defines the acceptable level of uncertainty associated with prediction. The field is not yet sufficiently mature to address the uncertainty in most cases. We simply have not had sufficient numbers of events to establish a cause-and-effect relationship, much less to assess uncertainty or identify a physical mechanism. Whether or not a particular level of uncertainty is useful must be judged by those entrusted with public safety and decision making.
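An "objective anomaly definition" of the kind called for above can be made concrete in a few lines. The sketch below is a hypothetical illustration only; the z-score rule, the threshold, and all numbers are assumptions, not any scheme from the literature cited here.

```python
# Hypothetical sketch of an objectively defined anomaly: a candidate
# precursor is flagged only when a monitored parameter departs from its
# own baseline by a margin fixed before the experiment begins.
from statistics import mean, stdev

def is_anomaly(baseline, new_value, z_threshold=3.0):
    """Flag new_value as anomalous if it lies more than z_threshold
    sample standard deviations from the baseline mean. The threshold
    must be declared in advance, not tuned after an earthquake."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False  # a flat baseline gives no scale for "anomalous"
    return abs(new_value - mu) / sigma > z_threshold

# Illustrative baseline of a monitored parameter, then two new readings.
baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.0, 10.1, 9.9]
print(is_anomaly(baseline, 10.2))  # within normal scatter: False
print(is_anomaly(baseline, 12.5))  # large excursion: True
```

The point of such a rule is exactly the one made above: with the definition fixed prospectively, hits and false alarms can be counted, and the record becomes amenable to statistical analysis.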

Geller et al. say that the previous 100 years of failure at prediction is an argument against future success. However, geophysically based prediction techniques are still only a few decades old, and they introduce approaches fundamentally different from those of the previous 100 years. Although we agree that earthquake hazard mitigation is more valuable in the immediate future, dismissing the field of earthquake prediction research seems premature to us. If one considers the potentially large payoff that can be realized from a successful prediction, and that at least one technique (VAN) is still being actively evaluated, one would have to conclude that earthquake prediction research should be continued and the debate left open.

REFERENCES AND NOTES

Response: Earthquake science can achieve significant improvements in fundamental understanding of tectonics, material behavior, stress interactions, and related physical processes. It can also deliver improved seismic hazard estimation and risk reduction. However, as we pointed out in our Perspective (1), earthquake prediction would have to be reliable (producing few false alarms and few failures to predict) and accurate (with small ranges of uncertainty in space, time, and magnitude) to justify the cost of responses such as declaring a state of emergency or ordering evacuations. As decades of intensive research have not yielded positive results (2, 3), hopes for such reliable and accurate prediction appear to be unrealistic.

There are many systems whose governing physical laws are known, but whose underlying complexity and strong nonlinearity nevertheless preclude reliable and accurate prediction. For example, the rate of auto accidents can be estimated, but the time and location of individual accidents cannot be predicted. Speeding frequently precedes accidents, but only a small fraction of speeding violations are followed by serious accidents. Even after a crash has begun, its final extent and severity depend on unpredictable dynamic interactions between drivers, cars, and other objects. Predicting individual earthquakes is a still greater challenge, because we lack detailed knowledge of the relevant parameters (fault geometry, strength variations in the fault zone material, rheological properties, and state of stress), and the relations governing failure are not known. Even after faulting starts, whether any small earthquake cascades into a large one depends on details of the nonlinear interference of large-amplitude dynamic stress waves in a highly heterogeneous medium. These general physical considerations, which do not rest on any particular model of the source process, suggest that the outlook for reliable and accurate earthquake prediction is bleak (4, 5).

There are four key reasons why the above comments are overly optimistic.

1) Statistically significant precursors have not been identified. The existence of foreshocks illustrates the fundamental difficulty of prediction. “Foreshocks” can be identified retrospectively: they are earthquakes that occur shortly before larger nearby earthquakes. However, there is no known way to prospectively distinguish foreshocks from random small earthquakes, although considerable efforts have been made, unsuccessfully, to find one (6). Foreshocks occur under almost the same conditions as the subsequent main shock, but their energy release is orders of magnitude smaller. Highly accurate strain measurements (5, 7) show that any physical change in the interval between the foreshocks and the main shock is extremely subtle. There is little, if any, correlation, even retrospectively, between the size of foreshocks and that of the subsequent main shock (8).

The lack of agreement among prediction advocates on a single set of “best-candidate precursors” underscores the weakness and inconsistency of their case. A committee chaired by Wyss (9, 10) compiled a list of five possibly significant precursors, which he (10, p. 12) characterizes as the “cream of the crop.” Yet his comment appears to emphasize a different set of possible precursors (his references 6–12), while Aceves and Park (their references 2, 4, and 5) cite reports of possible electromagnetic precursors, some of which have been criticized by Wyss (11). To validate a prediction method, one must show it to be successful beyond random chance (12, 13), but such success has not been demonstrated in these or other studies.

The work cited in the comment by Wyss (his references 6–12) does not appear to support the existence of precursors with reliable predictive power. The “M8” algorithm (14) does not aim to make predictions of the type discussed in our Perspective; M8 instead aims to identify space-time regions where the probability of an earthquake is higher than normal. However, the alarms issued by M8 are not statistically significant (15), and some appear to be artifacts (16). Evison and Rhoades (17) showed that the precursory swarm hypothesis was not statistically significant. “Precursory quiescence” (18) is not a well-defined phenomenon, appears not to be a statistically significant precursor, and is frequently an artifact (19). A successful prediction of the 1978 Oaxaca, Mexico, earthquake was claimed on the basis of a report of seismic quiescence (20), but other analyses (21) make a strong case that the quiescence was an artifact. Kisslinger acknowledges that “the specifically predicted event has not happened” (22, p. 218).

2) A physical basis for prediction has not been established. It was briefly believed in the 1970s that dilatancy (an increase in volume of rocks before failure) occurs extensively before earthquakes [see 13 and 14 in the comment by Wyss]. Some reports of possibly precursory 10 to 20% temporal changes in seismic wave velocities were cited as evidence, but were later found to be artifacts (23). Such temporal variations were not found by studies using controlled (explosive) sources (24). Recent research has placed much lower limits on possible temporal variations (25). A reported large crustal uplift in California (the “Palmdale Bulge”) was interpreted by Wyss (26) as a result of dilatancy, but was later shown to have been an artifact (2, 27).

Some models (see 15–17 in the comment by Wyss) in which earthquakes have observable precursors have been proposed, but their applicability to the Earth has not been demonstrated. Studies [see 19–24 in the comment by Wyss] of strain accumulation cannot be used to make reliable and accurate predictions, because actual seismicity is highly irregular (13, 28). Wyss' statement that changes in “pore pressure of underground fluids … also could lead to precursors” is speculation. There is strong evidence that the crust is widely in a near-failure state, in which small perturbations can trigger earthquakes. Such triggering was observed over a large area at distances of 1000 km or more from the epicenter of the 1992 Landers earthquake (29).

3) Prediction efforts based on electromagnetic observations do not seem promising. There are several problems with reports of electromagnetic precursors: the lack of statistical significance (12, 30); the absence of simultaneous geodetic or seismological precursors; the absence of coseismic (at the time of the main shock) electromagnetic signals of the same type as, but with larger amplitudes than, the alleged precursors; the fact that sources other than earthquakes have not been ruled out; the lack of consistency; and the lack of a quantitative relation between the anomalies and the earthquake source parameters.

Aceves and Park cite the VAN studies in arguing that prediction research should be continued. However, some geoelectrical signals described as precursors in the VAN studies have been shown to be artifacts (31). We have not seen convincing evidence that any of the signals observed in the VAN studies are earthquake precursors. Varotsos and his co-workers state that geoelectrical signals recorded weeks before, and at distances of over 100 km from, subsequent earthquakes were precursors (32), but this would require paths with extremely high electrical conductivity that are inconsistent with the geology of Greece (33). The VAN studies' “predictions” are vague and ambiguous, their “successful predictions” are not statistically significant, their “successes” include cases where the nominal tolerances were exceeded (34), and their “predictions” correlate much better with preceding, rather than subsequent, earthquakes, as they were issued preferentially during periods of heightened seismic activity (12, 34). Wyss (11, p. 1302) apparently shares our opinion: “ … there is nothing in favor of the VAN hypothesis.”

4) Empirical prediction research seems unlikely to be fruitful. The long search for precursors has yielded none with reliable predictive power and has contributed little to understanding earthquakes. Some prediction research employs high standards, but putting the hoped-for result ahead of the scientific methodology invites a lack of rigor. Wyss says “discoveries will be made in Japan … if the current lack of funding for earthquake prediction research continues in the United States.” However, the obstacles to predicting earthquakes are the same in Japan as elsewhere; a recent review (35) criticized Japan's prediction program (36).

Emergency measures in response to predictions, as defined above, would be highly costly and would greatly disrupt society. Such measures could only be taken on the basis of a glaringly obvious precursor that almost invariably preceded large earthquakes and almost never occurred otherwise (37). Furthermore, there would have to be a reliable quantitative relation between the precursor and the parameters of the impending earthquake. As the past decades of prediction research have not found any such clear and highly reliable precursory signal (2, 3), it seems likely that none exists. We therefore think that emphasis should be placed on basic research in earthquake science, real-time seismic warning systems, and long-term probabilistic earthquake hazard studies.

REFERENCES AND NOTES
