News Focus: Meteorology

Weather Forecasts Slowly Clearing Up


Science  09 Nov 2012:
Vol. 338, Issue 6108, pp. 734-737
DOI: 10.1126/science.338.6108.734

Ever-increasing computer power and new kinds of observations are driving weather prediction to new heights, but some kinds of weather are still not yielding.

Spot on.

An experimental high-resolution NOAA model produced this strikingly accurate forecast of the “D.C. derecho” (higher winds are orange and white) 12 hours ahead of the storm's arrival in D.C.

CREDIT: EARTH SYSTEM RESEARCH LABORATORY/NOAA

The machines aren't just challenging weather forecasters—they're taking over. Predicting tomorrow's weather, or even next week's? Computer models fed by automated observing stations on the ground and by satellites in the sky have had the upper hand for years. Predicting where a hurricane will strike land in 3 days' time? Computer models have outperformed humans since the 1990s, some by larger margins than others (see p. 736).

Into next week.

The rising skill of the European Centre for Medium-Range Weather Forecasts' model has made forecasting the globe's weather more than 8 days ahead worthwhile.

CREDIT: ECMWF

“People like to joke about predicting the weather,” says meteorologist Kelvin Droegemeier of the University of Oklahoma, Norman, “but they have to admit that forecasting is a lot better than it used to be.” And they can thank vast increases in computing power as well as technological advances, such as sharper-eyed satellites and advanced computer programming techniques.

But the machines have yet to complete their takeover. Human forecasters can usually improve, at least a bit, on numerical weather predictions by learning the machines' remaining foibles. And humans still do better than computer models at some tasks, including forecasting the fits and starts of evolving hurricanes. But that's not saying much; human forecasts of hurricane intensity are hardly more skillful today than they were 20 years ago.

Researchers working on the forecast problem say they are closing in on breakthroughs that could soon put the machines out front in even the toughest areas of forecasting. “We're not just talking,” says Alexander MacDonald of the National Oceanic and Atmospheric Administration's (NOAA's) Earth System Research Laboratory (ESRL) in Boulder, Colorado, “we're building future models and seeing some spectacular results.”

On to next week

Nothing illustrates the computer-driven rise of weather forecasting skill like the 3-decade-long track record of the European Centre for Medium-Range Weather Forecasts (ECMWF) in Reading, U.K. Since 1979, ECMWF has been feeding the couple billion weather observations made each day into the most sophisticated forecast model available and running that model on the most powerful computer in the business, all in order to predict the weather around the globe as accurately and as far into the future as possible.

From the beginning, ECMWF has been the world champ of medium-range forecasting. In 1980, its forecasts of broad weather patterns—the location and amplitude of atmospheric highs and lows—were useful out to 5.5 days ahead. Beyond then, the computer forecast became useless as the atmosphere's innate chaos swamped the model's predictive powers. Today, ECMWF forecasts remain useful into the next week, out to 8.5 days. That leaves the rest of the forecasting world, including the U.S. National Weather Service (NWS) with its less powerful computer, in the dust by a day or more.

With the help of ever-improving computer models, forecasters are also making progress on an even tougher sort of prediction: where and when heavy rain and snow will fall. NWS forecasters have nearly doubled their skill at forecasting heavy precipitation, but it has been a long haul. They measure success on a scale from 0 (complete failure) to 1 (a perfect forecast). In 1961, they scored 0.18 on forecasting where 2.5 centimeters or more of rain (or 25 centimeters of snow) would fall a day later. Over the next 50 years, their score staggered upward, never stagnating for much more than 5 years; today it stands at about 0.33.
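The article doesn't name the 0-to-1 measure, but NWS has long verified precipitation forecasts with the threat score: hits divided by hits plus misses plus false alarms. A minimal sketch, assuming that definition and using made-up counts:

```python
# A minimal sketch of a 0-to-1 verification score, assuming the measure
# described is the standard threat score (critical success index).
def threat_score(hits: int, misses: int, false_alarms: int) -> float:
    """1.0 is a perfect forecast; 0.0 is a complete failure."""
    total = hits + misses + false_alarms
    return hits / total if total else 0.0

# Illustrative counts only: 33 heavy-rain areas correctly forecast,
# 40 missed, 27 warned but dry.
print(threat_score(hits=33, misses=40, false_alarms=27))  # 0.33
```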

All hail the computer

Forecasting global weather patterns, heavy rain and snow, and any number of other sorts of weather better and further into the future has depended heavily on increasing computer power. When numerical weather prediction began in the mid-1950s, NWS's computational capacity was a meager 1 kiloflop (1000 calculations per second). It's now 10^8 megaflops, an increase of a factor of 100 billion.
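As a quick check on that factor, under the usual reading of the units (a kiloflop is 10^3 and a megaflop 10^6 floating-point operations per second):

```python
# Sanity check on the quoted increase in NWS computing power.
flops_1950s = 1e3        # 1 kiloflop: 1000 calculations per second
flops_today = 1e8 * 1e6  # 10^8 megaflops = 10^14 flops (100 teraflops)
print(flops_today / flops_1950s)  # 1e11, a factor of 100 billion
```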

Ever lower.

Errors in NHC's official forecasts of where a hurricane will be 1 to 5 days ahead have long been declining. That has allowed forecasters to warn the public earlier to flee the destruction of the coming storm.

CREDITS (TOP TO BOTTOM): NATIONAL HURRICANE CENTER/NOAA; ANDREA BOOHER/FEMA PHOTO

Forecasters have plenty of uses for the added computing power. The most straightforward is sharpening a forecast model's view of the atmosphere. Models work by calculating changes in air pressure, wind, rain, and other properties at points on a globe-spanning grid. In principle, the more points there are in a given area, the more closely the model's weather will resemble the real weather.

In the early days of numerical weather prediction, grid spacing—or resolution—was something like 250 kilometers, far too coarse for individual thunderstorms. Today, thanks to increased computer power, grid points in global models are 15 kilometers to 25 kilometers apart. Over areas of special interest, resolution can be even greater. In the NWS global model, a grid with 4-kilometer spacing can be laid or “nested” on the lower 48 states of the United States.
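A back-of-envelope sketch of why resolution is so expensive: the number of horizontal grid points needed to tile the globe grows with the square of the shrinking spacing. The surface-area figure is approximate, and real models pile vertical levels and shorter time steps on top of this:

```python
# Rough count of horizontal grid points needed to cover the globe at a
# given spacing. Real models multiply this by dozens of vertical levels
# and must also shorten the time step as the spacing shrinks.
EARTH_SURFACE_KM2 = 5.1e8  # approximate surface area of Earth

def horizontal_points(spacing_km: float) -> float:
    return EARTH_SURFACE_KM2 / spacing_km ** 2

for spacing in (250, 25, 15, 1):  # 1950s-era, today's global range, NWS goal
    print(f"{spacing:>4} km spacing: ~{horizontal_points(spacing):,.0f} points")
```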

So far, every increase in operational model resolution has produced more realistic simulations of the weather and thus more accurate forecasts, says William Lapenta, acting director of NWS's Environmental Modeling Center in College Park, Maryland. Now NWS's goal, he says, is to use model forecasts to warn the public about the most severe weather threats—which happen to be on the smallest scales—as soon as the models predict them. For that, the models will need to get down to 1-kilometer resolution.

Beyond increasing resolution, forecasters have used added computing power to improve a forecast model's starting point. Observations from weather stations, weather balloons, and satellites must be fed into a model to give it a jumping-off point for forecasting. But a model's intake, or “assimilation,” of observations has never been optimal.
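At its core, assimilation blends the model's own short-range forecast (the "background") with each new observation, weighting the two by their expected errors. A minimal scalar sketch of that idea, not of any operational scheme:

```python
# Minimal sketch of the assimilation idea for a single quantity, say the
# temperature at one grid point. Not any operational scheme, just the
# error-weighted blend at the heart of them all.
def assimilate(background: float, observation: float,
               background_var: float, obs_var: float) -> float:
    # Trust the observation more when the model's expected error is larger.
    gain = background_var / (background_var + obs_var)
    return background + gain * (observation - background)

# Model background of 20.0 degrees C, observation of 22.0 degrees C; with
# equal error variances the analysis splits the difference.
print(assimilate(20.0, 22.0, background_var=1.0, obs_var=1.0))  # 21.0
```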

The assimilation of weather satellite observations that began in the 1970s was especially far from ideal. Satellites measure atmospheric infrared emissions at various wavelengths, and these observations were converted into temperature, pressure, and humidity values like those returned by weather balloons. “That didn't work very well,” says meteorologist James Franklin of the NWS's National Hurricane Center (NHC) in Miami, Florida.

Beginning in the 1990s, forecasters stopped converting satellite observations to familiar atmospheric properties and began assimilating them directly into models. That improved forecasts all around, but especially forecasts of where hurricanes are headed. “A lot of the success in hurricane track forecasting is using the satellite data in a more intelligent, natural way,” Franklin says.

That's because hurricanes don't propel themselves. “To first order, a hurricane is moving like a cork in a stream,” notes meteorologist Russell Elsberry of the Naval Postgraduate School in Monterey, California. The better the forecast for the stream—the atmosphere's flow for thousands of kilometers around—the better the forecast for a hurricane's eventual track.
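Elsberry's cork-in-a-stream picture can be captured in a toy calculation: advect the storm center with the large-scale steering wind and see where it ends up. The wind field below is invented purely for illustration:

```python
# Toy "cork in a stream" track forecast: move a storm center with the
# large-scale steering wind. Flat-map kilometers; the wind field is made up.
def forecast_track(x, y, steering_wind, hours, dt=1.0):
    """steering_wind(x, y, t) returns (u, v) in km/h."""
    t = 0.0
    while t < hours:
        u, v = steering_wind(x, y, t)
        x, y = x + u * dt, y + v * dt
        t += dt
    return x, y

# Westward flow of 20 km/h that slowly gains a poleward component, a crude
# stand-in for a hurricane recurving around a subtropical high.
print(forecast_track(0.0, 0.0,
                     lambda x, y, t: (-20.0, 0.3 * t),
                     hours=72))  # position after 3 days in this flow
```

Any error in the steering flow compounds hour after hour, which is why better global models translate directly into better tracks.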

Thanks to improving models, NHC hurricane forecasters have made great strides in track forecasting. “In the 1970s, the best forecast was what human forecasters could do,” says meteorologist Mark DeMaria, who works for NOAA (NWS's parent agency) at Colorado State University in Fort Collins. But by the 1990s, models forecasting hurricane tracks surpassed human performance.

Today, NHC forecasters consult a half-dozen different models before predicting a hurricane's position 3 days into the future. Those forecasts now have an error of 185 kilometers. In the 1970s, the 3-day error was about 740 kilometers. That quartering of track error has enabled NHC forecasters to issue hurricane warnings 36 hours ahead instead of 24 hours ahead, giving coastal residents 50% more time to evacuate.

When Hurricane Sandy began brewing in October, the models consulted by NHC converged on a serious threat to the U.S. Northeast several days before landfall, though ECMWF modeling gave an inkling a whopping 10 days ahead.

Making it real

Other, less computationally demanding improvements have also brought model calculations closer to reality. For example, every weather forecast model in routine use today has a serious problem with simulating vertical air movement. In effect, their equations impose a speed limit on vertical motions. That is a particular problem in simulating the so-called supercell thunderstorms that can generate tornadoes and the violent winds near a hurricane's eyewall.

So researchers from the National Center for Atmospheric Research in Boulder and elsewhere spent 15 years developing the Weather Research and Forecasting Model, or WRF (pronounced “worf”). Its equations of motion allow rapid vertical acceleration of air when cold, heavy air rushes downward to become highly destructive surface winds.

In ongoing experimental forecasts, a WRF-derived forecast model being run at ESRL, called the High-Resolution Rapid Refresh model (HRRR), is having some dramatic forecast successes, ESRL's MacDonald says. On 29 June, the 3-kilometer-resolution model made a forecast over the lower 48 states, starting with a small cluster of thunderstorms in northern Illinois. The 12-hour forecast put the much-intensified system at Washington, D.C., at about 10 p.m. that night with winds to 100 kilometers per hour. That's exactly how the storm cluster actually evolved into the “D.C. derecho,” the worst summer windstorm in the D.C. area in decades.

Here's hoping.

Feeding radar data into models can help forecast hurricane intensity, but monitoring all the possible tornado-generating storms in Tornado Alley would be a daunting task.

CREDIT: HERB STEIN, CSWR

Though MacDonald cautions that not every HRRR forecast is so successful, “for big, dangerous storms, we tend to do pretty well,” he says. Other successes of the model include an early forecast this summer for Hurricane Isaac to head to New Orleans, Louisiana, while most other models called for it to molest the Republican convention in Tampa, Florida. For Sandy, HRRR nailed, 15 hours ahead, the 130-kilometer-per-hour winds that blew water into New York Harbor.

Forecasting bugaboos

Despite all the recent advances, forecasters have hit a wall when the weather plays out rapidly and on small scales. The deployment in the 1990s of NWS Doppler radars capable of mapping out the spinning winds of supercells enabled forecasters to extend average tornado warning times from 3 minutes to 13 minutes. But the new technology left the tornado false-alarm rate—how often forecasters warned of a tornado that never showed up—stuck at an uncomfortably high 75%.
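The false-alarm rate as the article defines it is simply the fraction of warnings for which no tornado ever verified; a minimal sketch with illustrative counts:

```python
# The false-alarm rate as defined in the text: of all tornado warnings
# issued, the fraction for which no tornado ever appeared.
def false_alarm_rate(verified: int, unverified: int) -> float:
    total = verified + unverified
    return unverified / total if total else 0.0

# Illustrative counts only: 25 warnings verified by a tornado, 75 not.
print(false_alarm_rate(verified=25, unverified=75))  # 0.75
```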

The problem, says Joshua Wurman of the Center for Severe Weather Research in Boulder, is that “we don't have a good idea of what's going to make a tornado. Seventy-five to 80% of supercells don't make a tornado. Some of the meanest-looking ones don't make them. What's special about the 20% that do? We know there must be some subtle differences between supercells, but they have eluded us.” And even the highest-resolution models are giving few clues to the secrets of tornadogenesis.

Hurricane forecasters trying to predict storm intensity are in much the same boat. The error in forecasting a storm's maximum sustained wind speed a few days in advance has not changed much since the NHC record began in 1990. Forecasts are particularly bad when a hurricane rapidly intensifies or weakens. In 2004, “Hurricane Charley went from a Category 2 storm to Category 4 overnight, and nobody knows why,” says tropical meteorologist Peter Webster of the Georgia Institute of Technology in Atlanta. “It's a mystery what determines intensity. Perhaps we still don't understand the basic physics of a tropical cyclone.”

Progress, or not.

The U.S. National Weather Service has increased its tornado warning time (lead time, red), thanks to Doppler radar. But the false alarm ratio—how often forecasters warned of a tornado that never appeared—hasn't budged in 20 years.

CREDIT: J. WURMAN ET AL., BULL. AMER. METEOR. SOC., 93 (AUGUST 2012)

Help is on the way, however, at least for those predicting hurricanes. Because the mystery seems to lie at or near a tropical cyclone's eyewall, which is only a few kilometers thick, researchers have been figuring out how to usefully assimilate the detailed three-dimensional observations of airborne Doppler radar into hurricane forecast models. Meteorologist Fuqing Zhang of Pennsylvania State University, University Park, and colleagues have been doing just that using their own WRF-based forecasting system run at resolutions of 1 kilometer to 5 kilometers.

Radar input has improved the accuracy of these intensity forecasts by 30% to 40% during the past 5 years, Zhang says. In the case of Sandy, ECMWF's modeling advantages—higher resolution among them—yielded suggestions of a strong storm 8 days ahead.

Onward and upward, if …

“The computer continues to be the big issue” in most types of weather forecasting, says James Hoke, director of NWS's Hydrometeorological Prediction Center in College Park. Researchers have long complained that they need more computing power. “It's frustrating for all of us, not being able to implement what we know,” Hoke says. But “as the computer power becomes available, the current trend of improvement will hold out for at least a decade.”

That is, if forecasters can afford to keep adding the needed computing power. But MacDonald sees a way ahead: so-called massively parallel, fine-grain computers such as Intel's MIC (Many Integrated Core) chips or NVIDIA's graphics processing units. GPUs are the specialized electronic circuits found in game consoles and other devices that require processing large blocks of data in parallel. MacDonald sees GPU-based forecast computers running at petaflop speeds at a fifth the cost of conventional central processing units.

If an ongoing federal interagency program can accelerate the adoption of GPUs, MacDonald says, NWS could be producing “transformational” forecasts by fiscal year 2017. With more computer power, the assimilation of radar observations, and more physically realistic models, forecasters could be doing an even better job forecasting the next Sandy.
