U.S. global numerical weather prediction has now fallen into fourth place, with national and regional prediction capabilities a shadow of what they could be.
There are several reasons for these lagging numerical weather prediction capabilities, including lack of strategic planning, inadequate cooperation between the research and operational communities, and too many sub-optimal prediction efforts.
But there is another reason of equal importance: a profound lack of computer resources dedicated to numerical weather prediction, both for operations and research.
The bottom line: U.S. operational numerical weather prediction resources used by the National Weather Service must be increased 10 times to catch up with leading efforts around the world and 100 times to reach state of the science.
Why does the National Weather Service require very large computer resources to provide the nation with world-leading weather prediction?
Immense computer resources are required for modern numerical weather prediction. For example, NOAA/NWS TODAY is responsible for running:
- A global atmospheric model (the GFS/FV-3) running at 13-km resolution out to 384 hours.
- Global ensembles (GEFS) of many forecasts (21 members) at 35-km resolution
- The High-Resolution Rapid Refresh (HRRR) and Rapid Refresh (RAP) models out to 36 h.
- The atmosphere/ocean Climate Forecast System model out to 9 months.
- The National Water Model (combined WRF and hydrological modeling)
- Hurricane models during the season
- Reanalysis runs (rerunning past decades to provide calibration information)
- The North American Mesoscale model (NAM)
- The Short-Range Ensemble Forecast system (SREF)
This is not a comprehensive list. And then there is the need for research runs to support development of the next-generation systems. As suggested by the world-leading European Centre for Medium-Range Weather Forecasts (ECMWF), research computer resources should be at least five times greater than operational requirements to be effective.
NY Times Magazine: 10/23/2016
How Lack of Computing Resources is Undermining NWS Numerical Weather Prediction
The current modeling systems (some described above) used by the National Weather Service are generally less capable than they should be because of insufficient computer resources. Some examples:
1. Data Assimilation. The key reason the U.S. global model is behind the European Center and the other leaders is that they use an approach called 4DVAR, a resource-demanding technique that involves running the modeling system forward and backward in time multiple times. Inadequate computer resources have prevented the NWS from doing this.
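To see why 4DVAR is so demanding, consider a deliberately tiny sketch: every iteration of the minimization requires one forward run of the forecast model over the assimilation window and one backward (adjoint) run to get the gradient. The trivial model M(x) = a·x and all numbers below are invented for illustration; an operational system does this with a model of millions of grid points.

```python
# Toy 4DVAR: find the initial condition x0 that best fits both a
# background guess (xb) and observations along the forecast trajectory.
# Each optimization step = one forward model run + one backward
# (adjoint) run, which is the source of 4DVAR's computational cost.

def forward(x0, a, nsteps):
    """Integrate the toy model x_{t+1} = a * x_t, saving the trajectory."""
    traj = [x0]
    for _ in range(nsteps):
        traj.append(a * traj[-1])
    return traj

def cost_and_gradient(x0, xb, obs, a, B, R):
    """Cost J(x0) and dJ/dx0 via a backward (adjoint) sweep in time."""
    traj = forward(x0, a, len(obs))          # forward run
    J = (x0 - xb) ** 2 / B                   # background term
    adj = 0.0                                # adjoint variable
    for t in range(len(obs), 0, -1):         # backward run
        misfit = traj[t] - obs[t - 1]
        J += misfit ** 2 / R                 # observation term
        adj = a * (adj + 2.0 * misfit / R)   # carry sensitivity backward
    grad = 2.0 * (x0 - xb) / B + adj
    return J, grad

def four_dvar(xb, obs, a=0.9, B=1.0, R=0.1, lr=0.02, iters=200):
    """Gradient descent: every iteration runs the model forward and backward."""
    x0 = xb
    for _ in range(iters):
        _, g = cost_and_gradient(x0, xb, obs, a, B, R)
        x0 -= lr * g
    return x0

# Truth starts at 1.0; the background guess is 0.5; we observe the
# 3-step trajectory and recover an analysis pulled toward the truth.
truth = forward(1.0, 0.9, 3)
analysis = four_dvar(xb=0.5, obs=truth[1:])
```

Because the trusted observations (small R) outweigh the background, the analysis ends up near the true initial value of 1.0 rather than the first guess of 0.5.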
2. High-resolution ensembles. One National Academy report after another, one national workshop committee after another, and one advisory committee after another has told NWS management that the U.S. must have a large high-resolution ensemble system (at least 4-km grid spacing, 30-50 members) to deal with convection (e.g., thunderstorms) and other high-resolution weather features. But the necessary computer power is not available.
European Center Supercomputer
3. Global ensembles. A key capability of any first-rate global prediction center is to run a large global ensemble (50 members or more), with sufficient resolution to realistically simulate storms and the major impacts of terrain (20-km grid spacing or better). The European Center has a 52-member ensemble run at 18-km grid spacing. The U.S. National Weather Service? 21 members at 35-km resolution. Not in the same league.
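A rough back-of-envelope shows how far apart these ensembles really are. Computing cost scales with member count times roughly the cube of the inverse grid spacing (halving the grid spacing quadruples the horizontal points and, via the time-step constraint, roughly halves the allowable time step). This cubic scaling is a simplification that ignores vertical levels and model details, but it conveys the gap:

```python
# Back-of-envelope ensemble cost comparison. Cost is assumed to scale
# as members * (1/dx)^3: finer grids mean more points AND shorter
# time steps. A simplification, not an operational benchmark.

def relative_cost(members, dx_km, ref_members, ref_dx_km):
    """Cost of one ensemble relative to a reference ensemble."""
    return (members / ref_members) * (ref_dx_km / dx_km) ** 3

# ECMWF: 52 members at 18 km; NWS GEFS: 21 members at 35 km (figures above)
ratio = relative_cost(52, 18.0, ref_members=21, ref_dx_km=35.0)
print(f"The ECMWF ensemble is roughly {ratio:.0f}x the computation of GEFS")
```

By this crude measure, the European ensemble represents roughly eighteen times the computation of the U.S. one, which is why matching it demands an order-of-magnitude increase in resources.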
I spend a lot of time with NOAA and National Weather Service model developers and group leaders. They complain continually about their lack of computer resources for development and testing. They tell me that this resource deficiency prevents them from doing the job they know they could do. These are good people who want to do a state-of-the-art job, but they can't due to inadequate computer resources.
NOAA/NWS computer resources are so limited that university researchers with good ideas cannot test them on NOAA computers or in facsimiles of the operational computing environment. NOAA grant proposal documents make it clear: NOAA/NWS cannot supply the critical computer resources university investigators need to test their innovations (below is a quote from a recent NOAA grant document):
So if a researcher has a good idea that could improve U.S. operational weather prediction, they are out of luck: NOAA/NWS doesn’t have the computer resources to help. Just sad.
U.S. Weather Prediction Computer Resources Stagnate While the European Center Zooms Ahead
The NOAA/NWS computer resources available for operational weather prediction are limited to roughly 5 petaflops (pflops). Until Hurricane Sandy (2012), National Weather Service management was content to possess one-tenth of the computer resources of the European Center, but after the scandalous situation went public following that storm (including coverage on the NBC nightly news), NOAA/NWS management secured a major increment to the current level, which is just under what is available to the European Center.
Image courtesy of Rebecca Cosgrove, NCEP Central Operations
But the situation is actually much worse than it appears. The NWS computer resources are split between operational and backup machines and depend on an inefficient collection of machines of differing architectures (Dell, IBM, and Cray). There is an I/O (input/output) bottleneck on these machines (which means they can't get information into and out of them efficiently), and storage capabilities are inadequate.
There is no real plan for seriously upgrading these machines, other than a 10-20% enhancement over the next few years.
In contrast, the European Center now has two machines with roughly 10 pflops of total peak performance, far more storage, and better communication channels into and out of the machines.
And keep in mind that the ECMWF computers have far fewer responsibilities than the NCEP machines. NCEP computers have to do EVERYTHING from global to local modeling, from hydrological prediction to seasonal time scales. The ECMWF computers only have to deal with global model computing.
To make things even more lopsided, the European Center is now building a new computer center in Italy and they recently signed an agreement to purchase a new computer system FIVE TIMES as capable as their current one.
They are going to leave NOAA/NWS weather prediction capabilities in the dust. And it did not have to happen.
Fixing the Problem
Past NOAA/NWS management bears substantial responsibility for this disaster, with Congress sharing some blame for not being attentive to the failure. Congress has supplied substantial funding to NOAA/NWS in the past for model development, but such funding has not been used effectively.
Importantly, there IS bipartisan support in Congress to improve weather prediction, something that was obvious when I testified at a hearing for the House Environment Subcommittee last November. They know there is a problem and want to help.
There is bipartisan support in Congress for better weather modeling
A major positive is that NOAA is now led by two individuals (Neil Jacobs and Tim Gallaudet) who understand the problem and want to fix it. And the President's Science Adviser, Kelvin Droegemeier, is a weather modeler who understands the problem.
So what must be done now?
(1) U.S. numerical prediction modeling must be reorganized, since it is clear that the legacy structure, which inefficiently spreads responsibility and support activities, does not work. The proposal of NOAA administrator Neil Jacobs to build a new EPIC center as the centerpiece of U.S. model development should be followed (see my blog on EPIC here).
(2) NOAA/NWS must develop a detailed strategic plan that not only makes the case for more computer resources, but demonstrates how such resources will improve weather prediction. Amazingly, they have never done this. In fact, NOAA/NWS does not even have a document describing in detail the computer resources they have now (I know, I asked a number of NOAA/NWS managers for it–they admitted to me it doesn’t exist).
(3) With such a plan in hand, Congress should invest in the kind of computer resources that would enable U.S. weather prediction to become first-rate. Ten times the computer resources (costing about 100 million dollars) would bring us up to parity; 100 times would allow us to reach the state of the science (including such things as running global models at convection-permitting resolution, something I have been working on in my research).
Keep in mind that a new weather prediction computer system would be no more expensive than a single high-tech jet fighter. Which do you think would provide more benefit to U.S. citizens? And remember, excellent weather prediction is the first line of defense against severe weather that might be produced by global warming.
82 million dollars apiece
(4) Future computer resources should be divided between high-demand operational forecasting, which requires dedicated large machines, and less time-sensitive research/development runs, which could make use of cloud computing. Thus, future NOAA computer resources will be a hybrid.
(5) Current operational numerical prediction in the National Weather Service is conducted at the NCEP Central Operations center. This center has not been effective, has unnecessarily slowed the transition of important changes into operations, and must be reorganized or replaced with a more facile, responsive entity.
U.S. citizens can enjoy far better weather forecasts, saving many lives and tens of billions of dollars per year. But to do so will require that NOAA/NWS secure vastly increased computer resources, and reorganize weather model development and operations to take advantage of them.