Before the age of computers, weather forecasters analysed observations plotted on paper charts, drew isobars and other features and – based on their previous knowledge and experience – constructed charts of conditions at a future time, often one day ahead.
They combined observational data and rules of thumb based on physical principles to predict what would follow from a given state. The results were undependable for two main reasons: the data was sparse and the empirical rules were unreliable.
For the past 60 years or so, forecasts have been based on computer models that numerically solve the mathematical equations expressing the physical laws. This approach is radically different but, after a shaky start, the numerical weather prediction (NWP) models have become remarkably accurate, with the range of useful forecasts extending by about one day each decade. Now there are signs of a return to the earlier, data-driven approach.
With the inexorable march of artificial intelligence (AI), it is now possible to build models that take input weather data and produce forecasts without direct appeal to atmospheric physics. Using enormous databases of previous observations and forecasts, and machine learning algorithms, these models learn weights for the input values that maximise the accuracy of the predicted quantities.
There are many millions of weights but, once computed, they can be used repeatedly to produce predictions far more rapidly than conventional models based on integrating the equations of motion.
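To make the idea concrete, here is a toy sketch of a data-driven forecast. Everything in it is illustrative and assumed: the data is synthetic and the "model" is a single set of least-squares weights, the simplest possible stand-in for the millions of weights in a real system. It shows only the two points made above: the weights are fitted once from an archive of past cases, and a forecast is then a single, almost instantaneous calculation.

```python
# Toy illustration only -- not any operational system.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "historical archive": 1,000 past states of 50 grid-point values
# (inputs), each paired with the state 24 hours later (targets).
X = rng.normal(size=(1000, 50))               # today's gridded values
true_W = 0.1 * rng.normal(size=(50, 50))      # unknown "dynamics" to be learned
Y = X @ true_W + 0.01 * rng.normal(size=(1000, 50))  # tomorrow's values

# "Training": fit weights mapping today's state to tomorrow's,
# minimising squared error over the archive.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Once the weights exist, a forecast is one matrix multiplication --
# no equations of motion are integrated.
todays_state = rng.normal(size=(1, 50))
forecast = todays_state @ W
print(forecast.shape)   # (1, 50): predicted values for tomorrow
```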
The most spectacular progress has been in the field of nowcasting, or forecasting for a few hours ahead. Accurate prediction of extreme events such as thunderstorms can be challenging. Radar data is invaluable but is difficult to assimilate into conventional models.
Recently, a team at Google Research developed a system that takes a sequence of radar images and predicts what the radar will show for the next six hours. They solve a computer vision problem: there is no input of physical laws, but the system learns from training data how to simulate the physical behaviour. The predictions are effectively instantaneous, and more accurate than conventional forecasts that take hours to produce.
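The flavour of this "radar video in, radar video out" approach can be sketched as follows. The sketch assumes PyTorch and uses a deliberately tiny convolutional network with made-up frame counts and grid size; the Google Research system is far larger and more sophisticated. The point is simply that the mapping from past frames to future frames contains no physical equations, only weights adjusted during training so that the output matches what the radar actually showed.

```python
# Minimal sketch of nowcasting as frame prediction (assumes PyTorch).
import torch
import torch.nn as nn

N_PAST, N_FUTURE, H, W = 4, 12, 64, 64   # hypothetical frame counts and grid size

model = nn.Sequential(
    nn.Conv2d(N_PAST, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, N_FUTURE, kernel_size=3, padding=1),  # one channel per future frame
)

past_frames = torch.rand(1, N_PAST, H, W)   # last few radar scans (synthetic here)
future_frames = model(past_frames)          # predicted reflectivity for coming hours
print(future_frames.shape)                  # torch.Size([1, 12, 64, 64])

# In training, the weights would be tuned so future_frames matches the
# radar images that actually followed in the historical archive.
```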
The HRES model of the European Centre for Medium-Range Weather Forecasts (ECMWF) is the world's most accurate deterministic operational weather forecasting system. An AI prediction system called GraphCast, developed by researchers at Google DeepMind, is based on graph neural networks and is trained on historical weather data from ECMWF. Recent results show that GraphCast has substantially greater skill than HRES for the majority of variables and time ranges tested. This represents a major advance in weather modelling.
Current NWP models comprise a dynamical core based on the equations of motion, and parameterisations, or approximate representations, of physical processes. The physics can already be improved using AI, but some scientists believe that it may be time to "dump the dynamical core" and replace it with DLWP (deep learning weather prediction).
One advocate of this is Prof Dale Durran of the University of Washington. He argues that such a change can reduce the time required to produce forecasts, yield better probabilistic predictions and capture extreme weather events more reliably.
ECMWF, based in Reading, has no plans to replace the dynamical core of their model any time soon. But they are watching developments in DLWP and are keenly aware that dramatic changes in how we forecast the weather are on the horizon.
Peter Lynch is emeritus professor at the School of Mathematics & Statistics, University College Dublin. He blogs at thatsmaths.com