How to improve weather predictions by reducing precision

That’s Maths: Extra decimal places come at a cost and do not necessarily improve ensemble forecasts

Atmospheric flow is chaotic: a small change in the starting values can lead to a wildly different forecast.

Weather forecasting relies on supercomputers used to solve the mathematical equations that describe atmospheric flow. The accuracy of the forecasts is constrained by available computing power.

Processor speeds have not increased much in recent years, and speed-ups are achieved by running many processes in parallel. Energy costs have risen rapidly: there is a multimillion-euro annual power bill to run a supercomputer, which may consume something like 10 megawatts.

An image from a video posted on social media of a tornado in Mangum, Oklahoma. Photograph: Lorraine Matti via Reuters

Early computer programs for weather prediction, written in the Fortran language, stored numbers in single precision. Each number had 32 binary digits, or bits, corresponding to about seven significant decimal digits. Later models moved to double precision, with each number having 64 bits and about 15 accurate decimal digits. This is now common practice.
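
As an aside for readers who want to see the numbers, the short Python sketch below (an illustration added here, not part of any forecast model) uses the NumPy library to report the precision of the 16-, 32- and 64-bit floating-point formats mentioned in this article:

```python
import numpy as np

# Machine epsilon is the gap between 1.0 and the next representable number;
# np.finfo also reports roughly how many decimal digits each format carries.
for dtype in (np.float16, np.float32, np.float64):
    info = np.finfo(dtype)
    print(f"{dtype.__name__:>8}: {info.bits} bits, "
          f"eps = {info.eps:.1e}, ~{info.precision} significant decimal digits")
```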


It seems self-evident that higher numerical precision would result in greater forecast accuracy, but this is not necessarily the case. Observations, which provide the starting data, have only a few digits of precision. We may know the temperature to one-tenth of a degree or the wind speed to within one metre per second. Representing these values with several digits beyond the decimal point may be futile. It could be likened to giving somebody’s height to the nearest millimetre or, in double precision, the nearest micron, which is meaningless.
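
A small sketch in the same spirit, with an invented temperature value, compares the rounding error introduced by single precision with the uncertainty of the observation itself:

```python
import numpy as np

# An illustrative temperature observation, known only to the nearest 0.1 degree.
observation_uncertainty = 0.1                 # degrees Celsius
temperature = 15.7                            # degrees Celsius

# Storing it in single precision perturbs it by a tiny amount...
stored = np.float32(temperature)
rounding_error = abs(float(stored) - temperature)

print(f"stored as float32 : {float(stored)!r}")
print(f"rounding error    : {rounding_error:.1e} degrees")
print(f"observation error : {observation_uncertainty:.1e} degrees")
# ...orders of magnitude smaller than the uncertainty of the measurement itself,
# so the extra digits of double precision add no physical information here.
```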


Computational resources

Of course, higher precision does reduce errors during the millions of computations needed for a forecast. But higher precision also implies greater storage requirements, larger data transmission volumes and longer processing times. Are these costs justified or can limited computational resources be used in more efficient ways?

At the European Centre for Medium-Range Weather Forecasts (ECMWF), researchers have been seeking ways to reduce the computational cost of forecasting. They have found that 64-bit accuracy is not necessary and that, with 32-bit numbers, forecasts of the same quality are obtained much faster. The saving of about 40 per cent can be used in other ways, such as enhancing spatial resolution or increasing ensemble size. Thus, a reduction in numerical precision can result in improved forecast accuracy.
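
The storage side of that trade-off is easy to quantify. The sketch below assumes an invented grid size, not ECMWF’s actual resolution:

```python
import numpy as np

# A hypothetical global grid, invented for illustration:
# 1600 x 3200 horizontal points on 100 vertical levels.
n_points = 100 * 1600 * 3200

for dtype in (np.float64, np.float32, np.float16):
    gib = n_points * np.dtype(dtype).itemsize / 1024**3
    print(f"{np.dtype(dtype).name}: {gib:.2f} GiB per model field")
# Halving the word length halves storage and data-transfer volume; that is the
# headroom that can instead be spent on finer resolution or a larger ensemble.
```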

Chaotic flow

Atmospheric flow is chaotic: this means that a small change in the starting values can lead to a wildly different forecast. Chaos was vividly described by the meteorologist Ed Lorenz, who asked whether a tiny disturbance could cause a storm on a far-away continent. The idea is encapsulated in a limerick:

Lorenz demonstrated, with skill,

The chaos of heatwave and chill:

Tornadoes in Texas

Are formed by the flexes

Of butterflies’ wings in Brazil
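
The sensitivity in the verse can be reproduced with Lorenz’s own three-variable system of 1963. The sketch below uses his standard parameter values and a deliberately simple time-stepping scheme, purely for illustration:

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 system (kept simple on purpose)."""
    x, y, z = state
    tendency = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * tendency

# Two runs whose starting points differ by one part in a billion.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])

for step in range(6001):
    if step % 1200 == 0:
        print(f"t = {step * 0.005:4.1f}  separation = {np.linalg.norm(a - b):.2e}")
    a, b = lorenz_step(a), lorenz_step(b)
# The tiny initial difference grows roughly exponentially until the two
# trajectories bear no resemblance to each other.
```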

To allow for chaos, meteorologists run their models multiple times with slightly different starting values. The spread of these “ensembles” gives a measure of the confidence that can be placed in a forecast and allows probabilities to be assigned to different scenarios. Researchers at the centre were surprised that reducing the numerical precision of the computations had little influence on the quality of the ensemble forecasts.
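
The ensemble idea, and the precision question, can be mimicked in miniature. The sketch below uses the logistic map as a toy stand-in for the atmosphere; the perturbation size and the number of members are arbitrary illustrative choices, and the exercise is not the centre’s actual experiment:

```python
import numpy as np

def run_ensemble(dtype, n_members=20, n_steps=50, r=3.9):
    """Advance an ensemble of slightly perturbed states with the logistic map,
    a toy chaotic system standing in for the atmosphere, in a chosen precision."""
    rng = np.random.default_rng(seed=1)
    # Every member starts from the same 'analysis' plus a tiny perturbation.
    states = (0.4 + 1e-4 * rng.standard_normal(n_members)).astype(dtype)
    r = dtype(r)
    for _ in range(n_steps):
        states = r * states * (dtype(1.0) - states)
    return states

for dtype in (np.float64, np.float32):
    members = run_ensemble(dtype)
    print(f"{np.dtype(dtype).name}: mean = {members.mean():.3f}, "
          f"spread (std dev) = {members.std():.3f}")
# The spread of the members, not any single run, carries the confidence
# information; comparing it across precisions is, in miniature, the kind of
# test the ECMWF researchers carried out on the full forecast model.
```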

For climate simulations, which run over decades or centuries, substantial savings are expected from reduced precision. Some sensitive operations, such as matrix inversion, may require double precision, but the bulk of the calculations can be done with 32 bits. Current work is also testing half precision (16-bit numbers) for non-sensitive components of the models.
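
That mixed-precision pattern can be sketched in generic form; the matrix and its size below are invented for illustration and are not taken from any climate model:

```python
import numpy as np

def solve_mixed_precision(a32, b32):
    """Bulk data lives in single precision; the sensitive linear solve is
    promoted to double precision and the result cast back to 32 bits."""
    x64 = np.linalg.solve(a32.astype(np.float64), b32.astype(np.float64))
    return x64.astype(np.float32)

rng = np.random.default_rng(0)
n = 500
a = rng.standard_normal((n, n)).astype(np.float32)   # fields kept in 32 bits
b = rng.standard_normal(n).astype(np.float32)

x = solve_mixed_precision(a, b)
residual = np.linalg.norm(a.astype(np.float64) @ x.astype(np.float64) - b)
print(f"solution stored as {x.dtype}, residual norm = {residual:.1e}")
```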

Peter Lynch is emeritus professor at UCD school of mathematics and statistics, University College Dublin. He blogs at thatsmaths.com