Climate prediction models are markedly defective, even in reproducing the changes that have already occurred. Given the great importance of climate change, we must identify the causes of model errors and reduce the uncertainty of climate predictions.
In 1979 a study group led by Jule Charney, one of the greatest meteorologists of the 20th century, submitted a report to the Climate Research Board of the National Research Council in the United States.
The task of the group was to assess the scientific basis for projection of possible future climatic changes resulting from man-made releases of carbon dioxide into the atmosphere. The report was entitled Carbon Dioxide and Climate: A Scientific Assessment.
The study group estimated that the most probable global warming for a doubling of CO2 was 3°C, with an error of plus or minus 1.5°C. In 2014 the fifth assessment report from the Intergovernmental Panel on Climate Change stated that warming due to a doubling of CO2 is “likely between 1.5 degrees and 4.5 degrees”, the same uncertainty range as the Charney report some 35 years earlier. Thus, notwithstanding major improvements in climate modelling, large uncertainties remain about how much the Earth will warm in response to increasing CO2.
Uncertainty
Climate models are complex computer programs with millions of lines of code. Some of the atmospheric and oceanic processes are represented well. For example, fluid flow is described accurately by the Navier-Stokes equations. Other processes are accounted for in a much cruder fashion.
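For reference, one standard form of these equations for an incompressible fluid (a simplification of what climate models actually solve) is:

```latex
% Momentum and continuity equations for an incompressible fluid
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{g},
\qquad
\nabla\cdot\mathbf{u} = 0
```

Here u is the velocity field, p the pressure, ρ the density, ν the kinematic viscosity and g the gravitational acceleration. Climate models solve more elaborate versions on a rotating sphere, coupled to equations for temperature, moisture and other quantities.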
What are the reasons for errors and uncertainties in climate predictions? The treatment of clouds and their interaction with radiation from the sun and the Earth is the main culprit. To model the climate, we represent the continuous atmosphere and ocean by a set of values on a computational grid.
The pressure, temperature, humidity and winds, which vary from place to place, are specified at points separated by tens of kilometres. Details at finer scales – so-called sub-grid processes – are poorly represented in the models and lead to errors that grow with time.
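To make the idea of sub-grid detail concrete, here is a minimal sketch, purely illustrative and not drawn from any real model or dataset, that averages a synthetic high-resolution field onto a coarse grid. Whatever varies within a coarse box is invisible to the model and must be parameterised.

```python
import numpy as np

# Synthetic "truth": a humidity-like field on a 1 km grid (purely illustrative).
rng = np.random.default_rng(0)
fine = rng.random((256, 256))            # 256 x 256 points at 1 km spacing

# Coarse model grid: one value per 32 km x 32 km box (tens of kilometres).
block = 32
boxes = fine.reshape(256 // block, block, 256 // block, block)
coarse = boxes.mean(axis=(1, 3))         # what the climate model actually carries

# The variance within each box is sub-grid detail the coarse model cannot see.
subgrid_var = boxes.var(axis=(1, 3))
print("coarse grid shape:", coarse.shape)             # (8, 8)
print("mean sub-grid variance:", subgrid_var.mean())  # detail lost to averaging
```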
As computer power increases, the grids are refined, leading to greater confidence in the predictions, but it will be decades before grids at a 1 km scale can be used for long-term climate simulations. In the meantime, we must seek other ways to narrow the uncertainty.
Weather forecasts
Short-range simulations, for days or weeks, using very high-resolution models are now in routine use for weather forecasting. Can we use information from these to improve the treatment of clouds in low-resolution models?
Several climate modelling groups are now using machine learning to improve the representation of clouds and other sub-grid processes. Is it possible to train a machine-learning scheme by using short-term, high-resolution data?
Early evidence indicates that this approach is feasible and enables low-resolution models to reproduce some key features of the high-resolution simulations. However, these experiments have been done only with simplified models. We are still far from being able to generate better climate simulations.
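The sketch below shows the general shape of such an approach, not any group's actual scheme: a small neural network, hand-rolled here in NumPy with synthetic data standing in for coarse-grained high-resolution output, learns to predict a sub-grid tendency from coarse-grid variables.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative training pairs: coarse-grid predictors (e.g. temperature, humidity
# in a grid column) and a target sub-grid tendency. In practice these would be
# diagnosed by coarse-graining short, very high-resolution forecasts; here they
# are synthetic.
X = rng.normal(size=(5000, 4))                      # 4 coarse-grid predictors
y = np.sin(X[:, :1]) + 0.5 * X[:, 1:2] * X[:, 2:3]  # stand-in "sub-grid tendency"

# A tiny one-hidden-layer network trained by gradient descent.
W1 = rng.normal(scale=0.5, size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.01
for step in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden layer
    pred = h @ W2 + b2                  # predicted tendency
    err = pred - y
    # Backpropagate the mean-squared-error gradient.
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    gh = err @ W2.T * (1 - h ** 2)
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# The trained network could then supply the sub-grid tendency inside a
# low-resolution model, in place of a conventional parameterisation.
print("final training error:", float((err ** 2).mean()))
```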
There remain some significant problems, one of which is instability, where the simulations become completely unrealistic. Since machine-learning schemes involve “black boxes” such as neural networks, whose internal variables have no obvious physical interpretation, it is difficult to rectify these instabilities.
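One crude safeguard, sketched below with entirely hypothetical placeholder functions, is to bound the learned tendency before it is fed back into the deterministic model; this can suppress blow-ups but does nothing to explain them.

```python
import numpy as np

def dynamics_step(state):
    """Placeholder for one time step of the resolved (deterministic) dynamics."""
    return 0.99 * state + 0.01 * np.roll(state, 1)

def ml_tendency(state):
    """Placeholder for the learned sub-grid tendency (e.g. the network above)."""
    return 0.05 * np.tanh(state)

state = np.random.default_rng(2).normal(size=100)
for step in range(1000):
    state = dynamics_step(state)
    tend = ml_tendency(state)
    # Crude safeguard: clip the learned tendency so a single bad prediction
    # cannot push the simulation into an unphysical regime. This limits
    # blow-ups but leaves the underlying "black box" problem untouched.
    tend = np.clip(tend, -0.1, 0.1)
    state = state + tend
    if not np.all(np.isfinite(state)):
        raise RuntimeError(f"simulation became unstable at step {step}")
```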
These problems are formidable: marrying machine learning to deterministic models is far from trivial, and its potential is yet to be realised.
However, there have been major advances in machine learning over the past decade, especially in computer vision and language translation. These give us hope that as machine-learning techniques improve, we will be able to produce climate predictions with reduced levels of uncertainty.