Navier-Stokes – How to Calculate the Upper Limit of Reliable Weather Forecasting Days

navier-stokes; turbulence; weather

To put it bluntly, weather is described by the Navier-Stokes equation, which in turn exhibits turbulence, so eventually predictions will become unreliable.

I am interested in a derivation of the time-scale where weather predictions become unreliable. Let us call this the critical time-scale for weather on Earth.

We could estimate this time-scale if we knew some critical length and velocity scales. Since weather basically lives on an $S^2$ with radius of the Earth we seem to have a natural candidate for the critical length scale.

So I assume that the relevant length scale is the radius of the Earth (about 6400 km) and that the relevant velocity scale is some typical wind speed (say, 25 m/s, but frankly, I am taking this number out of, well, thin air). Then I get a typical time-scale of $2.6\cdot 10^5\,\mathrm{s}$, which is roughly three days.
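For concreteness, the estimate above is just the advective time scale $T = L/U$ with the question's assumed numbers (Earth's radius for $L$, 25 m/s for $U$); a minimal sketch:

```python
# Back-of-the-envelope advective time scale T = L / U.
# Both input numbers are the question's rough assumptions.
L = 6.4e6   # length scale: radius of the Earth, in metres
U = 25.0    # velocity scale: a typical wind speed, in m/s

T = L / U               # time scale in seconds
days = T / 86400.0      # convert to days

print(f"T = {T:.2e} s  ~  {days:.1f} days")
```

This reproduces the $2.6\cdot 10^5$ s (about three days) quoted above.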

The result of three days does not seem completely unreasonable, but I would like to see an actual derivation.

Does anyone know how to obtain a more accurate and reliable estimate of the critical time-scale for weather on Earth?

Best Answer

I don't think that such a computation of a theoretical limit of accuracy is possible. There are several sources of uncertainty in weather models:

  • initial and boundary data,

  • parameterizations,

  • numerical instability, rounding and approximation errors of the numerical scheme employed to solve the Navier-Stokes equations for the atmosphere.

The term "parameterization" refers to the approximation of all subgrid processes, that is, all processes and influences that happen at a scale smaller than the length of a grid cell. This includes effects of the topography or of the local albedo. More sophisticated approximations of subgrid processes can actually lead to less predictivity of a model, because the more detailed initial and boundary data they require are not available.

The Navier-Stokes equations themselves are usually approximated only down to a minimum length scale that is far larger than the scales needed to resolve turbulent flows; approximations of this kind are called large eddy simulations.

The accuracy of this truncation depends critically on the kind of flow and turbulence.

While I don't think that a theoretical limit can be derived, what people do instead is perform ensemble runs, in which the results of a weather model calculated with slightly perturbed initial and boundary data are compared.

An example of such a "twin" experiment can be found here:

(The result is that the error becomes significant after ca. 15 days of simulated time.)
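The sensitivity to initial data that such twin experiments probe can be illustrated with a toy model. The sketch below uses the Lorenz-63 system (a standard chaotic stand-in, not an actual weather model; the parameters, perturbation size, and divergence threshold are all illustrative assumptions) and measures how long two nearly identical initial states stay close:

```python
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One RK4 step of the Lorenz-63 system (classical parameters)."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    def nudge(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))

    k1 = f(state)
    k2 = f(nudge(state, k1, dt / 2))
    k3 = f(nudge(state, k2, dt / 2))
    k4 = f(nudge(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def divergence_time(eps=1e-8, dt=0.01, t_max=50.0, threshold=1.0):
    """Twin run: perturb one coordinate by eps and report the model
    time at which the two trajectories separate past the threshold."""
    a = (1.0, 1.0, 1.0)
    b = (1.0 + eps, 1.0, 1.0)
    t = 0.0
    while t < t_max:
        a = lorenz_step(a, dt)
        b = lorenz_step(b, dt)
        t += dt
        sep = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        if sep > threshold:
            return t
    return t_max

print(f"trajectories diverge after t ~ {divergence_time():.1f} model time units")
```

The point of the toy run is qualitative: an initially tiny error grows roughly exponentially until it saturates, so the "reliable horizon" depends only logarithmically on the initial error, which is why halving the measurement error buys only a little extra forecast time.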