Solved – Stationarity – assumptions and examination

geostatistics · interpolation · spatial · stationarity

I am examining rodent captures on six permanent trapping grids, each measuring 150 × 150 meters and consisting of 121 trap stations evenly spaced 15 meters apart. All six grids lie within a study site that is < 1000 hectares in size. I would like to interpolate the capture data to create a Kriged surface of rodent activity. An assumption of interpolation is that the data are stationary.

As Fortin & Dale (2005) state:

stationarity is required for making inferences from a model that
characterizes the process of the spatial structure of data at
locations that are not sampled.

From what I understand, a process can be described as stationary when its statistical properties (mean and variance) do not vary across space.
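In symbols (my own notation, stating the second-order/weak stationarity condition as it is usually given in geostatistics texts):

```latex
% Second-order (weak) stationarity of a spatial process Z(s):
% constant mean, and covariance depending only on the separation h.
\[
  \mathrm{E}[Z(s)] = \mu \quad \text{for all } s,
  \qquad
  \mathrm{Cov}\big(Z(s),\, Z(s+h)\big) = C(h) \ \text{ (independent of } s).
\]
% Ordinary kriging needs only the weaker intrinsic hypothesis:
% Var[Z(s+h) - Z(s)] = 2\gamma(h), the (semi)variogram.
```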

But isn't variation across space why we conduct spatial analysis in the first place?

Stationarity is very often introduced in the spatial/geostatistical analysis literature, but I have yet to find solid direction and information about

  1. what scale, or for which types of studies, it is reasonable to
    assume your data are stationary,
  2. how to examine and verify data are stationary, and lastly,
  3. once quantified in some way, how much difference from one area to
    the next qualifies your data as non-stationary?

Thus far, after reviewing the literature, the concept and the examination of stationarity seem highly subjective, arbitrary, and/or obfuscated.

If anyone can provide some practical advice with this problem I'd greatly appreciate it!

Best Answer

There are always two ways to calculate statistics with the kind of data you're talking about:

  1. Calculate statistics within one grid.
  2. Calculate statistics between different grids.

Now, there is no reason that the statistical properties within one grid have to match the statistical properties between grids. They could conceivably be completely different: one grid could be in a minefield with no rats and another could be in downtown Baltimore. Clearly, the distribution of rats would be quite different depending on which way I slice the data, that is, across grids or within grids.
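As a rough illustration of those two slicings (a minimal sketch; the file name `captures.csv` and the column names `grid_id` and `captures` are assumptions, not from the original data):

```python
import pandas as pd

# Hypothetical trap-level capture data: one row per trap station,
# with a grid identifier and a capture count (column names assumed).
df = pd.read_csv("captures.csv")  # columns: grid_id, x, y, captures

# 1. Statistics within each grid: mean, variance, and station count per grid.
within = df.groupby("grid_id")["captures"].agg(["mean", "var", "count"])
print(within)

# 2. Statistics across grids: pool all stations and summarize once.
pooled = df["captures"].agg(["mean", "var", "count"])
print(pooled)
```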

Stationarity is the assumption that the statistics you calculate are the same regardless of which way you slice the data. Practically speaking, you can "examine and verify data are stationary" by analyzing means, variances, histograms, etc., within sites and then across sites, and seeing whether they agree to within confidence intervals. There are no hard and fast rules; you do the best with the data you have and the techniques at your disposal, try to justify them mathematically, and present practical results. I would say that you can justify your methods if you can show stationarity in this way at some standard confidence level, say 95% or 99%.
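One way to make that comparison concrete is sketched below. This is not a prescribed procedure, just one option: it assumes the same hypothetical `captures.csv` as above and uses standard scipy tests, whose assumptions you would want to check for count data.

```python
import pandas as pd
from scipy import stats

# Same hypothetical trap-level data as in the sketch above.
df = pd.read_csv("captures.csv")  # columns: grid_id, x, y, captures

# Split capture counts into one array per grid.
groups = [g["captures"].to_numpy() for _, g in df.groupby("grid_id")]

# Do the grid means differ? Kruskal-Wallis is a rank-based alternative to
# one-way ANOVA that is less sensitive to skewed count data.
h_stat, p_mean = stats.kruskal(*groups)

# Do the grid variances differ? Levene's test for homogeneity of variance.
w_stat, p_var = stats.levene(*groups)

print(f"Kruskal-Wallis (means): H = {h_stat:.2f}, p = {p_mean:.3f}")
print(f"Levene (variances):     W = {w_stat:.2f}, p = {p_var:.3f}")

# Large p-values (e.g. > 0.05) are consistent with the grids sharing a common
# mean and variance, i.e. no evidence against stationarity at the between-grid scale.
```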
