[GIS] Correct resampling method to simulate lower resolution satellite data

Tags: aggregation, gdal, pixel, resampling

I would like to simulate lower-resolution satellite imagery from continuous higher-resolution imagery. For example, I want to take Landsat-derived leaf area index data with a resolution of 30 m and simulate a MODIS-type resolution of 250 m or 100 m.
The operation can be done using 'gdalwarp':

    gdalwarp -overwrite -tr 250 250 Landsat30m.tif Landsat_at250m.tif

This uses GDAL's default resampling method, nearest neighbour. It is fast but not very accurate for aggregation, because each output pixel takes the value of the single input pixel nearest to its centre, ignoring all the other 30 m pixels that fall within the 250 m footprint.
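
The default is equivalent to passing the resampling method explicitly with -r:

    gdalwarp -overwrite -tr 250 250 -r near Landsat30m.tif Landsat_at250m.tif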

Other available resampling methods are:

-r resampling_method:
Resampling method to use. Available methods are:

near:
    nearest neighbour resampling (default, fastest algorithm, worst interpolation quality).
bilinear:
    bilinear resampling. 
cubic:
    cubic resampling. 
cubicspline:
    cubic spline resampling. 
lanczos:
    Lanczos windowed sinc resampling. 
average:
    average resampling, computes the average of all non-NODATA contributing pixels. (GDAL >= 1.10.0) 
mode:
    mode resampling, selects the value which appears most often of all the sampled points. (GDAL >= 1.10.0) 
max:
    maximum resampling, selects the maximum value from all non-NODATA contributing pixels. (GDAL >= 2.0.0) 
min:
    minimum resampling, selects the minimum value from all non-NODATA contributing pixels. (GDAL >= 2.0.0) 
med:
    median resampling, selects the median value of all non-NODATA contributing pixels. (GDAL >= 2.0.0) 
q1:
    first quartile resampling, selects the first quartile value of all non-NODATA contributing pixels. (GDAL >= 2.0.0) 
q3:
    third quartile resampling, selects the third quartile value of all non-NODATA contributing pixels. (GDAL >= 2.0.0) 

My first guess would be to use 'average', as it would take the mean of all source pixels covered by the lower-resolution pixel. However, the value recorded by a real sensor pixel is defined by its point spread function (PSF), which weights the contributing area non-uniformly rather than equally. Is there any algorithm that would come closer to simulating lower-resolution satellite pixel data than 'average'?
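
For illustration, one common first approximation is to blur the high-resolution image with a Gaussian PSF at the source resolution before aggregating. The sketch below assumes a Gaussian PSF whose FWHM equals the target pixel size; the filenames, the FWHM choice, and the use of rasterio/scipy are placeholders of mine, and NODATA handling is omitted for brevity.

    import numpy as np
    import rasterio
    from scipy.ndimage import gaussian_filter

    SRC = "Landsat30m.tif"          # 30 m input, as in the example above
    BLURRED = "Landsat30m_psf.tif"  # intermediate raster, still at 30 m
    SRC_RES = 30.0                  # source pixel size (m)
    DST_RES = 250.0                 # simulated pixel size (m)

    with rasterio.open(SRC) as src:
        data = src.read(1).astype("float64")
        profile = src.profile

    # Convert the assumed FWHM (one coarse pixel, expressed in source-pixel
    # units) to a standard deviation: FWHM = 2*sqrt(2*ln 2)*sigma.
    sigma_px = (DST_RES / SRC_RES) / (2.0 * np.sqrt(2.0 * np.log(2.0)))

    # Convolve at the source resolution; this mimics the sensor's PSF
    # spreading the signal before it is sampled onto the coarse grid.
    blurred = gaussian_filter(data, sigma=sigma_px)

    profile.update(dtype="float64", count=1)
    with rasterio.open(BLURRED, "w", **profile) as dst:
        dst.write(blurred, 1)

The blurred intermediate could then be aggregated to the coarse grid, e.g. with 'average':

    gdalwarp -overwrite -tr 250 250 -r average Landsat30m_psf.tif Landsat_at250m_psf.tif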

Best Answer

If I understand the question correctly, this paper could be interesting. It discusses the process of reducing the spatial resolution of satellite imagery and explains a procedure for degrading Landsat MSS data to a 125 m × 125 m resolution. I stumbled upon both your question and this paper by chance, so I didn't go through the effort of reading everything in the paper in detail, but I hope it helps. There is a scheme of the process on page 10. The technique may not be applicable to your particular situation without certain adaptations, but then at least you know.
