Google Earth Engine Python API – Neighborhood Statistics in Google Earth Engine

google-earth-engine, google-earth-engine-python-api

I think I might be misunderstanding the reduceNeighborhood function in GEE. I'm trying to use elevation data to mask out land cover data, so I'm running statistics on the elevation data to figure out where to apply a mask. Specifically, I want to compare the elevation in each pixel with the elevations of the surrounding pixels, so I used reduceNeighborhood. However, I'm not getting the results I expected, so I'm not sure it is doing what I think it is. Here is the code I'm using:

#packages
import ee
import geemap
import geemap.colormaps as cm

#modis land cover data (500 m resolution) (data to be masked)
lc_igbp = ee.ImageCollection('MODIS/006/MCD12Q1').select('LC_Type1')
lc_scale = 500

#jaxa elevation data (30 m resolution) (data which will be used to define mask for modis data)
elev = ee.ImageCollection('JAXA/ALOS/AW3D30/V3_2').select('DSM')
elev_scale = 30

#the datasets have different resolutions, so I define a target projection to use with reduceResolution (and reproject the MODIS data from its original sinusoidal projection, otherwise it visualizes strangely)
target = lc_igbp.first().reproject('EPSG:4326',scale=lc_scale) 
proj = target.projection()

#reprojecting image otherwise I get an error when reducing resolution
elev_im = elev.median().reproject('EPSG:4326',scale=elev_scale)

#reducing resolution using both mean and standard deviation reducers
elev_mean = elev_im.reduceResolution(reducer=ee.Reducer.mean(),maxPixels=1024).reproject(proj)
elev_sd = elev_im.reduceResolution(reducer=ee.Reducer.stdDev(),maxPixels=1024).reproject(proj)

#neighbourhood mean of elevation standard deviation reduction
v_1 = elev_sd.reduceNeighborhood(reducer=ee.Reducer.mean(),
                                 kernel=ee.Kernel.square(radius=2))

#difference between the mean elevation and the neighbourhood mean elevation
v_2 = elev_mean.subtract(elev_mean.reduceNeighborhood(reducer=ee.Reducer.mean(),
                                                      kernel=ee.Kernel.square(radius=2))).abs()

When I visualize the images, elev_mean and elev_sd look as I expect, but v_1 and v_2, the results of the reduceNeighborhood functions, do not.

#define trial bounding box
grid = ee.Geometry.Rectangle(-115.75,50.75,-115.25,51.25)

#visualizations
#visualization characteristics
palette = cm.get_palette(cmap_name='Spectral', n_class=60, hashtag=True)
vis_elev = {'min': 0,
            'max': 8768,
            'palette': palette}

vis_stats = {'min': 0,
             'max': 200,
             'palette': palette}

#generate map
Map = geemap.Map(center=(51, -115.5), zoom=10)
Map

Map.addLayer(elev_mean.clip(grid),vis_elev,'elev_mean')
Map.addLayer(elev_sd.clip(grid),vis_stats,'elev_sd')

Map.addLayer(v_1.clip(grid),vis_stats,'v_1')
Map.addLayer(v_2.clip(grid),vis_stats,'v_2')

When I zoom in on v_2, the values around the edge of each pixel are different from those in the middle:

[Image: difference between elev_mean and the neighborhood mean elevation]

This was mentioned in Statistics of Image Neighborhood in Google Earth Engine, but I didn't understand the reason for it. My understanding of reduceNeighborhood was that it takes the surrounding pixels (as defined by the ee.Kernel; here I am using a square kernel with a radius of 2, although maybe that does not mean what I think it means), applies a reducer, and returns the result in the originating pixel, like so:

[Image: sketch of the expected reduceNeighborhood behavior, with the reducer applied over the kernel window and the result placed in the originating pixel]

But this does not appear to be the case, so maybe I am misunderstanding how the function works or using it incorrectly.
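
As a quick sanity check on the kernel itself (my assumption being that radius=2 with the default units of pixels describes a 5 x 5 window), I inspected it directly:

#sanity check (assumption): a square kernel with radius=2 and the default
#units='pixels' should describe a 5 x 5 window centred on the target pixel
kernel = ee.Kernel.square(radius=2)
print(kernel.getInfo())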

Can anyone tell me what is going on at the edges of the pixels?

The hope was to put limits on v_1 and v_2, so that wherever either exceeds a certain value a mask is applied to the land cover data. However, with the current results for v_1 and v_2, I end up masking out partial pixels of the land cover dataset because of the different values around the edges of each pixel: the edges get masked, but not the whole pixel.
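
For reference, here is a minimal sketch of the masking step I have in mind (the threshold values and the lc_masked name are placeholders, not part of the code above):

#hedged sketch of the intended masking step; the thresholds are
#hypothetical values chosen only for illustration
sd_limit = 50      #max allowed neighbourhood std. dev. of elevation (m)
diff_limit = 100   #max allowed |pixel mean - neighbourhood mean| (m)

#keep land cover only where both statistics stay below the limits
keep = v_1.lt(sd_limit).And(v_2.lt(diff_limit))
lc_masked = lc_igbp.first().updateMask(keep)
Map.addLayer(lc_masked.clip(grid), {}, 'lc_masked')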

Best Answer

reduceNeighborhood (and all the other neighborhood operations such as glcmTexture, slope, aspect, etc.) operates in units of "output pixels". So when you zoom in on an interactive map, you're working on pixels that are smaller than the input's native pixels, and the kernel therefore covers a smaller ground area than you intended. This is only an issue for interactive browsing, where the pixel resolution changes with the zoom level; if you export the image, the computation is done in the output projection and scale specified in the Export.

To fix this for interactive browsing, reproject the results (as the very last step) so that the computations are tied to a fixed projection rather than the map's zoom-dependent one.
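
For example (a minimal sketch based on the variable names in the question, where proj is the 500 m MODIS projection defined there), reprojecting the neighborhood results before adding them to the map locks the computation to that grid:

#sketch of the fix, assuming the variable names from the question:
#force the neighbourhood reductions to be evaluated on the 500 m grid
v_1_fixed = v_1.reproject(proj)
v_2_fixed = v_2.reproject(proj)

Map.addLayer(v_1_fixed.clip(grid), vis_stats, 'v_1 (fixed projection)')
Map.addLayer(v_2_fixed.clip(grid), vis_stats, 'v_2 (fixed projection)')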
