I have a small raster (or matrix) and I would like to know the difference between each pixel and the average of its 8 neighbors, including at the edges, where some of those neighbors do not exist. When all 8 neighboring pixels have "real" values I can calculate the average for each pixel by using scipy.ndimage.convolve with the kernel np.array([[0.125, 0.125, 0.125], [0.125, 0, 0.125], [0.125, 0.125, 0.125]]).
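For the interior pixels, that convolution looks like this (a minimal sketch using the example matrix below, cast to float; the variable names are mine):

```python
import numpy as np
from scipy import ndimage

a = np.array([[13, 21, 13,  8],
              [ 5, 10, 22, 14],
              [21, 33,  9,  0],
              [ 0,  0,  0,  0]], dtype=float)

# Each of the 8 neighbors weighted by 1/8, centre weighted 0,
# so the convolution yields the neighbor average directly.
kernel = np.array([[0.125, 0.125, 0.125],
                   [0.125, 0.0,   0.125],
                   [0.125, 0.125, 0.125]])

neighbour_avg = ndimage.convolve(a, kernel)  # default mode='reflect' at edges
diff = a - neighbour_avg
```

For the interior cell (1, 1) with value 10, the neighbor average is (13+21+13+5+22+21+33+9)/8 = 17.125, as expected; only the edge cells are wrong, which is exactly the problem described below.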
The problem is on the edges.
For example, if this is my matrix:
array([[13, 21, 13,  8],
       [ 5, 10, 22, 14],
       [21, 33,  9,  0],
       [ 0,  0,  0,  0]])
I would like to calculate the average of the neighbors of the upper left cell (value 13). The desired calculation will be
(21+10+5)/3 = 12
For the pixel in the first column and second row (value 5), the neighbors are 13, 21, 10, 21 and 33, so the desired calculation will be –
(13+21+10+21+33)/5 = 19.6
Note that I need to ignore the central pixel (value 13 or 5) and all the cells that fall outside the 3×3 kernel window, and also that the number of neighboring pixels with "real" values differs from pixel to pixel.
I read about the mode argument that controls edge handling, here and in other places, but could not find a solution.
Best Answer
The easiest way is to combine numpy's NaN-aware functions (in this case nanmean) with ndimage.generic_filter. Padding the edges with NaN (mode='constant', cval=np.nan) makes nanmean drop every cell that falls outside the matrix, so each edge pixel is averaged over only its real neighbors. If you want to ignore the centre cell as well, you can do so by supplying a footprint that excludes it.
This returns the desired result, which we can check: the top left cell is (21 + 5 + 10) / 3 == 12, the second row, first column (value 5) becomes (13 + 21 + 10 + 21 + 33) / 5 == 19.6, and the bottom right (value 0) becomes (9 + 0 + 0) / 3 == 3.