Solved – Why we don’t normalize the images

conv-neural-network, data-preprocessing, machine-learning, neural-networks

I was watching the video from this Stanford course on convolutional neural nets where the professor says (at 28:59) 'we do zero-mean the pixel values in image but we do not normalize the pixel values much because in images, at each location, we already have relatively comparable scale and distribution'. I do not understand what she means by 'relatively comparable scale and distribution'.

Best Answer

I'm not sure why she said that. Image normalization is extremely common in practice!

Say you have an image with pixel values in $[0, 255]$. Besides subtracting the mean, you also want to divide by either $(\max - \min)$ or by the standard deviation. The first step centers the dataset at zero; the second scales the pixel values into a range close to $[-1, 1]$.

Note that all these statistics (mean, min, max, std) are computed over the whole dataset, not on each image individually.
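As a minimal NumPy sketch of the two options above (random data stands in for a real dataset; the array shapes and variable names are just illustrative):

```python
import numpy as np

# Hypothetical dataset: 100 grayscale 32x32 images with values in [0, 255].
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 32, 32)).astype(np.float64)

# Statistics computed over the WHOLE dataset, not per image.
mean = images.mean()
std = images.std()

# Option 1: zero-mean, then scale by the standard deviation.
standardized = (images - mean) / std

# Option 2: zero-mean, then scale by the (max - min) range,
# which maps values into a range of width 1 inside [-1, 1].
scaled = (images - mean) / (images.max() - images.min())
```

The same `mean`/`std` (or `max`/`min`) values from the training set would then be reused to normalize validation and test images, so all splits are transformed identically.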