I was watching the video from this Stanford course on convolutional neural nets where the professor says (at 28:59): 'we do zero-mean the pixel values in image but we do not normalize the pixel values much because in images, at each location, we already have relatively comparable scale and distribution'. I do not understand what she means by 'relatively comparable scale and distribution'.
Solved – Why we don’t normalize the images
conv-neural-network, data-preprocessing, machine-learning, neural-networks
Related Solutions
A major insight into how a neural network can learn to classify something as complex as image data, given just examples and correct answers, came to me while studying the work of Professor Kunihiko Fukushima on the neocognitron in the 1980s. Instead of just showing his network a bunch of images and using back-propagation to let it figure things out on its own, he took a different approach and trained his network layer by layer, and even node by node. He analyzed the performance and operation of each individual node of the network and intentionally modified those parts to make them respond in intended ways.
For instance, he knew he wanted the network to be able to recognize lines, so he trained specific layers and nodes to recognize three-pixel horizontal lines, three-pixel vertical lines, and specific variations of diagonal lines at all angles. By doing this, he knew exactly which parts of the network could be counted on to fire when the desired patterns existed. Then, since each layer is highly connected, the entire neocognitron as a whole could identify each of the composite parts present in the image no matter where it physically appeared. So whenever a specific line segment existed somewhere in the image, there would always be a specific node that would fire.
Keeping this picture in mind, consider linear regression, which is simply finding a formula (or a line), via sum of squared errors, that passes most closely through your data; that's easy enough to understand. To find curved "lines" we can do the same sum-of-squared-errors calculation, except now we add a few features such as $x^2$ or $x^3$ or even higher-order polynomials; feed the result through a sigmoid and you have a logistic regression classifier. This classifier can find relationships that are not linear in nature. In fact, logistic regression can express relationships that are arbitrarily complex, but you still need to manually choose the correct number of power features to do a good job of predicting the data.
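As a rough illustration of that point, here is a minimal scikit-learn sketch. The circular toy dataset and the polynomial degree are my own illustrative assumptions, not anything from the course:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Toy 2-D dataset: points inside a circle are class 1 -- not linearly separable.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)

# Plain logistic regression sees only x1 and x2 and fails on this data;
# adding hand-chosen power features (x1^2, x1*x2, x2^2) lets it find the circle.
model = make_pipeline(PolynomialFeatures(degree=2), LogisticRegression())
model.fit(X, y)
print(model.score(X, y))  # near 1.0 once the right powers are available
```

The point of the manual `degree=2` choice is exactly the caveat above: you have to know (or guess) which power features the problem needs.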
One way to think of a neural network is to consider the last layer as a logistic regression classifier, and the hidden layers as automatic "feature selectors". This eliminates the work of manually choosing the correct number, and power, of the input features. Thus, the NN becomes an automatic power-feature selector and can find any linear or non-linear relationship, or serve as a classifier of arbitrarily complex sets (this assumes only that there are enough hidden layers and connections to represent the complexity of the model it needs to learn). In the end, a well-functioning NN is expected to learn not just "the relationship" between the inputs and outputs; instead we strive for an abstraction, a model that generalizes well.
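A sketch of that "automatic feature selector" view, on the same toy problem as above but with no hand-made power features; the layer size and iteration count are arbitrary assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Same circular toy problem, but no manual polynomial features this time.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)

# One hidden layer learns its own non-linear features; the output layer
# is effectively a logistic regression over those learned features.
net = MLPClassifier(hidden_layer_sizes=(16,), activation="relu",
                    max_iter=2000, random_state=0)
net.fit(X, y)
print(net.score(X, y))  # comparable accuracy, no manual feature engineering
```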
As a rule of thumb, a neural network cannot learn anything a reasonably intelligent human could not theoretically learn from the same data given enough time; however,
- it may be able to learn some things no one has figured out yet
- for large problems a bank of computers processing neural networks can find really good solutions much faster than a team of people (at a much lower cost)
- once trained, NNs will produce consistent results for the inputs they've been trained on, and should generalize well if tweaked properly
- NNs never get bored or distracted
Each image is composed of $32 \times 32$ pixels, so for a given pixel position (say row 13, column 31) some measured value is averaged over all the images, and the standard deviation (SD for short) of that same value is also calculated.
$(\text{value} - \text{mean}) / \text{SD}$ is often called a z-score and is a way of standardizing values to take account of mean and SD. Presumably that's done for every pixel, meaning every pixel position.
It is spelled out that they are "dividing by the standard deviation of all pixels over all images" [my emphasis], and that SD would usually be calculated with reference to the corresponding overall mean. However, division by that SD is division by a single constant, so it won't have any effect on the images beyond a change of units.
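A minimal numpy sketch of both readings of that passage, assuming a stack of $32 \times 32$ grayscale images (the random array is just a stand-in for a real image set):

```python
import numpy as np

# Stand-in for a dataset of 1000 grayscale 32x32 images.
images = np.random.default_rng(0).uniform(0, 255, size=(1000, 32, 32))

# Per-pixel-position z-score: mean and SD computed across images
# separately for each (row, col), e.g. row 13, column 31.
mean = images.mean(axis=0)   # shape (32, 32)
sd = images.std(axis=0)      # shape (32, 32)
z = (images - mean) / sd

# The quoted alternative: one SD over *all* pixels of *all* images.
# This is a single constant, so dividing by it only changes the units.
global_sd = images.std()
scaled = (images - mean) / global_sd
```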
Best Answer
I'm not sure why she said that. Image normalization is extremely common in practice!
Say you have an image with pixel values in $[0, 255]$. Besides removing the mean, you also want to divide by either $(\max - \min)$ or by the standard deviation. The first step's goal is to shift the mean of the dataset to zero, while the second's is to scale the pixel values down to a range close to $[-1, 1]$.
Note that all these statistics (mean, min, max, std) are calculated over the whole dataset, not on each image individually.
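A short numpy sketch of both options, assuming a batch of RGB images with values in $[0, 255]$ (the shapes and random data are illustrative):

```python
import numpy as np

# Stand-in for a dataset of 1000 RGB 32x32 images with values in [0, 255].
data = np.random.default_rng(0).integers(
    0, 256, size=(1000, 32, 32, 3)).astype(np.float64)

mean = data.mean()  # one mean over the whole dataset, not per image

# Option 1: divide by (max - min) -- values end up roughly in [-1, 1].
norm_range = (data - mean) / (data.max() - data.min())

# Option 2: divide by the dataset standard deviation (z-scoring).
norm_std = (data - mean) / data.std()
```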