By the definition of min-max normalisation, the value is divided by max - min. What happens if max and min are equal?
I have read this answer, but I am still not sure about my case.
For example, I have an image with n channels and I want to perform min-max normalisation over each channel, i.e. $$x_{channel_a} = \frac{x_{channel_a} - \min_{channel_a}}{\max_{channel_a} - \min_{channel_a}}$$
It is possible that only channel a is constant while the other channels vary. Does that mean I should drop this channel's information? How exactly should I do that? What about setting the channel values to 0 when they are all the same?
If I drop this value, will it cause any loss of information? For example, suppose I have an image with RGB pixel values [12, 23, 34], [12, 25, 87], [12, 182, 230]. Since the R channel is constant, I can just zero it out and the image becomes [0, 23, 34], [0, 25, 87], [0, 182, 230]. Is this correct?
By the way, I am using ResNet-18 to extract image features.
Best Answer
If $\min_i x_i = \max_i x_i$ then this implies that $x_1 = \cdots = x_n$ (i.e., all data values are the same). In that case, there is no variation to "normalise". By convention, you would probably set the "normalised" values to $z_1 = \cdots = z_n = 0$ in this case, which shifts them to a mean of zero and retains the property that:
$$z_k (\max_i x_i - \min_i x_i) = x_k - \min_i x_i.$$
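This convention is straightforward to implement. Here is a minimal NumPy sketch of per-channel min-max normalisation that maps any constant channel to all zeros; the function name `minmax_normalise` is my own, not a library API:

```python
import numpy as np

def minmax_normalise(img):
    """Normalise each channel of an (H, W, C) image to [0, 1].

    Channels where max == min (no variation) are set to all zeros,
    following the convention described above.
    """
    img = img.astype(float)
    mins = img.min(axis=(0, 1), keepdims=True)  # per-channel minima
    maxs = img.max(axis=(0, 1), keepdims=True)  # per-channel maxima
    rng = maxs - mins
    # Divide only where the range is non-zero to avoid division by zero.
    safe_rng = np.where(rng == 0, 1.0, rng)
    out = (img - mins) / safe_rng
    # Constant channels get 0 everywhere.
    return np.where(rng == 0, 0.0, out)

# The three RGB pixels from the question, arranged as a 1x3 image:
pixels = np.array([[[12, 23, 34], [12, 25, 87], [12, 182, 230]]])
z = minmax_normalise(pixels)
# The constant R channel becomes all zeros; G and B are scaled into [0, 1].
```

Note that zeroing the channel keeps the image shape intact, so a network such as ResNet-18 that expects 3-channel input still works without any architectural change.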