Machine Learning – Why Not Standardize and Normalize Features?

Tags: lstm, machine learning, normalization, standardization

If one has data that is assumed to be normally distributed and wants to use it as input to a machine learning model, why not first standardize the data and then normalize it (min-max scale it between zero and one)?

That is, first transform as follows:

$$
S = \frac{X - \mu}{\sigma}
$$

…and then transform it one more time:
$$
X_{standardizedAndNormalized} = \frac{S - S_{min}}{S_{max}-S_{min}}
$$

Best Answer

That is equivalent to min-max normalizing $X$ alone. Standardization is an affine transformation (subtract a constant, divide by a positive constant), so it preserves the ordering of the data; the subsequent min-max step cancels the shift and scale, since $\frac{S - S_{min}}{S_{max}-S_{min}} = \frac{(X - X_{min})/\sigma}{(X_{max}-X_{min})/\sigma} = \frac{X - X_{min}}{X_{max}-X_{min}}$. Besides, neither of these transformations is associated with a normality assumption.
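The equivalence is easy to verify numerically; a minimal sketch using NumPy (the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=1000)  # sample "normal" data

def min_max(a):
    """Min-max scale an array to the range [0, 1]."""
    return (a - a.min()) / (a.max() - a.min())

# Standardize first, then min-max scale...
s = (x - x.mean()) / x.std()
standardized_then_normalized = min_max(s)

# ...versus min-max scaling the raw data directly.
normalized_only = min_max(x)

# The two results agree up to floating-point error.
print(np.allclose(standardized_then_normalized, normalized_only))
```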
