MATLAB: Is it important to normalise the input to a neural network before training?

Deep Learning Toolbox, neural network, neural networks

I have a feature matrix of size 10000×400 (400 samples) and a target matrix of size 40×400 (40 classes). The feature vector for each sample has 10,000 rows, with values like 0, 123, 212, 242, 123, 45, etc. So I want to ask: should I normalise all the elements in the rows using the standard formula

element = (element − mean of its column) / (standard deviation of the same column)?
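
As a minimal sketch, that normalisation can be written in MATLAB as below (X is my placeholder name for the 10000×400 feature matrix; swap the dimension argument if you want per-feature, i.e. per-row, statistics instead):

    % X: 10000x400 feature matrix, one sample per column
    mu    = mean(X, 1);          % per-column mean, as in the formula above
    sigma = std(X, 0, 1);        % per-column standard deviation
    sigma(sigma == 0) = 1;       % guard against zero-variance columns
    Xn = (X - mu) ./ sigma;      % implicit expansion (R2016b+); use bsxfun on older releases

Note that when inputs are arranged with one sample per column, statistics are more commonly taken per feature (per row), i.e. mean(X,2) and std(X,0,2).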

Best Answer

1. Delete and/or modify numerical outliers. Standardizing the data to
zero-mean/unit-variance is the most effective way to spot them.
2. Keep the ranges of all input and target vector components comparable to help
understand their relative importance.
3. Consider biases to be weights that act on constant, unit-valued input components.
4. Keep the initial scalar products of weights and vectors within the linear regions
of the sigmoids to avoid algebraic stagnation in the asymptotic regions.
5. Scaling the data to [-1 1] (via mapminmax) is the MATLAB default. Standardization
and no scaling are the alternatives. Since you already have unscaled and
standardized data, you have a variety of choices. My choice is to use the
standardized data but accept the [-1 1] default; see the sketch after this list.
Why? ... because it is the easiest to code and understand.
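
A minimal sketch of that workflow, assuming X is the 10000x400 input matrix and T
the 40x400 target matrix (the hidden-layer size of 10 is an arbitrary placeholder):

    net = patternnet(10);          % 10 hidden units, arbitrary choice
    net.inputs{1}.processFcns      % default: {'removeconstantrows','mapminmax'}
    [Xs, settings] = mapstd(X);    % standardize each row: zero mean, unit variance
    net = train(net, Xs, T);       % the [-1 1] mapminmax default still applies internally
    % To standardize inside the network instead, set:
    % net.inputs{1}.processFcns = {'removeconstantrows','mapstd'};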
Hope this helps.
Thank you for formally accepting my answer
Greg