MATLAB: Scaling data inside [-1,1]

Deep Learning Toolbox, neural network

What are the differences between normalizing features to [0,1], [-1,1], or [-5,5] with NN minmax?

Best Answer

The purpose of normalization is to keep the inputs to the transfer functions as close to the middle of the so-called 'active region' as possible. For example, Warren Sarle posted experimental results in the FAQ of comp.ai.neural-nets indicating that, in general, you can do no better than to use bipolar inputs, outputs, and transfer functions.
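A minimal sketch of the rescaling itself, assuming mapminmax is the minmax function in question (the data here is made up):

x = rand(3, 100) * 50;              % 3 features, 100 samples (placeholder data)

[xb, psb] = mapminmax(x);           % bipolar:  [-1, 1] (the default)
[xu, psu] = mapminmax(x, 0, 1);     % unipolar: [ 0, 1]
[xw, psw] = mapminmax(x, -5, 5);    % wide:     [-5, 5]

% The settings struct lets you apply the same mapping to new data
% and invert it afterwards:
xnew  = mapminmax('apply', rand(3, 10) * 50, psb);
xback = mapminmax('reverse', xb, psb);   % recovers the original x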
Nevertheless, it is easier in MATLAB to use unit-sum unipolar [0,1] coding for classification targets because of the functions vec2ind and ind2vec.
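For instance, a quick sketch of that conversion (the class labels are arbitrary):

ind = [1 3 2 3];          % class indices for 4 samples
t   = full(ind2vec(ind))  % 3x4 unipolar coding; one 1 per column
% t =
%      1     0     0     0
%      0     0     1     0
%      0     1     0     1
vec2ind(t)                % recovers [1 3 2 3]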
My interpretation of 'better' is faster and/or more accurate. Obviously, any such result is machine dependent, so, given what you know now, you can run your own speed and accuracy tests on your own machine.
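A minimal sketch of such a test, using the toolbox's simplefit_dataset as a stand-in problem; the hidden-layer size and the candidate ranges are arbitrary choices, not recommendations:

[x, t] = simplefit_dataset;                 % toy data shipped with the toolbox
ranges = {[-1 1], [0 1], [-5 5]};
for k = 1:numel(ranges)
    r  = ranges{k};
    xn = mapminmax(x, r(1), r(2));          % rescale inputs to this range
    net = feedforwardnet(10);
    net.trainParam.showWindow = false;      % run silently
    rng(0)                                  % same random state for a fair start
    tic
    net = train(net, xn, t);
    fprintf('[%g,%g]: %.2f s, MSE = %.3g\n', r(1), r(2), toc, mse(net, t, net(xn)));
end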
You have to take into account how the weights are being initialized. That means understanding the functions init, initwb and initnw.
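For orientation, a rough sketch of where those functions plug in, again on the stand-in simplefit data; switching a layer to initwb (with rands as the weight/bias initializer) is just one illustrative change:

[x, t] = simplefit_dataset;
net = feedforwardnet(10);
net = configure(net, x, t);       % size the weights for this data
net.initFcn                       % 'initlay' -> delegates to the layers
net.layers{1}.initFcn             % 'initnw' (Nguyen-Widrow) by default

net.layers{1}.initFcn         = 'initwb';  % per-weight initialization instead
net.inputWeights{1,1}.initFcn = 'rands';   % random weights in [-1,1]
net.biases{1}.initFcn         = 'rands';
net = init(net);                  % re-initialize with the new settings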
However, before you start, see my post "Nonsaturating Initial Weights" in comp.ai.neural-nets.
Hope this helps.
Thank you for formally accepting my answer
Greg