MATLAB: Why do I get a dimension mismatch error when training a neural network with constant input data in Neural Network Toolbox 6.0 (R2008a)?

Tags: Deep Learning Toolbox, dimension, error, mismatch, net, neural, nn

There are two datasets in the attached MAT files: data1.mat and data2.mat. Creating and training the network with the first dataset works fine; however, using the second dataset (which contains constant inputs) leads to an error:
??? Error using ==> plus
Matrix dimensions must agree.
Error in ==> calcperf2 at 163
N{i,ts} = N{i,ts} + Z{k};
Error in ==> trainlm at 253
[perf,El,trainV.Y,Ac,N,Zb,Zi,Zl] = calcperf2(net,X,trainV.Pd,trainV.Tl,trainV.Ai,Q,TS);
Error in ==> network.train at 219
[net,tr] = feval(net.trainFcn,net,tr,trainV,valV,testV);
Error in ==> netztraining at 63
Netz = train(Netz,[p;t],t,pi);
To reproduce this behavior, run the attached sample script 'nettraining.m' after loading one of the MAT files.
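For reference, the relevant part of the workflow presumably looks roughly like the sketch below. The variable names (P1, T1, DI, DO, p, t, pi) are taken from the error trace and the answer; how they are actually constructed in the attached script is an assumption here:
load data2.mat                        % the dataset whose input rows are all constant
DI = 1:2;                             % input delays (assumed values)
DO = 1:2;                             % output feedback delays (assumed values)
Netz = newnarxsp(P1,T1,DI,DO,15);     % series-parallel NARX network with 15 hidden neurons
Netz = train(Netz,[p;t],t,pi);        % fails in calcperf2 for data2.mat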

Best Answer

This problem occurs because of automated input preprocessing performed by Neural Network Toolbox 6.0 (R2008a). The current default preprocessing functions are fixunknowns, removeconstantrows, and mapminmax.
In some of the data (the attached data1.mat, for example), there are elements of the input vector that are constant over time. For network training, these constant rows do not provide any useful information, since they can be replaced by an adjustment of the bias in the final layer of the network. For this reason, the toolbox removes any constant rows when the default settings are used.
In the second data set (data2.mat), all of the rows of
the input vector were constant, so they were all removed. This
produced an input with dimension zero. Even in the first data set,
there were some constant elements of the input vector that were removed.
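If you want to see which rows the default preprocessing will discard, a quick check before training is to look for input rows with zero range. A minimal sketch, assuming the raw inputs are available as an R-by-Q matrix P1 (convert with cell2mat first if they are stored as a cell-array sequence):
constantRows = find(max(P1,[],2) == min(P1,[],2));   % rows that removeconstantrows would strip
fprintf('%d of %d input rows are constant.\n', numel(constantRows), size(P1,1));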
To override the default settings, the following three lines can be
added to the attached example script:
Netz.inputs{1}.processFcns = {'mapminmax'};
Netz.inputs{2}.processFcns = {'mapminmax'};
Netz.outputs{2}.processFcns = {'mapminmax'};
These new lines should follow the existing line
Netz = newnarxsp(P1,T1,DI,DO,15);
In this way, the network will still normalize the inputs and targets,
which is good practice for network training, but it will not remove
the constant rows. If you do not want to normalize the data, you can remove 'mapminmax' from the lines above, as sketched below. (If a row is constant and mapminmax is used, the constant value will be replaced by -1.)
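Put together, the relevant part of the script would then look roughly as follows; the commented-out empty lists illustrate the no-normalization alternative mentioned above and are not code from the original answer:
Netz = newnarxsp(P1,T1,DI,DO,15);             % existing line in the script
% Keep normalization, but do not remove constant rows:
Netz.inputs{1}.processFcns = {'mapminmax'};
Netz.inputs{2}.processFcns = {'mapminmax'};
Netz.outputs{2}.processFcns = {'mapminmax'};
% Alternative, if no normalization is wanted at all:
% Netz.inputs{1}.processFcns = {};
% Netz.inputs{2}.processFcns = {};
% Netz.outputs{2}.processFcns = {};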
As a side note: in general, you want to use a data set in which all elements of the input vector change. If any elements of the input vector are constant throughout the data set, then the bias in the last layer can compensate for the effect of those input elements. This means that the weights that multiply the constant inputs will be effectively random.
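To make that last point concrete, here is a tiny numeric illustration (arbitrary values, not from the original answer): the contribution of a constant input element cannot be distinguished from a bias shift, so the data place no constraint on the associated weight.
w = 0.7; b = 0.1; x0 = 3;        % arbitrary weight, bias, and constant input value
n1 = w*x0 + b;                   % net input contribution with the constant input
n2 = 0*x0 + (b + w*x0);          % identical net input with the weight folded into the bias
isequal(n1,n2)                   % returns true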