MATLAB: Neural Nets: gradient descent issue for polynomial functions

gradient descent, neural network

Hi all, I am just getting started with the Neural Network Toolbox. Our teacher asked us to use feedforwardnet to approximate a noiseless non-linear function.
I tried several examples with success, but now I am moving on to simple polynomial functions and seem to be having problems with the 'traingd' method (which I had used with success, though far from optimal, on other functions such as sin(x), tanh(x), etc.).
Anyway, whatever polynomial I try to learn, training fails after a few epochs; the only thing I can see is that the gradient grows larger at each epoch until it finally becomes NaN.
Is this to be expected, or am I missing something? Why would other non-linear functions work but not polynomial ones?
Here is my code with an example of polynomial function (y):
x = 0:1:1000 ; y = x.^2 + 1;
p=con2seq(x); t=con2seq(y);
net1 = feedforwardnet(10,'traingd');
net1 = train(net1,p,t);
Thanks in advance for your help,
Best, Sedrik

Best Answer

Apparently, a target variance of 9e10 is too much for feedforwardnet to handle. Standardization (zero-mean/unit-variance) does the trick!
close all, clear all, clc
x = 0:1:1000 ; t = x.^2 + 1;
vart1 = var(t,1) % 8.9078e+10
zx = zscore(x,1); zt = zscore(t,1); % var(zt,1) = 1
net = feedforwardnet( 10, 'traingd' );
rng('default') % reproducible initial weights
for i = 1:10
net = configure( net, zx, zt ); % reinitialize the weights for each trial
net = train( net, zx, zt );
NMSE( i ) = mse( zt - net( zx ) ); % normalized MSE, since var(zt,1) = 1
end
NMSE = NMSE % display the 10 trial results
% NMSE = [ 0.004831 0.0070654 0.0034342
% 0.0038815 0.0031794 0.004332
% 0.0024331 0.0014878 0.0013247
% 0.0078712 ]
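Since the net is trained on standardized data, new inputs must be standardized with the training statistics, and the outputs mapped back to the original units. A minimal sketch (the variable names xnew/ynew are my own, assuming x and t from the script above):

```matlab
% Store the training statistics (population versions, matching zscore(.,1))
xmean = mean(x); xstd = std(x,1);
tmean = mean(t); tstd = std(t,1);

% Predict at new points: standardize inputs, run the net, un-standardize outputs
xnew = [ 100 500 900 ];
znew = ( xnew - xmean ) / xstd;
ynew = tmean + tstd * net( znew ); % approximations of xnew.^2 + 1
```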
Hope this helps.
Thank you for formally accepting my answer
Greg