I have implemented a very simple neural network to estimate a sine function. The following is the code for generating and training the network:
% Generate Data
dataSize = 1000;
x = linspace(0, 2*pi, dataSize);
y = sin(x);
hold off
plot(x, y)
hold on

% Add noise to Data
yInput = y + randn(1, dataSize)./5;
% No need to separate training, test and validation data;
% that happens automatically in the train function.

% Generate Network: a very simple two-layer model, with two nodes in the hidden layer.
net = feedforwardnet(2);

% Train Network
net = train(net, x, yInput);

% Show result of trained network
yNN = net(x);
figure
plot(x, yNN, '*')
hold on
plot(x, y, '.')
Now my question: how is this network actually implemented? According to the literature, I should be able to recreate the network by copying over the weights and biases, using the following function:
function y = mynet(net, x_val)
%MYNET A manual implementation of the feedforward network, to demonstrate functionality.
W1 = net.IW{1};
b1 = net.b{1};
W2 = net.LW{2};
b2 = net.b{2};
y = purelin(W2*tansig(W1*x_val + b1) + b2);
end
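For reference, this is roughly how I am comparing the two side by side (assuming the `net` and `x` from the training script above):

```matlab
% Compare the built-in network evaluation against the manual one
yNN     = net(x);          % output of the trained network object
yManual = mynet(net, x);   % output of my manual re-implementation

figure
plot(x, yNN, '*')
hold on
plot(x, yManual, 'o')
legend('net(x)', 'mynet(net, x)')
```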
However, the original net(x) and my mynet(net, x) produce completely different results, even though the weights and biases are exactly the same. The transfer functions are also directly copied over; you can extract them from the network with:
>> net.layers{1}.transferFcn
ans = 'tansig'
>> net.layers{2}.transferFcn
ans = 'purelin'
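For completeness, as far as I understand from the documentation, these two transfer functions are simply:

```matlab
n = linspace(-3, 3, 7);          % some example inputs
a1 = 2./(1 + exp(-2*n)) - 1;     % tansig(n), numerically equivalent to tanh(n)
a2 = n;                          % purelin(n) is just the identity
```

so I do not believe the mismatch comes from getting these functions wrong.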
Can anyone suggest where my implementation of the neural network is wrong? I am really hoping that it is a simple mistake, but I just can't see it at the moment.
Many thanks in advance