MATLAB: What transfer functions does patternnet use for the hidden and output layers?

Deep Learning Toolbox, MATLAB, neural network, patternnet, sigmoid, tan-sigmoid, weight

Hi, I am trying to learn how a neural network works. I created a network with MATLAB's patternnet to classify XOR. However, when I compute the output manually from the weights, I get a different result than net(input). According to this article, if you create the net with the GUI, it uses the sigmoid transfer function in both the hidden layer and the output layer (bullet 7), and if you create it from the command line, it uses the tan-sigmoid transfer function in both layers (bullet 2). I tried both versions and they still give me different results.

Here is my code:
input = [0 0; 0 1; 1 0; 1 1]';
xor = [0 1; 1 0; 1 0; 0 1]';
% Create a larger sample size
input10 = repmat(input,1,10);
xor10 = repmat(xor,1,10);
% MATLAB NN
net = patternnet(2);
net = train(net, input10, xor10);
% Get the weights
IW = net.IW;
b = net.b;
LW = net.LW;
IW = [IW{1}'; b{1}'];
LW = [LW{2}'; b{2}'];
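% Each weight matrix above is augmented with its bias as a final row, so
% dot([x;1], W(:,i)) below computes w'*x + b for neuron i in a single step.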
%% Using tan-sigmoid
% Input to hidden layer
hid = zeros(2,1);
hidsig = zeros(2,1);
in = input(:,1);
for i = 1:2
    hid(i) = dot([in;1],IW(:,i));
    hidsig(i) = tansig(hid(i));
end
% Hidden to output layer without normalization
out = zeros(2,1);
outsig = zeros(2,1);
for i = 1:2
    out(i) = dot([hidsig;1],LW(:,i));
    outsig(i) = tansig(out(i));   % transfer function applied to the weighted sum
end
outsoftmax = softmax(out);
outsoftmaxsig = softmax(outsig);
% Hidden to output layer with normalization
normout = zeros(2,1);
normoutsig = zeros(2,1);
normhidsig = hidsig./norm(hidsig);
for i = 1:2
    normout(i) = dot([normhidsig;1],LW(:,i));
    normoutsig(i) = tansig(normout(i));
end
normoutsoftmax = softmax(normout);
normoutsoftmaxsig = softmax(normoutsig);
result = net(in);
disp('net(in)');
disp(result);
disp('tan-sigmoid');
disp(outsig);
disp(outsoftmax);
disp(outsoftmaxsig);
disp(normoutsig);
disp(normoutsoftmax);
disp(normoutsoftmaxsig);
%% Using sigmoid
% Input to hidden layer
hid = zeros(2,1);
hidsig = zeros(2,1);
in = input(:,1);
for i = 1:2
    hid(i) = dot([in;1],IW(:,i));
    hidsig(i) = sigmf(hid(i),[1,0]);   % sigmf(x,[1,0]) equals logsig(x)
end
% Hidden to output layer without normalization
out = zeros(2,1);
outsig = zeros(2,1);
for i = 1:2
    out(i) = dot([hidsig;1],LW(:,i));
    outsig(i) = sigmf(out(i),[1,0]);   % transfer function applied to the weighted sum
end
outsoftmax = softmax(out);
outsoftmaxsig = softmax(outsig);
% Hidden to output layer with normalization
normout = zeros(2,1);
normoutsig = zeros(2,1);
normhidsig = hidsig./norm(hidsig);
for i = 1:2
    normout(i) = dot([normhidsig;1],LW(:,i));
    normoutsig(i) = sigmf(normout(i),[1,0]);
end
normoutsoftmax = softmax(normout);
normoutsoftmaxsig = softmax(normoutsig);
disp('sigmoid');
disp(outsig);
disp(outsoftmax);
disp(outsoftmaxsig);
disp(normoutsig);
disp(normoutsoftmax);
disp(normoutsoftmaxsig);

Best Answer

1. Do not use xor as the name of a variable; it is the name of a built-in function:
help xor
doc xor
2.
input = [0 0; 0 1; 1 0; 1 1]';
target = xor(input(1,:), input(2,:)) % NO SEMICOLON!
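If you want the same two-row coding as the original target (one row per class), a minimal sketch:
t = xor(input(1,:), input(2,:));   % logical row vector: 0 1 1 0
target = double([t; ~t])           % one row per class, matching the original coding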

3. There is no good reason to add exact duplicates to this training set. In general, however, adding noisy duplicates can help if the net is to be used in an environment of noise, interference and measurement errors.
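For example, one way to make noisy duplicates (the 0.05 noise level is an arbitrary illustration):
input10  = repmat(input, 1, 10);
input10  = input10 + 0.05*randn(size(input10));   % jittered copies of the 4 patterns
target10 = repmat(target, 1, 10);                 % targets repeat unchanged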
4. If you want to know which transfer functions are being used, all you have to do is ask:
net = patternnet(2) % NO SEMICOLON!
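You can also read them straight off the network object; in recent releases the defaults are tansig in the hidden layer and softmax in the output layer:
net.layers{1}.transferFcn   % hidden layer transfer function
net.layers{2}.transferFcn   % output layer transfer function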
5. Also note that there are default input and output processing functions (normalizations such as mapminmax), which is the most likely reason a manual computation that skips them will not match net(input).
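Here is a minimal sketch of a manual forward pass that accounts for them, assuming the two-layer net trained above (processFcns, processSettings, IW, LW, and b are standard network-object properties):
x = [0; 1];   % one input column
% Apply the input processing functions (defaults include mapminmax)
xn = x;
for k = 1:numel(net.inputs{1}.processFcns)
    xn = feval(net.inputs{1}.processFcns{k}, 'apply', xn, net.inputs{1}.processSettings{k});
end
% Layer computations with the net's own transfer functions
h  = feval(net.layers{1}.transferFcn, net.IW{1,1}*xn + net.b{1});
yn = feval(net.layers{2}.transferFcn, net.LW{2,1}*h + net.b{2});
% Reverse the output processing functions
y = yn;
for k = numel(net.outputs{2}.processFcns):-1:1
    y = feval(net.outputs{2}.processFcns{k}, 'reverse', y, net.outputs{2}.processSettings{k});
end
[y, net(x)]   % the two columns should match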
Hope this helps.
*Thank you for formally accepting my answer*
Greg