I have been randomly initializing the weights of my neural network layers. Working through a few iterations on paper, I realized that, at least for the first few iterations, the values coming out of the output layer need not sum to one. Is this acceptable, or should the weights be chosen from the very first iteration so that the outputs sum to one? If so, how should the weights be initialized?
Solved – How to initialize the weights of neural networks so that sum of probabilities at output layer adds up to 1
neural networks
Best Answer
You don't need to hack the initial weights. Instead, revisit the architecture of the network: for example, add a softmax layer as the final layer, so that the outputs can be interpreted as class probabilities that sum to one regardless of how the weights were initialized.
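A minimal NumPy sketch of this idea (the layer sizes and the normal-distribution initialization are illustrative assumptions, not anything prescribed by the question): the raw outputs of a randomly initialized layer do not sum to one, but applying softmax to them always produces a valid probability distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny network: 4 inputs feeding one dense layer with 3 output units.
W = rng.normal(scale=0.1, size=(4, 3))  # randomly initialized weights
b = np.zeros(3)                         # biases

x = rng.normal(size=4)
logits = x @ W + b  # raw outputs: these need not sum to 1

def softmax(z):
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax(logits)
print(probs)        # each entry in (0, 1)
print(probs.sum())  # sums to 1 for any W, including random initial weights
```

Because softmax normalizes by the sum of exponentials, the constraint "probabilities add up to one" is enforced by the architecture itself, so no special weight initialization is needed for that purpose.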