Solved – If all neurons in a 2-layer neural network get the same inputs, wouldn't they all give the same output?

deep learning, machine learning, neural networks

In a basic 2-layer neural net, there are (for example) 3 inputs that each go into every neuron. If the same inputs go into each neuron, and we apply the same optimization algorithm (gradient descent) and the same sigmoid activation function, wouldn't they all give you the same result, rendering the extra neurons useless?

The way I see it, you would apply gradient descent to each of the randomly chosen weights and biases, and eventually they would all reach the same value, since everything else is kept constant.

Can anyone please explain what I'm missing here? Thanks!

Best Answer

If all weights are initialized with the same value, every neuron in a layer computes the same output, receives the same gradient, and therefore stays identical after every gradient-descent update; in that case the extra neurons really would be redundant. This symmetry is exactly why the weights are initialized with random numbers: different starting points mean different gradients, so each neuron can learn a different feature. Random initialization does not make the weights converge to the same value; it is what keeps them apart.
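Below is a minimal NumPy sketch of this symmetry argument. The layer sizes, data, and learning rate are made up for illustration, and the biases are kept fixed at zero for brevity; it trains a 3-input, 2-hidden-neuron, 1-output network once with identical hidden weights and once with random ones, then checks whether the two hidden neurons are still identical.

```python
# Hypothetical demo: two hidden neurons with identical initial weights stay
# identical under gradient descent, while randomly initialized ones diverge.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 4 samples, 3 inputs, 1 target each (made up for illustration).
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

def train_step(W1, b1, W2, b2, lr=0.1):
    # Forward pass: 3 inputs -> 2 hidden sigmoid neurons -> 1 linear output.
    h = sigmoid(X @ W1 + b1)            # shape (4, 2)
    y_hat = h @ W2 + b2                 # shape (4, 1)
    # Backward pass for mean squared error (biases left fixed for brevity).
    d_out = 2 * (y_hat - y) / len(X)    # shape (4, 1)
    dW2 = h.T @ d_out
    d_h = (d_out @ W2.T) * h * (1 - h)
    dW1 = X.T @ d_h
    return W1 - lr * dW1, W2 - lr * dW2

b1, b2 = np.zeros(2), np.zeros(1)

# Case 1: both hidden neurons start with the SAME weights.
W1 = np.full((3, 2), 0.5)
W2 = np.full((2, 1), 0.5)
for _ in range(100):
    W1, W2 = train_step(W1, b1, W2, b2)
print("identical init -> hidden neurons equal:", np.allclose(W1[:, 0], W1[:, 1]))

# Case 2: random initialization breaks the symmetry.
W1 = rng.normal(size=(3, 2))
W2 = rng.normal(size=(2, 1))
for _ in range(100):
    W1, W2 = train_step(W1, b1, W2, b2)
print("random init    -> hidden neurons equal:", np.allclose(W1[:, 0], W1[:, 1]))
```

With identical initialization the two columns of `W1` remain equal after every update (the first check prints `True`); with random initialization they diverge (the second prints `False`).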
