Solved – A Logistic Regression with a Neural Network mindset vs. a shallow Neural Network

deep learning, neural networks, regression

I am new to the world of deep learning, and after studying how logistic regression and neural networks work, there are some insights that probably escape me.

Given these two settings:

[Image: the two settings, a logistic regression unit and a shallow neural network]

I understand how the individual steps work, from forward prop to backward prop and optimisation via gradient descent, but these steps are taken in both cases, so my question is:

Intuitively, what is the difference? Beyond the non-linearity introduced by the different activation functions, is there also a change in the parameters? In efficiency? Is it more accurate?

Best Answer

A very basic example: logistic regression, as in your image, tries to model the class posteriors. Without any modification, the nonlinearity in this case is the sigmoid function, applied to a linear combination of the inputs and neuron weights, i.e. $\sigma(w^Tx)$, where $\sigma(z)=\frac{1}{1+e^{-z}}$. Here, we set a threshold and compare the output of the activation function against it to decide between class 0 and class 1; so we check whether $\frac{1}{1+e^{-w^Tx}} > \theta$, which, because the sigmoid is monotonically increasing, reduces to an inequality of the form $w^Tx > \theta_2$ with $\theta_2 = \sigma^{-1}(\theta)$. This means you have constructed a linear decision boundary, but nothing more. If you add a hidden layer, the decisions you make become more complicated and are no longer linear in the inputs, so you can learn nonlinear decision boundaries.
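
To make the contrast concrete, here is a minimal numpy sketch. The XOR data, the 4-unit tanh hidden layer, the learning rate of 0.5 and the iteration count are all illustrative choices of mine, not something from the question: the point is only that a single sigmoid unit can draw a linear boundary, while one hidden layer can draw a nonlinear one.

```python
# Minimal sketch: logistic regression vs. a one-hidden-layer network on XOR.
# XOR is not linearly separable, so the linear boundary of logistic regression
# cannot classify all four points, while the shallow network usually can.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # inputs
y = np.array([[0.], [1.], [1.], [0.]])                   # XOR labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Logistic regression: y_hat = sigmoid(X w + b), a linear decision boundary ---
w, b = rng.normal(size=(2, 1)), 0.0
for _ in range(5000):
    y_hat = sigmoid(X @ w + b)
    grad = y_hat - y                       # d(cross-entropy)/d(pre-activation)
    w -= 0.5 * X.T @ grad / len(X)
    b -= 0.5 * grad.mean()
print("logistic regression:", (sigmoid(X @ w + b) > 0.5).astype(int).ravel())

# --- Shallow network: one hidden tanh layer, then the same sigmoid output unit ---
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)               # forward prop through the hidden layer
    y_hat = sigmoid(H @ W2 + b2)
    dZ2 = y_hat - y                        # backward prop
    dZ1 = (dZ2 @ W2.T) * (1 - H ** 2)      # tanh'(z) = 1 - tanh(z)^2
    W2 -= 0.5 * H.T @ dZ2 / len(X)
    b2 -= 0.5 * dZ2.mean(axis=0)
    W1 -= 0.5 * X.T @ dZ1 / len(X)
    b1 -= 0.5 * dZ1.mean(axis=0)
print("shallow network:    ",
      (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel())
```

If you run it, the logistic regression predictions stay stuck at some linear split of the four points (so at most three of them come out right), while the one-hidden-layer model usually recovers the XOR labels [0, 1, 1, 0]. The parameters also change: logistic regression has one weight per input plus a bias, whereas the shallow network adds a weight matrix and bias per layer.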