Solved – Logistic Regression Vs Neural Networks

Tags: logistic, neural-networks, regression

This is what I have understood about neural networks.

There are $L$ layers from input to output (including both), and the equations are as follows:

$a^{(2)} = \theta^{(1)} x$

$a^{(3)} = \theta^{(2)} a^{(2)}$

$\vdots$

$y = a^{(L)} = \theta^{(L-1)} a^{(L-1)}$

So we can write $a^{(L)}$ as below:

$y = a^{(L)} = \theta^{(L-1)} \theta^{(L-2)} \cdots \theta^{(1)} x$

Let's suppose $\theta^{(L-1)} \theta^{(L-2)} \cdots \theta^{(1)} = \Omega$.

Now we have $y = \Omega x$.
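The collapse above is easy to check numerically. A minimal sketch in NumPy (the layer sizes are arbitrary, chosen only for illustration): composing purely linear layers gives exactly the same output as multiplying by the single product matrix $\Omega$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three "linear layers" with no activation functions (hypothetical sizes).
theta1 = rng.standard_normal((5, 4))  # maps 4 inputs -> 5 hidden units
theta2 = rng.standard_normal((3, 5))  # maps 5 hidden -> 3 hidden units
theta3 = rng.standard_normal((2, 3))  # maps 3 hidden -> 2 outputs

x = rng.standard_normal(4)

# Forward pass through the layers, exactly as in the equations above.
a2 = theta1 @ x
a3 = theta2 @ a2
y = theta3 @ a3

# Collapse the weights into one matrix Omega = theta3 @ theta2 @ theta1.
omega = theta3 @ theta2 @ theta1

# Without nonlinearities, the whole network is one linear map.
print(np.allclose(y, omega @ x))
```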

To solve the above equation, we could just use logistic regression, which would be much simpler.

So, what is the point in making it so complicated, when we are not including any higher-order terms?

Could someone please explain?

Best Answer

It's generally true that classification problems can also be solved by regression.

Problems solvable by ANNs can also be solved by linear regression. Problems solvable by decision trees can also be solved by logistic regression. Etc.

Abstractly, they do the same thing: prediction.

As to why people use one and not the other, the short answer is: it depends. Sometimes it depends on what the people implementing the solution are comfortable with, sometimes on what's available, on institutional knowledge, etc.

The slightly longer answer to whether a specific regression model will perform better than a specific ANN or DBN on a specific dataset is: it still depends.

There are numerous papers that have novel linear regression based solutions for, say, image recognition. You can pit any one of these against the NN-based solution that comes with OpenCV and get wildly varying results.

You can replicate the inconsistency by picking the best regression model and then comparing it to various and sundry classifiers.
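Such a head-to-head is easy to run yourself. A hedged sketch using scikit-learn's bundled digits dataset (my choice of dataset and model settings, not the OpenCV comparison mentioned above): pit a logistic regression against a small neural network and see how close the scores land.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load a small image-classification dataset and hold out a test split.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A plain (multinomial) logistic regression.
logreg = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# A small one-hidden-layer neural network (arbitrary size, for illustration).
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                    random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", logreg.score(X_test, y_test))
print("neural network accuracy:     ", mlp.score(X_test, y_test))
```

On this particular dataset the two tend to score within a few points of each other; which one wins can flip with the random seed, the split, or the hyperparameters, which is exactly the "it depends" in the answer above.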

Your results are based more on your familiarity with the algorithms, tools and data in question. If there were a clearly, obviously, undeniably better solution, we'd be using it. (Some places more quickly than others.)