Cost Function Derivation Using MLE – Why Use Log Function?

logarithms, machine learning, regression, statistics

I am learning machine learning from Andrew Ng's open-class notes and coursera.org. I am trying to understand how the cost function for logistic regression is derived. I will start with the cost function for linear regression and then get to my question about logistic regression.

(Btw, a similar question was asked here; it answers how the derivative of the cost function was derived, but not how the cost function itself was.)

1) Linear regression uses the following hypothesis: $$ h_\theta(x) = \theta_0 + \theta_1 x$$

Accordingly, the cost function is defined as:

$$J(\theta) = \dfrac {1}{2m} \displaystyle \sum_{i=1}^m \left (h_\theta (x^{(i)}) - y^{(i)} \right)^2$$
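To make this concrete, here is a minimal NumPy sketch of the squared-error cost (the toy data and the name `linear_cost` are mine, not from the course notes):

```python
import numpy as np

def linear_cost(theta, X, y):
    """Squared-error cost J(theta) = 1/(2m) * sum((h - y)^2),
    where h = X @ theta and X has a leading column of ones
    so that h_theta(x) = theta_0 + theta_1 * x."""
    m = len(y)
    residuals = X @ theta - y
    return residuals @ residuals / (2 * m)

# toy data generated roughly from y = 1 + 2x
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = np.array([1.1, 2.9, 5.2, 6.8])

print(linear_cost(np.array([1.0, 2.0]), X, y))  # small: close to the true line
print(linear_cost(np.array([0.0, 0.0]), X, y))  # much larger: a poor fit
```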

2) Logistic regression uses a sigmoid/logistic hypothesis, which satisfies $ 0 \leq h_\theta (x) \leq 1 $ and is defined as:

$$ h_\theta (x) = \dfrac{1}{1 + e^{-\theta^T x}} $$
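As a quick sketch of this hypothesis (the function names are mine), note that the sigmoid squashes any real input into the interval (0, 1):

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, x):
    """Hypothesis h_theta(x) = sigmoid(theta^T x)."""
    return sigmoid(theta @ x)

print(sigmoid(np.array([-10.0, 0.0, 10.0])))  # approx. [4.5e-05, 0.5, 0.99995]
```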

Accordingly, our cost function also changes. However, instead of plugging the new $h_\theta(x)$ into the squared-error cost directly, we use logarithms:

$$
\begin{align*}
& J(\theta) = \dfrac{1}{m} \sum_{i=1}^m \mathrm{Cost}(h_\theta(x^{(i)}),y^{(i)}) \newline
& \mathrm{Cost}(h_\theta(x),y) = -\log(h_\theta(x)) \; & \text{if } y = 1 \newline
& \mathrm{Cost}(h_\theta(x),y) = -\log(1-h_\theta(x)) \; & \text{if } y = 0
\end{align*}
$$

And the new cost function is defined as:

$$ J(\theta) = - \frac{1}{m} \sum_{i=1}^m \left[ y^{(i)}\log (h_\theta (x^{(i)})) + (1 - y^{(i)})\log (1 - h_\theta(x^{(i)})) \right]$$
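A small sketch of this cost, again with made-up toy numbers, shows the behaviour described in the class-notes quote below: confident correct predictions cost almost nothing, confident wrong ones cost a lot.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost(theta, X, y):
    """J(theta) = -1/m * sum(y*log(h) + (1-y)*log(1-h))."""
    h = sigmoid(X @ theta)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

# per-example behaviour of the piecewise cost
h_i = 0.9
print(-np.log(h_i))      # y = 1, h close to 1 -> cost ~0.105 (small)
print(-np.log(1 - h_i))  # y = 0, h close to 1 -> cost ~2.303 (large)

# averaged over a tiny dataset
X = np.array([[1.0, -2.0], [1.0, 3.0]])
y = np.array([0.0, 1.0])
print(logistic_cost(np.array([0.0, 1.0]), X, y))  # ~0.088
```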

From the class notes: "... the more the hypothesis is off from $y$, the larger the cost function output. If our hypothesis is equal to $y$, then our cost is 0."

The class notes also mention that MLE (maximum likelihood estimation) is used to derive the logs in the cost function. I can see how the log terms penalize the hypothesis more heavily the further it is from $y$, but I don't see how we came to choose them for the cost function in the first place.

Best Answer

Let's try to derive, from first principles, why the logarithm appears in the cost function of logistic regression.

We have a dataset $\mathbf{X}$ consisting of $m$ data points, each with $n$ features, and a class variable $\mathbf{y}$, a vector of length $m$ whose entries take one of two values, 1 or 0.

Logistic regression models the probability that the class variable takes the value $y_i = 1$, for $i = 1, 2, \dots, m$, as follows:

$$ P( y_i = 1 \mid \mathbf{x}_i ; \theta) = h_{\theta}(\mathbf{x}_i) = \dfrac{1}{1+e^{-\theta^T \mathbf{x}_i}} $$

so $y_i = 1$ with probability $h_{\theta}(\mathbf{x}_i)$ and $y_i=0$ with probability $1-h_{\theta}(\mathbf{x}_i)$.

These two cases can be combined into a single expression (in fact, $y_i$ follows a Bernoulli distribution):

$$ P(y_i \mid \mathbf{x}_i ; \theta) = h_{\theta}(\mathbf{x}_i)^{y_i} \, (1 - h_{\theta}(\mathbf{x}_i))^{1-y_i}$$
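To see that this single expression really does encode both cases, here is a tiny check (the helper name and the value of $h_\theta(\mathbf{x}_i)$ are made up):

```python
def bernoulli_pmf(y_i, h_i):
    """P(y_i) = h_i**y_i * (1 - h_i)**(1 - y_i)."""
    return h_i**y_i * (1 - h_i)**(1 - y_i)

h_i = 0.8                      # model's predicted probability that y_i = 1
print(bernoulli_pmf(1, h_i))   # 0.8 -> reduces to h_i when y_i = 1
print(bernoulli_pmf(0, h_i))   # 0.2 -> reduces to 1 - h_i when y_i = 0
```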

$P(y_i \mid \mathbf{x}_i; \theta)$ is the likelihood contribution of the single data point $(\mathbf{x}_i, y_i)$: viewed as a function of $\theta$, it tells us how probable the observed label $y_i$ is under the model, given the features $\mathbf{x}_i$.

The likelihood of the entire dataset is the product of the individual data point likelihoods (assuming the data points are independent). Thus

$$ P(\mathbf{y} \mid \mathbf{X}; \theta) = \prod_{i=1}^{m} P(y_i \mid \mathbf{x}_i; \theta) = \prod_{i=1}^{m} h_{\theta}(\mathbf{x}_i)^{y_i} \, (1 - h_{\theta}(\mathbf{x}_i))^{1-y_i}$$

Now the principle of maximum likelihood says that we find the parameters $\theta$ that maximise the likelihood $P(\mathbf{y} \mid \mathbf{X}; \theta)$.

As mentioned in the comments, logarithms are used because they convert products into sums and do not change the location of the maximum, since the logarithm is a monotone increasing function. Here too the likelihood has a product form, so we take the natural logarithm; maximising the likelihood is the same as maximising the log-likelihood. The log-likelihood $L(\theta)$ is then:

$$ L(\theta) = \log P(\mathbf{y} \mid \mathbf{X}; \theta) = \sum_{i=1}^{m} y_i \log(h_{\theta}(\mathbf{x}_i)) + (1-y_i) \log(1 - h_{\theta}(\mathbf{x}_i)) $$
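As a numerical sanity check that the log only turns the product into a sum without changing anything else (the probabilities below are made up for illustration):

```python
import numpy as np

h = np.array([0.9, 0.2, 0.7])  # predicted probabilities h_theta(x_i)
y = np.array([1.0, 0.0, 1.0])  # observed labels

# likelihood as a product of Bernoulli terms
likelihood = np.prod(h**y * (1 - h)**(1 - y))

# log-likelihood as a sum, as in L(theta) above
log_likelihood = np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))

print(np.log(likelihood), log_likelihood)  # equal up to floating-point error
```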

Since in linear regression we found the $\theta$ that minimizes the cost function, here too, for the sake of consistency, we would like to have a minimization problem, and we want the average cost over all the data points. Currently we have a maximization of $L(\theta)$. Maximizing $L(\theta)$ is equivalent to minimizing $-L(\theta)$, and after averaging over all data points, our cost function for logistic regression comes out to be

$$ J(\theta) = - \dfrac{1}{m} L(\theta)$$

$$ = - \dfrac{1}{m} \left( \sum_{i=1}^{m} y_i \log (h_{\theta}(\mathbf{x}_i)) + (1-y_i) \log (1 - h_{\theta}(\mathbf{x}_i)) \right )$$
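Since minimizing $J(\theta)$ is what we do in practice, here is a minimal batch gradient-descent sketch under the definitions above (the toy data, learning rate, and step count are mine); it uses the standard gradient $\frac{1}{m}\mathbf{X}^T(h_\theta(\mathbf{X}) - \mathbf{y})$ of this cost:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    h = sigmoid(X @ theta)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

def gradient_descent(X, y, lr=0.1, steps=5000):
    """Minimize J(theta) by plain batch gradient descent."""
    theta = np.zeros(X.shape[1])
    m = len(y)
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ theta) - y) / m
        theta -= lr * grad
    return theta

# toy 1-D data: the label flips from 0 to 1 as the feature passes ~2
x = np.array([0.5, 1.0, 1.5, 3.0, 3.5, 4.0])
X = np.column_stack([np.ones_like(x), x])  # prepend an intercept column
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

theta = gradient_descent(X, y)
print(theta, cost(theta, X, y))  # cost ends up small on this separable toy set
```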

Now we can also see where the piecewise cost for a single data point comes from:

The cost for a single data point is $-\log P(y_i \mid \mathbf{x}_i; \theta)$, which can be written as $- \left( y_i \log (h_{\theta}(\mathbf{x}_i)) + (1 - y_i) \log (1 - h_{\theta}(\mathbf{x}_i)) \right)$.

We can now split this into two cases depending on the value of $y_i$. Thus we get

$J(h_{\theta}(\mathbf{x}_i), y_i) = - \log (h_{\theta}(\mathbf{x}_i)) , \text{ if } y_i=1$, and

$J(h_{\theta}(\mathbf{x}_i), y_i) = - \log (1 - h_{\theta}(\mathbf{x}_i)) , \text{ if } y_i=0 $.