GLM – Is the Log Likelihood of the Saturated Model Always Zero?

deviance, generalized linear model, likelihood, regression

As part of the output of a generalised linear model, the null and residual deviance are used to evaluate the model. I often see the formulas for these quantities expressed in terms of the log-likelihood of the saturated model, for example in https://stats.stackexchange.com/a/113022/22199 and in "Logistic Regression: How to obtain a saturated model".

The saturated model, as far as I understand it, is the model that perfectly fits the observed responses. Accordingly, in most places I have seen, the log-likelihood of the saturated model is given as zero.

Yet, the way the formula for deviance is given suggests that this quantity is sometimes non-zero. (After all, if it were always zero, why bother including it?)

In what cases can it be non-zero? If it is always zero, why include it in the formula for deviance at all?

Best Answer

If you really meant log-likelihood, then the answer is: it's not always zero.

For example, consider Poisson data: $y_i \sim \text{Poisson}(\mu_i), i = 1, \ldots, n$. The log-likelihood for $Y = (y_1, \ldots, y_n)$ is given by: $$\ell(\mu; Y) = -\sum_{i = 1}^n \mu_i + \sum_{i = 1}^n y_i \log \mu_i - \sum_{i = 1}^n \log(y_i!). \tag{$*$}$$

Differentiate $\ell(\mu; Y)$ in $(*)$ with respect to $\mu_i$ and set the derivative to $0$ (this is how we obtain the MLE for the saturated model): $$-1 + \frac{y_i}{\mu_i} = 0.$$ Solving for $\mu_i$ gives $\hat{\mu}_i = y_i$. Substituting $\hat{\mu}_i$ back into $(*)$ for $\mu_i$ shows that the log-likelihood of the saturated model is $$\ell(\hat{\mu}; Y) = \sum_{i = 1}^n y_i(\log y_i - 1) -\sum_{i = 1}^n \log(y_i!) \neq 0$$ unless the $y_i$ take very special values.
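
To see this numerically, here is a quick check in R (the seed, sample size and rate below are arbitrary choices for illustration). The saturated fit sets $\hat{\mu}_i = y_i$, and the resulting log-likelihood is clearly non-zero:

    set.seed(1)
    y <- rpois(20, lambda = 3)

    # Log-likelihood of the saturated Poisson model: mu_hat_i = y_i.
    # dpois() handles y = 0 gracefully (dpois(0, lambda = 0) is 1).
    ll_sat <- sum(dpois(y, lambda = y, log = TRUE))
    ll_sat  # a negative number, not 0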

In the help page of the R function glm, under the item deviance, the documentation explains this issue as follows:

deviance: up to a constant, minus twice the maximized log-likelihood. Where sensible, the constant is chosen so that a saturated model has deviance zero.

Notice that it says the deviance, not the log-likelihood of the saturated model, is chosen to be zero.

Probably, what you really wanted to confirm is that "the deviance of the saturated model is always given as zero", which is true, since the deviance, by definition (see Section 4.5.1 of Categorical Data Analysis (2nd edition) by Alan Agresti), is the likelihood-ratio statistic comparing a specified GLM to the saturated model. The constant mentioned in the R documentation is in fact twice the maximized log-likelihood of the saturated model.
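
A quick sanity check of this relationship in R, reusing the simulated Poisson counts above (the intercept-only model is an arbitrary choice of "specified GLM"):

    set.seed(1)
    y <- rpois(20, lambda = 3)
    ll_sat <- sum(dpois(y, lambda = y, log = TRUE))  # saturated log-likelihood

    fit <- glm(y ~ 1, family = poisson)
    ll_fit <- as.numeric(logLik(fit))

    # The residual deviance reported by R is twice the gap between the
    # saturated and fitted log-likelihoods:
    all.equal(deviance(fit), 2 * (ll_sat - ll_fit))  # TRUE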

Regarding your statement "Yet, the way the formula for deviance is given suggests that sometimes this quantity is non-zero", it is probably due to an abuse of the term deviance. For instance, in R, the likelihood-ratio statistic comparing two arbitrary (nested) models $M_1$ and $M_2$ is also referred to as a deviance; if we closely followed the definition given in Agresti's book, it would more precisely be termed the difference between the deviance of $M_1$ and the deviance of $M_2$.
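
To illustrate, in R (with the same simulated counts, and a hypothetical covariate x added purely for this example), the "Deviance" column reported by anova() for two nested models is exactly the difference of their residual deviances:

    set.seed(1)
    y <- rpois(20, lambda = 3)
    x <- rnorm(20)  # hypothetical covariate, for illustration only

    m1 <- glm(y ~ 1, family = poisson)
    m2 <- glm(y ~ x, family = poisson)

    # anova() labels the change in deviance between nested models "Deviance";
    # it equals the difference of the two residual deviances:
    anova(m1, m2)
    deviance(m1) - deviance(m2)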

Conclusion

  1. The log-likelihood of the saturated model is in general non-zero.

  2. The deviance (in its original definition) of the saturated model is zero.

  3. The deviance output by software (such as R) is in general non-zero, as it actually means something else (the difference between deviances).


The following is the derivation for the general exponential-family case, together with another concrete example. Suppose the data come from an exponential family (see Modern Applied Statistics with S, Chapter $7$): $$f(y_i; \theta_i, \varphi) = \exp[A_i(y_i\theta_i - \gamma(\theta_i))/\varphi + \tau(y_i, \varphi/A_i)], \tag{1}$$ where the $A_i$ are known prior weights and $\varphi$ is a dispersion/scale parameter (for many families, such as the binomial and Poisson, this parameter is known, while for others, such as the normal and Gamma, it is unknown). Then the log-likelihood is given by: $$\ell(\theta, \varphi; Y) = \sum_{i = 1}^n A_i(y_i \theta_i - \gamma(\theta_i))/\varphi + \sum_{i = 1}^n \tau(y_i, \varphi/A_i). $$ As in the Poisson example, the parameters of the saturated model can be estimated by solving the score equations: $$0 = U(\theta_i) = \frac{\partial \ell(\theta, \varphi; Y)}{\partial \theta_i} = \frac{A_i(y_i - \gamma'(\theta_i))}{\varphi}.$$

Denote the solution of the above equation by $\hat{\theta}_i$; then the general form of the log-likelihood of the saturated model (treating the scale parameter as fixed) is: $$\ell(\hat{\theta}, \varphi; Y) = \sum_{i = 1}^n A_i(y_i \hat{\theta}_i - \gamma(\hat{\theta}_i))/\varphi + \sum_{i = 1}^n \tau(y_i, \varphi/A_i). \tag{$**$}$$
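
As a consistency check (this worked instance is my own, not from the references), the Poisson example is recovered from $(1)$ by taking $A_i = 1$, $\varphi = 1$, $\theta_i = \log \mu_i$, $\gamma(\theta) = e^{\theta}$ and $\tau(y, \varphi) = -\log(y!)$: the score equation becomes $y_i - e^{\theta_i} = 0$, so $\hat{\theta}_i = \log y_i$, and $(**)$ reduces to $$\sum_{i = 1}^n y_i(\log y_i - 1) - \sum_{i = 1}^n \log(y_i!),$$ in agreement with the Poisson result above.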

In my previous answer, I incorrectly stated that the first term on the right-hand side of $(**)$ is always zero; the Poisson example above shows that this is wrong. For a more complicated example, consider the Gamma distribution $\Gamma(\alpha, \beta)$ treated in the appendix below.


Proof that the first term in the log-likelihood of the saturated Gamma model is non-zero: given $$f(y; \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)}e^{-\beta y}y^{\alpha - 1}, \quad y > 0, \alpha > 0, \beta > 0,$$ we must first reparameterize so that $f$ has the exponential-family form $(1)$. It can be verified that, letting $$\varphi = \frac{1}{\alpha},\, \theta = -\frac{\beta}{\alpha},$$ $f$ has the representation: $$f(y; \theta, \varphi) = \exp\left[\frac{\theta y - (-\log(-\theta))}{\varphi}+ \tau(y, \varphi)\right],$$ where $$\tau(y, \varphi) = -\frac{\log \varphi}{\varphi} + \left(\frac{1}{\varphi} - 1\right)\log y - \log\Gamma(\varphi^{-1}).$$ Therefore, the MLEs under the saturated model are $\hat{\theta}_i = -\frac{1}{y_i}$, and hence $$\sum_{i = 1}^n \frac{1}{\varphi}\left[\hat{\theta}_i y_i - (-\log(-\hat{\theta}_i))\right] = \sum_{i = 1}^n \frac{1}{\varphi}[-1 - \log(y_i)] \neq 0,$$ unless the $y_i$ take very special values.
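
This reparameterization can also be checked numerically in R (the values of $y$, $\alpha$ and $\beta$ below are arbitrary; any positive values would do):

    y <- 2.5; alpha <- 3; beta <- 1.2

    phi   <- 1 / alpha       # dispersion parameter
    theta <- -beta / alpha   # canonical parameter

    # Exponential-family representation of the Gamma density:
    tau <- -log(phi) / phi + (1 / phi - 1) * log(y) - lgamma(1 / phi)
    f_repar <- exp((theta * y + log(-theta)) / phi + tau)

    all.equal(f_repar, dgamma(y, shape = alpha, rate = beta))  # TRUE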