Solved – Is it possible to get fitted values 0 or 1 in logistic regression when the fitting algorithm converges

Tags: logistic, r, separation

Perfect separation in logistic regression has been covered extensively here. When it happens in R, we usually see two warnings:

Warning messages:
1: glm.fit: algorithm did not converge
2: glm.fit: fitted probabilities numerically 0 or 1 occurred

Is it possible for the algorithm to converge and still produce fitted probabilities that are numerically 0 or 1?

Best Answer

R is giving you two different warnings because these really are two distinct issues.

Very loosely, the algorithm that fits a logistic regression model (typically some version of Newton-Raphson) searches for the coefficient estimates that maximize the log likelihood. It evaluates the model at a given point in the parameter space, sees which direction is 'uphill', and then moves some distance in that direction. The potential problem is that, when perfect separation exists, the log likelihood has no finite maximum: it keeps increasing as the slope estimate grows without bound (in absolute value). Because a search algorithm has to be designed to stop at some point, it doesn't converge.
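
For a rough illustration, consider a tiny toy dataset with complete separation. The coefficient estimates head off toward infinity, and exactly which warnings glm() emits can vary with the data and the iteration limit; you will typically see the 0-or-1 warning and may also see the non-convergence warning.

# toy example of complete separation: y switches from 0 to 1 between x=3 and x=4
x.sep = 1:6
y.sep = c(0, 0, 0, 1, 1, 1)
m.sep = glm(y.sep~x.sep, family=binomial)
# Warning(s) will typically include:
# glm.fit: fitted probabilities numerically 0 or 1 occurred
# (and possibly: glm.fit: algorithm did not converge)
coef(m.sep)  # huge estimates; they would keep growing if the algorithm kept iterating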

On the other hand, no matter where it stops, whether it converged or not, it is (theoretically) possible to calculate the model's predicted values for the data. However, because computers use finite-precision arithmetic, they eventually have to round off or drop extremely small decimal values. Thus, if the arithmetically correct value is sufficiently close to 0 or 1, the rounded value can be exactly 0 or 1. A fitted value can end up like that for an observation within the normal range of the data when the slope estimate is extremely large in absolute value (as happens with complete separation), or the observation can simply be so far out on X that even a modest slope leads to the same phenomenon.
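
To see the rounding point concretely, you can play with R's built-in inverse logit, plogis(): in double precision, anything closer to 1 than roughly 1e-16 is stored as exactly 1, while small probabilities only underflow to exactly 0 at far more extreme log odds.

plogis(40)         # arithmetically 1 - 4.2e-18, but stored as exactly 1
plogis(40) == 1    # TRUE: the difference is below double precision
plogis(-40)        # about 4.2e-18; tiny, but still distinguishable from 0
plogis(-800)       # exp(-800) is below the smallest representable double, so this is 0
plogis(-800) == 0  # TRUE

With that in mind, here is a fuller simulated example: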

# I'll use this function to convert log odds to probabilities
lo2p = function(lo){ exp(lo) / (1+exp(lo)) }    
set.seed(163)                             # this makes the example exactly reproducible
x  = c(-500, runif(100, min=-3, max=3), 500)  # the x-values; 2 are extreme
lo = 0 + 1*x                              # true log odds: intercept 0, slope 1
p  = lo2p(lo)                             # true probabilities
y  = rbinom(102, size=1, prob=p)          # simulated binary outcomes

m  = glm(y~x, family=binomial)
# Warning message:
# glm.fit: fitted probabilities numerically 0 or 1 occurred 
summary(m)
# ... 
# Coefficients:
#             Estimate Std. Error z value Pr(>|z|)    
# (Intercept)   0.3532     0.3304   1.069    0.285    
# x             1.3686     0.2372   5.770 7.95e-09 ***
# ...
# 
#     Null deviance: 140.420  on 101  degrees of freedom
# Residual deviance:  63.017  on 100  degrees of freedom
# AIC: 67.017
# 
# Number of Fisher Scoring iterations: 9

Here we see that we got the second warning, but the algorithm converged. The betas are reasonably close to the true values, the standard errors aren't huge, and the number of Fisher scoring iterations is moderate. Nonetheless, the extreme x-values yield predicted log odds that are perfectly calculable, but when converted into probabilities, become essentially 0 and 1.

predict(m, type="link")[c(1, 102)]      # these are the predicted log odds
#         1       102 
# -683.9379  684.6444 
predict(m, type="response")[c(1, 102)]  # these are the predicted probabilities
#            1          102 
# 2.220446e-16 1.000000e+00
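
As a final sanity check, the smallest prediction above sits right at machine epsilon, the resolution limit of double-precision arithmetic, which matches the finite-precision story above.

.Machine$double.eps                                    # smallest relative spacing of doubles
# [1] 2.220446e-16
predict(m, type="response")[1] <= .Machine$double.eps
# [1] TRUE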