Regression – Linear Mixed Model with Response Variable Containing Many Zeros

Tags: generalized linear model, logistic, mixed model, regression, zero inflation

My response variable is accuracy (0 or 1) with 10 trials per subject. I thought about treating the response as a percentage and logit-transforming it to deal with its boundedness, so that the predicted values would be sensible, but this doesn't address the non-normality of the errors caused by the large number of zeros in the data: about 25 of 83 subjects have only zeros (i.e., their accuracy rate was 0%). So now I am thinking about using mixed logistic regression, i.e., a generalized linear mixed model fit via glmer() in R with family = binomial("logit"), accounting for the dependence among observations with random intercepts for subjects.

Is GLMM an adequate way to deal with the zeros? Is there another more appropriate model I could use that isn't overly complicated?

Also, any suggestions for how to approach this in R?

Edit: Here are my data (which, if I were to run a GLMM, I would convert back to long form; a sketch of that conversion follows the data):

zz <- "id successes ageMonths
2   2            10      83.1
3   3             0      82.0
4   4             0      80.2
5   6             0      63.5
6   7             3      85.2
7   8             0      84.0
9  10             0      83.2
10 11             0      74.8
11 12             0      65.6
12 14             0      81.8
13 15             0      83.0
14 16             2      70.2
15 17             0      76.2
16 18            10      64.4
17 19             0      79.9
18 20             1      76.9
19 21             0      71.3
20 22             0      79.8
21 23             0      82.1
22 24             1      79.8
23 25             0      77.1
24 26            10      63.5
25 27             0      80.3
26 28             0      81.8
27 29            10      68.8
28 30             0      81.5
29 31             0      74.8
30 32             0      60.4
31 33             0      77.8
32 35            10      69.1
34 37             0      70.4
35 38             0      83.3
36 39             3      69.2
37 40             0      77.4
38 41             0      77.2
39 42             9      64.3 
40 43             9      69.2
41 44             0      68.4
42 45             1      60.3
43 46             3      73.1
44 47            10      67.4
45 48             0      67.2
46 49             8      68.0
47 50             0      60.8
48 51             0      63.5
49 52             0      66.4
50 53             3      71.7
51 54             0      67.4
52 55             0      60.2
53 56            10      70.8
54 57             3      78.1
55 58            10      61.4
56 59             1      61.3
57 60             0      60.9
58 61             0      67.3
59 62             0      73.2
60 63             0      70.8
61 64             0      64.8
62 65             0      62.4
63 66             6      73.3
64 67             0      62.3
65 68            10      63.3
66 69            10      60.1
67 70             0      72.6
68 71            10      63.6
69 72             7      73.0
70 73             0      61.3
71 74             6      83.7
72 75            10      82.2
73 76             7      82.7
74 77            10      61.1
75 78             0      71.8
77 80             3      70.3
78 81             0      62.5
79 82             4      72.3
80 83             5      73.1
81 84             0      70.4
82 85             1      75.4
83 86             0      77.1
84 87             0      75.6
85 88             0      73.4
86 89             0      73.8
87 90             0      72.3"
Data <- read.table(text=zz, header = TRUE); Data
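
For concreteness, here is a minimal sketch of the long-form conversion and the glmer() fit I have in mind, assuming exactly 10 trials per subject (the names long and acc are just placeholders I made up):

library(lme4)
# expand to one row per trial: s ones followed by 10 - s zeros for each subject
long <- Data[rep(seq_len(nrow(Data)), each = 10), c("id", "ageMonths")]
long$acc <- unlist(lapply(Data$successes, function(s) rep(c(1, 0), c(s, 10 - s))))
gm <- glmer(acc ~ ageMonths + (1 | id), data = long, family = binomial("logit"))
summary(gm)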

Edit 2: clarity

Best Answer

Your response data are all $0$s and $1$s. That means your response variable is distributed as a binomial; it is not normal. You should not try to transform it to achieve normality (that will never work anyway), and you should not use a "linear" model (in the sense meant by "linear regression" or "linear mixed model", where the term implies a normally distributed $Y$ variable).

Instead, you should use a model that is appropriate for a binomial response; specifically, some form of logistic regression. It is most common for logistic regression to have a single Bernoulli trial (a single $0$ or $1$) as the response, but since nothing varies across the 10 trials within a subject, there is nothing else to take into account and a binomial logistic regression on the success counts is fine.

The excellent UCLA statistics help site has a tutorial on logistic regression in R here. To see a quick example of a logistic regression in R when the response is multiple Bernoulli trials, you might want to take a look at my answer here: Difference in output between SAS's proc genmod and R's glm. Lastly, although written in a different context, my answer here: Difference between logit and probit models, has a lot of information about logistic regression that may help you understand it better.


Update:
The $0$s do matter, although perhaps not the way you think (you could just as easily say the $10$s are the issue). I take a look at your data below:

Data = read.table(text=zz, header=TRUE)
m = glm(cbind(successes, 10-successes)~ageMonths, Data, family=binomial)
summary(m)
# Call:
# glm(formula = cbind(successes, 10 - successes) ~ ageMonths, family = binomial, 
#     data = Data)
# 
# Deviance Residuals: 
#    Min      1Q  Median      3Q     Max  
# -3.151  -2.433  -1.964   1.539   6.115  
# 
# Coefficients:
#             Estimate Std. Error z value Pr(>|z|)    
# (Intercept)  2.87265    0.79011   3.636 0.000277 ***
# ageMonths   -0.05505    0.01116  -4.935 8.03e-07 ***
# ---
# Signif. codes:  
# 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# 
# (Dispersion parameter for binomial family taken to be 1)
# 
#     Null deviance: 734.18  on 82  degrees of freedom
# Residual deviance: 708.78  on 81  degrees of freedom
# AIC: 763.09
# 
# Number of Fisher Scoring iterations: 4
1-pchisq(708.78, df=81)  # [1] 0

The key feature here is that you have a residual deviance of $709$ on $81$ df. If your data were a simple draw from a binomial distribution, that shouldn't happen. We can formally test for overdispersion by comparing that to its corresponding chi-squared distribution. You can see that the p-value is very small (R only reports $0$).
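
Equivalently, you can estimate the overdispersion directly from the Pearson residuals; this gives the same factor that the quasibinomial fit below reports:

sum(residuals(m, type="pearson")^2) / df.residual(m)  # ~7.72, the dispersion parameter shown below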

There are a couple of ways you could have overdispersion in binomial data. One is that you have a poorly fitting model, e.g., you are missing interaction or polynomial terms (cf. Test logistic regression model using residual deviance and degrees of freedom). Another possibility is that the data aren't actually distributed as a binomial, or that there are latent groupings. An easy first check is to look at your data using a method based on different assumptions. Below, I plot your data and overlay a LOWESS fit. Because the y-values come only at discrete points, I jitter them slightly so that multiple copies of the same value remain visible. The overdispersion is easy to see. It is also possible that a cubic polynomial is called for, but testing that on this dataset because you saw the pattern in this plot would be invalid (you would be using the same data twice).

windows()  # opens a new plotting device on Windows; use dev.new() on other platforms
with(Data, plot(ageMonths, jitter(successes, amount=.3)))
lines(with(Data, lowess(ageMonths, successes)), col="red")

[Scatterplot of jittered successes against ageMonths, with the LOWESS fit overlaid in red]

Given that you really have overdispersion, rather than a too-simple functional form leading to an underfitted model, one hack is to use the quasibinomial 'distribution'. (That isn't really a different distribution; it just estimates the overdispersion and inflates your standard errors by the square root of that factor, in hopes that this will provide adequate protection.) Note that the output is largely the same, but the SEs and the p-values are larger, and it now says "Dispersion parameter ... taken to be 7.7" instead of 1.

mqb = glm(cbind(successes, 10-successes)~ageMonths, Data, family=quasibinomial)
summary(mqb)
# Call:
# glm(formula = cbind(successes, 10 - successes) ~ ageMonths, family = quasibinomial, 
#     data = Data)
# 
# Deviance Residuals: 
#    Min      1Q  Median      3Q     Max  
# -3.151  -2.433  -1.964   1.539   6.115  
# 
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)  
# (Intercept)  2.87265    2.19467   1.309   0.1943  
# ageMonths   -0.05505    0.03099  -1.776   0.0794 .
# ---
# Signif. codes:  
# 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
# 
# (Dispersion parameter for quasibinomial family taken to be 7.715571)
# 
#     Null deviance: 734.18  on 82  degrees of freedom
# Residual deviance: 708.78  on 81  degrees of freedom
# AIC: NA
# 
# Number of Fisher Scoring iterations: 4
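
As a quick check on what the quasibinomial adjustment did, the new standard errors are just the binomial ones multiplied by the square root of the estimated dispersion:

sqrt(7.715571) * c(0.79011, 0.01116)  # ~2.1947 and ~0.0310, matching the SEs above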

The question is whether you believe that hack is adequate to cover you. I'm not confident. You seem to have mostly two latent groups: one for whom the questions are far too hard and one for whom they are trivial (with perhaps a few people in between). Unfortunately, there may be no way to estimate who is who and control for their abilities independently of the number of successes you observe here. To be honest, I suspect you won't really be able to learn anything from these data. At any rate, another strategy is to drop the distributional assumptions about the counts altogether and use ordinal logistic regression, which would be my choice here.

library(rms)
om = orm(successes~ageMonths, Data)
om
# Logistic (Proportional Odds) Ordinal Regression Model
# 
# orm(formula = successes ~ ageMonths, data = Data)
# Frequencies of Responses
# 
#  0  1  2  3  4  5  6  7  8  9 10 
# 49  5  1  6  1  1  2  2  1  2 13 
# 
#                      Model Likelihood          Discrimination          Rank Discrim.    
#                         Ratio Test                 Indexes                Indexes       
# Obs            83    LR chi2      2.23    R2                  0.028    rho     0.161    
# Unique Y       11    d.f.            1    g                   0.386                     
# Median Y        0    Pr(> chi2) 0.1351    gr                  1.472                     
# max |deriv| 1e-06    Score chi2   2.21    |Pr(Y>=median)-0.5| 0.096                     
#                      Pr(> chi2) 0.1367                                                  
# 
#           Coef    S.E.   Wald Z Pr(>|Z|)
# ageMonths -0.0447 0.0303 -1.48  0.1397  

The key test is the likelihood ratio test of the model as a whole at the top (2.2, df=1, p=.135).
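
For completeness, that p-value can be reproduced from the corresponding chi-squared distribution, just like the deviance test earlier (using the rounded statistic from the output):

1 - pchisq(2.23, df=1)  # ~0.135, in line with Pr(> chi2) above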