R – How to Interpret Random Effect Coefficients in glmer

Tags: lme4-nlme, mixed-model, r, random-effects-model

I am studying the relationship between the competition facing a hospital and 30-day mortality within it. I fitted a mixed-effects model on the assumption that patients in the same hospital are more correlated with one another. Hospital (finessGeoDP) and trimester (Trimestre) are random effects. HHI_cat is the competition index (with four levels).

Below are the model script and the output.

MODEL

MultModel <- glmer(dc30 ~ HHI_cat + age_cat + Sexe + Urgence + neoadj +
                     denutrition + score_charlson_cat + Acte +
                     Nbre.sejour_cat + statutHop2 + Fdep09_cat3 +
                     (1 | Trimestre) + (1 | finessGeoDP),
                   data = data_Final, family = binomial(link = "logit"),
                   control = glmerControl(optimizer = "bobyqa",
                                          optCtrl = list(maxfun = 2e5)))

OUTPUT

I calculated the odds ratios of the fixed effects using exp().

I also calculated the confidence intervals of the odds ratios using the standard error × 1.96.
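For reference, a minimal sketch of those two calculations in R, assuming the fitted object is MultModel as above (Wald intervals on the log-odds scale, then exponentiated):

# Odds ratios and approximate 95% Wald CIs for the fixed effects
fe    <- summary(MultModel)$coefficients
OR    <- exp(fe[, "Estimate"])
lower <- exp(fe[, "Estimate"] - 1.96 * fe[, "Std. Error"])
upper <- exp(fe[, "Estimate"] + 1.96 * fe[, "Std. Error"])
round(cbind(OR, lower, upper), 3)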

However, I am not used to interpreting the results of random effects.
How should I interpret the variances for finessGeoDP (hospital ID) and Trimestre? Do I have to convert these coefficients with exp() before interpreting them?
Could I calculate a confidence interval for the variance using the SD × 1.96?
Is there any point in testing the significance of the random effects?
Could the results of the random effects influence the interpretation of the fixed effects?

 AIC      BIC   logLik deviance df.resid 
 42319.9  42578.0 -21133.9  42267.9   151533 

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-1.0389 -0.2019 -0.1446 -0.1108 15.6751 

Random effects:
 Groups      Name        Variance Std.Dev.
 finessGeoDP (Intercept) 0.12824  0.3581  
 Trimestre   (Intercept) 0.03333  0.1826  
Number of obs: 151559, groups:  finessGeoDP, 711; Trimestre, 20

Fixed effects:
                           Estimate Std. Error z value Pr(>|z|)    
(Intercept)                -4.41959    0.11735 -37.663  < 2e-16 ***
HHI_catUn.peu.compétif     -0.01905    0.05663  -0.336 0.736554    
HHI_catmoy.competif        -0.02566    0.06121  -0.419 0.675128    
HHI_catTrès.competitif     -0.20815    0.06389  -3.258 0.001122 ** 
age_cat61-70 ans            0.31443    0.05653   5.562 2.67e-08 ***
age_cat71-80 ans            0.62614    0.05461  11.466  < 2e-16 ***
age_cat81-90 ans            1.29198    0.05346  24.169  < 2e-16 ***
age_catPlus de 90 ans       1.86270    0.07069  26.349  < 2e-16 ***
SexeHomme                   0.30788    0.02935  10.489  < 2e-16 ***
UrgenceOui                  1.07916    0.03549  30.408  < 2e-16 ***
neoadjOui                   0.20516    0.04978   4.122 3.76e-05 ***
denutritionOui              0.35383    0.03156  11.210  < 2e-16 ***
score_charlson_cat3-4       0.26342    0.04129   6.379 1.78e-10 ***
score_charlson_cat>4        0.88358    0.03925  22.512  < 2e-16 ***
ActeAutres                  0.43596    0.05404   8.068 7.15e-16 ***
Actecolectomie_gauche      -0.14714    0.03827  -3.844 0.000121 ***
ActeResection rectale      -0.39737    0.07856  -5.058 4.24e-07 ***
Acteresection_multiple_CCR  0.08006    0.05210   1.537 0.124376    
ActeRRS                    -0.17226    0.04293  -4.013 6.01e-05 ***
Nbre.sejour_cat51-100      -0.17283    0.04731  -3.653 0.000259 ***
Nbre.sejour_cat>100        -0.37517    0.07712  -4.865 1.15e-06 ***
statutHop2Hpt.non.univ     -0.10931    0.07480  -1.461 0.143940    
Fdep09_cat3Niv.moy          0.00302    0.03668   0.082 0.934384    
Fdep09_cat3Niv.sup.        -0.04000    0.03960  -1.010 0.312553 

Best Answer

How should I interpret the variances for finessGeoDP (hospital ID) and Trimestre? Do I have to convert these coefficients with exp() before interpreting them?

No, this would simply be wrong. Typically models with random effects are either interpreted

  • in terms of variance components; this is common e.g. in population genetics, and very much harder to do for generalized linear (rather than "ordinary" linear) mixed models, i.e. with a non-Gaussian response variable. In this case you would look at the proportion of variance explained by each term, i.e. you would say something like "variation among groups in finessGeoDP explains about 80% (0.12/0.15) of the variance, while Trimestre explains the remaining 20% (0.03/0.15)". In the mixed case this is tricky because the decomposition includes neither the variability explained by the fixed-effect parameters nor the binomial variation. (If you want to do things this way you should probably look into the plethora of plausible pseudo-$R^2$ measures for GLMMs.)

  • in terms of standard deviations; I generally find this more useful because the standard deviations are on the same (log-odds) scale as the fixed-effect estimates. For example, you could say that a "typical" range encompassing 95% of the variation among hospitals (finessGeoDP) would be about $4\sigma \approx 1.43$ log-odds units, which is of about the same magnitude as the largest fixed-effect parameters. (Both calculations are sketched in the R code after this list.)
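As a rough sketch (not part of the original answer), both interpretations can be read straight off VarCorr() on the fitted model; this assumes the object is called MultModel as in the question:

vc <- as.data.frame(VarCorr(MultModel))   # columns: grp, var1, var2, vcov, sdcor

# 1. Share of the random-effect variance attributable to each grouping factor
#    (ignores fixed-effect and binomial variation, as noted above)
setNames(round(vc$vcov / sum(vc$vcov), 2), vc$grp)

# 2. "Typical" 95% range of hospital effects: about 4*SD on the log-odds scale,
#    and the corresponding multiplicative range on the odds scale
sd_hosp <- vc$sdcor[vc$grp == "finessGeoDP"]
4 * sd_hosp              # roughly 1.43 log-odds units
exp(c(-2, 2) * sd_hosp)  # odds multipliers for a "typical" low vs. high hospital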

Could I calculate a confidence interval for the variance using the SD × 1.96?

No. The SD here is not a measure of the uncertainty of the random-effect parameter; it is just the estimate on the standard-deviation scale (i.e. $\sqrt{\textrm{variance}}$). Furthermore, even if you did have a standard error for the SD (or variance) estimate, an interval of the form estimate ± 1.96 × SE assumes a Gaussian sampling distribution, which is usually a poor approximation for variance parameters. confint(fitted_model, parm = "theta_") will give you more reliable likelihood-profile confidence intervals (warning: this is computationally intensive).
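A sketch of that call for this model (again assuming the fitted object is MultModel); expect it to take a while on 150,000+ observations:

# Profile-likelihood CIs for the random-effect standard deviations only;
# parm = "theta_" restricts profiling to the variance-covariance parameters
ci_theta <- confint(MultModel, parm = "theta_", oldNames = FALSE)
ci_theta  # rows such as sd_(Intercept)|finessGeoDP and sd_(Intercept)|Trimestre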

Is there any point in testing the significance of the random effects?

I would say usually not, but it is interesting in some contexts and to some people. Since variances cannot be negative, p-values for random effects do not have the same sensible interpretation ("can we reliably determine the sign of this effect?") that applies to fixed-effect parameters.
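If you do want a test, one common (if crude) approach is a likelihood-ratio test against a model that drops the random effect, sketched below for Trimestre; because the null value (variance = 0) lies on the boundary of the parameter space, the naive p-value is conservative:

# Refit without the Trimestre random intercept and compare by LRT
m_noTrim <- update(MultModel, . ~ . - (1 | Trimestre))
anova(m_noTrim, MultModel)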

Could the results of the random effects influence the interpretation of the fixed effects?

Sure. (Otherwise there would be a lot of analyses where we don't care about the random effects per se and could save ourselves a lot of trouble by running simpler GLMs.)
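One way to see this in practice (not from the original answer) is to refit the same fixed-effect structure as a plain GLM that ignores the grouping and compare the estimates side by side, assuming the same data and variables as in the question:

# Plain logistic GLM ignoring the hospital and trimester grouping
m_glm <- glm(dc30 ~ HHI_cat + age_cat + Sexe + Urgence + neoadj +
               denutrition + score_charlson_cat + Acte +
               Nbre.sejour_cat + statutHop2 + Fdep09_cat3,
             data = data_Final, family = binomial(link = "logit"))

# Fixed effects from the GLMM next to the GLM coefficients
round(cbind(GLMM = fixef(MultModel), GLM = coef(m_glm)), 3)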
