Mixed Models in R – Dealing with One Observation Per Level

lme4-nlme · mixed-model · r · regression

I'm fitting a random effects model with glmer to some business data. The aim is to analyse sales performance by distributor, taking into account regional variation. I have the following variables:

  • distcode: distributor ID, with about 800 levels
  • region: top-level geographical ID (north, south, east, west)
  • zone: mid-level geography nested within region, about 30 levels in all
  • territory: low-level geography nested within zone, about 150 levels

Each distributor operates in only one territory. The tricky part is that this is summarised data, with one data point per distributor. So I have 800 data points and I'm trying to fit (at least) 800 parameters, albeit in a regularised fashion.
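To make the structure concrete (and so the model calls below can actually be run), here is a minimal simulated stand-in for the data; the frame name sales and all level counts are made up to mimic the description above:

library(lme4)

# Hypothetical simulated data: 800 distributors, each in exactly one
# territory, with territories nested in zones and zones nested in regions.
set.seed(1)
n_dist <- 800
sales <- data.frame(
  distcode = factor(seq_len(n_dist)),
  region   = factor(sample(c("north", "south", "east", "west"),
                           n_dist, replace = TRUE))
)
# Synthetic nesting: ~32 zones (8 per region), ~160 territories (5 per zone)
sales$zone      <- factor(paste(sales$region, sample(1:8, n_dist, replace = TRUE), sep = ":"))
sales$territory <- factor(paste(sales$zone,   sample(1:5, n_dist, replace = TRUE), sep = ":"))
sales$ninv      <- rpois(n_dist, lambda = 20)  # one summarised count per distributor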

I've fitted a model as follows:

glmer(ninv ~ 1 + (1|region/zone/territory) + (1|distcode),
      family = poisson, data = sales)

This runs without a problem, although it does print a note:

Number of levels of a grouping factor for the random effects
is equal to n, the number of observations

Is this a sensible thing to do? I get finite estimates for all the coefficients, and the AIC also isn't unreasonable. If I try a Poisson GLMM with the identity link, the AIC is much worse, so the log link is at least a good starting point.

If I plot the fitted values vs the response, I get what is essentially a perfect fit, which I guess is because I have one data point per distributor. Is that reasonable, or am I doing something completely silly?
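For reference, this is the kind of check I mean (a sketch against the simulated data above, with the model assigned to a hypothetical fit object):

# Refit with an assignment, then plot fitted values against the response
fit <- glmer(ninv ~ 1 + (1|region/zone/territory) + (1|distcode),
             family = poisson, data = sales)
plot(fitted(fit), sales$ninv, xlab = "Fitted values", ylab = "Observed ninv")
abline(0, 1, lty = 2)  # points hugging this line indicate a near-perfect fit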

This is using data for one month. I can get data for multiple months and get some replication that way, but I'd have to add new terms for month-to-month variation and possible interactions, correct?
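Something like the following is what I have in mind, assuming a hypothetical multi-month frame sales_multi with a month factor column (one of several reasonable parameterisations, not the definitive one):

# Fixed month effect on top of the geographic and distributor terms;
# with several rows per distributor, distcode no longer has one
# observation per level.
fit_multi <- glmer(ninv ~ month + (1|region/zone/territory) + (1|distcode),
                   family = poisson, data = sales_multi)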


ETA: I ran the above model again, but without a family argument (so just a Gaussian LMM rather than a GLMM). Now lmer gave me the following error:

Error in (function (fr, FL, start, REML, verbose) :
Number of levels of a grouping factor for the random effects
must be less than the number of observations

So I'd guess that I'm not doing something sensible, as changing the family shouldn't have an effect. But the question now is, why did it work in the first place?
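For what it's worth, recent versions of lme4 expose this check through lmerControl(), so the error can be silenced explicitly, though that says nothing about whether the model is sensible (a sketch, assuming a current lme4):

# Relaxing the number-of-levels check; this does not make the model
# identifiable, it only downgrades the error.
fit_lmm <- lmer(ninv ~ 1 + (1|region/zone/territory) + (1|distcode),
                data = sales,
                control = lmerControl(check.nobs.vs.nlev = "ignore",
                                      check.nobs.vs.nRE  = "ignore"))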

Best Answer

I would strongly disagree with the practice of fitting a mixed model where you have the same number of groups as observations, both on conceptual grounds, as there are no "groups", and on computational grounds, as your model will have identifiability issues, in the case of an LMM at least. (I work with LMMs almost exclusively, so I might be a bit biased, too. :) )

The computational part: Take, for example, the standard LMM where $y \sim N(X\beta, ZDZ^T + \sigma^2 I)$. If you have an equal number of observations and groups (say, under a "simple" clustering, with no crossed or nested effects), then all of the sample variance gets moved into the $D$ matrix and $\sigma^2$ is driven to zero. (I think you have convinced yourself of this already.) It is almost equivalent to having as many parameters as data points in a linear model: you have an over-parametrized model, and the regression is therefore a bit nonsensical.
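To spell out the identifiability problem: with one observation per group and a single random intercept per group, the random-effects design matrix is $Z = I_n$ and $D = \tau^2 I_n$, so the marginal covariance collapses to

$$ZDZ^T + \sigma^2 I_n = \tau^2 I_n + \sigma^2 I_n = (\tau^2 + \sigma^2)\, I_n,$$

and the data can only identify the sum $\tau^2 + \sigma^2$, never the two variance components separately.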

(I don't understand what you mean by a "reasonable" AIC. The AIC is still computable in the sense that, despite over-fitting your data, you are still "computing something".)

On the other hand, with glmer (let's say you have specified the family to be Poisson) you have a link function that says how your $y$ depends on $X\beta$ (for the Poisson that is simply a log link, since the mean $\exp(X\beta)$ must be positive). In that case the scale parameter is fixed (at 1 for the Poisson), so an observation-level random effect can soak up over-dispersion and you do have identifiability (and that is why, while glmer complained, it still gave you results); this is how you "get around" the issue of having as many groups as observations.
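In your model, (1|distcode) plays exactly this role, since there is one row per distributor: it is what is often called an observation-level random effect (OLRE) for over-dispersion. A minimal sketch of the idea on the simulated data above (obs is a hypothetical observation index):

# With one level per row, (1|obs) absorbs extra-Poisson variation,
# which works precisely because the Poisson scale parameter is fixed at 1.
sales$obs <- factor(seq_len(nrow(sales)))
fit_olre <- glmer(ninv ~ 1 + (1|obs), family = poisson, data = sales)
VarCorr(fit_olre)  # the obs variance measures the over-dispersion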

The conceptual part: I think this is a bit more "subjective", but also a bit more straightforward. You use mixed-effects models because you have recognised that there is some group-related structure in your errors. Now, if you have as many groups as data points, there is no structure to be seen. Any deviations in your LM error structure that could be attributed to a "grouping" are now attributed to the specific observation point (and as such you end up with an over-fitted model).

In general, single-observation groups tend to be a bit messy; to quote D. Bates from the r-sig-mixed-models mailing list:

I think you will find that there is very little difference in the model fits whether you include or exclude the single-observation groups. Try it and see.
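Following that suggestion, one way to "try it and see" (a sketch on the simulated data above; with the real data, substitute your own frame):

# Fit with and without the single-observation grouping factor and compare
fit_with    <- glmer(ninv ~ 1 + (1|region/zone/territory) + (1|distcode),
                     family = poisson, data = sales)
fit_without <- glmer(ninv ~ 1 + (1|region/zone/territory),
                     family = poisson, data = sales)
AIC(fit_with, fit_without)
anova(fit_with, fit_without)  # likelihood-ratio comparison of the nested fits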
