The package MCMCglmm can easily handle and estimate covariance structures and random effects. However, it uses Bayesian statistics, which can be intimidating to new users. See the MCMCglmm Course Notes for a thorough guide to MCMCglmm, and chapter 5 in particular for this question. I strongly recommend reading up on assessing model convergence and chain mixing before analysing data for real in MCMCglmm.
library(MCMCglmm)
MCMCglmm uses priors; this is an uninformative inverse-Wishart prior.
p <- list(G = list(G1 = list(V = diag(2), nu = 0.002)),
          R = list(V = diag(2), nu = 0.002))
Fit the model
m <- MCMCglmm(cbind(x, y) ~ trait - 1,
              # trait-1 gives each variable a separate intercept
              random = ~us(trait):group,
              # a separate random intercept for each variable, allowing and
              # estimating the covariance between them
              rcov = ~us(trait):units,
              # separate residual variance for each trait, with the covariance
              # between them estimated
              family = c("gaussian", "gaussian"), prior = p, data = df)
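Before interpreting anything, check mixing and convergence as urged above; a minimal sketch using the coda tools that MCMCglmm chains work with:

library(coda)
plot(m$VCV)           # trace and density plots for each co/variance chain
autocorr.diag(m$VCV)  # high autocorrelation suggests a longer run or more thinning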
In the model summary, summary(m), the G structure describes the variance and covariance of the random intercepts. The R structure describes the observation-level variances and covariances, which function as residuals in MCMCglmm.
If you are of a Bayesian persuasion, you can get the entire posterior distribution of the co/variance terms from m$VCV. Note that these are co/variances after accounting for the fixed effects.
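For example, point estimates and credible intervals can be pulled straight from that posterior (a sketch; posterior.mode() comes with MCMCglmm and HPDinterval() with coda):

posterior.mode(m$VCV)            # posterior modes of each co/variance component
HPDinterval(m$VCV, prob = 0.95)  # 95% highest posterior density intervals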
Simulate the data:
library(MASS)
n <- 3000
# draws from a bivariate normal distribution
df <- data.frame(mvrnorm(n, mu = c(10, 20),  # the intercepts of x and y
                         Sigma = matrix(c(10, -3, -3, 2), ncol = 2)))
# Sigma is the residual variance-covariance of x and y
# assign random effect values
number_of_groups <- 100
df$group <- rep(1:number_of_groups, length.out = n)
group_var <- data.frame(mvrnorm(number_of_groups, mu = c(0, 0),
                                Sigma = matrix(c(3, 2, 2, 5), ncol = 2)))
# the variance-covariance matrix of the random effects:
# c(variance of x, covariance of x and y, covariance of x and y, variance of y)
# the variables x and y are the sum of the bivariate draws and the random effect
df$x <- df$X1 + group_var[df$group, 1]
df$y <- df$X2 + group_var[df$group, 2]
Estimating the original co/variances of the random effects requires a large number of levels of the random effect. Instead, your model will likely estimate the observed co/variances, which can be calculated with cov(group_var).
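For instance, you might compare the realised co/variances of the simulated random effects with the model's group-level (G) estimates (a sketch, reusing the objects defined above):

cov(group_var)         # observed co/variance of the simulated random effects
posterior.mode(m$VCV)  # compare against the group-level components of the fit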
The equivalence of the models can be observed by calculating the correlation between two observations from the same individual, as follows:
As in your notation, let $y_{ij} = \mu + \alpha_i + \beta_j + \epsilon_{ij}$, where
$\beta_j \sim N(0, \sigma_p^2)$ and $\epsilon_{ij} \sim N(0, \sigma^2)$.
Then
$Cov(y_{ik}, y_{jk}) = Cov(\mu + \alpha_i + \beta_k + \epsilon_{ik}, \mu + \alpha_j + \beta_k + \epsilon_{jk}) = Cov(\beta_k, \beta_k) = \sigma_p^2$, because all other terms are independent or fixed,
and $Var(y_{ik}) = Var(y_{jk}) = \sigma_p^2 + \sigma^2$,
so the correlation is $\sigma_p^2/(\sigma_p^2 + \sigma^2)$.
Note, however, that the models are not quite equivalent: the random effects model forces the correlation to be positive, while the CS model and the t-test/ANOVA model do not.
EDIT: There are two other differences as well. First, the CS and random effects models assume normality for the random effect, but the t-test/ANOVA model does not. Second, the CS and random effects models are fit using maximum likelihood, while the ANOVA is fit using mean squares; when everything is balanced they will agree, but not necessarily in more complex situations. Finally, I'd be wary of using F/df/p values from the various fits as measures of how much the models agree; see Doug Bates's famous screed on dfs for more details. (END EDIT)
The problem with your R code is that you're not specifying the correlation structure properly. You need to use gls with the corCompSymm correlation structure.
Generate data so that there is a subject effect:
set.seed(5)
x <- rnorm(10)
x1 <- x + rnorm(10)
x2 <- x + 1 + rnorm(10)
myDat <- data.frame(c(x1, x2), c(rep("x1", 10), rep("x2", 10)),
                    rep(paste("S", seq(1, 10), sep = ""), 2))
names(myDat) <- c("y", "x", "subj")
Then here's how you'd fit the random effects and the compound symmetry models.
library(nlme)
fm1 <- lme(y ~ x, random=~1|subj, data=myDat)
fm2 <- gls(y ~ x, correlation=corCompSymm(form=~1|subj), data=myDat)
The estimated standard deviations from the random effects model give the variance components:
m1.varp <- 0.5453527^2  # random intercept (between-subject) variance
m1.vare <- 1.084408^2   # residual variance
And the correlation and residual variance from the CS model are:
m2.rho <- 0.2018595   # within-subject correlation
m2.var <- 1.213816^2  # residual variance
And they're equal to what is expected:
> m1.varp/(m1.varp+m1.vare)
[1] 0.2018594
> sqrt(m1.varp + m1.vare)
[1] 1.213816
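Rather than typing the numbers in by hand, the same quantities can be extracted from the fitted objects (a sketch using standard nlme accessors):

vc <- VarCorr(fm1)                      # variance components from the lme fit
m1.varp <- as.numeric(vc["(Intercept)", "Variance"])
m1.vare <- as.numeric(vc["Residual", "Variance"])
m2.rho <- coef(fm2$modelStruct$corStruct, unconstrained = FALSE)  # CS correlation
m2.var <- fm2$sigma^2                   # CS residual variance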
Other correlation structures are usually fit not with random effects but simply by specifying the desired structure in gls; one common exception is the AR(1) + random effect model, which combines a random effect with AR(1) correlation between observations sharing that random effect.
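For example, that combination can be written in nlme as follows (a sketch; longDat and its time column are hypothetical stand-ins for a data set with several ordered observations per subject):

# hypothetical data: repeated measures per subj, ordered by time
fm3 <- lme(y ~ x, random = ~ 1 | subj,
           correlation = corAR1(form = ~ time | subj), data = longDat)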
EDIT2: When I fit the three options, I get exactly the same results except that gls doesn't try to guess the df for the term of interest.
> summary(fm1)
...
Fixed effects: y ~ x
Value Std.Error DF t-value p-value
(Intercept) -0.5611156 0.3838423 9 -1.461839 0.1778
xx2 2.0772757 0.4849618 9 4.283380 0.0020
> summary(fm2)
...
Value Std.Error t-value p-value
(Intercept) -0.5611156 0.3838423 -1.461839 0.1610
xx2 2.0772757 0.4849618 4.283380 0.0004
> m1 <- lm(y~ x + subj, data=myDat)
> summary(m1)
...
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.3154 0.8042 -0.392 0.70403
xx2 2.0773 0.4850 4.283 0.00204 **
(The intercept is different here because with the default coding, it's not the mean of all subjects but instead the mean of the first subject.)
It's also of interest to note that the newer lme4 package gives the same results but doesn't even try to compute a p-value.
> mm1 <- lmer(y ~ x + (1|subj), data=myDat)
> summary(mm1)
...
Estimate Std. Error t value
(Intercept) -0.5611 0.3838 -1.462
xx2 2.0773 0.4850 4.283
Best Answer
You can include twins and non-twins in a unified model by using a dummy variable and including random slopes on that dummy variable. Since all families have at most one set of twins, this will be relatively simple:
Let $A_{ij} = 1$ if sibling $j$ in family $i$ is a twin, and 0 otherwise. I'm assuming you also want the random slope to differ for twins vs. regular siblings - if not, do not include the $ \eta_{i3}$ term in the model below.
Then fit the model:
$$ y_{ij} = \alpha_{0} + \alpha_{1} x_{ij} + \eta_{i0} + \eta_{i1} A_{ij} + \eta_{i2} x_{ij} + \eta_{i3} x_{ij} A_{ij} + \varepsilon_{ij} $$
$\alpha_{0}, \alpha_{1}$ are fixed effects, as in your specification.
$\eta_{i0}$ is the 'baseline' sibling random effect and $\eta_{i1}$ is the additional random effect that allows twins to be more similar than regular siblings. The sizes of the corresponding random effect variances quantify how similar siblings are and how much more similar twins are than regular siblings. Note that both twin and non-twin correlations are characterized by this model - twin correlations are calculated by summing random effects appropriately (plug in $A_{ij}=1$).
$\eta_{i2}$ and $\eta_{i3}$ have analogous roles, only they act as the random slopes of $x_{ij}$.
$\varepsilon_{ij}$ are iid error terms - note that I have written your model slightly differently in terms of random intercepts rather than correlated residual errors.
You can fit the model using the R package lme4. In the code below the dependent variable is y, the dummy variable is A, the predictor is x, the product of the dummy variable and the predictor is Ax, and famID is the identifier number for the family. Your data are assumed to be stored in a data frame D, with these variables as columns.
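A sketch of the call (the exact specification is an assumption on my part; the random part mirrors the model written above, with intercept and slopes freely correlated within family):

library(lme4)
# fixed part: alpha_0 + alpha_1 * x; random part: eta_i0 + eta_i1 * A +
# eta_i2 * x + eta_i3 * x * A, freely correlated within famID
g <- lmer(y ~ x + (1 + A + x + Ax | famID), data = D)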
The random effect variances and the fixed effect estimates can be viewed by typing summary(g). Note that this model allows the random effects to be freely correlated with each other. In many cases, it may make more sense (or be more easily interpretable) to assume independence between the random effects (e.g. this assumption is often made to decompose genetic vs. environmental familial correlation), in which case you'd instead type:
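(A sketch; the exact formula syntax is an assumption on my part, giving each term its own variance component with no covariances between them.)

# uncorrelated random effects: separate variance components, no covariances
g <- lmer(y ~ x + (1 | famID) + (0 + A | famID) + (0 + x | famID) + (0 + Ax | famID),
          data = D)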