Solved – Combining multiple outcomes in studies (no meta-analysis)

Tags: covariance, effect-size, meta-analysis, variance

I'm looking for a way to combine effect sizes (mean differences) and their variances within several single studies, but I don't want to conduct an overall meta-analysis. That is, I'm only looking for an average score for each study.

Example:
Study 1 has 2 outcomes -> average ES and variance of ES for Study 1
Study 2 has 4 outcomes -> average ES and variance of ES for Study 2

Problem:
I can calculate the mean of the effect sizes, but that does not work for the variance. To average the variances I would need the correlations between the outcomes, but these are rarely reported.
Also, please note that the outcomes within each study are very similar (e.g., two different measures of word problem solving).

Ideas so far (I also refer to this post):

  • Estimate the covariance between outcomes using the formula by Gleser, L. J., & Olkin, I. (2009) (example in metafor): In principle this would work. However, in some of my studies the sample sizes are not the same for every outcome, so I can't really apply the formula to every pair of outcomes to calculate their covariance.
    In the R package MAd, the mean of the sample sizes of the two outcomes is used to avoid this kind of problem. And I would also have to guess the correlation between the outcomes.
  • Robust variance estimation: Does not seem suitable for combining multiple effect sizes from only a single study, because the between-study variance is needed to estimate an average variance (?). And my number of studies is too small for a decent estimate.
  • Using Borenstein's (2009) formula to combine ESs and variances: This would work because only the effect sizes and the variances of the effect sizes are needed. However, I would have to guess a correlation between every pair of ESs.
  • I could also use a non-statistical approach, such as selecting only one ES per study.
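For intuition, Borenstein's composite can be sketched numerically. This is a minimal illustration (Python used here just for the arithmetic; the function name, effect sizes, variances, and the equal-correlation assumption are all made up), not the MAd implementation: the composite ES is the mean of the m outcome ESs, and its variance is (1/m^2) * [sum of V_i + sum over pairs i != j of r * sqrt(V_i * V_j)], so the guessed r only affects the variance, not the point estimate.

```python
from math import sqrt

def borenstein_composite(es, var, r):
    """Combine m correlated effect sizes from one study into a composite
    (mean ES and its variance), assuming a common correlation r between
    every pair of outcomes. Hypothetical sketch, not the MAd code."""
    m = len(es)
    mean_es = sum(es) / m
    # Variance of the mean of m correlated estimates:
    # (1/m^2) * [sum(V_i) + sum_{i != j} r * sqrt(V_i * V_j)]
    total = sum(var)
    for i in range(m):
        for j in range(m):
            if i != j:
                total += r * sqrt(var[i] * var[j])
    return mean_es, total / m**2

# Two word-problem-solving outcomes from one hypothetical study:
es, var = [0.4, 0.6], [0.05, 0.08]
for r in (0.0, 0.5, 1.0):  # sensitivity check over the assumed correlation
    print(r, borenstein_composite(es, var, r))
```

Note that the composite ES stays at 0.5 regardless of r, while the composite variance grows with r, which is why guessing r too low makes the study look more precise than it is.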

I would prefer Borenstein's method: compared to Gleser & Olkin I still have to guess a correlation, but I don't need to bother with different sample sizes.
What do you think? Looking forward to your opinion!

Best Answer

The agg function in the MAd package can actually apply the Borenstein et al. (2009) method for you. You just need to do:

library(MAd)
agg(id = id, es = es, var = var, cor = 1, method = "BHHR", data = yourdata)

With id being the column that contains the common identifier of all the rows you want to aggregate, and cor the assumed correlation. Depending on the type of study you are carrying out, your initial r will be different. Some people use r = 0.5 as a default. In my field (life sciences) I use r = 1 when the non-independent outcomes are measurements across several time points. What I would do is run the analysis with r = 0, r = 0.5 and r = 1 and see whether the conclusions differ. This is what some people would call a sensitivity analysis.