I want to run a meta-analysis over several studies reporting one-sample data (e.g. comparing participants' scores against a baseline score of zero). I calculated Cohen's d by dividing the difference of the sample mean and the baseline score by the sample standard deviation (as reported in the first answer here). How do I get the sampling variance of this effect size? (needed in the meta-analysis for calculating inverse variance weights)
Meta-Analysis – Calculating Sampling Variance for One-Sample Data
Tags: meta-analysis, r, variance
Related Solutions
If you meta-analyze mean differences with weights of $n$ instead of $1/\text{SE}^2$ (inverse variance), and groups of equal size are being compared, this gets you an appropriate average effect estimate under the assumption that variability is the same across studies. That is, the weights would be proportional to the ones you would use if the standard errors were all exactly $2\hat{\sigma}/\sqrt{n}$ for a standard deviation $\sigma$ that is assumed to be identical across trials. You will no longer get a meaningful overall standard error or confidence interval for the overall estimate, though, because you are throwing away the information in $\hat{\sigma}$ about the sampling variability.
Also note that if the groups are not of equal size, $n$ is not the correct weight, because the standard error for the difference of the means of two normal samples is $\sqrt{\sigma^2_1/n_1 + \sigma^2_2/n_2}$, and this only simplifies to $2\sigma/\sqrt{n}$ if $n_1=n_2=n/2$ (plus $\sigma=\sigma_1=\sigma_2$).
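To make the proportionality concrete, here is a small numeric sketch (my own illustration, not from the original answer; the sample sizes and $\sigma$ are invented) showing that with a common $\sigma$ and balanced groups, inverse-variance weights are exactly proportional to $n$:

```python
import numpy as np

# Hypothetical total sample sizes for three balanced two-group studies
n = np.array([40, 100, 250])
sigma = 1.7  # common standard deviation assumed identical across studies

# Standard error of the mean difference with n/2 per group:
# sqrt(sigma^2/(n/2) + sigma^2/(n/2)) = 2*sigma/sqrt(n)
se = 2 * sigma / np.sqrt(n)

iv_weights = 1 / se**2   # inverse-variance weights
ratio = iv_weights / n   # constant across studies: 1/(4*sigma^2)

print(ratio)  # every entry equals 1/(4*sigma^2)
```

Because the ratio is the same constant for every study, weighting by $n$ and weighting by $1/\text{SE}^2$ give identical pooled estimates in this balanced, equal-$\sigma$ setting.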
You could of course impute the missing standard errors under the assumption that $\sigma$ is the same across the studies: studies without a reported standard error are then assumed to have the same underlying variability as the average of the studies for which it is known. That is easy to do.
Another thought is that using untransformed US dollars, or US dollars per unit, may or may not be problematic. Sometimes it is desirable to meta-analyze on, e.g., a log scale and to back-transform afterwards.
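As a rough sketch of that idea (the numbers are invented, and the delta-method conversion $\text{SE}_{\log} \approx \text{SE}/\hat{\theta}$ is a standard approximation, not something prescribed by the original answer):

```python
import numpy as np

# Hypothetical per-study cost estimates (US dollars) and their standard errors
est = np.array([120.0, 95.0, 150.0])
se = np.array([15.0, 10.0, 30.0])

# Move to the log scale; delta method: SE(log x) ~= SE(x) / x
log_est = np.log(est)
log_se = se / est

# Fixed-effect inverse-variance pooling on the log scale
w = 1 / log_se**2
pooled_log = np.sum(w * log_est) / np.sum(w)

# Back-transform the pooled estimate to dollars
pooled = np.exp(pooled_log)
print(round(pooled, 1))
```

Note that the back-transformed value is a (geometric-mean-like) estimate on the original scale; its confidence interval should be computed on the log scale and then exponentiated.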
Basically, if you do not have standard errors, there are a number of options:
1. It may be possible to back-calculate them from significance tests (or $p$-values) and the sample sizes.
2. It may be possible to impute them from similar studies which used the same measure under similar circumstances. In this case it would be wise to do a sensitivity analysis using a variety of imputed values, and to mark the imputed values in some way if you provide a main forest plot.
3. The authors of the primary studies may be contactable and may be willing to supply the missing values if you explain why you need them. If they do, be sure to acknowledge their help in your paper and to flag the results they supplied.
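For the first option, if only a two-sided $p$-value and the point estimate are reported, the standard error can be recovered by inverting the test statistic. A minimal sketch using a normal approximation (a $t$ quantile would be more exact for small samples; the example numbers are invented):

```python
from statistics import NormalDist

def se_from_p(estimate, p_two_sided):
    """Back-calculate a standard error from a two-sided p-value,
    assuming the reported p comes from a Wald/z test:
    p = 2 * (1 - Phi(|estimate| / SE))."""
    z = NormalDist().inv_cdf(1 - p_two_sided / 2)
    return abs(estimate) / z

# Example: a study reports a mean difference of 3.2 with p = 0.04
se = se_from_p(3.2, 0.04)
print(round(se, 3))
```

Recomputing the $p$-value from the recovered standard error should round-trip back to the reported value, which is a useful sanity check on the assumption about which test was used.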
Best Answer
This is an interesting question because (so far as I know) there is no widely used formula for computing the sampling variance in this situation. Some time ago, I ran some simulations to examine the performance of different formulas for estimating the sampling variance of Cohen's d in the case of a one-sample t-test.
I was aware of three different formulas:
1. The formula used in the Comprehensive Meta-Analysis software, with $n_i$ being the sample size of study $i$ and $d_i$ the observed Cohen's d.
2. The standard formula for the dependent-samples t-test (e.g., Borenstein, 2009), with the correlation between pre- and posttest ($r$) set to 0.5.
3. A formula used in a paper by Koenig et al. (2011), obtained through personal communication with B. Becker.
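The formulas themselves were embedded as images in the original answer and are not reproduced here. For reference, the dependent-samples variance formula from Borenstein (2009) that the second approach relies on is commonly written as

$$v_{d_i} = \left(\frac{1}{n_i} + \frac{d_i^2}{2 n_i}\right) 2(1 - r),$$

which with $r = 0.5$ reduces to $v_{d_i} = \frac{1}{n_i} + \frac{d_i^2}{2 n_i}$, the familiar large-sample expression.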
I ran a very small simulation study to examine the performance of these three formulas, with sample sizes ranging from 10 to 500 and population effect sizes ranging from 0 to 0.8. The differences between the formulas were most pronounced for a population effect size of 0.8.
Using the formula of the dependent samples t-test with r=0.5 yielded the least biased estimates. However, there may be other formulas with better properties. I am curious what other people think about this.
Code:
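The original simulation code is not reproduced here. A minimal Python sketch of such a simulation (my own reconstruction, not the author's code) that compares the empirical sampling variance of a one-sample $d$ against the large-sample formula $1/n + d^2/(2n)$:

```python
import numpy as np

def simulate(n, delta, reps=20000, seed=42):
    """Draw `reps` studies of size `n` from N(delta, 1), compute the
    one-sample Cohen's d = mean/sd for each, and return the empirical
    sampling variance alongside the large-sample formula 1/n + delta^2/(2n)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=delta, scale=1.0, size=(reps, n))
    d = x.mean(axis=1) / x.std(axis=1, ddof=1)
    empirical = d.var(ddof=1)
    formula = 1 / n + delta**2 / (2 * n)
    return empirical, formula

for n in (10, 50, 500):
    emp, form = simulate(n, delta=0.8)
    print(n, round(emp, 4), round(form, 4))
```

For small $n$ the large-sample formula noticeably understates the true sampling variance, which is consistent with the observation above that the choice of formula matters most for small samples and large effects.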
References:
Borenstein, M. (2009). Effect sizes for continuous data. In H. Cooper, L. V. Hedges & J. C. Valentine (Eds.), The Handbook of Research Synthesis and Meta-Analysis (pp. 221-236). New York: Russell Sage Foundation.
Koenig, A. M., Eagly, A. H., Mitchell, A. A., & Ristikari, T. (2011). Are leader stereotypes masculine? A meta-analysis of three research paradigms. Psychological Bulletin, 137(4), 616–642.