Solved – Effect size measure for pretest-posttest design

Tags: anova, cohens-d, effect-size

I recently conducted an intervention study with random assignment to treatment and control conditions. Participants completed a pretest before the intervention and a posttest after. I analyzed test performance using a 2 (time: pretest vs. posttest) × 2 (condition: treatment vs. control) ANOVA. After finding a time × condition interaction, I conducted post-hoc t-tests comparing pretest to posttest within each condition. Together with these t-test results, I reported an effect size within each condition, which I called Cohen's d (hopefully accurately) and calculated as $(\bar{X}_{\text{post}} - \bar{X}_{\text{pre}}) / SD_{\text{pre}}$.
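A minimal sketch of that calculation in Python, using hypothetical scores (the `pre`/`post` arrays below are made up for illustration):

```python
import numpy as np

# Hypothetical paired scores for one condition (one entry per participant).
pre = np.array([12.0, 15.0, 11.0, 14.0, 13.0, 16.0, 12.0, 15.0])
post = np.array([15.0, 18.0, 13.0, 17.0, 15.0, 19.0, 14.0, 18.0])

# Cohen's d as described: mean pre-post change standardized by the pretest SD.
d = (post.mean() - pre.mean()) / pre.std(ddof=1)  # ddof=1 -> sample SD
print(f"d = {d:.2f}")
```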

After submitting the results to a journal, I received a reviewer comment saying that Cohen's d does not account for the correlation between pretest and posttest, which can result in an inflated estimate. The reviewer said there is a more correct formula which does not have this problem, but didn't say what the formula was.

My questions: (1) Is it true that Cohen's d, as I calculated it, is not correct in this situation? (2) If so, what is a more correct effect size computation I could use, ideally with a citation?

Best Answer

The effect size is correct, but the standard error and confidence interval for it must take the pre-post correlation into account. The standard error of a pre-post Cohen's $d$ is: $$ se = \sqrt{ \frac{2\left(1-r\right)}{n}+\frac{d^2}{2n}} , $$ where $r$ is the pre-post correlation and $n$ is the number of participants in the group (see Morris & DeShon, 2002, *Psychological Methods*, on effect sizes for repeated-measures designs).

That said, I don't think these within-condition effect sizes are the ones you really want, and they can be misleading: a statistically significant pre-post effect in the treatment group alongside a nonsignificant effect in the control group does not mean that the difference between conditions is significant. For that, you must look at the interaction. A more meaningful effect size would be for the difference-in-differences, which captures the effect of interest in a single number.
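A minimal sketch of both computations, assuming you have summary statistics for each group (all numbers below are hypothetical, and the 95% CI uses a normal approximation):

```python
import numpy as np

def prepost_d_se(d, r, n):
    """Standard error of a pre-post Cohen's d per the formula above:
    sqrt(2 * (1 - r) / n + d**2 / (2 * n))."""
    return np.sqrt(2.0 * (1.0 - r) / n + d**2 / (2.0 * n))

# Hypothetical summary statistics for the two conditions.
d_treat, d_control = 0.60, 0.15  # pre-post d in each group
r, n = 0.70, 40                  # pre-post correlation, participants per group

se = prepost_d_se(d_treat, r, n)
lo, hi = d_treat - 1.96 * se, d_treat + 1.96 * se
print(f"treatment d = {d_treat:.2f}, SE = {se:.3f}, 95% CI [{lo:.2f}, {hi:.2f}]")

# Difference-in-differences effect size: when both groups' d values use a
# comparable standardizer, one simple version is their difference.
d_diff = d_treat - d_control
print(f"difference-in-differences d = {d_diff:.2f}")
```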
