In a meta-analysis, how should one handle non-significant studies containing no raw data?

effect-size, group-differences, hypothesis-testing, meta-analysis, statistical-power

Let's say that I'm conducting a meta-analysis, looking at the performance of group A and group B with respect to a certain construct. Some of the studies I come across will report that no statistical difference could be found between the two groups, but will present no exact test statistics and/or raw data. In a meta-analysis, how should I handle such studies?

Basically, I see three different alternatives here:

  1. Include them all and assign to each one of them an effect size of 0.
  2. Throw them all out.
  3. Do some kind of power analysis for each one of them, or set a threshold at a certain number of participants. Include all studies that should have been able to reach statistical significance, assigning each an effect size of 0, and throw the rest out.

I can see merits in all three options. Option one is fairly conservative: you only risk making a type II error. Option two raises the risk of a type I error, but it also avoids having your results diluted by a bunch of underpowered studies. Option three seems like a middle road between the first two, but a lot of assumptions and/or pure guesses have to be made (what effect size should you base your power analyses on? what number of participants should you demand from each study for it to pass?), probably making the final result less reliable and more subjective.
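For concreteness, here is a minimal sketch in Python of how the three rules could be implemented. The study records, the assumed detectable effect size of d = 0.5, and the 80% power threshold are all hypothetical placeholders; the one substantive ingredient is the standard result that the sampling variance of a standardized mean difference imputed as d = 0 reduces to 1/n1 + 1/n2.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical study records: d and var are None when a study reports
# only "no significant difference" without usable statistics.
studies = [
    {"d": 0.41, "var": 0.048, "n1": 40, "n2": 42},
    {"d": 0.12, "var": 0.031, "n1": 65, "n2": 63},
    {"d": 0.55, "var": 0.090, "n1": 22, "n2": 23},
    {"d": None, "var": None, "n1": 18, "n2": 20},    # "n.s.", no statistics
    {"d": None, "var": None, "n1": 160, "n2": 150},  # "n.s.", no statistics
]

def var_if_zero(s):
    # Sampling variance of a standardized mean difference,
    # (n1 + n2)/(n1 * n2) + d^2 / (2 * (n1 + n2)), reduces to
    # 1/n1 + 1/n2 when d = 0.
    return 1 / s["n1"] + 1 / s["n2"]

def option1(studies):
    # Impute d = 0 for every study lacking usable statistics.
    return [s if s["d"] is not None
            else {**s, "d": 0.0, "var": var_if_zero(s)} for s in studies]

def option2(studies):
    # Drop every study lacking usable statistics.
    return [s for s in studies if s["d"] is not None]

def option3(studies, assumed_d=0.5, alpha=0.05, min_power=0.80):
    # Impute d = 0 only for studies powerful enough to have detected
    # `assumed_d`; drop the rest. Both thresholds are pure assumptions.
    kept = []
    for s in studies:
        if s["d"] is not None:
            kept.append(s)
            continue
        achieved = TTestIndPower().power(effect_size=assumed_d,
                                         nobs1=s["n1"], alpha=alpha,
                                         ratio=s["n2"] / s["n1"])
        if achieved >= min_power:
            kept.append({**s, "d": 0.0, "var": var_if_zero(s)})
    return kept
```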

Best Answer

As you point out, there are merits in all three approaches. There clearly isn't one option that is 'best'. Why not do all three and present the results as a sensitivity analysis?
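Continuing the sketch above, the comparison itself is cheap: pool each variant of the data set with the same model and put the three estimates side by side. The hand-rolled DerSimonian-Laird random-effects estimator below is only for illustration; in practice you would use a dedicated meta-analysis package.

```python
import numpy as np

def random_effects(d, v):
    # DerSimonian-Laird random-effects pooling of effect sizes `d`
    # with sampling variances `v`.
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1 / v
    fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - fixed) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c) if c > 0 else 0.0
    w_star = 1 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * d) / np.sum(w_star)
    return pooled, np.sqrt(1 / np.sum(w_star))

# Pool each variant of the data set and compare the three estimates.
for label, data in [("impute 0", option1(studies)),
                    ("drop", option2(studies)),
                    ("power screen", option3(studies))]:
    est, se = random_effects([s["d"] for s in data],
                             [s["var"] for s in data])
    print(f"{label:>12}: d = {est:.3f} (SE = {se:.3f}, k = {len(data)})")
```

If the three estimates agree, the handling of the unreported studies barely matters; if they diverge, that divergence is itself a finding worth reporting.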

A meta-analysis conducted with ample and appropriate sensitivity analyses shows that the author is well aware of the limits of the data at hand, has made explicit the influence of the choices made along the way, and is able to critically evaluate their consequences. To me, that is the mark of a well-conducted meta-analysis.

Anybody who has ever conducted a meta-analysis knows very well that there are many choices and decisions to be made along the way, and that those choices and decisions can have a considerable influence on the results obtained. The advantage of a meta-analysis (or, more generally, a systematic review) is that the methods, and hence the choices and decisions, are made explicit, so their influence can be evaluated in a systematic way. That is exactly how a meta-analysis should be conducted.