Solved – Why are small sample sizes avoided in meta-analyses?

meta-analysis, small-sample

In most meta-analyses that I've read, the authors choose to exclude studies with very small sample sizes (e.g., n = 10). Why is that?

Speculation

I could speculate that one reason is that the effect size from a study depends on the group means as well as the pooled standard deviation, like so

$$\frac{\bar{X}_1-\bar{X}_2}{SD_{pooled}}$$

and in a very small sample the results cannot be assumed to be approximately normal and the pooled standard deviation is estimated very imprecisely, so the study's effect size, and possibly the overall pooled estimate, can be badly distorted.
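A minimal simulation sketch of that point (the true effect of 0.5, group SD of 1, and the two sample sizes are arbitrary illustrative choices, not taken from any particular meta-analysis):

```python
import numpy as np

# How noisy is (X1_bar - X2_bar) / SD_pooled when each group has
# n = 10 versus n = 200 observations?
# Assumed truth: mean difference 0.5, SD 1 in both groups (true d = 0.5).
rng = np.random.default_rng(0)

def simulated_d(n_per_group, reps=10_000):
    """Simulate standardized mean differences for a given group size."""
    ds = np.empty(reps)
    for i in range(reps):
        x1 = rng.normal(0.5, 1.0, n_per_group)
        x2 = rng.normal(0.0, 1.0, n_per_group)
        # pooled SD with equal group sizes
        sd_pooled = np.sqrt((x1.var(ddof=1) + x2.var(ddof=1)) / 2)
        ds[i] = (x1.mean() - x2.mean()) / sd_pooled
    return ds

for n in (10, 200):
    d = simulated_d(n)
    print(f"n = {n:3d} per group: mean d = {d.mean():.2f}, spread (SD) = {d.std():.2f}")
# The n = 10 estimates scatter several times more widely around 0.5
# than the n = 200 estimates.
```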

Further, some authors argue that studies with large sample sizes tend, among other things, to be of higher quality than smaller studies, and that publication bias may be at work, with small studies needing positive results to get published. This is, for example, the argument put forward by Nüesch et al. (2010), although small studies are there defined as $n < 100$; I guess one could extrapolate from such results to decide what to do with even smaller studies.

References

  • Nüesch et al. (2010). Small study effects in meta-analyses of osteoarthritis trials: meta-epidemiological study. BMJ, 16 July 2010. doi:10.1136/bmj.c3515.

Best Answer

If the meta-analysis uses a fixed-effect model then small studies, other things being equal, get very little weight, so the substantive conclusions are unlikely to be affected by including or excluding them.
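A minimal sketch of why, assuming the usual inverse-variance weights $w_i = 1/v_i$; the three within-study variances below are made-up illustrative values, with the first standing in for a very small study:

```python
import numpy as np

# Fixed-effect (inverse-variance) weights: w_i = 1 / v_i.
# Hypothetical within-study variances of the effect estimates; a tiny
# study (n ~ 10) has a much larger variance than the larger studies.
variances = np.array([0.40, 0.02, 0.01])   # small, large, very large study
weights = 1.0 / variances
print(weights / weights.sum())
# -> roughly [0.016, 0.328, 0.656]: the small study contributes about 2%
#    of the pooled estimate, so dropping it barely changes the result.
```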

If the meta-analysis uses random effects and there is substantial heterogeneity then the weights tend to become more equal. In that case even a small study may have almost the same weight as a large one. Some people feel this is undesirable because they believe that (a) small studies are of poorer quality, and (b) the small studies that can be found are unlikely to be a random sample of all the small studies that have ever been conducted.
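Continuing the same made-up example with random-effects weights $w_i = 1/(v_i + \tau^2)$ and an assumed between-study variance of $\tau^2 = 0.5$ (i.e., substantial heterogeneity):

```python
import numpy as np

# Random-effects weights: w_i = 1 / (v_i + tau^2). With substantial
# heterogeneity, tau^2 dominates every v_i, so the weights become
# much more equal across studies.
variances = np.array([0.40, 0.02, 0.01])   # same hypothetical studies as above
tau2 = 0.5                                  # assumed between-study variance
weights = 1.0 / (variances + tau2)
print(weights / weights.sum())
# -> roughly [0.22, 0.39, 0.39]: the small study's share of the pooled
#    estimate jumps from ~2% under the fixed-effect model to ~22% here.
```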
