R Correlation – How to Conduct Three-Level Meta-Analyses to Compare Correlations and Correct for Publication Bias

correlation, meta-analysis, metafor, publication-bias, r

I'm conducting a three-level meta-analysis on correlations using the {metafor} package in R.

I'm quite a newbie when it comes to three-level meta-analyses, so I have two questions.

1) From each study, I collected effect sizes that indicate:

  • correlations between variable A and variable B (A-B)
  • correlations between variable A and variable C (A-C)

Simplified example of the database with the effect sizes:

> data

es.id   study.id   z      type
1       Study1     0.10   A-B
2       Study1     0.20   A-B
3       Study1     0.30   A-C
4       Study2     0.15   A-B
5       Study2     0.18   A-C
6       Study3     0.13   A-B
7       Study3     0.10   A-C
8       Study4     0.10   A-B
9       Study4     0.12   A-C
10      Study4     0.09   A-C
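Side note: the z values above have no sampling variances attached. A minimal sketch of how I'd compute them, assuming a hypothetical column n with the sample sizes (not shown above), since the sampling variance of Fisher's z is 1/(n-3):

data$vi <- 1 / (data$n - 3)   # sampling variance of each Fisher's z value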

I planned to run two separate meta-analyses on these effect sizes (one meta-analysis for the A-B correlations and another for the A-C correlations).
However, is it possible to compare the two pooled correlation coefficients, that is, to say whether the A-B correlations are stronger than the A-C correlations in the same studies?

Intuitively, I'd run a three-level mixed-effects model on the entire pool of effect sizes (A-B and A-C correlations together), using the "type" of correlation (A-B vs. A-C) as a within-study moderator.

Something like this:

library(metafor)
# note: the dataset must be passed by name ('data = ...'); a positional third argument is taken as W
res <- rma.mv(z, V, mods = ~ type, random = ~ 1 | study.id/es.id, data = data, method = "REML")
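If I fit this model (assuming V holds the sampling variances of the z values), I believe the coefficient for type would estimate the A-C minus A-B difference, something like:

summary(res)          # the 'typeA-C' coefficient is the estimated difference from A-B
anova(res, btt = 2)   # Wald-type test of that coefficient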

However, I'm not sure this is the best way to proceed and I could not find any study that approached the analysis as I planned to…

2) For my meta-analysis, I also planned to assess publication bias.
By looking at the best strategies to adopt with three-level meta-analyses, I decided to use Egger's test with the standard error as a moderator:

# 'se' would be a column holding the standard errors of the z values
egger <- rma.mv(z, V, mods = ~ se, random = ~ 1 | study.id/es.id, data = data)
egger

However, I have failed to find a reliable source on the correction strategies that can be applied to three-level meta-analyses.
I'd like to correct for potential publication bias in my study, but most online resources describe strategies or procedures that are not easily applicable to rma.mv objects or complex analyses.
I would really appreciate it if you could point me to some references on how to deal with publication bias in R with {metafor} or other packages.

Thank you everybody for your help!

Best Answer

  1. To compare the A-B and A-C correlations, you need to account for the dependency between them, not just by including random effects, but also in the sampling errors. The rcalc() function (see its help page in metafor) helps with computing those covariances and constructing the proper V matrix. Note that you will need to include the B-C correlations in your dataset (since the covariance between $r_{AB}$ and $r_{AC}$ depends on $r_{BC}$). Also, instead of random = ~ 1 | study.id/es.id, you might want to use random = ~ type | study.id, struct="UN". A minimal sketch follows below.
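    Here is what this could look like, following the rcalc() documentation and assuming the data are restructured into a long format with (hypothetical) columns var1, var2, ri (the raw correlations, including the B-C ones), and ni (the sample sizes):

    library(metafor)
    tmp <- rcalc(ri ~ var1 + var2 | study.id, ni = ni, data = dat)  # rtoz = TRUE would switch to Fisher's z
    V   <- tmp$V     # variance-covariance matrix of the sampling errors
    dat <- tmp$dat   # dataset with yi and a var1.var2 label for each correlation
    res <- rma.mv(yi, V, mods = ~ var1.var2,
                  random = ~ var1.var2 | study.id, struct = "UN", data = dat)
    res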

  2. Including the standard error as a predictor in your model (in addition to type) would be the straightforward generalization of the 'regression test for funnel plot asymmetry' (a sketch follows the references below). See:

    Fernández-Castilla, B., Declercq, L., Jamshidi, L., Beretvas, S. N., Onghena, P. & Van den Noortgate, W. (2021). Detecting selection bias in meta-analyses with multiple outcomes: A simulation study. The Journal of Experimental Education, 89(1), 125-144. https://doi.org/10.1080/00220973.2019.1582470

    Rodgers, M. A. & Pustejovsky, J. E. (2021). Evaluating meta-analytic methods to detect selective reporting in the presence of dependent effect sizes. Psychological Methods, 26(2), 141-160. https://doi.org/10.1037/met0000300
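    Continuing the hypothetical setup from the sketch under point 1, that generalization could look like:

    dat$sei <- sqrt(diag(V))  # standard errors of the individual estimates
    egger <- rma.mv(yi, V, mods = ~ var1.var2 + sei,
                    random = ~ var1.var2 | study.id, struct = "UN", data = dat)
    egger  # the coefficient for 'sei' is the regression test for asymmetry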
