I'm currently interested in learning NMA using WinBUGS. I have studied the general concepts behind multiple treatment comparisons and I have a good grounding in classic pairwise meta-analysis. I can also run a network meta-analysis under the frequentist approach with the netmeta R package. However, I see that most publications are based on WinBUGS models within a Bayesian approach. I tried teaching myself WinBUGS programming, but I can't find a good place to start, and I can't afford workshops because I'm still a student. Can someone recommend a good pathway for learning NMA programming with WinBUGS?
Solved – Learning WinBUGS programming for network meta-analysis
bayesian-network, network-meta-analysis, winbugs
Related Solutions
Given that there is a single effect size estimate from each study (see comments above), the analysis can be carried out with regular meta-regression methods. You can carry out such an analysis with the metafor package. The "trick" is to code variables that indicate what treatments have been compared within a particular study:
library(metafor)

# Indicator coding: +1 if the treatment is the first arm of the
# comparison, -1 if it is the second arm, 0 otherwise
my_data$A1 <- ifelse(my_data$treat1 == "A1",  1, 0)
my_data$A2 <- ifelse(my_data$treat1 == "A2",  1, 0)
my_data$B1 <- ifelse(my_data$treat2 == "B1", -1, 0)
my_data$B2 <- ifelse(my_data$treat2 == "B2", -1, 0)

# B2 is omitted from the model, so it acts as the reference treatment;
# the "- 1" removes the intercept
res <- rma(TE, sei = seTE, mods = ~ A1 + A2 + B1 - 1, data = my_data)
res
yields:
Mixed-Effects Model (k = 38; tau^2 estimator: REML)

tau^2 (estimated amount of residual heterogeneity):     0.2898 (SE = 0.1578)
tau (square root of estimated tau^2 value):             0.5384
I^2 (residual heterogeneity / unaccounted variability): 59.02%
H^2 (unaccounted variability / sampling variability):   2.44

Test for Residual Heterogeneity:
QE(df = 35) = 93.5215, p-val < .0001

Test of Moderators (coefficient(s) 1,2,3):
QM(df = 3) = 435.5223, p-val < .0001

Model Results:

    estimate      se     zval    pval    ci.lb    ci.ub
A1    2.2446  0.2837   7.9123  <.0001   1.6886   2.8006  ***
A2    0.9060  0.3387   2.6751  0.0075   0.2422   1.5699  **
B1   -2.2983  0.3467  -6.6294  <.0001  -2.9778  -1.6188  ***

---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Since variable B2 has been left out, it becomes the "reference" treatment. So, the coefficient for A1 is the estimated average effect when comparing treatment A1 against B2. The coefficient for A2 is the estimated average effect when comparing treatment A2 against B2. And the coefficient for B1 is the estimated average effect when comparing treatment B1 against B2.
The network that is analyzed here looks like this:
A1 A2
|\ /|
| \ / |
| X |
| / \ |
|/ \|
B1 B2
So, the comparison between B1 and B2 is based purely on indirect evidence.
There are 3 more comparisons that can be obtained here besides the ones above (i.e., A1 vs A2, A1 vs B1, and A2 vs B1). You can obtain those by changing the "reference" treatment.
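Alternatively, since each coefficient is a contrast against B2 (b1 = A1-B2, b2 = A2-B2, b3 = B1-B2), the three remaining comparisons can be obtained directly from the fitted model as linear combinations of the coefficients, without re-coding and refitting. A sketch, assuming `res` is the model fitted above:

```r
# Differences of the B2-referenced coefficients give the other contrasts:
predict(res, newmods = c(1, -1,  0))  # A1 vs A2: b1 - b2
predict(res, newmods = c(1,  0, -1))  # A1 vs B1: b1 - b3
predict(res, newmods = c(0,  1, -1))  # A2 vs B1: b2 - b3
```

This is the same `predict()` mechanism used below for testing the interaction contrast.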
An assumption made here is that the amount of heterogeneity is the same regardless of the comparison. This may or may not be true.
An article that describes this type of analysis is:
Salanti et al. (2008). Evaluation of networks of randomized trials. Statistical Methods in Medical Research, 17, 279-301.
Edit: To test whether the effect of the first factor (A & B) depends on the second factor (1 & 2), that is, whether (A1 vs B1) = (A2 vs B2) or not, first note that:
(A1-B2) - (A2-B2) - (B1-B2) = (A1-A2) - (B1-B2)
= (A1-B1) - (A2-B2).
So, you just have to test whether b1 - b2 - b3 = 0. You can do this with:
predict(res, newmods=c(1,-1,-1))
or install/load the multcomp package and use:
summary(glht(res, linfct = rbind(c(1, -1, -1))), test = adjusted("none"))
which yields:
Simultaneous Tests for General Linear Hypotheses
Linear Hypotheses:
Estimate Std. Error z value Pr(>|z|)
1 == 0 3.6369 0.5779 6.293 3.12e-10 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Adjusted p values reported -- none method)
I think the modeling approaches and estimation techniques should be viewed separately. From a modeling point of view, the Lumley model works for two-arm trials only, so it is not preferable. To my understanding, the node-splitting approach, which you listed as Dias et al., is very intuitive. Also, I think you should add the design-by-treatment interaction approach (http://www.ncbi.nlm.nih.gov/pubmed/24777711). From an estimation point of view, I don't know much about frequentist techniques, but one can use MCMC for almost all NMA models. Lastly, there is a different technique (which unfortunately is not widely known) called INLA. You can use INLA from within R to fit NMA models; it is faster and there is no need to check convergence diagnostics. Here is the paper: http://www.ncbi.nlm.nih.gov/pubmed/26360927. So, in the end, I would prefer node-splitting and the design-by-treatment interaction approach using INLA.
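Incidentally, since the question mentions netmeta: a frequentist version of the design-by-treatment interaction idea is already available there via `decomp.design()`, which decomposes Cochran's Q into within-design and between-design parts. A sketch, assuming `nm` is a netmeta object you have already fitted:

```r
library(netmeta)

# Hypothetical fit; column names are assumptions for illustration
nm <- netmeta(TE, seTE, treat1, treat2, studlab, data = my_data)

# Q decomposition: between-design heterogeneity indicates
# design-by-treatment inconsistency
decomp.design(nm)
```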
Best Answer
I was in a similar situation about a year ago, and I had to learn NMA models and BUGS at the same time. However, I think a better way is to start with pairwise meta-analysis (conventional meta-analysis) with WinBUGS, and then move on to NMA models, which are more complicated. Actually, a pairwise meta-analysis is just a network meta-analysis with only two treatments. The BUGS tutorials include an example with a pairwise meta-analysis which you can read. Or you can read any other WinBUGS tutorial and apply it to the example I just mentioned. It may take some time, because you need some knowledge of Bayesian inference (prior and posterior distributions, etc.) and Markov chain Monte Carlo (checking convergence diagnostics, etc.).
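As a concrete starting point, here is a sketch of a random-effects pairwise meta-analysis model in BUGS (binomial likelihood, logit link), along the lines of the standard examples. The data names (`r` = events, `n` = sample sizes, `ns` = number of studies) and the prior choices are assumptions for illustration, not a definitive specification:

```
model {
  for (i in 1:ns) {                  # loop over studies
    delta[i] ~ dnorm(d, prec)        # study-specific log odds ratio
    mu[i] ~ dnorm(0, 0.0001)         # baseline log odds, control arm
    r[i,1] ~ dbin(p[i,1], n[i,1])    # events in control arm
    r[i,2] ~ dbin(p[i,2], n[i,2])    # events in treatment arm
    logit(p[i,1]) <- mu[i]
    logit(p[i,2]) <- mu[i] + delta[i]
  }
  d ~ dnorm(0, 0.0001)               # pooled log odds ratio (vague prior)
  sd ~ dunif(0, 2)                   # between-study SD (vague prior)
  prec <- 1 / (sd * sd)              # BUGS parameterizes by precision
}
```

Once this model makes sense, the NMA models are mostly a matter of indexing `delta` by treatment comparisons across a network instead of a single pair.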
Once you have learned how WinBUGS works, then usually the only thing you need is the right BUGS code. The NICE reports are good resources for NMA models using WinBUGS. More specifically, after you have learned how to fit a pairwise meta-analysis using WinBUGS, you can read this technical report. In it, the BUGS code for different types of NMA models is shown and explained in detail.