Solved – JAGS: posterior predictive distribution

bugs, gibbs, jags, markov-chain-montecarlo

I am fitting a simple linear regression model with R & JAGS:

model {
    for (i in 1:length(ri)){
        ri[i] ~ dnorm(mu[i],tau)
        mu[i] <- alpha + b.vr*vr[i] + b.ir*ir[i]
    }

    #posterior predictive distribution
    for (j in 1:length(pvr)){
        pmu[j] <- alpha + b.vr*pvr[j] + b.ir*pir[j]
        pri[j] ~ dnorm(pmu[j],tau)
    }

    #priors for regression
    alpha ~ dnorm(0,0.01)
    b.vr ~ dunif(-10,0)
    b.ir ~ dunif(0,10)

    #prior for precision
    tau ~ dgamma(0.001,0.001)
    sigma <- 1/sqrt(tau)
}

I am trying to calculate the posterior predictive distribution for new data (pvr and pir in the model), but the reported results (pri) do not make sense: the means of pri are smaller than expected.
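For reference, I call the model from R along these lines (a minimal rjags sketch; the file name, data objects, and iteration counts are placeholders rather than my exact script):

library(rjags)

## pvr and pir are the new covariate values; passing them as data
## lets JAGS sample pri from the posterior predictive distribution
jags.data <- list(ri = ri, vr = vr, ir = ir, pvr = pvr, pir = pir)

m <- jags.model("model.txt", data = jags.data, n.chains = 3)
update(m, 5000)                       # burn-in
post <- coda.samples(m, c("alpha", "b.vr", "b.ir", "sigma", "pri"),
                     n.iter = 10000)
summary(post)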

Could someone explain to me whether something is wrong with the model specification?

Best Answer

Disclaimer: I still don't fully understand your model, but without at least a reproducible example this is the best I can offer. It is not clear exactly what you are doing here. For example, how are pvr and pir calculated? Would it make sense to calculate them inside the same model?

Answer

I am assuming that your data includes observations ri[] (which inform mu[]) but not pri[], and that you want to estimate pmu[j] for each of the j new values of pvr and pir.

Append the pir and pvr values to the ir and vr columns, pad ri with NA for the new points (JAGS samples missing values, and those samples are the predictive draws), and get rid of the second for loop; the values of mu[] estimated at the pvr and pir positions are then the posterior predictive estimates of mu. Replace the two for loops with this:

# ri is supplied with NA in the last length(pvr) positions, so
# length(ri) covers both the observed and the new points; JAGS
# samples the missing ri values, which are the predictive draws
for (i in 1:length(ri)) {
    ri[i] ~ dnorm(mu[i], tau)
    mu[i] <- alpha + b.vr*vr[i] + b.ir*ir[i]
}
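On the R side this amounts to padding the response with NA before passing the data to JAGS, something like the following (a sketch assuming rjags; object names and iteration counts are placeholders):

library(rjags)

## pad ri with NA so JAGS samples the missing values; those
## sampled values are the posterior predictive draws
jags.data <- list(ri = c(ri, rep(NA, length(pvr))),
                  vr = c(vr, pvr),   # append the new regressors
                  ir = c(ir, pir))

m <- jags.model("model.txt", data = jags.data, n.chains = 3)
update(m, 5000)
post <- coda.samples(m, c("alpha", "b.vr", "b.ir", "sigma", "ri"),
                     n.iter = 10000) # the NA entries of ri are the predictions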

I have done something similar, but without predicted regressors, following the example given by Gelman et al. in 'Bayesian Data Analysis' (pp. 598-599, starting under 'posterior predictive simulations').
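For completeness, a sketch of that BDA-style approach, where the predictive draws are generated in R from the saved posterior samples (this assumes a coda object post with monitors for alpha, b.vr, b.ir and sigma, as in the sketches above, plus the new covariate vectors pvr and pir):

draws <- as.matrix(post)              # one row per posterior draw
n.draws <- nrow(draws)
pri.sim <- matrix(NA, n.draws, length(pvr))
for (s in 1:n.draws) {
    # linear predictor at the new covariate values for draw s
    pmu.s <- draws[s, "alpha"] + draws[s, "b.vr"] * pvr + draws[s, "b.ir"] * pir
    # simulate new observations from the likelihood
    pri.sim[s, ] <- rnorm(length(pvr), pmu.s, draws[s, "sigma"])
}
colMeans(pri.sim)                     # posterior predictive means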