I am fitting a simple linear regression model with R & JAGS:
model {
  for (i in 1:length(ri)){
    ri[i] ~ dnorm(mu[i], tau)
    mu[i] <- alpha + b.vr*vr[i] + b.ir*ir[i]
  }
  # posterior predictive distribution for new data
  for (j in 1:length(pvr)){
    pmu[j] <- alpha + b.vr*pvr[j] + b.ir*pir[j]
    pri[j] ~ dnorm(pmu[j], tau)
  }
  # priors for the regression coefficients
  alpha ~ dnorm(0, 0.01)
  b.vr ~ dunif(-10, 0)
  b.ir ~ dunif(0, 10)
  # prior for the precision; sigma is the implied standard deviation
  tau ~ dgamma(0.001, 0.001)
  sigma <- 1/sqrt(tau)
}
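For context, a minimal sketch of how a model like this can be run from R with rjags; the file name model.txt, the chain settings, and the burn-in/iteration counts below are illustrative assumptions, with ri, vr, ir, pvr, pir standing in for the actual data vectors:

library(rjags)

# bundle the observed data and the new regressor values for prediction
dat <- list(ri = ri, vr = vr, ir = ir, pvr = pvr, pir = pir)

jm <- jags.model("model.txt", data = dat, n.chains = 3)
update(jm, 1000)  # burn-in
post <- coda.samples(jm,
                     variable.names = c("alpha", "b.vr", "b.ir", "sigma", "pri"),
                     n.iter = 10000)
summary(post)  # posterior summaries, including the means of pri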
I am trying to calculate the posterior predictive distribution for new data (pvr and pir in the model), but somehow the reported results (pri) do not make sense: the means of pri are smaller than expected. Could someone explain to me whether something is wrong with the model specification?
Best Answer
Disclaimer: I still don't fully understand your model; without at least a reproducible example, this is the best I can offer. It is not clear exactly what you are doing here. For example, how are pvr and pir calculated? Would it make sense to calculate them inside the same model?

Answer
I am assuming that your data includes observations for mu[] but not pmu[], and that you want to estimate pmu[j] given j values of pvr and pir.

Append pir and pvr to the ir and vr columns, get rid of the second for loop, and treat the values of mu[] estimated at pir and pvr as the posterior predictive estimates of mu. In other words, replace the two for loops with a single loop over the combined data, as sketched below. I have done something similar, but without predicted regressors, following the example given by Gelman et al. in 'Bayesian Data Analysis' (pp. 598-599, starting under posterior predictive simulations).
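One way that single combined loop could look, assuming ri has been padded with NA for the prediction rows and pvr/pir appended to vr and ir before the data are passed to JAGS (JAGS treats the missing ri[i] as unobserved nodes and samples them):

for (i in 1:length(ri)){
  # for the appended rows ri[i] is supplied as NA, so JAGS draws it
  # from dnorm(mu[i], tau) -- those draws are the posterior predictive values
  ri[i] ~ dnorm(mu[i], tau)
  mu[i] <- alpha + b.vr*vr[i] + b.ir*ir[i]
}

In R, the padding could be done with something like ri <- c(ri, rep(NA, length(pvr))); vr <- c(vr, pvr); ir <- c(ir, pir), so that monitoring the tail elements of ri and mu gives you what pri and pmu were meant to capture.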