Solved – Truncating a posterior predictive distribution in JAGS

Tags: bayesian, jags, posterior

I have run into an error associated with truncating a distribution in JAGS.

In my minimum reproducible example, I have data for 9 observations and would like to find a posterior predictive distribution for the 10th observation. To do this, I include the 10th observation as an NA and estimate its posterior predictive distribution as the variable pi10.

jagsdata <- data.frame(Y = c(47, 126, 68, 43, 67, 80, 61, 9, 26, NA))

model.string <- "
model{
 for (k in 1:10){
    Y[k]  ~ dlnorm(Z[k], tau.sp[k])
    tau.sp[k] ~ dgamma(0.01,0.01)
    Z[k] <- beta.o + beta.sp[k]
 }
  for (g in 1:10) {
    beta.sp[g] ~ dnorm(0, 0.0001)
  }
  beta.o    ~ dgamma (2, 0.04)
  pi10   <- Y[10]
}
"
writeLines(model.string, con = 'jagstest.bug')

library(rjags)
j.model  <- jags.model(file = "jagstest.bug", 
                       data = jagsdata, 
                       n.adapt = 500, 
                       n.chains = 4)
mcmc.object <- coda.samples(model = j.model,
                            variable.names = c('pi10'),
                            n.iter = 5000)

This works, but I would like to truncate the distribution of Y, for example using T(1,200). However, replacing line 4 above with

    Y[k]  ~ dlnorm(Z[k], tau.sp[k])T(1,200)

gives the error:

Unobserved node inconsistent with unobserved parents at initialization

However, giving Y a normal distribution does not produce an error:

    Y[k]  ~ dnorm(Z[k], tau.sp[k])T(1,200)

I have read through section 7 of the JAGS manual and some examples online, but it is not clear to me how to implement this, or why I am getting this error.

Suggestions appreciated.

Best Answer

You can avoid this problem altogether by sampling from the untruncated distribution of Y[k] and then, in R, discarding all posterior draws for which Y[10] falls outside the truncation bounds. This is a perfectly valid operation. However, if few draws land in the feasible region, the retained sample will be small and your posterior summaries will carry a correspondingly large simulation (Monte Carlo) error.
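As a minimal sketch of this post-processing step, using the `mcmc.object` produced by `coda.samples()` in the question (the bounds 1 and 200 come from the T(1,200) the question asks about):

```r
# Pool all chains into a single vector of posterior draws for pi10
pi10.samples <- as.matrix(mcmc.object)[, "pi10"]

# Keep only draws inside the truncation bounds (1, 200)
pi10.trunc <- pi10.samples[pi10.samples > 1 & pi10.samples < 200]

# Fraction of draws retained -- if this is close to 0, the
# truncated posterior will have large simulation error
length(pi10.trunc) / length(pi10.samples)
```

You can then summarise or plot `pi10.trunc` exactly as you would the untruncated draws.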

As a side note, you might want to avoid the Gamma(0.01, 0.01) prior on the precision; see, for example, this presentation by Andrew Gelman and this paper, also by Gelman, for reasons why and for alternative suggestions.
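One common alternative Gelman suggests is a half-Cauchy prior on the standard deviation rather than a Gamma prior on the precision. As a sketch only: this replaces the per-observation `tau.sp[k]` priors with a single shared standard deviation, and the scale of 2.5 is an arbitrary choice you should adapt to your data:

```
# In place of: tau.sp[k] ~ dgamma(0.01, 0.01)
sigma  ~ dt(0, pow(2.5, -2), 1) T(0,)  # half-Cauchy(0, 2.5) on the sd
tau.sp <- pow(sigma, -2)               # precision passed to dlnorm()
```

The `dt(...) T(0,)` construction works here because truncating a fully specified t distribution at zero is well defined, unlike truncating a node whose parameters are themselves unobserved.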