You can create the credible interval by
- keeping only those iterations that satisfy your criterion, and
- calculating quantiles from the retained samples.

To do this, you will need to extract the posterior samples from JAGS.
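The filter-then-quantile steps above can be sketched in plain Python. The Beta(8, 4) draws below are a hypothetical stand-in for the MCMC samples you would actually extract from JAGS (e.g. via `coda.samples()` in R); the criterion $\theta > 0.5$ is likewise just an illustration:

```python
import random

random.seed(1)

# Hypothetical stand-in for posterior draws extracted from JAGS;
# here we fake them with Beta(8, 4) samples.
posterior_draws = [random.betavariate(8, 4) for _ in range(20000)]

# Step 1: keep only the iterations that satisfy the criterion
# (here: theta above chance).
kept = sorted(theta for theta in posterior_draws if theta > 0.5)

# Step 2: equal-tailed 95% credible interval from the retained samples.
lower = kept[int(0.025 * len(kept))]
upper = kept[int(0.975 * len(kept))]
print(f"95% credible interval: ({lower:.3f}, {upper:.3f})")
```

Note that discarding iterations this way is equivalent to conditioning the posterior on the criterion, which is why it matches the truncated-prior approach below.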
As @Glen_b mentioned, you could also encode this in the prior. In JAGS, you can write

```
theta ~ dunif(0.5, 1)
```

or, on the probit scale,

```
probit_theta ~ dnorm(0, 1) T(0,)
theta <- pnorm(probit_theta, 0, 1)
```

(Note that JAGS uses `T(,)` for truncation; `I(,)` is the WinBUGS syntax.)
Now, the intervals calculated from JAGS will have the proper truncation.
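To see why the probit parameterization respects the truncation, here is a small self-contained sketch (not JAGS output): it mimics `probit_theta ~ dnorm(0,1) T(0,)` by rejection sampling and maps the draws through the standard normal CDF. Every resulting $\theta$ lands in $(0.5, 1)$:

```python
import math
import random

random.seed(2)

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Draw from N(0, 1) truncated to (0, inf) by rejection, then apply the
# probit link -- mirroring the JAGS snippet above.
thetas = []
while len(thetas) < 10000:
    z = random.gauss(0.0, 1.0)
    if z > 0.0:
        thetas.append(phi(z))

# Every draw of theta respects the truncation to (0.5, 1).
print(min(thetas) > 0.5, max(thetas) < 1.0)
```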
With all this being said, you may want to allow for the possibility that $\theta$ falls below chance (i.e. $\theta < 0.5$, which corresponds to a negative value on the probit scale). You may expect the participants to achieve at least chance performance, but they may do worse (perhaps they are relying on external information that is misleading).
Unfortunately, the interval that you are looking for is not uniquely determined. Essentially, what you need is the Posterior Predictive Density (PPD, see https://en.wikipedia.org/wiki/Posterior_predictive_distribution), which is the density function of new/unseen data given the observed data. This PPD depends on the posterior distribution of the parameters, $\theta_1, ..., \theta_n$ in your case. It can be written as
$p(y^* \mid y, x, x^*) = \int p(y^*, \theta \mid y, x, x^*) \, d\theta = \int p(y^* \mid \theta, x^*) \, p(\theta \mid y, x, x^*) \, d\theta$
where $y^*$ represents the unseen response data, $y$ represents the known response data, $x$ and $x^*$ represent the predictor values that correspond to $y$ and $y^*$, and $\theta$ represents the parameters. The last factor in the final integral is the posterior distribution of $\theta$ given $y, x, x^*$. As the PPD depends on the posterior distribution of the parameters, this, in turn, depends on the prior distribution of the parameters (and on the chosen data model / likelihood function). This means that for each prior you may choose, your posterior distribution changes (and, as a result, your interval as well).
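In simple conjugate cases the integral above can be evaluated both analytically and by Monte Carlo, which makes the definition concrete. The toy model below (a Bernoulli success probability with a flat Beta(1, 1) prior, all numbers made up) averages $p(y^* = 1 \mid \theta) = \theta$ over posterior draws and compares against the closed-form answer:

```python
import random

random.seed(3)

# Toy Bernoulli model: k successes in n trials with a flat Beta(1, 1)
# prior, so the posterior is Beta(1 + k, 1 + n - k) (conjugate result).
n, k = 40, 28
a, b = 1 + k, 1 + (n - k)

# Monte Carlo version of the PPD integral: draw theta from the posterior,
# then average p(y* = 1 | theta) = theta over the draws.
draws = [random.betavariate(a, b) for _ in range(200000)]
ppd_mc = sum(draws) / len(draws)

# Analytic PPD for comparison: P(y* = 1 | y) = a / (a + b).
ppd_exact = a / (a + b)
print(round(ppd_mc, 3), round(ppd_exact, 3))
```

The two numbers agree up to Monte Carlo error, illustrating that the PPD is just the likelihood of new data averaged over the posterior.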
Usually, when you choose completely uninformative (i.e. flat) priors together with a Normal likelihood for the response values given the predictors, the intervals from a Bayesian analysis coincide numerically with their frequentist counterparts. Then again, flat priors are usually a poor choice for such a model.
When you know which priors you want to use for your analysis, it may be possible to compute the PPD analytically, but in many cases this is simply impossible. I'd recommend using a tool like Stan (http://mc-stan.org) to draw samples from the posterior distribution and then use those to determine a credible interval for your parameters and your new (simulated) data.
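Once you have posterior draws (from Stan, JAGS, or anything else), turning them into a predictive interval for new data is mechanical: simulate new observations from each draw, then take quantiles. A hedged sketch, again faking the posterior draws with a Beta distribution for illustration:

```python
import random

random.seed(4)

# Fake posterior draws of theta (in practice these come from Stan/JAGS).
n_new = 20  # number of new trials to predict
sims = []
for _ in range(20000):
    theta = random.betavariate(29, 13)
    # Simulate the new observation given this draw: y* ~ Binomial(n_new, theta)
    y_star = sum(1 for _ in range(n_new) if random.random() < theta)
    sims.append(y_star)

# Equal-tailed 95% posterior predictive interval for y*.
sims.sort()
lower = sims[int(0.025 * len(sims))]
upper = sims[int(0.975 * len(sims))]
print(f"95% posterior predictive interval for y*: [{lower}, {upper}]")
```

Because each simulated $y^*$ uses a fresh posterior draw of $\theta$, this interval reflects both parameter uncertainty and sampling noise.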
Hope this helps!
Best Answer
They live in different spaces and mean different things.
A credible interval $[a,b]$ is a subset of the parameter space such that $$ P(a\leq\Theta\leq b\mid X_1=x_1,\dots,X_n=x_n) = \alpha \, , $$ and it means that, after seeing the data, you believe that with probability $\alpha$ the parameter value is inside this interval.
A prediction interval $[u,v]$ is a subset of the sampling space such that $$ P(u\leq X_{n+1}\leq v\mid X_1=x_1,\dots,X_n=x_n) = \gamma \, , $$ and it means that, after seeing the data, you believe that with probability $\gamma$ the value of a future observation $X_{n+1}$ will be inside this interval.
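The distinction shows up numerically in the conjugate Normal model with known variance, where both intervals have closed forms: the prediction interval must add the sampling variance $\sigma^2$ to the posterior variance, so it is always wider. A minimal sketch with made-up numbers:

```python
from statistics import NormalDist, mean

# Toy Normal model with known sigma and a conjugate N(m0, s0^2) prior
# on the mean mu (all numbers are made up for illustration).
data = [4.2, 5.1, 3.8, 4.9, 5.4, 4.4]
sigma = 1.0          # known sampling sd
m0, s0 = 0.0, 10.0   # vague prior on mu

n = len(data)
post_var = 1.0 / (1.0 / s0**2 + n / sigma**2)
post_mean = post_var * (m0 / s0**2 + n * mean(data) / sigma**2)

z = NormalDist().inv_cdf(0.975)

# Credible interval: lives in the parameter space, covers mu.
cred = (post_mean - z * post_var**0.5, post_mean + z * post_var**0.5)

# Prediction interval: lives in the sampling space, covers X_{n+1};
# the predictive variance adds the sampling variance sigma^2.
pred_sd = (post_var + sigma**2) ** 0.5
pred = (post_mean - z * pred_sd, post_mean + z * pred_sd)

print("credible:  ", cred)
print("prediction:", pred)
```

Both intervals are centered at the posterior mean here, but the prediction interval is wider because a single future observation is noisier than the parameter itself.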