Consider the following hierarchical Bayesian model:

- $w \sim \operatorname{Beta}(20, 20)$
- $K = 6$
- $a = w(K - 2) + 1$
- $b = (1 - w)(K - 2) + 1$
- $\theta \sim \operatorname{Beta}(a, b)$
- $y \sim \operatorname{Bern}(\theta)$

The above is the example from Figure 9.3 of the book Doing Bayesian Data Analysis by John Kruschke.

I use PyMC3 to construct the model as follows:

```
import numpy as np
import pymc3 as pm
import arviz as az

K = 6
D = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0])

with pm.Model() as mdl:
    w = pm.Beta('w', alpha=20., beta=20.)
    a = w * (K - 2) + 1
    b = (1 - w) * (K - 2) + 1
    theta = pm.Beta('theta', alpha=a, beta=b)
    y = pm.Bernoulli('y', p=theta, observed=D)
    trace = pm.sample(10000, tune=3000)
```

My question is the following:

- How can I compute the conditional posterior probability $p(\theta \mid w = 0.25, D)$? In other words, how can I obtain samples from the conditional posterior distribution of $\theta \mid w = 0.25, D$?

## Best Answer

I think you just need to fix the value of $w$ in your code, i.e., replace the `pm.Beta('w', ...)` line with the constant `w = 0.25`.

You can then compare the two posterior distributions of $\theta$ as follows (orange is the one with $w = 0.25$):

In your original code, the resulting samples follow the joint posterior $p(\theta, w \mid D)$, which is given by $$ p(\theta, w \mid D) \propto p(D \mid \theta, w)\,p(\theta \mid w)\,p(w). $$ The desired conditional distribution $p(\theta \mid D, w=0.25)$ is given by $$ p(\theta \mid D, w=0.25) \propto p(D \mid \theta, w=0.25)\,p(\theta \mid w=0.25), $$ which is why you just need to fix the value of $w$ in the code.