For the first question, given that you have a hierarchical model, you simply start at the top level of the hierarchy and proceed downwards:
- $(r_i|p, \lambda )\sim\mathcal{B}(p)$ for $i=1,\ldots,n$
- $(x_i|r_i, \lambda, p)\sim\mathcal{P}(\lambda r_i)$ for $i=1,\ldots,n$
Note that only the $x_i$'s for which $r_i=1$ need to be simulated, since $x_i=0$ whenever $r_i=0$. In R, this can be written as

n=10^2                            # sample size
p=0.3                             # Bernoulli probability
lam=2                             # Poisson rate
r=as.integer(runif(n)<p)          # simulate the r_i's as Bernoulli(p)
x=rep(0,n)                        # x_i = 0 whenever r_i = 0
x[r==1]=rpois(sum(r==1),lam)      # simulate x_i ~ Poisson(lam) when r_i = 1
For the second question, the full conditional distributions can be extracted from the joint density$$f(x,r, \lambda, p) = \frac{b^\alpha \lambda^{\alpha-1} e^{-b \lambda}}{\Gamma(\alpha)} \prod_{i=1}^n\frac{e^{-\lambda r_i} (\lambda r_i)^{x_i}}{x_i!} p^{r_i}(1-p)^{1-r_i}$$since the full conditional densities are all proportional to this joint density, viewed as functions of $\lambda$, $p$, or the $r_i$'s alone.
For instance,
$$\pi(\lambda|p,\mathbf{r},\mathbf{x})\propto\lambda^{\alpha-1} e^{-b \lambda}\prod_{i=1}^n e^{-\lambda r_i} (\lambda)^{x_i}\propto\lambda^{\alpha-1} e^{-b \lambda} e^{-\lambda \sum_i r_i} \lambda^{\sum_i x_i}$$ where I only kept the terms that depend on $\lambda$. Hence
$$\pi(\lambda|p,\mathbf{r},\mathbf{x})\propto\lambda^{\alpha-1+\sum_i x_i} e^{-\left[b+\sum_i r_i\right] \lambda}$$ which is proportional to the $$Gamma\left(\alpha+ \sum_{i}x_i, b+ \sum_{i}r_i\right)$$density on $\lambda$.
Similarly$$\pi(p|\lambda,\mathbf{r},\mathbf{x}) \propto \prod_{i=1}^n p^{r_i}(1-p)^{1-r_i}=p^{\sum_i r_i}(1-p)^{n-\sum_i r_i}$$
leading to
$$Beta\left(1+ \sum_{i}r_i, n+1 - \sum_{i}r_i\right)$$as the full conditional posterior on $p$ (under the uniform prior on $p$ that is implicit in the joint density above).
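These two full conditionals can be turned into a Gibbs sampler once the conditional on the $r_i$'s is added: when $x_i>0$, $r_i=1$ is forced, and when $x_i=0$, $\pi(r_i=1|\lambda,p,x_i=0)\propto p\,e^{-\lambda}$ versus $\pi(r_i=0|\cdot)\propto 1-p$, i.e. a Bernoulli with probability $p e^{-\lambda}/(p e^{-\lambda}+1-p)$. A minimal sketch in R, assuming illustrative hyperparameter values $\alpha=b=1$ and the simulated data from above:

```r
set.seed(1)
n=10^2; p=0.3; lam=2                    # same simulation setup as above
r=as.integer(runif(n)<p); x=rep(0,n)
x[r==1]=rpois(sum(r==1),lam)
alpha=1; b=1                            # illustrative Gamma(alpha, b) hyperparameters
niter=10^4
lambda=rep(1,niter); pp=rep(0.5,niter)
r=as.integer(x>0)                       # x_i > 0 forces r_i = 1
for (t in 2:niter){
  lambda[t]=rgamma(1,shape=alpha+sum(x),rate=b+sum(r))
  pp[t]=rbeta(1,1+sum(r),n+1-sum(r))
  # full conditional of r_i when x_i = 0: Bernoulli with this probability
  prob=pp[t]*exp(-lambda[t])/(pp[t]*exp(-lambda[t])+1-pp[t])
  r[x==0]=as.integer(runif(sum(x==0))<prob)
}
mean(lambda[-(1:1000)]); mean(pp[-(1:1000)])   # posterior means after burn-in
```

The posterior means should land in the neighbourhood of the true values $\lambda=2$ and $p=0.3$ used in the simulation.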
The problem is in your usage of $\theta$. Each of the Poisson distributions has a different mean $$\theta_i = \dfrac{n_i \lambda}{100}. $$
The prior is placed not on the $\theta_i$'s but on the common parameter $\lambda$. Thus, when you write down the likelihood, you need to write it in terms of $\lambda$:
\begin{align*}
\text{Likelihood} & \propto \prod_{i=1}^{m} \theta_i^{y_i} e^{-\theta_i}\\
& = \prod_{i=1}^{m} \left(\dfrac{n_i \lambda}{100}\right)^{y_i} e^{-\frac{n_i \lambda}{100}}\\
& \propto \lambda^{\sum_{i=1}^{m} y_i} e^{-\lambda \sum_{i=1}^{m} \frac{n_i}{100}}.
\end{align*}
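For instance, if one places a (conjugate, though not stated above) $Gamma(a,b)$ prior on $\lambda$, combining it with this likelihood gives the standard update:
$$\pi(\lambda|\mathbf{y}) \propto \lambda^{a-1}e^{-b\lambda}\cdot\lambda^{\sum_{i=1}^{m} y_i} e^{-\lambda \sum_{i=1}^{m} \frac{n_i}{100}} \propto \lambda^{a-1+\sum_{i=1}^{m} y_i} e^{-\left[b+\sum_{i=1}^{m} \frac{n_i}{100}\right]\lambda},$$
i.e. a $Gamma\left(a+\sum_{i=1}^{m} y_i,\; b+\sum_{i=1}^{m} \frac{n_i}{100}\right)$ posterior on $\lambda$.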
It's based on the following two facts:
Fact 1. If $T(x)$ is a sufficient statistic for $\lambda$, then the posterior satisfies $\pi(\lambda|x)=\pi(\lambda|T(x))$, where $x$ is our data.
Fact 2. $\sum_i X_i$ is a sufficient statistic for $\lambda$, the parameter of the Poisson distribution.
Fact 1 is true since, by the factorization theorem, sufficiency means $f(x|\lambda)=g(T(x)|\lambda)h(x)$, so
$\pi(\lambda|x)=\frac{f(x|\lambda)p(\lambda)}{\int_{\lambda}f(x|\lambda)p(\lambda)d\lambda}=\frac{g(T(x)|\lambda)h(x)p(\lambda)}{\int_{\lambda}g(T(x)|\lambda)h(x)p(\lambda)d\lambda}=\frac{g(T(x)|\lambda)p(\lambda)}{\int_{\lambda}g(T(x)|\lambda)p(\lambda)d\lambda}=\pi(\lambda|T(x))$
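A quick numerical illustration of Fact 1 (a sketch with made-up data, using the Poisson–Gamma conjugacy: with a $Gamma(a,b)$ prior, the posterior is $Gamma(a+\sum_i x_i,\, b+n)$, so it depends on $x$ only through $T(x)=\sum_i x_i$):

```r
a = 2; b = 1                           # illustrative Gamma(a, b) prior on lambda
x1 = c(3, 0, 5, 2)                     # made-up count data
x2 = c(1, 4, 1, 4)                     # different data, same sum and same n
# Poisson-Gamma conjugacy: posterior is Gamma(a + sum(x), b + n)
post = function(x) c(shape = a + sum(x), rate = b + length(x))
post(x1)
post(x2)                               # identical: the posterior only sees sum(x)
```

Both calls return the same Gamma parameters, since the two samples share the same sufficient statistic.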