How can the marginal distribution be derived from conjugate Gaussians

conjugate-prior, marginal-distribution

In An Introduction to Empirical Bayes Data Analysis by George Casella (1985), it is given that
\begin{align}
x|\theta &\sim N(\theta,\sigma^2) \\
\theta &\sim N(\mu,\tau^2)
\end{align}

and since the Gaussian distribution is conjugate to itself, then the posterior distribution is
$$
\theta|x \sim N\left(\frac{x\tau^2 + \mu\sigma^2}{\sigma^2+\tau^2},\frac{\sigma^2\tau^2}{\sigma^2 + \tau^2}\right)
$$

However, it is also mentioned in the paper that
$$
x \sim N(\mu,\sigma^2 + \tau^2)
$$

I am not sure how $p(x)$ was derived here. Was it obtained by rearranging Bayes' rule so that
$$
p(x) = \frac{p(x|\theta)p(\theta)}{p(\theta|x)}
$$

Best Answer

The unconditional distribution of $x$ is also normal (you can see the full proof here), so all you need to do is find its parameters. Bayes' theorem would give them, but it isn't mandatory (and neither is the posterior distribution). You can obtain them directly from the laws of total expectation and total variance:

$$E[x]=E[E[x|\theta]]=E[\theta]=\mu$$

$$Var(x)=E[Var(x|\theta)]+Var(E[x|\theta])=E[\sigma^2]+Var(\theta)=\sigma^2+\tau^2$$
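You can also check the marginal numerically with a quick Monte Carlo simulation: sample $\theta$ from the prior, then $x$ given $\theta$, and compare the empirical mean and variance of $x$ to $\mu$ and $\sigma^2+\tau^2$. The parameter values below are arbitrary, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values chosen for illustration.
mu, tau, sigma = 1.0, 2.0, 0.5
n = 1_000_000

# Sample theta ~ N(mu, tau^2), then x | theta ~ N(theta, sigma^2).
theta = rng.normal(mu, tau, size=n)
x = rng.normal(theta, sigma)

# The marginal of x should be N(mu, sigma^2 + tau^2).
print(x.mean())  # close to mu = 1.0
print(x.var())   # close to sigma^2 + tau^2 = 4.25
```

With a million draws, the sample mean and variance land very close to the theoretical values $\mu = 1$ and $\sigma^2 + \tau^2 = 4.25$.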
