Solved – How to prove independence of marginal/conditional (?) posterior distributions

bayesian, independence, multivariate analysis, random variable, self-study

This is a question about exercises 4.2 and 4.3 of Jim Albert’s “Bayesian Computation With R” (p. 82). Note that while this might be homework, in my case it is not.

We are to prove that, given independent random samples from two normal distributions N(µ₁,σ₁²) and N(µ₂,σ₂²), and the noninformative prior g(µ₁,σ₁²,µ₂,σ₂²) ∝ 1/(σ₁²σ₂²), the resulting posterior distribution is such that the vectors (µ₁,σ₁²) and (µ₂,σ₂²) are independent.

My understanding is the following.

Since the samples are independent, the joint posterior distribution is proportional to the likelihood of the N(µ₁,σ₁²) sample times the likelihood of the N(µ₂,σ₂²) sample times the prior 1/(σ₁²σ₂²). The resulting posterior distribution is:
$$g(\mu_1,\sigma_1^2,\mu_2,\sigma_2^2 \mid y_1,y_2) \propto
\frac{1}{(\sigma_1^2)^{n_1/2+1}}\exp\!\left(-\frac{1}{2\sigma_1^2}\left(\sum_{i=1}^{n_1}(y_{1i}-\bar y_1)^2+n_1(\mu_1-\bar y_1)^2\right)\right)\times
\frac{1}{(\sigma_2^2)^{n_2/2+1}}\exp\!\left(-\frac{1}{2\sigma_2^2}\left(\sum_{i=1}^{n_2}(y_{2i}-\bar y_2)^2+n_2(\mu_2-\bar y_2)^2\right)\right)$$
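Here I have used the usual sum-of-squares decomposition (the cross term vanishes because the residuals about the sample mean sum to zero), and the analogous identity for the second sample:

$$\sum_{i=1}^{n_1}(y_{1i}-\mu_1)^2 = \sum_{i=1}^{n_1}(y_{1i}-\bar y_1)^2 + n_1(\mu_1-\bar y_1)^2$$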

Now, two random vectors are said to be independent if their joint distribution equals the product of their marginal distributions. But that is essentially how I constructed the posterior density in the first place, so the answer "the vectors (µ₁,σ₁²) and (µ₂,σ₂²) are independent because otherwise my posterior distribution would be wrong" seems a bit circular to me.

So there might be another way of proving the independence of the two vectors. The marginal distribution of a continuous random variable can be obtained by integrating the joint distribution over the other variables [Wikipedia], although that is described only for the two-variable case. So the marginal distribution of (µ₁,σ₁²) would be

$$\int_0^{\infty}\int_{-\infty}^{\infty} g(\mu_1,\sigma_1^2,\mu_2,\sigma_2^2 \mid y_1,y_2)\, d\mu_2\, d\sigma_2^2$$

But I do not know how to compute that integral, or how to apply the two-variable formula to my multivariate case, which is further complicated by the fact that my formula for g is conditioned on y₁ and y₂.
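For reference, the two-variable formula I mean is

$$f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x,y)\,dy,$$

and I am assuming (my own guess, not from the book) that the generalization to my case is to integrate out both components of the second vector jointly, as written above.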

So my question is: Is integration the correct approach to answer that question?

I rather doubt it, because integration as a means of computing marginal distributions is not introduced by Jim Albert in the book. Instead, he describes a family of dependent priors, namely Howard's prior. It is used for the analysis of proportions, not for the mean and variance of normal distributions (or so I believe). Howard's prior may still be relevant to this particular question, because we can ask which mathematical property of Howard's prior causes the dependence.

Howard suggests transforming the proportions to logits
$$\theta_1 = \log\frac{p_1}{1-p_1}, \qquad \theta_2 = \log\frac{p_2}{1-p_2},$$
which enter the prior density in a multiplicative term
$$e^{-\frac{1}{2}u^2}, \qquad u=\frac{1}{\sigma}(\theta_1-\theta_2).$$
So, could I say that because the difference of the random variables (or a function thereof) enters the equations (first the prior, then the posterior distribution), they are no longer independent? And how would that help me answer the original question?
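Expanding the exponent of that term myself (my own attempt, not from the book), I get a cross term that couples θ₁ and θ₂, which I suspect is exactly what prevents the density from factoring into a function of θ₁ times a function of θ₂:

$$-\frac{1}{2}u^2 = -\frac{(\theta_1-\theta_2)^2}{2\sigma^2} = -\frac{\theta_1^2 - 2\theta_1\theta_2 + \theta_2^2}{2\sigma^2}$$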

Related questions: Is my reasoning about independence correct? What is the difference between the marginal and the conditional distribution? Am I right in assuming that the marginal distribution is the conditional distribution averaged over the variables being conditioned on?

Best Answer

Just notice that your unnormalized posterior factors as $u(\mu_1,\sigma_1^2)\, v(\mu_2,\sigma_2^2)$, for some suitable functions $u$ and $v$, and that's enough to prove the desired independence.
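Spelling this out (a standard argument, nothing specific to this problem): if the joint posterior is proportional to such a product, then normalizing gives

$$g(\mu_1,\sigma_1^2,\mu_2,\sigma_2^2 \mid y_1,y_2)
= \frac{u(\mu_1,\sigma_1^2)}{\int u}\cdot\frac{v(\mu_2,\sigma_2^2)}{\int v},$$

and integrating out $(\mu_2,\sigma_2^2)$ (or $(\mu_1,\sigma_1^2)$) shows that the two factors on the right are exactly the marginal posteriors of $(\mu_1,\sigma_1^2)$ and $(\mu_2,\sigma_2^2)$. The joint is therefore the product of the marginals, which is the definition of independence. This also answers the question about integration: the integral simply removes one factor up to a constant.

If you want a numerical sanity check, here is a minimal sketch (in Python rather than the book's R, with made-up data standing in for the exercise's samples; `sample_posterior` is just an illustrative helper). It uses the standard result that under the prior $1/\sigma^2$ each block has $\sigma^2 \mid y$ distributed as a scaled inverse-$\chi^2$ and $\mu \mid \sigma^2, y \sim N(\bar y, \sigma^2/n)$, so each block can be simulated on its own:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_posterior(y, size=50_000, rng=rng):
    """Draw (mu, sigma2) from the posterior under the prior p(mu, sigma2) ~ 1/sigma2.

    Standard result: (n-1)*s^2 / sigma2 | y follows a chi^2 distribution with
    n-1 degrees of freedom, and mu | sigma2, y ~ Normal(ybar, sigma2 / n).
    """
    n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)
    sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size)  # scaled inverse-chi^2 draws
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))          # one mu draw per sigma2 draw
    return mu, sigma2

# Hypothetical data standing in for the two samples y1, y2 in the exercise.
y1 = rng.normal(10.0, 2.0, size=30)
y2 = rng.normal(12.0, 3.0, size=25)

mu1, sig1 = sample_posterior(y1)
mu2, sig2 = sample_posterior(y2)

# Because the posterior factors, the two blocks are simulated separately;
# cross-correlations between block-1 and block-2 draws should be near zero.
print(np.corrcoef(mu1, mu2)[0, 1], np.corrcoef(sig1, sig2)[0, 1])
```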
