Solved – Marginal prior $p(\mu)$ of mean of a normal distribution when both mean and variance are unknown

Tags: bayesian, bugs, conjugate-prior, normal-distribution, self-study

I read that if the data are normally distributed with mean $\mu$ and variance $\sigma^2$ (both unknown), then to have the joint posterior distribution $p(\mu, \sigma^2 \mid y)$ in closed form, one has to use an inverse-gamma prior for the variance $\sigma^2$ and a conditionally conjugate normal prior for the mean $\mu$ (conditional on $\sigma^2$). But most of the implementations I see in books look like this:

for (i in 1:n) {
    y[i] ~ dnorm(mu, precision)    # likelihood; dnorm takes mean and precision = 1/sigma^2
}

mu ~ dnorm(0, 0.0001)              # vague (near-flat) normal prior on the mean
precision ~ dgamma(0.0001, 0.0001) # vague gamma prior on the precision

Here the marginal prior $p(\mu)$ is normal. Shouldn't the marginal prior $p(\mu)$ instead be specified as t-distributed, so that the conditional prior of $\mu$ given $\sigma^2$ is normal and the joint posterior is available in closed form?
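As a sanity check on this point, one can verify numerically that a Normal-Inverse-Gamma prior ($\sigma^2 \sim \text{Inv-Gamma}(a, b)$, $\mu \mid \sigma^2 \sim N(m_0, \sigma^2/\kappa_0)$) implies a Student-t marginal for $\mu$. A Python/SciPy sketch; the hyperparameter values `m0`, `kappa0`, `a`, `b` below are arbitrary illustrations:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Hypothetical Normal-Inverse-Gamma hyperparameters (illustrative only)
m0, kappa0, a, b = 1.0, 2.0, 3.0, 4.0

def marginal_pdf(mu):
    # p(mu) = integral of N(mu | m0, sigma2/kappa0) * InvGamma(sigma2 | a, b) d sigma2
    integrand = lambda s2: (stats.norm.pdf(mu, loc=m0, scale=np.sqrt(s2 / kappa0))
                            * stats.invgamma.pdf(s2, a, scale=b))
    val, _ = quad(integrand, 0, np.inf)
    return val

# Standard result: marginally, mu is t with 2a degrees of freedom,
# location m0, and scale sqrt(b / (a * kappa0))
t_scale = np.sqrt(b / (a * kappa0))
for mu in [-1.0, 0.0, 1.0, 2.5]:
    analytic = stats.t.pdf(mu, df=2 * a, loc=m0, scale=t_scale)
    assert abs(marginal_pdf(mu) - analytic) < 1e-6
```

Integrating $\sigma^2$ out of the joint prior (numerically here) recovers the $t_{2a}$ density exactly, which is the sense in which the marginal prior on $\mu$ "should" be t.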

Best Answer

If you wanted to get a closed-form solution for the posterior distribution then you would use a conjugate prior, in which case $\mu$ and $\sigma^2$ would not be independent. Since you are using an MCMC algorithm to produce an approximation to the posterior distribution, there is no need to have a conjugate prior. I believe the prior in your code is intended to approximate a flat prior, $p(\mu,1/\sigma^2) \propto 1$.
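For reference, with the conjugate Normal-Inverse-Gamma prior the posterior is available in closed form, so no MCMC is needed at all. A sketch in Python (the hyperparameter names `m0`, `kappa0`, `a`, `b` are my own notation, with the standard update formulas):

```python
import numpy as np

def nig_posterior(y, m0, kappa0, a, b):
    """Closed-form Normal-Inverse-Gamma posterior update.

    Prior: sigma2 ~ InvGamma(a, b), mu | sigma2 ~ N(m0, sigma2 / kappa0).
    Returns the posterior hyperparameters (mn, kappan, an, bn).
    """
    y = np.asarray(y, dtype=float)
    n, ybar = len(y), y.mean()
    kappan = kappa0 + n
    mn = (kappa0 * m0 + n * ybar) / kappan  # precision-weighted average of prior mean and data mean
    an = a + n / 2
    ss = np.sum((y - ybar) ** 2)            # within-sample sum of squares
    bn = b + 0.5 * ss + kappa0 * n * (ybar - m0) ** 2 / (2 * kappan)
    return mn, kappan, an, bn

# Worked example: y = [1, 2, 3] with prior m0=0, kappa0=1, a=1, b=1
mn, kappan, an, bn = nig_posterior([1.0, 2.0, 3.0], 0.0, 1.0, 1.0, 1.0)
# gives mn = 1.5, kappan = 4, an = 2.5, bn = 3.5
```

Note that the update for `mn` involves `kappan`, and `bn` involves both the data and the prior mean, which is exactly the sense in which $\mu$ and $\sigma^2$ are coupled under the conjugate prior.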

But suppose you wanted to use an MCMC algorithm to simulate draws from the posterior using the conjugate prior just for the heck of it. Then you would write the code for the joint distribution for $(\mu,\sigma^2)$, which is not the product of the two marginal distributions. (You are correct that the marginal prior distribution for $\mu$ would be $t$.)
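In BUGS-like syntax, that joint conjugate prior could be sketched along these lines (the hyperparameters `m0`, `kappa0`, `a`, `b` are placeholders you would have to set; note the intermediate node for the conditional precision of `mu`):

```
precision ~ dgamma(a, b)          # equivalently sigma^2 ~ Inverse-Gamma(a, b)
prec.mu <- kappa0 * precision     # conditional precision of mu given sigma^2
mu ~ dnorm(m0, prec.mu)           # mu | sigma^2 ~ N(m0, sigma^2 / kappa0)

for (i in 1:n) {
    y[i] ~ dnorm(mu, precision)
}
```

The key difference from the code in the question is that the prior on `mu` depends on `precision`, so the two parameters are dependent a priori, as the conjugate prior requires.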