Solved – Understanding the Beta conjugate prior in Bayesian inference about a frequency

bayesian, beta-distribution, conjugate-prior, distributions

Following is an excerpt from Bolstad's Introduction to Bayesian Statistics.


For all you experts out there, this might be trivial, but I don't understand how the author concludes that we don't have to do any integration to calculate the posterior probability for some value of $\pi$. I understand the second expression, which is the proportionality, and where all the terms came from (likelihood × prior). Furthermore, I understand that we don't have to worry about the denominator, since the numerator alone is directly proportional to the posterior. But moving on to the third equation, aren't we forgetting about the denominator of Bayes' rule? Where did it go? And the value computed by the Gamma functions, isn't that a constant? Don't constants cancel out in Bayes' theorem?

Best Answer

The point is that we know what the posterior is proportional to, and it so happens that we do not need to do the integration to get the (constant) denominator, because we recognise that a distribution with probability density function proportional to $x^{\alpha-1} (1-x)^{\beta-1}$ (such as the posterior) is a beta distribution. Since the normalizing constant for such a beta pdf is $\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}$, we get the posterior pdf without integration. And yes, the normalizing constant in Bayes' theorem is a constant (given the observed data and the assumed prior), just like the normalizing constant for the posterior density.
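To make this concrete, here is a small sketch (with made-up numbers: a Beta(2, 3) prior and 7 successes in 10 Bernoulli trials) showing that the Gamma-function formula gives exactly the same normalizing constant that brute-force integration of the unnormalized posterior kernel would:

```python
from math import gamma

# Hypothetical example for illustration only:
a, b = 2, 3    # assumed Beta prior parameters
y, n = 7, 10   # assumed data: y successes in n trials

# Conjugacy: the posterior is Beta(a + y, b + n - y), so its normalizing
# constant comes straight from the Gamma-function formula -- no integration.
a_post, b_post = a + y, b + n - y
const_gamma = gamma(a_post + b_post) / (gamma(a_post) * gamma(b_post))

# Check by numerically integrating the unnormalized kernel
# x^(a_post-1) * (1-x)^(b_post-1) over [0, 1] with the midpoint rule.
N = 200_000
step = 1.0 / N
integral = sum(
    ((i + 0.5) * step) ** (a_post - 1) * (1 - (i + 0.5) * step) ** (b_post - 1)
    for i in range(N)
) * step
const_numeric = 1.0 / integral

print(const_gamma, const_numeric)  # the two constants agree closely
```

The integration step is only there as a sanity check; the whole point of conjugacy is that you never have to perform it, because the posterior's functional form identifies it as a beta distribution whose constant is known in closed form.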
