Solved – Why we need a prior for computing a Bayes Factor (R code provided)

Tags: bayesian, conditional-probability, integral, probability

I have a fundamental question about "Bayes factors". Specifically, I understand that if we want to compute the posterior probability of a hypothesis (e.g., $p(H_1 \mid Data)$), we need a prior probability for that hypothesis (i.e., $p(H_1)$), as per Bayes' rule.

BUT I'm wondering: when computing only a "Bayes factor", which is the factor by which we update our prior belief about a hypothesis, why do we need a prior to determine $H_1$? In other words, when we do NOT want to compute any posterior probability, why do we still talk about a prior when computing a Bayes factor?

In fact, I would also like to know what the integration for each of the following two versions of $H_1$ (one with a prior and one without) in my R code below actually does:

(For simplicity's sake, below, I compute a Bayes Factor for a binomial experiment that has resulted in 35 successes out of 100 trials.)

## With a Beta prior:

H1 = integrate(function(p) dbeta(p, 1, 10) * dbinom(35, 100, p), 0, 1)$value
# > 0.00235671

## Without any explicit prior:

H11 = integrate(function(p) dbinom(35, 100, p), 0, 1)$value
# > 0.00990099


## H0 (point hypothesis p = .5):
H0 = dbinom(35, 100, .5)

## BF10 (one per choice of prior, under distinct names so the
## first result is not overwritten):

BF10  = H1  / H0
BF10b = H11 / H0

Best Answer

In general, you use a Bayes factor to determine the odds of one hypothesis relative to another. To obtain each of the two probabilities you need to compute an integral (or a sum, if the prior puts probability on a finite number of points).

You say you do not have a prior in the second case, but actually you do; it just doesn't show up because it is constant with respect to the parameter you integrate over. The prior in the second integration is the uniform distribution on $(0, 1)$.
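As a quick numerical check (a sketch using the same data as in the question, 35 successes in 100 trials): the "no prior" integral is identical to the integral with an explicit flat Beta(1, 1) prior, and both equal the exact value $1/(n+1) = 1/101$, since $\int_0^1 \binom{n}{k} p^k (1-p)^{n-k}\,dp = 1/(n+1)$.

```r
# dbeta(p, 1, 1) is identically 1 on (0, 1), so multiplying by it
# changes nothing -- the "no prior" integral has a hidden uniform prior.
flat   <- integrate(function(p) dbinom(35, 100, p), 0, 1)$value
beta11 <- integrate(function(p) dbeta(p, 1, 1) * dbinom(35, 100, p), 0, 1)$value

flat     # > 0.00990099
beta11   # > 0.00990099
1 / 101  # > 0.00990099  (exact value of both integrals)
```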

The integrals you have computed numerically integrate out the parameter. In general, a Bayes factor integrates out the uncertainty in the parameter, and the prior quantifies that uncertainty. In your code you do exactly this: you place a prior on the binomial success probability $p$ and integrate the likelihood over that parameter.
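The general pattern can be sketched as a small helper (the name `marginal_lik` is introduced here for illustration): a marginal likelihood averages the likelihood over the prior, leaving an expression free of the parameter.

```r
# Marginal likelihood of k successes in n trials under a Beta(a, b)
# prior on the binomial success probability p: the parameter p is
# integrated out, weighted by the prior density.
marginal_lik <- function(k, n, a, b) {
  integrate(function(p) dbeta(p, a, b) * dbinom(k, n, p), 0, 1)$value
}

marginal_lik(35, 100, 1, 10)  # same as H1 above
# > 0.00235671
```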

Both priors that you have written are priors conditional on hypotheses, call these hypotheses $M_1$ and $M_2$. Now consider the formula:

$K = \frac{\Pr(D|M_1)}{\Pr(D|M_2)} = \frac{\int \Pr(p_1|M_1)\Pr(D|p_1,M_1)\,dp_1} {\int \Pr(p_2|M_2)\Pr(D|p_2,M_2)\,dp_2}$.

The first factors in the numerator and the denominator are your priors, and each is conditional on a hypothesis (a Beta distribution with specified hyper-parameters). To get from $K$ to a posterior ratio of the two hypotheses, you additionally specify prior probabilities on $M_1$ and $M_2$. If these are equal ($1/2$ each), the posterior odds equal the Bayes factor $K$.
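Putting the whole chain together for the question's data, one possible sketch (variable names here are illustrative; `m2` needs no integral because $H_0$ fixes $p = 1/2$, a point hypothesis):

```r
# Marginal likelihood under M1: Beta(1, 10) prior on p, integrated out.
m1 <- integrate(function(p) dbeta(p, 1, 10) * dbinom(35, 100, p), 0, 1)$value
# Marginal likelihood under M2: point hypothesis p = 1/2 (no integration).
m2 <- dbinom(35, 100, 0.5)

# Bayes factor for M1 over M2.
K <- m1 / m2

# With equal model priors Pr(M1) = Pr(M2) = 1/2, prior odds are 1,
# so posterior odds equal K and the posterior probability of M1 is:
post_M1 <- K / (K + 1)
```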
