Solved – Joint confidence intervals for probabilities


I have two probabilities $p$ and $q$, with $p>q$, and they are uncorrelated. I want to calculate $i$ such that $p^i=q$, which is easily done as $i=\log_p(q)$.
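As a quick illustration, here is that calculation in R, using as point estimates the means of the Beta distributions that appear in the code below (this choice of point estimates is my assumption, for illustration only):

```r
# Hypothetical point estimates: means of Beta(152, 29) and Beta(37, 19)
p <- 152 / 181   # ~0.84
q <- 37 / 56     # ~0.66

# i such that p^i = q
i <- log(q, base = p)   # equivalently log(q) / log(p); ~2.37
```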

Now I'd also like to calculate a confidence interval for $i$, which will necessarily be a function of the confidence intervals of both $p$ and $q$. My first approach was:

p.min <- qbeta(0.025, 152, 29)
p.max <- qbeta(0.975, 152, 29)

q.min <- qbeta(0.025, 37, 19)
q.max <- qbeta(0.975, 37, 19)

## q.max and p.min have the smallest difference
i.min <- log(q.max, base = p.min)

## q.min and p.max have the largest difference
i.max <- log(q.min, base = p.max)

But it occurs to me that combining independent 95% confidence intervals for $p$ and $q$ probably produces too wide an interval for $i$, because the joint confidence region for $p$ and $q$ will be narrower.

So, how do I go about figuring out the joint confidence region of $p$ and $q$? They're uncorrelated, which should make things easier. Is it as simple as narrowing the quantiles in qbeta()? By how much?

Best Answer

I think there is some confusion between a confidence interval and a probability interval here.

In the R code you are indicating that $p\sim \mathrm{Beta}(152,29)$ and $q\sim \mathrm{Beta}(37,19)$. Given those distributions, you can derive the distribution of $i=\log(q)/\log(p)$ by a change of variable, and then obtain the corresponding probability interval for $i$ from that distribution.

Another possibility is to approximate this probability interval by Monte Carlo simulation; in this case the interval is approximately $(1.30, 4.23)$:

i <- log(rbeta(100000, 37, 19)) / log(rbeta(100000, 152, 29))
quantile(i, c(0.025, 0.975))

In order to construct a genuine confidence interval for $i$, you would need $p$ and $q$ to be parameters of a sampling model.
