# Probability – How to Calculate Error for Estimated Number of Events with Known Probabilities?

Tags: confidence-interval, estimation, probability

I have a long series of entities:

x1, x2, x3, … xn

for each of which, there is a probability of an event occurring. The probability for each x may be different, but they are all independent, and they are all known:

p1, p2, p3, … pn

I can provide a point estimate of how many events occurred in that series by simply summing the probabilities p, but is it also possible to calculate a confidence interval, or some other measure of the error, in closed form?

My resistance to using a sampling method is that the series of x may be prohibitively long, and there may be many such series for which the same estimates need to be delivered.

EDIT – Refactoring of the question

For a toy example of the problem, imagine I have a large batch of n widgets, each of which has a known probability of failing within the year. Each widget may have a different known probability of failing, some high, some low (say they belong to different widget classes, those classes well-studied in the past). I'm tasked with estimating the number of widgets that will fail at the end of the year for that batch.

I believe I can provide a point estimate by simply summing the probabilities of each widget in the batch. Stakeholders also request a confidence interval, which may not be possible, but I should be able to provide a sense of error/uncertainty/variance around that point estimate.

My understanding is that I can treat my widgets failing/not-failing as a series of Bernoulli trials, and that likely the "number that fail" can be modelled as a Poisson Binomial Distribution, but from there I get stuck.

I wanted to sum up what ultimately I've found works well for this case and roll up what's been discussed in some of the comments.

Since I basically want to know how many events occurred out of a series of independent Bernoulli trials, we can model this as a Poisson Binomial distribution.

The mean of that distribution (which is my estimate) is the sum of all the probabilities.

The variance of that distribution is also known in closed form: the sum of all pi(1 - pi).

That already gives me the standard deviation (the square root of the variance), so I've gotten a little bit of what the stakeholder wants.
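A minimal sketch of those two closed-form moments, using hypothetical per-widget failure probabilities (the values below are purely illustrative):

```python
import math

# Hypothetical per-widget failure probabilities (illustrative values only)
p = [0.1, 0.05, 0.3, 0.2, 0.15]

# Mean of the Poisson Binomial: the sum of the probabilities
mean = sum(p)

# Variance: the sum of pi * (1 - pi); the standard deviation is its square root
variance = sum(pi * (1 - pi) for pi in p)
sdev = math.sqrt(variance)

print(mean, variance, sdev)
```

Both quantities are a single pass over the probabilities, so this scales to very long series without any sampling.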

We know that the Poisson Binomial often approaches a normal distribution as the number of parameters (the constituent trial probabilities) grows, and luckily, because of the nature of my problem, I will almost always have a large number of parameters. When that's the case, we can get an approximate 95% interval by simply taking the mean +/- 1.96 * sdev.
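Continuing the sketch above, the normal-approximation interval is just two more lines (again with illustrative probabilities, repeated here to give a batch large enough for the approximation to be reasonable):

```python
import math

# Illustrative batch: repeating a small set of probabilities to get 200 trials
p = [0.1, 0.05, 0.3, 0.2, 0.15] * 40

mean = sum(p)
sdev = math.sqrt(sum(pi * (1 - pi) for pi in p))

# Approximate 95% interval under the normal approximation
lo, hi = mean - 1.96 * sdev, mean + 1.96 * sdev

print(f"estimate {mean:.1f}, 95% interval ({lo:.1f}, {hi:.1f})")
```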

That's not an assumption I can just make easily, though, so I did a bit of modelling to find out what the typical probabilities are for a given widget batch (the example scenario above), and what the more extreme probability distributions look like, each at the different likely parameter/probability counts.

I then calculated the Poisson Binomial mean, sdev, and 95% intervals using the approaches noted, and ran Monte Carlo simulations against them, doing this many times (with different draws from the middle and the extremes) to get as much coverage as I could of the probability distributions I might run into. I compared the mean, sdev, and interval estimates against those retrieved from the Monte Carlo runs by their percent error (treating the Monte Carlo result as the "right" answer).
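One such check can be sketched as follows; the batch size, probability range, and number of simulations here are hypothetical choices, not the exact ones I used:

```python
import math
import random

random.seed(0)

# Hypothetical batch: 300 widgets with assorted failure probabilities
p = [random.uniform(0.01, 0.5) for _ in range(300)]

# Closed-form Poisson Binomial moments
mean = sum(p)
sdev = math.sqrt(sum(pi * (1 - pi) for pi in p))

# Monte Carlo: simulate the whole batch many times, counting failures each time
draws = [sum(random.random() < pi for pi in p) for _ in range(20000)]
mc_mean = sum(draws) / len(draws)
mc_sdev = math.sqrt(sum((d - mc_mean) ** 2 for d in draws) / len(draws))

# Empirical 95% interval straight from the simulated counts
draws.sort()
mc_lo, mc_hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]

print(f"closed form: {mean:.1f} +/- {sdev:.1f}")
print(f"monte carlo: {mc_mean:.1f} +/- {mc_sdev:.1f}, 95% ({mc_lo}, {mc_hi})")
```

Comparing the closed-form values against `mc_mean`, `mc_sdev`, and the empirical interval gives the percent-error figures discussed below.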

Long story short, the closed-form mean and sdev values line up with the Monte Carlo results quite well. And the confidence intervals, while possibly up to 5% off in the more extreme cases, are well within our business needs at low parameter counts, and show almost no error at higher parameter counts (which covers most of our cases).

So ultimately I used the known moments of the Poisson Binomial distribution directly, and then backed those into intervals by assuming normality (which works well enough for our specific use case).