Solved – the formula for number of trials required given the probability

probability

In an experiment such as tossing an unbiased coin, we know that the probability of getting a 'head' or a 'tail' is 1/2. I understand this to hold when the number of trials is large enough. My question is: how is that "large enough" number determined mathematically? Is there a formula for it? I would guess it has to depend on the sample space; for a die with 6 faces, that number should be different from the coin with 2 faces. I am also sure that it is at least as large as the size of the sample space, but I want to know the formula for it, if such a formula exists.

Best Answer

By the weak law of large numbers, for every $\varepsilon > 0$,

$$ \lim_{n\to\infty}\Pr\!\left(\,|\bar{X}_n-\mu| > \varepsilon\,\right) = 0 $$

so as your sample size grows to infinity, the empirical mean $\bar{X}_n$ gets closer and closer to the true mean $\mu$. And by the strong law of large numbers, as the sample size goes to infinity, the empirical mean converges almost surely to the true mean:

$$ \Pr\!\left( \lim_{n\to\infty}\bar{X}_n = \mu \right) = 1 $$

As you can see, neither statement says that there is any finite sample size at which this is achieved. Nor do they say that some specific sample size is needed before an event can be observed with a given probability: if you throw a fair coin only once, the probability of heads is still $1/2$.
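To see the laws of large numbers in action, here is a minimal simulation sketch (assuming NumPy is available; the variable names are illustrative). It tracks the running proportion of heads in a long sequence of fair coin tosses: the proportion drifts toward $1/2$, but no finite number of tosses pins it there exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate n fair coin tosses (1 = heads, 0 = tails) and track the
# running proportion of heads as the sample grows.
n = 100_000
tosses = rng.integers(0, 2, size=n)
running_mean = np.cumsum(tosses) / np.arange(1, n + 1)

# The running proportion approaches 0.5 but never "locks in" at any
# finite sample size -- exactly what the laws of large numbers describe.
for k in (10, 100, 1_000, 10_000, 100_000):
    print(f"after {k:>6} tosses: proportion of heads = {running_mean[k - 1]:.4f}")
```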

What you can estimate is the standard error of your estimate of the probability of heads $p=1/2$ after $n$ trials, using the Wald formula

$$ \sigma = \sqrt{ p(1-p)/n } = \sqrt{0.25/n}$$

so you could find an $n$ large enough that $\sigma$ is small enough for you to consider the variability of your estimate of $p$ acceptable. But this remains your subjective choice of what counts as "small enough".
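As a rough illustration of that last step, one can invert the formula above, $n = p(1-p)/\sigma^2$, for a chosen tolerance. The sketch below (the function name and the chosen tolerances are my own, purely for illustration) does this for a fair coin:

```python
import math

def trials_for_standard_error(p: float, target_se: float) -> int:
    """Smallest n such that sqrt(p * (1 - p) / n) <= target_se."""
    return math.ceil(p * (1 - p) / target_se**2)

# For a fair coin (p = 0.5), demanding a standard error of 0.01
# requires 0.25 / 0.01**2 = 2500 tosses; tightening to 0.001 needs 250,000.
for se in (0.05, 0.01, 0.001):
    print(f"target SE = {se}: n = {trials_for_standard_error(0.5, se)}")
```

The point is that $n$ follows from a tolerance you pick, not from the sample space itself.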
