The maximum likelihood estimate of a Bernoulli parameter is given by $\hat{\theta}=\frac{\sum X_i}{N}$, where $N$ is the total number of Bernoulli trials and $X_i$ is the outcome of each trial.
This is an unbiased estimator, and the variance of this estimator is easily computed to be $\mathrm{Var}(\hat{\theta}) = \frac{\theta(1-\theta)}{N}$. However, the actual $\theta$ is unknown.
So how do we estimate the variance of the estimator? Also, I would like an unbiased estimate of this variance.
It's possible that this question has already been asked. If someone can give a pointer, that would be great.
Thanks.
Best Answer
Community wiki answer based on the comments to allow the answer to be accepted:
In this case the parameter happens to be the mean. The unbiased estimator $\hat\theta$ of the parameter is the usual unbiased estimator of the mean, whose variance is $1/N$ times the population variance. Estimating its variance is therefore equivalent to estimating the population variance, which can be done without bias by applying Bessel's correction, i.e. multiplying the plug-in estimate by $N/(N-1)$ for $N>1$. This yields the unbiased estimate $\widehat{\mathrm{Var}}(\hat\theta)=\frac{\hat\theta(1-\hat\theta)}{N-1}$.
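As a quick sketch of the calculation above (function name and use of NumPy are my own choices, not from the answer): estimate $\hat\theta$ as the sample mean, then divide $\hat\theta(1-\hat\theta)$ by $N-1$ rather than $N$ to remove the bias.

```python
import numpy as np

def bernoulli_estimates(x):
    """Return (theta_hat, unbiased estimate of Var(theta_hat))
    for a sample x of 0/1 Bernoulli outcomes with N > 1."""
    x = np.asarray(x)
    n = x.size
    theta_hat = x.mean()  # MLE of theta, also the unbiased estimator of the mean
    # The plug-in estimate theta_hat*(1-theta_hat)/n is biased; multiplying by
    # Bessel's correction n/(n-1) gives theta_hat*(1-theta_hat)/(n-1), which
    # is unbiased for theta*(1-theta)/n.
    var_hat = theta_hat * (1.0 - theta_hat) / (n - 1)
    return theta_hat, var_hat
```

For example, for the sample `[1, 0, 1, 1]` this gives $\hat\theta = 0.75$ and $\widehat{\mathrm{Var}}(\hat\theta) = 0.75 \cdot 0.25 / 3 = 0.0625$.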