Calculating mean squared error of estimators

Tags: mean square error, parameter estimation, probability distributions, statistics

Context: 2nd year university statistics course textbook question

So I had to find two estimators (using method-of-moments and maximum likelihood estimation) of $\theta$ for a random sample $X_1, …, X_n$ from a population with pmf $f(X=x)=\theta^x(1-\theta)^{1-x}$ for $x=0$ or $x=1$ where $\theta \in [0, 0.5]$ is a model parameter. I recognise this is a Bernoulli distribution.

I found that both methods gave the same estimator $T=\frac{1}{n} \sum^n_{i=1}X_i$ (the sample mean). The next part of the question required me to find the mean squared error of the two estimators. I have a couple of questions:

  1. Since the estimators are the same, does this mean their mean squared errors will be too?
  2. How should I go about calculating the mean squared error? I know $MSE(T) = Var(T)+[Bias(T)]^2$, but for the $Var(T)$ component I don't know how to calculate $E(T^2)$. Or would it be better to calculate it via $E[(T-\theta)^2]$?

Thanks

Best Answer

EDIT

  1. The two estimators are not the same.

    • $\hat{\theta}_{MM}=\overline{X}_n$
    • $\hat{\theta}_{ML}=\min\left[\overline{X}_n,\ \frac{1}{2}\right]$
  2. I do not know whether the exercise asks you to derive the two MSEs analytically, but if $\overline{X}_n\leq\frac{1}{2}$ the two MSEs coincide and equal the variance of the sample mean, $\frac{\theta(1-\theta)}{n}$. If instead $\overline{X}_n>\frac{1}{2}$, the method-of-moments estimate falls outside the parameter space $[0, 0.5]$, so the first estimator does not make sense. (A quick simulation check is sketched just below.)
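
Here is a minimal Monte Carlo sketch of that comparison, assuming NumPy; the helper name `mse_by_simulation` is mine, not from the exercise. It estimates the unconditional MSE of both estimators and prints $\theta(1-\theta)/n$ for reference; when $\theta$ is well below $0.5$ the event $\overline{X}_n > 0.5$ is rare, and the two MSEs nearly coincide.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_by_simulation(theta, n, reps=100_000):
    """Estimate the MSE of the MM and restricted-ML estimators by simulation."""
    samples = rng.binomial(1, theta, size=(reps, n))  # reps samples of size n
    xbar = samples.mean(axis=1)   # method-of-moments estimator: the sample mean
    mle = np.minimum(xbar, 0.5)   # restricted MLE: min(xbar, 1/2)
    return np.mean((xbar - theta) ** 2), np.mean((mle - theta) ** 2)

for theta in (0.1, 0.3, 0.5):
    mm, ml = mse_by_simulation(theta, n=10)
    print(f"theta={theta}: MSE(MM)={mm:.5f}  MSE(ML)={ml:.5f}  "
          f"theta(1-theta)/n={theta * (1 - theta) / 10:.5f}")
```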


Restricted MLE

In this example the likelihood's domain is restricted to $\theta \in [0, 0.5]$, so if $\overline{X}_n > 0.5$ the likelihood is strictly increasing over the whole interval and its argmax lies on the border: $\hat{\theta}_{ML}=0.5$.

Let's look at the following example:

Let's flip an unfair coin 10 times. Suppose we observe one of the two following cases:

  1. 3 Successes on 10 Draws

  2. 7 Successes on 10 Draws

The two likelihoods are $L_1(\theta)=\theta^{3}(1-\theta)^{7}$ and $L_2(\theta)=\theta^{7}(1-\theta)^{3}$, shown below:

[Figure: the two likelihood curves over $\theta \in [0, 0.5]$. With 3 successes the likelihood peaks inside the interval at $\theta = 0.3$; with 7 successes it is increasing throughout, so the restricted maximum sits on the border at $\theta = 0.5$.]
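
A small numerical check of the picture, assuming NumPy (the helper name `likelihood` is mine): evaluate each likelihood on a grid over the restricted domain and locate its argmax.

```python
import numpy as np

def likelihood(theta, successes, n):
    """Bernoulli likelihood L(theta) = theta^s * (1 - theta)^(n - s)."""
    return theta ** successes * (1 - theta) ** (n - successes)

grid = np.linspace(0.0, 0.5, 501)   # restricted parameter space [0, 0.5]
for s in (3, 7):
    L = likelihood(grid, s, 10)
    print(f"{s}/10 successes: argmax over [0, 0.5] at theta = {grid[np.argmax(L)]:.2f}")
# 3/10 successes -> interior maximum at theta = 0.30
# 7/10 successes -> likelihood increasing on [0, 0.5], maximum on the border at 0.50
```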

EDIT2:

Let's take a closer look at $MSE(\hat{\theta}_{ML})$.

It depends on whether the sample mean exceeds $0.5$ or not.

  • If $\overline{X}_n\leq 0.5$ we have $\hat{\theta}_{ML}=\overline{X}_n$, which is unbiased since $\mathbb{E}[\overline{X}_n]=\frac{1}{n}\sum_{i=1}^n\mathbb{E}[X_i]=\theta$; its MSE is therefore just the variance of the sample mean, the well-known $\frac{\theta(1-\theta)}{n}$, easily proved below:

$$\mathbb{V}[\overline{X}_n]=\frac{1}{n^2}n\mathbb{V}[X_1]=\frac{\theta(1-\theta)}{n}$$

  • If $\overline{X}_n> 0.5$ we have $\hat{\theta}_{ML}=\frac{1}{2}$, a constant estimator, so its variance is zero and its MSE reduces to the squared bias: $MSE=\left(\frac{1}{2}-\theta\right)^2$. (An exact check over all cases is sketched below.)
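
Since $n\overline{X}_n$ is Binomial$(n,\theta)$, the overall MSE of the restricted MLE can also be computed exactly by summing over the $n+1$ possible outcomes. A short sketch using only the Python standard library (the helper name `mse_restricted_mle` is mine):

```python
from math import comb

def mse_restricted_mle(theta, n):
    """Exact MSE of min(Xbar, 1/2), summing over the Binomial(n, theta) pmf."""
    mse = 0.0
    for k in range(n + 1):
        p = comb(n, k) * theta ** k * (1 - theta) ** (n - k)  # P(sum = k)
        est = min(k / n, 0.5)          # value of the restricted MLE
        mse += p * (est - theta) ** 2
    return mse

for theta in (0.1, 0.3, 0.5):
    print(f"theta={theta}: exact MSE(ML)={mse_restricted_mle(theta, 10):.5f}  "
          f"theta(1-theta)/n={theta * (1 - theta) / 10:.5f}")
```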