Show that these estimators are consistent

parameter estimation, statistics

I'm working on the following question

Consider $ \Theta\in\mathbb{R} $ and a random sample $ X_1,\cdots,X_N $ of uniformly distributed random variables on $ [\Theta,\Theta+1] $.
Is the estimator $ \hat{\Theta}_N(X_1,\cdots,X_N)=\min_{i=1,\cdots,N}X_i $ consistent for the parameter $ \Theta $?

My lecture notes say the estimator is consistent iff it converges in probability to $ \Theta $. Just looking at the definition of convergence in probability, it makes intuitive sense that it should. However, I don't know how to prove this formally. Could someone give me a hint? Thanks in advance!
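For intuition, here is a minimal simulation sketch (the value $\Theta = 2$ and the grid of sample sizes are arbitrary illustrative choices, not part of the question): the sample minimum creeps down toward $\Theta$ as $N$ grows.

```python
import numpy as np

# Sanity check: theta = 2.0 is an arbitrary illustrative value, not given
# in the question. The sample minimum should approach theta as N grows.
rng = np.random.default_rng(0)
theta = 2.0

for N in (10, 100, 1_000, 10_000):
    x = rng.uniform(theta, theta + 1, size=N)  # X_i ~ U[theta, theta + 1]
    print(f"N = {N:>6}: min X_i = {x.min():.5f}  (theta = {theta})")
```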

edit: Took out a part of the question which solved itself right there and then.

Best Answer

A couple of hints first and then I'll put the solution underneath.

Hint 1: $\left|\hat{\Theta}_N - \Theta\right|$ (the quantity inside the "convergence in probability" definition) equals $\hat{\Theta}_N - \Theta$, since each $X_i$ lies in $[\Theta, \Theta + 1]$ and hence $\hat{\Theta}_N \geq \Theta$.

Hint 2: For any constant $c$, $\mathbb{P}\left(\hat{\Theta}_N > c\right)$ is the probability that the minimum of the $X_i$'s exceeds $c$. The minimum of the $X_i$'s exceeds $c$ if and only if $X_1 > c, X_2 > c, \dots, X_N > c$.

Solution: $\hat{\Theta}_N$ converges in probability to $\Theta$ if, for any $\epsilon > 0$, $$\lim_{N \rightarrow \infty} \mathbb{P}\left(\left|\hat{\Theta}_N - \Theta\right| > \epsilon\right) = 0.$$

By Hint 1 we can drop the absolute value, and by Hint 2, $$\mathbb{P}\left(\hat{\Theta}_N - \Theta > \epsilon\right) = \mathbb{P}\left(\hat{\Theta}_N > \Theta + \epsilon\right) = \mathbb{P}\left(X_1 > \Theta + \epsilon, X_2 > \Theta + \epsilon, \dots, X_N > \Theta + \epsilon \right).$$

Since the samples are independent and identically distributed, $$\mathbb{P}\left(X_1 > \Theta + \epsilon, \dots, X_N > \Theta + \epsilon \right) = \mathbb{P}\left(X_1 > \Theta + \epsilon\right)\mathbb{P}\left(X_2 > \Theta + \epsilon\right) \cdots \mathbb{P}\left(X_N > \Theta + \epsilon \right).$$

We know that $X_i \sim \mathbb{U}[\Theta, \Theta + 1]$, whose density is $1$ on $[\Theta, \Theta + 1]$, so $\mathbb{P}\left(X_i > \Theta + \epsilon \right) = (\Theta + 1) - (\Theta + \epsilon) = 1 - \epsilon$ for $0 < \epsilon < 1$ (the probability is $0$ when $\epsilon \geq 1$).

Hence $$\mathbb{P}\left(\hat{\Theta}_N - \Theta > \epsilon\right) = \begin{cases} \left(1-\epsilon\right)^N & \text{if } \epsilon < 1,\\ 0 & \text{if } \epsilon \geq 1. \end{cases}$$

Since $0 \leq 1 - \epsilon < 1$ when $0 < \epsilon < 1$, taking the limit as $N \rightarrow \infty$ gives $$\lim_{N \rightarrow \infty} \mathbb{P}\left(\left|\hat{\Theta}_N - \Theta\right| > \epsilon\right) = \lim_{N \rightarrow \infty} \begin{cases} \left(1-\epsilon\right)^N & \text{if } \epsilon < 1,\\ 0 & \text{if } \epsilon \geq 1 \end{cases} = 0,$$ so $\hat{\Theta}_N$ is indeed consistent for $\Theta$.
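As a sanity check on the closed form, here is a short Monte Carlo sketch (the choices $\Theta = 0$, $\epsilon = 0.1$, and the grid of $N$ values are arbitrary) comparing the simulated exceedance probability against $(1-\epsilon)^N$:

```python
import numpy as np

# Compare the simulated P(theta_hat_N - theta > eps) with the closed form
# (1 - eps)^N. The values theta = 0.0 and eps = 0.1 are arbitrary choices.
rng = np.random.default_rng(1)
theta, eps, trials = 0.0, 0.1, 100_000

for N in (5, 20, 50):
    # theta_hat is the sample minimum over N draws, repeated `trials` times
    theta_hat = rng.uniform(theta, theta + 1, size=(trials, N)).min(axis=1)
    empirical = (theta_hat - theta > eps).mean()
    exact = (1 - eps) ** N
    print(f"N = {N:>3}: simulated {empirical:.4f} vs exact {exact:.4f}")
```

Both columns shrink geometrically toward $0$ as $N$ grows, matching the limit above.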