# Solved – Can you show that $\bar{X}$ is a consistent estimator for $\lambda$ using Tchebysheff’s inequality

Tags: consistency, poisson-distribution, probability, random-variable, self-study

This question was taken from a practice exam in my statistics course.

Given a random sample $X_1, X_2, \ldots, X_n$ from a Poisson distribution with mean $\lambda$, can you show that $\bar{X}$ is consistent for $\lambda$?

We are told to use Tchebysheff's inequality, which is: $\Pr(|X-\mu| \geq k\sigma) \le \frac{1}{k^2}$

Also, as far as I know, consistency of an estimator is the property that as we increase the sample size $n$, the estimator $\bar{X}$ should return values closer and closer to the actual value we want to estimate.

So the first thing I did was find the variance for $\bar{X}$ as follows:
$Var(\bar{X})=Var\left(\frac{\sum X_i}{n}\right)=\frac{1}{n^2}Var\left(\sum X_i\right)=\frac{n\lambda}{n^2}=\frac{\lambda}{n}$, using independence of the $X_i$ and $Var(X_i)=\lambda$ for a Poisson.
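As a quick numerical sanity check (purely illustrative, not part of the proof; $\lambda = 3$ and the sample sizes are arbitrary choices), a short simulation agrees with $\lambda/n$:

```python
import numpy as np

# Check Var(X-bar) = lambda / n by simulation.
# lam = 3 and the sample sizes are arbitrary illustrative choices.
rng = np.random.default_rng(0)
lam = 3.0

for n in (10, 100, 1000):
    # 20,000 replicated sample means, each from n Poisson(lam) draws
    xbars = rng.poisson(lam, size=(20_000, n)).mean(axis=1)
    print(f"n={n:5d}  empirical Var(X-bar) = {xbars.var():.5f}  lambda/n = {lam / n:.5f}")
```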

I notice that as $n \rightarrow \infty$ the variance decreases to $0$, but how does this help me?

Now I guess we can use Tchebysheff's inequality, where we need $\lim_{n\to\infty}\Pr(|\bar{X}-\lambda| \geq \epsilon) = 0$ for every $\epsilon > 0$, and that is where I get stuck…

Any help is appreciated.

Edit: Since it seems the point didn't get across, I'm going to fill in a few more details; it's been a while, so maybe I can venture a little more.

Step 1: Give a definition of consistency.

Like this one from Wikipedia's Consistent estimator article:

Suppose $\{p_\theta : \theta \in \Theta\}$ is a family of distributions (the parametric model), and $X^\theta = \{X_1, X_2, \ldots : X_i \sim p_\theta\}$ is an infinite sample from the distribution $p_\theta$. Let $\{T_n(X^\theta)\}$ be a sequence of estimators for some parameter $g(\theta)$. Usually $T_n$ will be based on the first $n$ observations of a sample. Then this sequence $\{T_n\}$ is said to be (weakly) consistent if

$$\underset{n\to\infty}{\operatorname{plim}}\;T_n(X^{\theta}) = g(\theta),\ \ \text{for all}\ \theta\in\Theta$$
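For our problem the pieces line up as $T_n(X^\theta) = \bar{X}_n$ (the mean of the first $n$ observations) and $g(\theta) = \lambda$, so the statement to be proved is

$$\underset{n\to\infty}{\operatorname{plim}}\;\bar{X}_n = \lambda.$$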

Step 2: Note (hopefully!) that it relies on convergence in probability, so give a definition for that in turn (Wikipedia article on Convergence of random variables).

A sequence $\{X_n\}$ of random variables converges in probability towards the random variable $X$ if for all $\varepsilon > 0$

$$\lim_{n\to\infty}\Pr\big(|X_n-X| \geq \varepsilon\big) = 0.$$
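To see the definition in action for our problem (a Monte Carlo sketch with $X_n = \bar{X}_n$ and $X = \lambda$, a constant; $\lambda = 3$ and $\varepsilon = 0.1$ are arbitrary illustrative choices), the estimated probability shrinks toward $0$ as $n$ grows:

```python
import numpy as np

# Estimate Pr(|X-bar - lambda| >= eps) for growing n.
# Trick: a sum of n iid Poisson(lam) variables is Poisson(n*lam),
# so X-bar can be drawn directly as Poisson(n*lam) / n.
rng = np.random.default_rng(1)
lam, eps, reps = 3.0, 0.1, 100_000

for n in (10, 100, 1_000, 10_000):
    xbars = rng.poisson(n * lam, size=reps) / n
    p_hat = np.mean(np.abs(xbars - lam) >= eps)
    print(f"n={n:6d}  Pr(|X-bar - lambda| >= {eps}) ~ {p_hat:.4f}")
```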

Step 3: Then write Chebyshev's inequality down:

Let $X$ be an integrable random variable with finite expected value $\mu$ and finite non-zero variance $\sigma^2$. Then for any real number $k > 0$,

$$\Pr(|X-\mu|\geq k\sigma) \leq \frac{1}{k^2}.$$

(Wikipedia article on Chebyshev's inequality)
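(As an aside, the inequality is easy to check empirically; here is a sketch for a single Poisson variable, with $\lambda = 3$ and the $k$ values chosen arbitrarily:)

```python
import numpy as np

# Empirical check of Chebyshev: Pr(|X - mu| >= k*sigma) <= 1/k^2.
# For a Poisson(lam) variable, mean = variance = lam.
rng = np.random.default_rng(2)
lam = 3.0
x = rng.poisson(lam, size=1_000_000)
mu, sigma = lam, np.sqrt(lam)

for k in (1.5, 2.0, 3.0):
    p_hat = np.mean(np.abs(x - mu) >= k * sigma)
    print(f"k={k}:  Pr(|X - mu| >= k*sigma) ~ {p_hat:.4f}  <=  1/k^2 = {1 / k**2:.4f}")
```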

Step 4: Now look at the rather strong similarity between the two expressions in steps 2 and 3.
Does that not give you a huge clue about a way to approach this?

So let's start.

From Chebyshev:

$$\Pr\big(|\bar{X}-\lambda| \geq k\sqrt{\lambda/n}\big) \le \frac{1}{k^2}$$ (you worked out all the parts of this but never actually wrote it down in your question; note that the $\sigma$ in Chebyshev's inequality is the standard deviation of $\bar{X}$, namely $\sqrt{\lambda/n}$, not its variance $\lambda/n$)

Let $\epsilon=k\sqrt{\lambda/n}$ (that's the step suggested by the "huge clue" in step 4: one expression had an $\epsilon$ where the other had a $k\sqrt{\lambda/n}$, so if you write both down and compare them, the substitution is obvious).

So now, the only thing that's really left is to show that for any fixed $\epsilon > 0$, the RHS goes to $0$ as $n\to\infty$.
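(Spelling that last step out, for completeness: holding $\epsilon$ fixed and solving $\epsilon = k\sqrt{\lambda/n}$ for $k$ gives $k = \epsilon\sqrt{n/\lambda}$, so)

$$\Pr\big(|\bar{X}-\lambda| \geq \epsilon\big) \leq \frac{1}{k^2} = \frac{\lambda}{n\epsilon^2} \to 0 \quad \text{as } n\to\infty,$$

which is exactly the convergence-in-probability statement from step 2 with $g(\theta) = \lambda$; that is, $\bar{X}$ is (weakly) consistent for $\lambda$.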