Method of moments estimator and Delta Method

delta-method, probability, probability distributions, probability theory

I’ve spent a couple of days trying to find a satisfactory solution to this exercise, but I’m stuck. From the way the exercise is phrased, I think it wants me to use the method of moments, and I also think it is a Delta Method exercise.

The statement

I am given the following discrete distribution with $\theta>0$

$$p(x) = \left(\frac{\theta}{1+\theta}\right) ^{2-x}\left(\frac{1}{1+\theta}\right)^{x-1} \hspace{1cm} x=1,2$$

I need to calculate an estimator of $\theta$ (call it $T_n$) using a sample $x_1,x_2,…,x_n$, deduce its distribution and calculate a confidence interval for $\theta$.

What I did

If $X_1,\dots,X_n$ are $n$ i.i.d. copies of $X$, the observed sample is $x_1=X_1(\omega),\dots, x_n=X_n(\omega)$.

Using the method of moments, we equate the sample mean to the expectation

$$\overline X_n = E[X] = 1+\frac{1}{1+\theta}=\mu$$

and define the estimator of $\theta$

$$T_n=\frac{1}{\overline X_n-1}-1$$

supposing $n$ is big enough that $\overline X_n \neq 1$, i.e. that not every observation equals $1$.

I also calculated the variance of X: $Var(X)=\frac{\theta}{(1+\theta)^2}=\sigma^2$
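As a quick sanity check (a minimal simulation sketch, not part of the exercise; the helper names `sample_x` and `t_n` are mine), the estimator can be computed on simulated data:

```python
import random

def sample_x(theta, n, rng):
    # X takes value 1 with probability theta/(1+theta)
    # and value 2 with probability 1/(1+theta)
    p2 = 1.0 / (1.0 + theta)
    return [2 if rng.random() < p2 else 1 for _ in range(n)]

def t_n(xs):
    # Method-of-moments estimator: T_n = 1/(xbar - 1) - 1
    xbar = sum(xs) / len(xs)
    return 1.0 / (xbar - 1.0) - 1.0

# With theta = 2 and a large sample, T_n should land near 2
xs = sample_x(2.0, 100_000, random.Random(0))
estimate = t_n(xs)
```

For $\theta = 2$ and $n = 10^5$, the asymptotic standard deviation of $T_n$ is $\sqrt{\theta(1+\theta)^2/n} \approx 0.013$, so the estimate should fall within a few hundredths of $2$.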

By the Central Limit Theorem

$$\sqrt{n}\,(\overline X_n-\mu) \rightarrow N(0,\sigma^2)$$

We can apply the Delta Method with the function $g(t)=\frac{1}{t-1}-1$ to get

$$\sqrt{n}\,(g(\overline X_n)-g(\mu)) \rightarrow N(0,\sigma^2g’(\mu)^2)$$

i.e.

$$\sqrt{n}\,(T_n-\theta) \rightarrow N(0,\theta(1+\theta)^2)$$

My Problem

Now I need to determine the distribution of $T_n$, but I don’t know the value of $\theta$. I could approximate $\theta$ by $T_n$ and use Slutsky’s theorem to conclude that

$$\sqrt{n}\,\frac{(T_n-\theta)}{\sqrt{T_n}(T_n+1)} \rightarrow N(0,1)$$

But again I don’t know how to obtain the distribution of $T_n$.
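(For what it’s worth, the pivotal statement above can already be inverted into an approximate interval, $T_n \pm z_{\alpha/2}\,\sqrt{T_n}\,(T_n+1)/\sqrt{n}$; a minimal sketch under that approximation, with the helper name `delta_ci` being mine:)

```python
import math

def delta_ci(t, n, z=1.96):
    # Approximate 95% CI from the Slutsky-studentized statistic:
    # T_n +/- z * sqrt(T_n) * (T_n + 1) / sqrt(n)
    se = math.sqrt(t) * (t + 1.0) / math.sqrt(n)
    return (t - z * se, t + z * se)

lo, hi = delta_ci(1.0, 400)  # e.g. T_n = 1.0 observed on a sample of size 400
```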

Best Answer

Your pmf can be rewritten in the following way

$$P(X=x)=\frac{1}{\theta+1}\,\theta^{2-x}, \qquad x=1,2,$$

that is,

$$P(X=x) = \begin{cases} \frac{\theta}{\theta+1}, & \text{if $x=1$} \\ \frac{1}{\theta+1}, & \text{if $x=2$} \end{cases}$$

Now we can transform it into $Y = X - 1$, with

$$P(Y=y) = \begin{cases} \frac{\theta}{\theta+1}, & \text{if $y=0$} \\ \frac{1}{\theta+1}, & \text{if $y=1$} \end{cases}$$

That is, setting $p=\frac{1}{\theta+1}$,

$$P(Y=y) = \begin{cases} 1-p, & \text{if $y=0$} \\ p, & \text{if $y=1$} \end{cases}$$

Concluding... $Y$ is Bernoulli, $Y \sim B(p)$.

A CSS (complete and sufficient statistic) for $p$ is

$$\Sigma_i Y_i=(\Sigma_i X_i) -n$$

and $\hat p = \frac{1}{n}\Sigma_i Y_i$ is a nice estimator for $p$. Now you can calculate the confidence interval for $p$ as an approximate interval using the CLT (if $n$ is large enough), or as an exact confidence interval using the binomial distribution directly and a binomial table (or any calculator).
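The CLT-based interval for $p$ mentioned above can be sketched as follows (a Wald-type interval; the helper name `wald_ci_p` is mine):

```python
import math

def wald_ci_p(y_sum, n, z=1.96):
    # Approximate CI for a Bernoulli p from the CLT (Wald interval):
    # p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)
    p_hat = y_sum / n
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

lo, hi = wald_ci_p(50, 100)  # e.g. sum(Y_i) = 50 successes out of n = 100
```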

Here I calculated a similar exact CI with Excel (both for the Poisson and the Bernoulli distribution, so I think it can be useful for you).


  1. Confidence interval for $\theta$

$\frac{1}{\theta+1}=p$ is a monotonic (decreasing) function of $\theta$, thus given the CI for $p$ you can easily derive the CI for $\theta$ as well; just remember the endpoints swap, since the map is decreasing.
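Concretely, since $\theta = \frac{1}{p} - 1$ is decreasing in $p$, the interval endpoints swap; a one-line sketch (helper name mine):

```python
def theta_ci_from_p_ci(p_lo, p_hi):
    # theta = 1/p - 1 is decreasing in p, so the CI endpoints swap:
    # [p_lo, p_hi] -> [1/p_hi - 1, 1/p_lo - 1]
    return (1.0 / p_hi - 1.0, 1.0 / p_lo - 1.0)
```

For example, a $p$-interval of $[0.25, 0.5]$ maps to the $\theta$-interval $[1, 3]$.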


  2. Distribution of $\hat{\theta}$

$\Sigma_i Y_i$ is binomially distributed, thus the distribution of $\hat{\theta} = \frac{n}{\Sigma_i Y_i} - 1$ is also binomial... only with a modified support.
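To make the “modified support” explicit: writing $S=\Sigma_i Y_i \sim \mathrm{Bin}(n,p)$, the estimator $\hat\theta = \frac{n}{S}-1$ simply carries the binomial probabilities onto the points $\frac{n}{s}-1$ for $s \ge 1$ ($\hat\theta$ is undefined at $S=0$). A minimal sketch (helper name mine):

```python
import math

def theta_hat_pmf(n, p):
    # S = sum(Y_i) ~ Binomial(n, p); theta_hat = n/S - 1 is defined for S >= 1,
    # so theta_hat puts the binomial probabilities on the points n/s - 1
    return {n / s - 1.0: math.comb(n, s) * p**s * (1.0 - p) ** (n - s)
            for s in range(1, n + 1)}

pmf = theta_hat_pmf(2, 0.5)  # support {1.0, 0.0}; total mass 0.75 (S = 0 excluded)
```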


Note that using this method of estimating $\theta$, i.e. plugging $\hat p$ into $\theta = \frac{1}{p}-1$, you derive the same estimator you found with the method of moments: $\hat\theta = \frac{n}{\Sigma_i Y_i} - 1 = \frac{1}{\overline X_n - 1} - 1 = T_n$.
