Show that this statistic is complete

parameter-estimation, probability, probability-theory, random-variables, statistics

Suppose that $(S, \{f_\theta : \theta \in \Theta\})$ is a statistical model, corresponding to an observed random
vector $\mathbf X = (X_1, \dots, X_n)$.

Let $\theta_1(\mathbf X)$ and $\theta_2(\mathbf X)$ be unbiased estimators for $\theta$. Define $\theta_3(\mathbf X) = a\,\theta_1(\mathbf X) + (1-a)\,\theta_2(\mathbf X)$.

I have deduced that $\theta_3(\mathbf X)$ is an unbiased estimator of $\theta$:

$$\Bbb E[\theta_3(\mathbf X)] = a\,\Bbb E[\theta_1(\mathbf X)] + \Bbb E[\theta_2(\mathbf X)] - a\,\Bbb E[\theta_2(\mathbf X)] = a\theta + \theta - a\theta = \theta.$$

Hence $\Bbb E[\theta_3(\mathbf X)] = \theta$, so $\theta_3(\mathbf X)$ is an unbiased estimator for $\theta$.

Now I have the statistic $T(\mathbf X)= (\theta_1(\mathbf X), \theta_2(\mathbf X))$. I'm trying to show whether it's a complete statistic or not,
I tried to show that $T(\mathbf X)$ is a complete statistic using the following definition:

Suppose that $T = T(\mathbf X)$ is a statistic taking values in a set $\mathcal T$. Then $T$ is a complete statistic for $\theta$ if for any function $g: \mathcal T \rightarrow \Bbb R$:

$\Bbb E_{\theta}[g(T)] = 0$ for all $\theta \in \Theta \Rightarrow \Bbb P_{\theta}[g(T)=0]=1$ for all $\theta \in \Theta$.

From this point I'm stuck. Should I use the fact that $\theta_3(\mathbf X)$ is unbiased? I have a feeling that constructing a $g$ for $T(\mathbf X)$ from the unbiasedness of $\theta_3(\mathbf X)$ would help.

Any help is appreciated

Best Answer

Consider $g(x,y)=x-y$. Then $$\mathbb E_\theta[g(\theta_1(\mathbf X),\theta_2(\mathbf X))]=\mathbb E_\theta[\theta_1(\mathbf X)]-\mathbb E_\theta[\theta_2(\mathbf X)]=\theta-\theta=0$$ for every $\theta$, yet $$\mathbb P_\theta\big(g(\theta_1(\mathbf X),\theta_2(\mathbf X))=0\big)\neq 1,$$ since the two estimators do not coincide almost surely. Hence $T(\mathbf X)=(\theta_1(\mathbf X),\theta_2(\mathbf X))$ is not complete, by definition.
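To make this concrete, here is one illustrative instance (my own choice of model and estimators, not taken from the question): i.i.d. normal observations, with the sample mean and the first observation as the two unbiased estimators.

```latex
% Assumed example: X_1, ..., X_n i.i.d. N(theta, 1), n >= 2, with
%   theta_1(X) = \bar X_n  (sample mean)
%   theta_2(X) = X_1       (first observation).
% Both are unbiased for theta, and with g(x, y) = x - y:
\[
  \mathbb E_\theta\!\left[g\big(\bar X_n, X_1\big)\right]
    = \mathbb E_\theta[\bar X_n] - \mathbb E_\theta[X_1]
    = \theta - \theta = 0
    \quad \text{for all } \theta \in \Theta,
\]
% but \bar X_n - X_1 is N(0, (n-1)/n), a non-degenerate
% continuous distribution, so
\[
  \mathbb P_\theta\big(\bar X_n - X_1 = 0\big) = 0 \neq 1 .
\]
% Hence T(X) = (\bar X_n, X_1) is not complete.
```

The same argument works whenever $\theta_1(\mathbf X) \neq \theta_2(\mathbf X)$ with positive probability; only the exact distribution of the difference changes.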