Let $\hat{\theta}_n= -\frac{n}{\sum_{i=1}^n \log(X_i)}$, where the $X_i$ are i.i.d. samples from a distribution with pdf $\theta x^{\theta-1}$ for $x \in (0,1)$. How can I prove that $\hat{\theta}_n$ is a consistent estimator of $\theta$?
Statistics – How to Show Estimator Consistency
Tags: consistency, convergence, self-study
Related Solutions
Suppose $X_1, X_2, \cdots X_n \stackrel{iid}{\sim} N(\mu, 1)$ and our goal is to estimate $\mu$. Consider the estimator $$\hat\mu_n = \begin{cases} \bar X_n, & \text{with probability $\frac{n-1}{n}$} \\[1.2ex] n, & \text{with probability $\frac{1}{n}$} \end{cases}$$
By introducing $Y_n \sim Bern\left(\frac{n-1}{n}\right)$ with $Y_i \perp \!\!\! \perp Y_j$ (for all $i\neq j$) and $Y_i \perp \!\!\! \perp X_j$ (for all $i$ and $j$) we can rewrite this estimator as $$\hat\mu_n = \bar X_n Y_n + n(1-Y_n) .$$
This estimator is consistent (in probability).
To see this, consider \begin{align*} P(|\hat\mu_n - \mu| < \epsilon) &= P(|\bar X_n - \mu| < \epsilon)\frac{n-1}{n} + P(|n-\mu|<\epsilon)\frac{1}{n} \\[1.3ex] &=\frac{n-1}{n}\left\{\Phi\left(\sqrt n \epsilon \right)- \Phi\left(-\sqrt n \epsilon \right) \right\} + P(|n-\mu|<\epsilon)\frac{1}{n} \end{align*}
Taking the limit gives $$\lim_{n\rightarrow\infty} P(|\hat\mu_n - \mu| < \epsilon) = 1,$$ and thus $\hat\mu_n \stackrel{p}{\rightarrow} \mu$.
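As a sanity check, here is a minimal simulation sketch (Python/NumPy, not part of the original answer) that estimates $P(|\hat\mu_n - \mu| < \epsilon)$ by Monte Carlo; the values of $\mu$, $\epsilon$, the number of replications, and the sample sizes are arbitrary choices, and $\bar X_n$ is drawn directly from its exact $N(\mu, 1/n)$ distribution.

```python
# Illustrative sketch only: mu, eps, reps, and the sample sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
mu, eps, reps = 2.0, 0.1, 100_000

for n in (10, 100, 1_000, 10_000):
    xbar = rng.normal(mu, 1.0 / np.sqrt(n), size=reps)  # Xbar_n ~ N(mu, 1/n) exactly
    y = rng.random(reps) < (n - 1) / n                   # Y_n = 1 with probability (n-1)/n
    muhat = np.where(y, xbar, n)                         # the mixture estimator
    print(n, np.mean(np.abs(muhat - mu) < eps))          # estimate of P(|muhat_n - mu| < eps)
```

The printed probabilities approach 1 as $n$ grows, in line with the limit above.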
This estimator is not MSE-consistent
We start by finding the bias.
\begin{align*} E(\hat\mu_n) &= E(\bar X_n Y_n + n(1-Y_n)) \\[1.2ex] &= E(\bar X_n)E(Y_n) + nE(1-Y_n) \\[1.2ex] &= \mu\frac{n-1}{n} + n\frac{1}{n} \\[1.2ex] &= \frac{n-1}{n}\mu + 1 \end{align*}
Therefore $B_\mu(\hat\mu_n) = E(\hat\mu_n) - \mu = 1 - \mu/n$, which converges to $1$, not $0$, as $n \rightarrow \infty$. Since $MSE(\hat\mu_n) = Var(\hat\mu_n) + B_\mu(\hat\mu_n)^2$ and the variance is non-negative, this is enough to conclude that $MSE(\hat\mu_n) \not\rightarrow 0$, and therefore the estimator is not MSE-consistent.
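For contrast, a rough Monte Carlo sketch of the MSE under the same assumed setup as the snippet above; the estimate is dominated by the rare outcome $\hat\mu_n = n$ and grows roughly like $n$ rather than shrinking to $0$.

```python
# Illustrative sketch only: mu, reps, and the sample sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
mu, reps = 2.0, 200_000

for n in (10, 100, 1_000):
    xbar = rng.normal(mu, 1.0 / np.sqrt(n), size=reps)  # Xbar_n ~ N(mu, 1/n)
    y = rng.random(reps) < (n - 1) / n                   # Y_n = 1 with probability (n-1)/n
    muhat = np.where(y, xbar, n)
    print(n, np.mean((muhat - mu) ** 2))                 # Monte Carlo estimate of MSE(muhat_n)
```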
You generally use the law of large numbers (https://en.wikipedia.org/wiki/Law_of_large_numbers) to prove consistency. The LLN gives convergence in probability, and if you can use it to show convergence in probability to the target ($\theta$ here), then you're essentially done.
Here, since the $X_i \sim \mathcal{N}(0,\theta)$ are i.i.d., the law of large numbers applies. In particular, it tells you $$ \frac{1}{n}\sum_{i=1}^{n} X_i \overset{P}{\rightarrow} \mathbb{E}(X)=0, $$ and also $$ \frac{1}{n}\sum_{i=1}^{n} X_i^2 \overset{P}{\rightarrow} \mathbb{E}(X^2)=\theta. $$ This is almost exactly what you want. You just need to take care of the other terms in the Bayes estimator as $n\rightarrow \infty$.
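If it helps, a small numerical check of these two limits (again an illustrative sketch, with $\theta = 2$ chosen arbitrarily):

```python
# Illustrative sketch only: theta and the sample sizes are arbitrary choices.
import numpy as np

rng = np.random.default_rng(2)
theta = 2.0  # variance of the normal

for n in (100, 10_000, 1_000_000):
    x = rng.normal(0.0, np.sqrt(theta), size=n)
    print(n, x.mean(), np.mean(x ** 2))  # first value -> 0, second -> theta
```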
Best Answer
Probability limits pass through continuous functions (here $t \mapsto 1/t$, away from zero), so
$$\text{plim}\left (\frac {1}{\hat \theta}\right) = \frac {1}{\text{plim}\hat \theta}$$
Also, $$\hat{\theta}_n= -\frac{n}{\sum_{i=1}^n \log(X_i)} \implies \frac {1}{\hat \theta} = -\frac 1n\sum_{i=1}^n \log(X_i) $$
The sample is i.i.d. (and hence ergodic), so the law of large numbers gives
$$\text{plim}\left (\frac {1}{\hat \theta}\right) = \text{plim} \left(-\frac 1n\sum_{i=1}^n \log(X_i)\right) = E\left[-\log(X_i)\right]$$
So you just have to find the distribution of $Y = -\log(X_i)$ as suggested in a comment, and calculate its expected value.
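For a quick empirical sanity check of the conclusion (not a substitute for the derivation), one can simulate the estimator directly; the draws use inverse-CDF sampling $X = U^{1/\theta}$, since the cdf of the density $\theta x^{\theta-1}$ on $(0,1)$ is $x^\theta$, and $\theta = 3$ and the sample sizes are arbitrary choices.

```python
# Illustrative sketch only: theta and the sample sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
theta = 3.0

for n in (10, 100, 10_000, 1_000_000):
    x = rng.random(n) ** (1.0 / theta)   # X = U^(1/theta) has cdf x^theta on (0,1)
    theta_hat = -n / np.sum(np.log(x))   # the estimator from the question
    print(n, theta_hat)                  # should settle near theta
```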