[Math] Find the maximum likelihood estimator for the Pareto distribution and an unbiased estimator

estimation-theory, order-statistics, parameter-estimation, statistical-inference, statistics

Let $X_1, \ldots, X_n$ be a random sample from the Pareto distribution with parameters $\alpha$ and $\theta$, where $\alpha$ is known.

Find the maximum likelihood estimator for $\theta$ and determine whether it is unbiased; if it is not, find an unbiased estimator.

My Approach:

$$f(x;\alpha, \theta) = \alpha \theta^\alpha x^{-(\alpha +1)},\quad x \ge \beta$$

$$L(\theta) = \alpha^n \theta^{\alpha n} \left(\prod_{i=1}^n x_i\right)^{-(\alpha+1)}$$

Taking log for $L(\alpha)$ gives

$$\ln L(\theta) = n \ln(\alpha) + \alpha n \ln(\theta) + \sum_{i=1}^n -(\alpha+1) \ln(x_i)$$

Then, since $\ln L(\theta)$ is increasing in $\theta$, and for a Pareto distribution we have $\theta \le x_i$ for every $i$, we conclude that the maximum likelihood estimator is $\hat\theta = \min_i x_i$ (the first order statistic).
Am I right?

Then to prove that it is an unbiased statistic we have to prove that $E(\hat{\theta}) = \theta$. I do not know how to do it; I thought of using the p.d.f. of the first order statistic and integrating from $\theta$ to infinity, but I'm not sure about this.
Any ideas?

Best Answer

You've got some notation errors and the work is a bit sloppy, but it is essentially the correct idea. You should have written $$f(x; \alpha, \theta) = \alpha \theta^\alpha x^{-(\alpha+1)}, \quad x \ge \color{red}{\theta},$$ and $$\ell(\theta) = \log \mathcal L(\theta) = n \log \alpha + \alpha n \log \theta - (\alpha + 1) \sum_{i=1}^n \log x_i.$$ In fact, I would have dispensed with this altogether and noted that when $\alpha$ is known, the likelihood is proportional to $$\mathcal L(\theta) \propto \theta^{n\alpha} \mathbb 1(x_{(1)} \ge \theta),$$ hence for $\alpha > 0$, $\mathcal L$ is monotone increasing on the interval $\theta \in (0, x_{(1)}]$ and the MLE is $\hat\theta = x_{(1)}$. No need to take log-likelihoods.
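If it helps to see this numerically, here is a minimal sketch (NumPy assumed, with illustrative values $\alpha = 3$, $\theta = 2$, $n = 50$ that are not from the question) that evaluates the log-likelihood on a grid and confirms it is largest at the sample minimum:

```python
import numpy as np

# Illustrative (assumed) parameter values, not from the question
alpha, theta, n = 3.0, 2.0, 50
rng = np.random.default_rng(0)

# numpy's pareto draws from the Lomax (Pareto II) distribution on [0, inf);
# shifting by 1 and scaling by theta gives the classical Pareto(alpha, theta)
x = (rng.pareto(alpha, size=n) + 1.0) * theta

def log_lik(t, x, alpha):
    """Log-likelihood of theta = t; -inf once t exceeds the sample minimum."""
    if t > x.min():
        return -np.inf
    return len(x) * np.log(alpha) + alpha * len(x) * np.log(t) - (alpha + 1) * np.log(x).sum()

# The log-likelihood increases in t on (0, x.min()], so the maximizer is the sample minimum
grid = np.linspace(0.5 * theta, x.min(), 200)
best = grid[np.argmax([log_lik(t, x, alpha) for t in grid])]
print(best, x.min())  # the two printed values coincide
```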

$\hat \theta = x_{(1)}$ is necessarily biased because $\Pr[X_{(1)} > \theta] > 0$ but $\Pr[X_{(1)} < \theta] = 0$. That is to say, the sample minimum can never be less than $\theta$, whereas being greater than it is certainly possible; so the expected value of the sample minimum can never equal $\theta$.
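A quick Monte Carlo check (again with assumed illustrative values) makes the bias visible:

```python
import numpy as np

# Assumed illustrative values; any alpha, theta > 0 shows the same effect
alpha, theta, n, reps = 3.0, 2.0, 10, 100_000
rng = np.random.default_rng(1)

# reps independent samples of size n; the row-wise minimum is the MLE in each replication
samples = (rng.pareto(alpha, size=(reps, n)) + 1.0) * theta
mle_values = samples.min(axis=1)

print(mle_values.mean())  # noticeably larger than theta = 2.0, never smaller
```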

Formally, though, you would need to compute $\operatorname{E}[X_{(1)}]$ by first computing the probability density of the first order statistic. This in turn can be found by considering $$\Pr[X_{(1)} > x] = \Pr[(X_1 > x) \cap (X_2 > x) \cap \ldots \cap (X_n > x)] = ?$$
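In case it is useful, here is a sketch of how that computation finishes, using the independence of the $X_i$ and the Pareto survival function $\Pr[X_i > x] = (\theta/x)^\alpha$ for $x \ge \theta$:
$$\Pr[X_{(1)} > x] = \prod_{i=1}^n \Pr[X_i > x] = \left(\frac{\theta}{x}\right)^{n\alpha}, \quad x \ge \theta,$$
so $X_{(1)}$ is itself Pareto with parameters $n\alpha$ and $\theta$, i.e. $f_{X_{(1)}}(x) = n\alpha\, \theta^{n\alpha} x^{-(n\alpha+1)}$ for $x \ge \theta$. Hence, provided $n\alpha > 1$,
$$\operatorname{E}[X_{(1)}] = \int_\theta^\infty x \cdot n\alpha\, \theta^{n\alpha} x^{-(n\alpha+1)} \, dx = \frac{n\alpha}{n\alpha - 1}\,\theta > \theta,$$
which confirms the bias, and rescaling gives the unbiased estimator
$$\tilde\theta = \left(1 - \frac{1}{n\alpha}\right) X_{(1)}, \qquad \operatorname{E}[\tilde\theta] = \theta.$$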
