*"What am I doing wrong?"* Nothing here. Your approach is correct: $\hat{\theta}_1$ is an unbiased estimator, for the reason you give (basically, it has to do with linearity of expectation). When $X_1,\dots,X_n$ are identically distributed and have finite expectation $\mu$,
$$
\mathbb{E}\!\left[\frac{1}{n}\sum_{k=1}^n X_k\right] = \frac{1}{n}\sum_{k=1}^n \mathbb{E}[X_k] = \frac{1}{n}\sum_{k=1}^n \mu = \mu
$$

For the same reason, $\hat{\theta}_2$ is unbiased as well:
$$
\mathbb{E}\!\left[\hat{\theta}_2\right] = \frac{2\mathbb{E}[X_1] - \mathbb{E}[X_6]+\mathbb{E}[X_4]}{2} = \frac{2\mu - \mu+\mu}{2} = \mu\ .
$$

Note that while both are unbiased estimators, the first one is better, as it has smaller variance: if the $X_i$ are independent with variance $\sigma^2$, then $\operatorname{Var}(\hat{\theta}_1) = \sigma^2/n$, which shrinks as $n$ grows, whereas $\operatorname{Var}(\hat{\theta}_2) = \frac{4\sigma^2 + \sigma^2 + \sigma^2}{4} = \frac{3\sigma^2}{2}$, no matter how many samples you have. That is, while both expectations are the desired value $\mu$, the second estimator will be much less accurate (more subject to random fluctuations).
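To make the variance comparison concrete, here is a quick simulation sketch; the normal distribution with $\mu = 5$, $\sigma = 2$, and sample size $n = 10$ are my illustrative choices, not anything from the question:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 10, 200_000

# reps independent samples of size n, each X_ij ~ N(mu, sigma^2)
X = rng.normal(mu, sigma, size=(reps, n))

theta1 = X.mean(axis=1)                          # sample mean
theta2 = (2 * X[:, 0] - X[:, 5] + X[:, 3]) / 2   # (2 X_1 - X_6 + X_4) / 2

# Both averages land near mu = 5 (unbiasedness), but the spreads differ:
# Var(theta1) = sigma^2 / n = 0.4, while Var(theta2) = 6 sigma^2 / 4 = 6.0
print(theta1.mean(), theta1.var())
print(theta2.mean(), theta2.var())
```

Taking more samples shrinks $\operatorname{Var}(\hat{\theta}_1)$ further, while $\hat{\theta}_2$ ignores all but three of them.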

-- Attempt at addressing the larger question: assume the goal is to estimate a quantity $\theta$ of the distribution, assuming you have access to identically distributed (and, most of the time, assumed to be independent) r.v.s/samples from it, $X_1,X_2,\dots$. You basically want to come up with a nice function $f$ or sequence of functions $(f_n)$ such that either

- $f(X_1,\dots, X_k)$ is with high probability close to $\theta$, or
- $f_n(X_1,\dots,X_n)$ converges (when $n$ grows) towards $\theta$, in some specific sense.

Here, requiring an unbiased estimator means you also want $\mathbb{E}[f_n(X_1,\dots, X_n)] = \theta$ (for all $n$), where the expectation is taken over the realizations of $X_1,X_2,\dots$. Basically, *your estimator is itself a random variable*, which depends on the $X_i$'s (being a function of them); as such, you can compute its expectation, variance, etc.

Now, for the case of estimating the expected value, things can seem a bit circular, since your estimator is basically a linear function of the $X_i$'s, and computing its expected value amounts to computing their expected value, which is what you are trying to estimate. But it makes sense nonetheless (looking too simple to be true doesn't make it false); for instance, one could come up with other estimators of the expected value, say
$$
Y = \ln \frac{e^{X_1}+e^{X_2}+\dots+e^{X_n}}{n}.
$$
Computing the expected value of this new random variable is not as straightforward; yet it, too, eventually boils down to using the fact that the expectation of each $X_i$ is $\mu$ (along, perhaps, with other assumptions such as independence of the $X_i$'s).
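Indeed, a quick simulation sketch shows this $Y$ is *not* unbiased; standard normal samples (so $\mu = 0$) and $n = 5$ are my illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 5, 100_000

# X_i ~ N(0, 1), so the quantity being estimated is mu = 0
X = rng.standard_normal(size=(reps, n))

# Y = ln((e^{X_1} + ... + e^{X_n}) / n)
Y = np.log(np.exp(X).mean(axis=1))

# The average of Y sits clearly above 0: this estimator of mu is biased
print(Y.mean())
```

This matches Jensen's inequality: by convexity of $e^x$, $\frac{1}{n}\sum_i e^{X_i} \ge e^{\bar{X}}$, so $Y \ge \bar{X}$ pointwise, and hence $\mathbb{E}[Y] \ge \mu$, with strict inequality for non-degenerate $X_i$.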

Your result is correct

Letting $Y=\hat{\theta}_n=\max (X_i) -1$, we have

$$P(Y \le y) = P(\max (X_i) \le y +1)=\prod P(X_i \le y+1) = (y +1 - \theta)^n $$

Hence $$f_Y(y) = n (y+1-\theta)^{n-1}, \qquad \theta-1\le y \le \theta$$

And $$E(Y)=\int_{\theta-1}^{\theta} y\, n (y+1-\theta)^{n-1}\, dy = \theta - \frac{1}{n+1}$$

Hence the estimator is biased (but also asymptotically unbiased)

(Both results, and the sign of the bias, are intuitively clear: for one thing, note that always $\max (X_i) < \theta+1$, hence always $\hat{\theta}_n < \theta$. Also, it's easy to see that for large $n$ the maximum will be very near $\theta+1$, so we should expect $E(\hat{\theta}_n) \to \theta$.)

To make it unbiased, you can try some linear transformation $Z=a(Y+b)$
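Carrying that out: since $E(Y) = \theta - \frac{1}{n+1}$, the choice $a = 1$, $b = \frac{1}{n+1}$, i.e. $Z = Y + \frac{1}{n+1}$, gives $E(Z) = \theta$. A simulation sketch, with $X_i \sim \text{Uniform}(\theta, \theta+1)$ as the CDF above implies ($\theta = 3$ and $n = 10$ are my illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 3.0, 10, 200_000

# X_i ~ Uniform(theta, theta + 1), matching the CDF (y + 1 - theta)^n
X = rng.uniform(theta, theta + 1, size=(reps, n))

Y = X.max(axis=1) - 1    # biased: E(Y) = theta - 1/(n+1)
Z = Y + 1 / (n + 1)      # bias-corrected: E(Z) = theta

print(Y.mean())  # close to theta - 1/(n+1) = 3 - 1/11
print(Z.mean())  # close to theta = 3
```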

## Best Answer

Here is one way to do it. First, note that a sum of iid exponential random variables is a gamma random variable. More precisely, if $X_1, \ldots, X_n \sim \textrm{Exp}(\lambda)$ (rate $\lambda$), then, in the shape–scale parameterization, $$\sum_{i=1}^n X_i \sim \textrm{Gamma}(n, 1/\lambda)$$ Setting $Y := \sum_{i=1}^n X_i$ for a moment, this means that the PDF of $Y$ is: $$ f_Y(y) = \frac{1}{\Gamma(n)(1/\lambda)^n} y^{n-1} e^{-y/(1/\lambda)} $$

Then the expected value of $1/Y$ is the following integral: $$ E(1/Y) = \int_0^{\infty} \frac{1}{\Gamma(n)(1/\lambda)^n} y^{n-1} e^{-y/(1/\lambda)} \cdot \frac{1}{y} \; dy$$ Doing a little algebra here and using the fact that $\Gamma(n) = (n-1)\,\Gamma(n-1)$, you can rewrite this as $$ E(1/Y) = \frac{\lambda}{n-1} \cdot \int_0^{\infty} \frac{1}{\Gamma(n-1)(1/\lambda)^{n-1}} y^{(n-1)-1} e^{-y/(1/\lambda)}\; dy$$

Now the trick is to recognize that the integrand is just the PDF of a $\textrm{Gamma}(n-1, 1/\lambda)$ random variable, so it integrates to one, and therefore you're just left with $E(1/Y) = \lambda/(n-1)$. Now you can do the rest.
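A Monte Carlo sketch confirms the result (the values $\lambda = 2$, $n = 8$ are my illustrative choices; note that NumPy's `exponential` is parameterized by the scale $1/\lambda$, not the rate):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, n, reps = 2.0, 8, 500_000

# X_i ~ Exp(lam), i.e. rate lam, scale 1/lam
X = rng.exponential(scale=1 / lam, size=(reps, n))

Y = X.sum(axis=1)      # Y ~ Gamma(n, scale = 1/lam)
est = (1 / Y).mean()   # Monte Carlo estimate of E(1/Y)

print(est, lam / (n - 1))  # both close to 2/7
```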