Recall that $$I(\theta)=-\mathbb{E}\left[\frac{\partial^2}{\partial \theta^2}l(X\,| \,\theta)\right]\,$$
under certain regularity conditions (which apply here), where $I$ is the Fisher information and $l$ is the log-likelihood function of $X$. The log-likelihood function in this case is given by $$\begin{align} l(X\,|\,\theta) &=\log f(X\,|\,\theta) \\&=\log\left(\frac{1}{2\theta}\exp\left(-\frac{|X|}{\theta}\right)\right) \\ &= -\frac{|X|}{\theta} - \log(2\theta)\,\,. \end{align}$$
It follows that $$\frac{\partial}{\partial \theta}l(X \,|\,\theta) = \frac{|X|}{\theta^2}-\frac{1}{\theta} \implies \frac{\partial^2}{\partial \theta^2}l(X \,|\,\theta) = -\frac{2|X|}{\theta^3}+\frac{1}{\theta^2}\,.$$
So, we have
$$I(\theta)=-\mathbb{E}\left[-\frac{2|X|}{\theta^3}+\frac{1}{\theta^2}\right]=\mathbb{E}\left[\frac{2|X|}{\theta^3}-\frac{1}{\theta^2}\right]=\frac{2}{\theta^3}\mathbb{E}(\,|X|\,)-\frac{1}{\theta^2}\,.$$
It remains to compute the expectation of $|X|$. To this end, I will set up the integral. By definition of expected value for transformations of continuous random variables, we have
$$\mathbb{E}(\,|X|\,)=\int_{-\infty}^{\infty}|x|\,f(x \,|\, \theta)\,\text{d}x=\int_{-\infty}^{\infty}\frac{|x|}{2\theta}\exp\left(-\frac{|x|}{\theta}\right)\,\text{d}x = \theta\,.$$
Note: to compute the integral, exploit the fact that the integrand is even in $x$: integrate over $[0,\infty)$, where $|x| = x$, and double the result.
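As a quick numerical sanity check of the claim $\mathbb{E}(\,|X|\,)=\theta$ (a sketch assuming SciPy is available; the value of $\theta$ here is an arbitrary choice for illustration):

```python
import math
from scipy.integrate import quad

theta = 1.7  # arbitrary positive scale, chosen only for illustration

# By symmetry, E|X| = 2 * integral over [0, inf) of x/(2*theta) * exp(-x/theta) dx
value, _ = quad(lambda x: x / (2 * theta) * math.exp(-x / theta), 0, math.inf)
expected_abs = 2 * value

print(expected_abs)  # numerically close to theta
```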
Thus, the Fisher information is $$I(\theta)= \frac{2}{\theta^3}\mathbb{E}(\,|X|\,)-\frac{1}{\theta^2} = \frac{2}{\theta^2}-\frac{1}{\theta^2}=\frac{1}{\theta^2}\,.$$
For a sample $X_1,X_2,...,X_n$ of size $n$, the Fisher information is then
$$I(\theta \,|\,n)=nI(\theta)=\frac{n}{\theta^2}\,.$$
Therefore, by the Cramér–Rao inequality, the variance of any unbiased estimator $\hat{\theta}$ of $\theta$ is bounded by the reciprocal of the Fisher information (this includes the MLE that you have computed, which achieves this lower bound, and is said to be an efficient estimator). In other words, $$\text{Var}(\hat{\theta}) \geq \frac{1}{nI(\theta)} = \frac{\theta^2}{n}\,\,.$$
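A small simulation illustrates that the MLE $\hat\theta = \frac{1}{n}\sum_i |X_i|$ attains the bound $\theta^2/n$ (a sketch with NumPy; the parameter values, sample sizes, and seed are arbitrary choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 50, 20000  # illustrative settings

# Draw `reps` samples of size n from the Laplace density 1/(2*theta) * exp(-|x|/theta)
samples = rng.laplace(loc=0.0, scale=theta, size=(reps, n))

# MLE of theta for each replication: mean absolute value of the sample
theta_hat = np.abs(samples).mean(axis=1)

empirical_var = theta_hat.var()
crlb = theta**2 / n  # Cramér–Rao lower bound derived above

print(empirical_var, crlb)  # the two should be close
```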
Yes it's correct. Very well done.
This doesn't simplify the work much in this case, but here's an interesting result: for $n$ i.i.d. random variables $y_1,\dots,y_n$, you can obtain the Fisher information $i_{\vec y}(\theta)$ for $\vec y$ via $n \cdot i_y(\theta)$, where $y$ is a single observation from your distribution.
Here $$\ell(\theta) = \ln\left( \frac{1}{\theta} e^{-y/\theta}\right) = -\frac{y}{\theta} - \ln(\theta) \implies \frac{\partial}{\partial \theta} \ell (\theta) = \frac{y}{\theta^2} - \frac{1}{\theta} \implies \frac{\partial^2}{\partial \theta^2} \ell(\theta) = - \frac{2y}{\theta^3} + \frac{1}{\theta^2}\,,$$ so that
$$\begin{align*}
i_y(\theta) &= - E \left[ \frac{\partial^2}{\partial \theta^2} \ell(\theta) \right] = -E \left[ - \frac{2y}{\theta^3} + \frac{1}{\theta^2} \right] = \dfrac{2 \theta}{\theta^3} - \dfrac{1}{\theta^2} = \dfrac{1}{\theta^2}\,,
\end{align*}$$
using $E[y] = \theta$,
and multiplying by $n$ gives Fisher information $n/\theta^2$.
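The single-observation computation can be replicated symbolically (a sketch assuming SymPy; the mean parameterization $f(y) = \theta^{-1}e^{-y/\theta}$ matches the one used above):

```python
import sympy as sp

y, theta = sp.symbols('y theta', positive=True)

# Log-likelihood of one observation under f(y) = exp(-y/theta)/theta
ell = -y / theta - sp.log(theta)
d2 = sp.diff(ell, theta, 2)  # second derivative in theta

# Fisher information: i(theta) = -E[d2], expectation under the same density
density = sp.exp(-y / theta) / theta
info = sp.simplify(-sp.integrate(d2 * density, (y, 0, sp.oo)))

print(info)  # equals 1/theta**2
```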
Hint for the solution
For a sample $x_1,\dots,x_n$ from the shifted exponential density $f(x\,|\,\alpha,\lambda)=\lambda\, e^{-\lambda(x-\alpha)}\,\mathbb{1}_{[\alpha,\,\infty)}(x)$, the likelihood is
$$L(\alpha;\lambda)=\lambda^ne^{-\lambda \sum_i x_i}e^{n \alpha \lambda}\,\mathbb{1}_{(-\infty;\, x_{(1)}]}(\alpha)\,.$$
Observe that
$$L(\alpha)\propto e^{n \alpha \lambda}\,\mathbb{1}_{(-\infty;\, x_{(1)}]}(\alpha)\,,$$
which is strictly increasing in $\alpha$ (for fixed $\lambda>0$) up to the cutoff $x_{(1)}$, so the MLE is
$$\hat{\alpha}=x_{(1)}=\min_i x_i\,.$$
Then replace $\alpha$ with $\hat{\alpha}$ and find the MLE for $\lambda$ by the usual procedure (set the derivative of the log-likelihood with respect to $\lambda$ to zero).
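Here is a sketch of where that usual procedure lands (assuming NumPy; `shifted_exp_mle` is a hypothetical helper name, and solving $\partial \log L/\partial \lambda = 0$ at $\alpha = \hat\alpha$ gives the closed form $\hat\lambda = 1/(\bar x - \hat\alpha)$):

```python
import numpy as np

def shifted_exp_mle(x):
    """MLEs for f(x | alpha, lam) = lam * exp(-lam * (x - alpha)) on x >= alpha.

    alpha_hat is the sample minimum (the likelihood is increasing in alpha up
    to x_(1)); plugging it in and solving d(log L)/d(lam) = 0 gives
    lam_hat = 1 / (mean(x) - alpha_hat).
    """
    x = np.asarray(x, dtype=float)
    alpha_hat = x.min()
    lam_hat = 1.0 / (x.mean() - alpha_hat)
    return alpha_hat, lam_hat

# Illustrative check on simulated data (parameter values are arbitrary)
rng = np.random.default_rng(1)
alpha, lam, n = 1.0, 2.0, 100_000
data = alpha + rng.exponential(scale=1.0 / lam, size=n)
alpha_hat, lam_hat = shifted_exp_mle(data)
print(alpha_hat, lam_hat)  # close to (1.0, 2.0)
```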
The Fisher information is well defined only for $\lambda$; calculate it directly from the definition. The reason is that the general regularity conditions are not satisfied with respect to $\alpha$ in this model, since the support of the density depends on $\alpha$.