Recall that $$I(\theta)=-\mathbb{E}\left[\frac{\partial^2}{\partial \theta^2}l(X\,| \,\theta)\right]\,$$
under certain regularity conditions (that apply here), where $I$ is the Fisher information and $l$ is the log-likelihood function of $X$. The log-likelihood function in this case is given by $$\begin{align} l(X\,|\,\theta) &=\log f(X\,|\,\theta) \\&=\log\left(\frac{1}{2\theta}\exp\left(-\frac{|X|}{\theta}\right)\right) \\ &= -\frac{|X|}{\theta} - \log(2\theta)\,. \end{align}$$
It follows that $$\frac{\partial}{\partial \theta}l(X \,|\,\theta) = \frac{|X|}{\theta^2}-\frac{1}{\theta} \implies \frac{\partial^2}{\partial \theta^2}l(X \,|\,\theta) = -\frac{2|X|}{\theta^3}+\frac{1}{\theta^2}\,.$$
So, we have
$$I(\theta)=-\mathbb{E}\left[-\frac{2|X|}{\theta^3}+\frac{1}{\theta^2}\right]=\mathbb{E}\left[\frac{2|X|}{\theta^3}-\frac{1}{\theta^2}\right]=\frac{2}{\theta^3}\mathbb{E}(\,|X|\,)-\frac{1}{\theta^2}\,.$$
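The calculus so far can be double-checked symbolically; here is a minimal sketch, assuming `sympy` is available (the symbol names are my own, not from the question):

```python
import sympy as sp

# Symbolic check of the first and second derivatives of the Laplace
# log-likelihood above; sympy and the names here are illustrative.
x = sp.symbols('x', real=True)
theta = sp.symbols('theta', positive=True)

l = -sp.Abs(x) / theta - sp.log(2 * theta)  # log-likelihood from above

print(sp.diff(l, theta))     # Abs(x)/theta**2 - 1/theta
print(sp.diff(l, theta, 2))  # -2*Abs(x)/theta**3 + 1/theta**2
```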
It remains to compute the expectation of $|X|$. To this end, I will set up the integral. By definition of expected value for transformations of continuous random variables, we have
$$\mathbb{E}(\,|X|\,)=\int_{-\infty}^{\infty}|x|\,f(x \,|\, \theta)\,\text{d}x=\int_{-\infty}^{\infty}\frac{|x|}{2\theta}\exp\left(-\frac{|x|}{\theta}\right)\,\text{d}x = \theta\,.$$
Note: to compute the integral, take advantage of the fact that the integrand is symmetric in $x$ (equivalently, you can split the integral at $0$ and handle the two cases of $|x|$); this is carried out below.
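Carrying that out explicitly: $$\mathbb{E}(\,|X|\,)=2\int_{0}^{\infty}\frac{x}{2\theta}\exp\left(-\frac{x}{\theta}\right)\text{d}x=\frac{1}{\theta}\int_{0}^{\infty}x\exp\left(-\frac{x}{\theta}\right)\text{d}x=\frac{1}{\theta}\cdot\theta^{2}=\theta\,,$$ where the last integral follows from integration by parts (or by recognizing it as $\Gamma(2)\,\theta^{2}$).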
Thus, the Fisher information is $$I(\theta)= \frac{2}{\theta^3}\mathbb{E}(\,|X|\,)-\frac{1}{\theta^2} = \frac{2}{\theta^2}-\frac{1}{\theta^2}=\frac{1}{\theta^2}\,.$$
For a sample $X_1,X_2,\dots,X_n$ of size $n$, the Fisher information is then
$$I(\theta \,|\,n)=nI(\theta)=\frac{n}{\theta^2}\,.$$
Therefore, by the Cramér–Rao inequality, the variance of any unbiased estimator $\hat{\theta}$ of $\theta$ is bounded below by the reciprocal of the Fisher information (this includes the MLE you have computed, which attains this lower bound and is therefore an efficient estimator). In other words, $$\text{Var}(\hat{\theta}) \geq \frac{1}{nI(\theta)} = \frac{\theta^2}{n}\,.$$
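To see the bound in action, here is a small simulation sketch (not from the answer itself: it assumes `numpy`, whose `laplace` sampler uses exactly this scale parameterization, and the values of $\theta$, $n$, and the replication count are illustrative). The MLE here is $\hat{\theta}=\frac{1}{n}\sum_i|X_i|$ (solving the score equation above), and its empirical variance should be close to $\theta^2/n$:

```python
import numpy as np

# Monte Carlo check that Var(theta_hat) is approximately theta^2 / n for the
# MLE theta_hat = mean(|X_i|); theta, n, reps, and the seed are illustrative.
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 100, 50_000

samples = rng.laplace(loc=0.0, scale=theta, size=(reps, n))
theta_hat = np.abs(samples).mean(axis=1)   # MLE for each replication

print(theta_hat.var())    # empirical variance of the estimator
print(theta**2 / n)       # Cramér–Rao lower bound: 0.04
```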
Yes, it's correct. Very well done.
This doesn't simplify the work much in this case, but here's an interesting result: in the case of $n$ i.i.d. random variables $y_1,\dots,y_n$, you can obtain the Fisher information $i_{\vec y}(\theta)$ for $\vec y$ via $n \cdot i_y(\theta)$, where $y$ is a single observation from your distribution.
Here $$\ell(\theta) = \ln\left( \frac{1}{2\theta} e^{-|y|/\theta}\right) = -\frac{|y|}{\theta} - \ln(2\theta) \implies \frac{\partial}{\partial \theta} \ell(\theta) = \frac{|y|}{\theta^2} - \frac{1}{\theta} \implies \frac{\partial^2}{\partial \theta^2} \ell(\theta) = -\frac{2|y|}{\theta^3} + \frac{1}{\theta^2}\,,$$
so, using $E(\,|y|\,)=\theta$ (computed above),
$$i_y(\theta) = -E\left[ \frac{\partial^2}{\partial \theta^2} \ell(\theta) \right] = E\left[ \frac{2|y|}{\theta^3} - \frac{1}{\theta^2} \right] = \frac{2\theta}{\theta^3} - \frac{1}{\theta^2} = \frac{1}{\theta^2}\,,$$
and multiplying by $n$ gives Fisher information $n/\theta^2$.
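As a numerical sanity check of $i_y(\theta)=1/\theta^2$, one can average $\frac{2|y|}{\theta^3}-\frac{1}{\theta^2}$ over simulated draws. A minimal sketch, assuming `numpy` is available and using an illustrative $\theta$ and sample size:

```python
import numpy as np

# Monte Carlo estimate of i_y(theta) = -E[d^2 l / d theta^2] for a single
# Laplace observation; theta = 2.0 and the seed are illustrative choices.
rng = np.random.default_rng(1)
theta = 2.0
y = rng.laplace(loc=0.0, scale=theta, size=1_000_000)

info_estimate = np.mean(2 * np.abs(y) / theta**3 - 1 / theta**2)
print(info_estimate)   # should be close to 1/theta**2 = 0.25
```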
Best Answer
If $$f(x;\theta)=\frac{1}{2\theta}\exp\left(-\frac{|x|}{\theta}\right)$$
then $$l(\theta):=\log f(x;\theta) = -\log 2 - \log\theta - \frac{|x|}{\theta}$$ $$\frac{\partial l(\theta)}{\partial \theta} = -\frac{1}{\theta} + \frac{|x|}{\theta^2}$$ $$\frac{\partial^2 l(\theta)}{\partial \theta^2} = \frac{1}{\theta^2} - \frac{2|x|}{\theta^3}$$ then for each measurement the expected information is $$\begin{align} I_\theta = E_{X|\theta}\left[-\frac{\partial^2 l(\theta)}{\partial \theta^2}\right] &= E_{X|\theta}\left[\frac{2|x|}{\theta^3}-\frac{1}{\theta^2}\right] \\ &= \frac{2}{\theta^3} \int\limits_{-\infty}^{\infty} f(x;\theta)\,|x|\,\mathrm{d}x - \frac{1}{\theta^2} \\ &= \frac{2}{\theta^3} \int\limits_{-\infty}^{\infty} \frac{1}{2\theta}\exp\left(-\frac{|x|}{\theta}\right)|x|\,\mathrm{d}x - \frac{1}{\theta^2} \\ &= \frac{1}{\theta^4}\int\limits_{-\infty}^{\infty} \exp\left(-\frac{|x|}{\theta}\right)|x|\,\mathrm{d}x - \frac{1}{\theta^2} \\ &= \frac{2}{\theta^4} \int\limits_{0}^{\infty} x\exp\left(-\frac{x}{\theta}\right)\mathrm{d}x - \frac{1}{\theta^2} \\ &= \frac{2}{\theta^4}\,\theta^2 - \frac{1}{\theta^2} \quad \text{(integrating by parts)} \\ &= \frac{1}{\theta^2}\,. \end{align}$$
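The integration by parts in the last step, spelled out: $$\int\limits_{0}^{\infty} x\exp\left(-\frac{x}{\theta}\right)\mathrm{d}x = \Big[-\theta x\exp\left(-\frac{x}{\theta}\right)\Big]_{0}^{\infty} + \theta\int\limits_{0}^{\infty} \exp\left(-\frac{x}{\theta}\right)\mathrm{d}x = 0 + \theta\cdot\theta = \theta^{2}\,.$$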
Then, because the measurements are i.i.d., the expected information simply adds, and so from measurements of $X_1,\dots,X_n$ the expected information is
$$ nI_\theta = \frac{n}{\theta^2} $$