Checking the MLE: From your specification of the problem, your log-likelihood function is:
$$\begin{equation} \begin{aligned}
\ell_{\boldsymbol{x},\boldsymbol{y}}(\theta, \lambda)
&= \sum_{i=1}^m \ln p (x_i | \lambda) + \sum_{i=1}^n \ln p (y_i | \theta, \lambda) \\[8pt]
&= \sum_{i=1}^m (\ln \lambda - \lambda x_i) + \sum_{i=1}^n (\ln \theta + \ln \lambda - \theta \lambda y_i) \\[8pt]
&= m ( \ln \lambda - \lambda \bar{x} ) + n ( \ln \theta + \ln \lambda - \theta \lambda \bar{y}).
\end{aligned} \end{equation}$$
This gives the score functions:
$$\begin{equation} \begin{aligned}
\frac{\partial \ell_{\boldsymbol{x},\boldsymbol{y}}}{\partial \theta}(\theta, \lambda)
&= n \Big( \frac{1}{\theta} - \lambda \bar{y} \Big), \\[8pt]
\frac{\partial \ell_{\boldsymbol{x},\boldsymbol{y}}}{\partial \lambda}(\theta, \lambda)
&= m \Big( \frac{1}{\lambda} - \bar{x} \Big) + n \Big( \frac{1}{\lambda} - \theta \bar{y} \Big).
\end{aligned} \end{equation}$$
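These score functions can be checked symbolically. Here is a minimal sketch, assuming SymPy is available and treating $\bar{x}$ and $\bar{y}$ as fixed symbols (`xbar`, `ybar`):

```python
import sympy as sp

# Symbols for the parameters, sample sizes, and sample means
theta, lam, m, n, xbar, ybar = sp.symbols("theta lambda m n xbar ybar", positive=True)

# Log-likelihood as derived above
ell = m*(sp.log(lam) - lam*xbar) + n*(sp.log(theta) + sp.log(lam) - theta*lam*ybar)

print(sp.diff(ell, theta))  # equivalent to n*(1/theta - lambda*ybar)
print(sp.diff(ell, lam))    # equivalent to m*(1/lambda - xbar) + n*(1/lambda - theta*ybar)
```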
Setting both partial derivatives to zero and solving the resulting score equations yields the MLEs:
$$\hat{\theta}_{m,n} = \frac{\bar{x}}{\bar{y}} \quad \quad \quad \hat{\lambda}_{m,n} = \frac{1}{\bar{x}}.$$
(Note that in the case where $\bar{y} = 0$ the first score function is strictly positive for all $\theta$, so the MLE for $\theta$ does not exist.) This confirms your calculations of the MLE.
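As a quick numerical sanity check of these closed forms, here is a sketch assuming NumPy and SciPy are available; the parameter values and sample sizes are arbitrary choices for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
theta, lam = 2.5, 0.8   # arbitrary "true" values for the check
m, n = 50, 80           # arbitrary sample sizes

x = rng.exponential(scale=1/lam, size=m)           # x_i ~ Exp(rate = lambda)
y = rng.exponential(scale=1/(theta*lam), size=n)   # y_i ~ Exp(rate = theta*lambda)

def neg_loglik(params):
    t, l = params
    if t <= 0 or l <= 0:
        return np.inf
    # Negative of the log-likelihood derived above
    return -(m*(np.log(l) - l*x.mean()) + n*(np.log(t) + np.log(l) - t*l*y.mean()))

opt = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
print(opt.x)                          # numerical maximizer
print(x.mean()/y.mean(), 1/x.mean())  # closed-form MLEs for comparison
```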
Adjusting the MLE to remove bias: Treating the MLE as a random variable, we have:
$$\hat{\theta}_{m,n} = \frac{n}{m} \cdot \frac{\dot{X}}{\dot{Y}},$$
where $\dot{X} \equiv m \bar{X} \sim \text{Gamma} (m, \lambda)$ and $\dot{Y} \equiv n \bar{Y} \sim \text{Gamma} (n, \theta \lambda)$ are independent random variables. From this equation, the MLE is a scaled beta-prime random variable:
$$\hat{\theta}_{m,n} \sim \theta \cdot \frac{n}{m} \cdot \text{Beta-Prime}(m, n).$$
This estimator has expected value $\mathbb{E} (\hat{\theta}_{m,n}) = \frac{n}{n-1} \cdot \theta$, which means that it has positive bias. We can correct this bias by using the bias-adjusted MLE:
$$\tilde{\theta}_{m,n} = \frac{n-1}{n} \cdot \frac{\bar{X}}{\bar{Y}} \sim \theta \cdot \frac{n-1}{m} \cdot \text{Beta-Prime}(m, n).$$
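The bias and its correction are easy to see in a small Monte Carlo sketch (again assuming NumPy; all values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, lam, m, n = 2.5, 0.8, 20, 15
reps = 200_000

x = rng.exponential(scale=1/lam, size=(reps, m))
y = rng.exponential(scale=1/(theta*lam), size=(reps, n))

theta_mle = x.mean(axis=1) / y.mean(axis=1)
theta_adj = (n - 1) / n * theta_mle

print(theta_mle.mean())   # ~ theta * n/(n-1) = 2.5 * 15/14 ≈ 2.679: positive bias
print(theta_adj.mean())   # ~ theta = 2.5: bias removed
```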
Standard error of the adjusted MLE: The adjusted MLE is unbiased, with variance:
$$\begin{equation} \begin{aligned}
\mathbb{V}(\tilde{\theta}_{m,n})
&= \int \limits_0^\infty \Big( \theta \cdot \frac{n-1}{m} \cdot r - \theta \Big)^2 \text{Beta-Prime} ( r | m, n) dr \\[8pt]
&= \theta^2 \cdot \frac{\Gamma(m+n)}{\Gamma(m) \Gamma(n)} \int \limits_0^\infty \Big( 1 - \frac{n-1}{m} \cdot r \Big)^2 r^{m-1} ( 1 + r )^{-m-n} dr \\[8pt]
&= \theta^2 \cdot \frac{n+m-1}{m(n-2)}.
\end{aligned} \end{equation}$$
The corresponding standard error, estimated by substituting $\tilde{\theta}_{m,n}$ for the unknown $\theta$, is:
$$\text{se}(\tilde{\theta}_{m,n}) = \tilde{\theta}_{m,n} \cdot \sqrt{\frac{n+m-1}{m(n-2)}}.$$
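The variance algebra can be cross-checked against the beta-prime moments in SciPy, whose `betaprime` distribution uses the same density kernel as above (a sketch; the values are arbitrary):

```python
from scipy.stats import betaprime

theta, m, n = 2.5, 20, 15

# Var(theta_tilde) = theta^2 * ((n-1)/m)^2 * Var(BetaPrime(m, n))
var_from_moments = theta**2 * ((n - 1) / m)**2 * betaprime(m, n).var()
var_closed_form = theta**2 * (m + n - 1) / (m * (n - 2))

print(var_from_moments, var_closed_form)  # identical up to floating point
```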
Letting $\phi \equiv m/n$ and taking $n \rightarrow \infty$ with $\phi$ held fixed, we obtain the asymptotic approximation:
$$\text{se}(\tilde{\theta}_{m,n}) \approx \frac{\tilde{\theta}_{m,n}}{\sqrt{n-2}} \cdot \sqrt{\frac{1+\phi}{\phi}}.$$
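For a rough sense of how close the approximation is (arbitrary moderate sample sizes):

```python
import numpy as np

m, n = 40, 30
phi = m / n

exact = np.sqrt((n + m - 1) / (m * (n - 2)))        # exact multiplier on theta-tilde
approx = np.sqrt((1 + phi) / phi) / np.sqrt(n - 2)  # asymptotic multiplier
print(exact, approx)                                # ≈ 0.2482 vs ≈ 0.2500
```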
This gives you both exact and approximate expressions for the standard error. I hope that is helpful. (Please make sure to review my algebra to make sure I haven't made a mistake!)
For this example, the likelihood was written as
$$L(\theta;x_i)=\theta^{2n}\cdot \prod_{i=1}^n x_i\cdot e^{-\theta \sum_{i=1}^n x_i}$$
This is not right. We have $f(x)=\theta^2 x e^{-\theta x}$. Now we calculate the product over all $x_i$:
$$ L(\theta;x_i)=\prod_{i=1}^n \theta^2 x_i\cdot e^{-\theta x_i}=\theta^{2n}\cdot \prod_{i=1}^n x_i\cdot e^{-\theta x_i}$$
You see that there is as yet no sigma sign involved: at each step there is either a sigma sign or a product sign, not a mixture of the two.
At the next step, taking the logarithm, there is a mistake. It is right that $\theta^{2n}$ becomes the summand $2n\cdot \ln(\theta)$. Now we calculate
$$\ln\left(\prod_{i=1}^n x_i\cdot e^{-\theta x_i}\right)$$
Firstly we use the logarithm rule $\log(a\cdot b)=\log(a)+\log(b)$ to eliminate the product sign.
$$= \sum_{i=1}^n \ln \left( x_i\cdot e^{-\theta x_i} \right)$$
We use the same rule again for a further simplification.
$$= \sum_{i=1}^n \ln \left( x_i \right) + \sum_{i=1}^n \ln\left( e^{-\theta x_i} \right)$$
$$= \sum_{i=1}^n \ln \left( x_i \right) -\theta \sum_{i=1}^n x_i$$
With the summand $2n\cdot \ln (\theta)$ we have
$$\ln \left(L(\theta;x_i)\right)=2n\cdot \ln (\theta) +\sum_{i=1}^n \ln \left( x_i \right) -\theta \sum_{i=1}^n x_i$$
Setting the derivative w.r.t. $\theta$ to zero gives
$$\frac{2n}{\theta}-\sum_{i=1}^n x_i=0,$$
which solves to $\hat{\theta} = \frac{2n}{\sum_{i=1}^n x_i} = \frac{2}{\bar{x}}$. For the rest, no logarithm rules are required.
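A quick simulation confirms this result (a sketch assuming NumPy; the density $f(x)=\theta^2 x e^{-\theta x}$ is a Gamma distribution with shape $2$ and rate $\theta$):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n = 3.0, 100_000

# f(x) = theta^2 * x * exp(-theta*x) is Gamma(shape=2, rate=theta)
x = rng.gamma(shape=2.0, scale=1/theta, size=n)

print(2 / x.mean())   # MLE 2n / sum(x_i); should be close to theta = 3.0
```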
Obviously, you can't have negative Fisher information.
Your likelihood is $$\mathcal L(\lambda \mid \boldsymbol x) \propto \lambda^{-n} e^{-n \bar x/\lambda} \mathbb 1(x_{(1)} \ge 0)$$ where I have written this in terms of the sufficient statistic $\bar x$. Then the log-likelihood is $$\ell(\lambda \mid \boldsymbol x) = -n \log \lambda - \frac{n \bar x}{\lambda},$$ and $$\frac{\partial \ell}{\partial \lambda} = -\frac{n}{\lambda} + \frac{n\bar x}{\lambda^2}.$$ The second derivative is $$\frac{\partial^2 \ell}{\partial \lambda^2} = \frac{n}{\lambda^2} - \frac{2n \bar x}{\lambda^3}.$$ Then your Fisher information is $$I(\hat \lambda) = - \left( \frac{n}{\lambda^2} - \frac{2n \operatorname{E}[\bar x]}{\lambda^3} \right) = - \frac{n}{\lambda^2} + \frac{2n \lambda}{\lambda^3} = \frac{n}{\lambda^2}.$$
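The derivative algebra here can be verified symbolically, e.g. with SymPy (a sketch):

```python
import sympy as sp

lam, n, xbar = sp.symbols("lambda n xbar", positive=True)

ell = -n*sp.log(lam) - n*xbar/lam        # log-likelihood from above
d2 = sp.diff(ell, lam, 2)                # second derivative
info = sp.simplify(-d2.subs(xbar, lam))  # take expectations: E[xbar] = lambda

print(info)   # n/lambda**2
```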
That said, the standard error of $\hat \lambda$ is simply $$\operatorname{SE}(\hat \lambda) = \sqrt{\operatorname{Var}[\bar x]} \overset{\text{iid}}{=} \sqrt{\frac{\operatorname{Var}[X]}{n}} = \sqrt{\frac{\lambda^2}{n}} = \frac{\lambda}{\sqrt{n}}.$$ There is no need to calculate the Fisher information.
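A Monte Carlo sketch of this standard error (assuming NumPy; $\lambda$ and $n$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, n, reps = 2.0, 50, 100_000

# lambda is the *mean* in this parametrization, so scale = lam
lam_hat = rng.exponential(scale=lam, size=(reps, n)).mean(axis=1)

print(lam_hat.std())     # empirical SE of lambda-hat = xbar
print(lam / np.sqrt(n))  # lambda / sqrt(n) ≈ 0.2828
```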