So, I'll modify your problem slightly to avoid dealing with boundary issues: instead of your strict constraint $\theta > 0$, I'll use $\theta \geq 0$.
You want to maximize the likelihood subject to $\theta \geq 0$.
After taking the logarithm of your likelihood and ignoring constant terms, we get the problem:
$$ \min_{\theta} f(\theta) \text{ s.t. } \theta \geq 0$$
where
$$f(\theta) := \sum_{i=1}^n (X_i - \theta)^2.$$
You are correct that if we didn't have the constraint, we could simply differentiate the objective function and get $\theta^{unconstrained} := \bar{X}$.
However, due to the constraint, we can't just differentiate. So, let us consider the two cases separately:
If $\theta^{unconstrained}$ is positive, then it is also the solution for your constrained MLE problem (the additional constraint can only increase the value of the minimization problem above).
If $\theta^{unconstrained} < 0$, then it doesn't satisfy your constraint and is not feasible. However, a bit of algebra shows that
$$f(\theta) - f(0) = n\theta^2 - 2n\theta\bar{X} = n\theta(\theta - 2\bar{X}) \geq 0$$
for all $\theta \geq 0$ when $\bar{X} = \theta^{unconstrained} < 0$, since both factors are then non-negative. Therefore, $\theta = 0$ minimizes $f(\theta)$ over $\theta \geq 0$.
So for this problem, the MLE for $\theta$ is $\theta^{ML} = \max{(\bar{X}, 0)}$. And using the equivariance property, the MLE of $\sqrt{\theta}$ is $\sqrt{\theta^{ML}}$.
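As a quick numerical sanity check, the sketch below (variable names are my own, not from the answer) compares the closed form $\max(\bar{X}, 0)$ with a brute-force grid minimization of $f(\theta)$ over $\theta \geq 0$, using a sample constructed to have a negative mean so that the constraint binds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Construct a sample whose mean is exactly -0.3, so the constraint binds.
x = rng.normal(size=50)
x = x - x.mean() - 0.3
xbar = x.mean()            # -0.3 (up to floating point)

# Closed-form constrained MLE: max(sample mean, 0).
theta_ml = max(xbar, 0.0)

# Brute-force check: minimize f(theta) = sum_i (x_i - theta)^2 over theta >= 0.
grid = np.linspace(0.0, 5.0, 20001)
f = ((x[:, None] - grid[None, :]) ** 2).sum(axis=0)
theta_grid = grid[f.argmin()]

print(theta_ml, theta_grid, np.sqrt(theta_ml))
```

When $\bar x < 0$, $f$ is increasing on $[0, \infty)$, so both approaches land on the boundary point $0$; by equivariance, the last printed value is the MLE of $\sqrt{\theta}$.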
Although the OP did not respond, I am answering this to showcase the method I proposed (and indicate what statistical intuition it may contain).
First, it is important to distinguish on which entity the constraint is imposed. In a deterministic optimization setting there is no such issue: there is no "true value" and no estimator of it; we just have to find the optimizer. But in a stochastic setting, there are conceivably two different cases:
a) "Estimate the parameter given a sample that has been generated by a population that has a non-negative mean" (i.e. $\theta \ge 0$) and
b) "Estimate the parameter under the constraint that your estimator cannot take negative values" (i.e. $\hat \theta \ge 0$).
In the first case, imposing the constraint is including prior knowledge on the unknown parameter. In the second case, the constraint can be seen as reflecting a prior belief on the unknown parameter (or some technical, or "strategic", limitation of the estimator).
The mechanics of the solution are the same, though: the objective function (the log-likelihood augmented by the non-negativity constraint on $\theta$) is
$$\tilde L(\theta|\mathbf{x})=-\frac n2 \ln(2\pi)-\frac{1}{2}\sum_{i=1}^{n}(x_i-\theta)^2 +\xi\theta,\qquad \xi\ge 0 $$
Given concavity, the first-order condition is also sufficient for a global maximum. We have
$$\frac {\partial}{\partial \theta}\tilde L(\theta|\mathbf{x})=\sum_{i=1}^{n}(x_i-\theta) +\xi = 0 \Rightarrow \hat \theta = \bar x+\frac{\xi}{n} $$
1) If the solution lies at an interior point ($\Rightarrow \hat \theta >0$), then $\xi=0$ and so the solution is $\{\hat \theta= \bar x>0,\; \xi^*=0\}$.
2) If the solution lies on the boundary ($\Rightarrow \hat \theta =0$), then we obtain the value of the multiplier at the solution, $\xi^* = -n\bar x$, and so the full solution is $\{\hat \theta= 0,\; \xi^*=-n\bar x\}$. But since the multiplier must be non-negative, this necessarily implies that in this case we would have $\bar x\le 0$.
(There is nothing special about setting the constraint to zero. If, say, the constraint were $\theta \ge -2$, then if the solution lay on the boundary, $\hat \theta = -2$, this would imply (in order for the multiplier to have a non-negative value) that $\bar x \le -2$.)
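As an aside, the two cases can be packaged into the closed forms $\hat \theta = \max(\bar x, 0)$ and $\xi^* = \max(-n\bar x, 0)$. The short sketch below (my own illustrative code, not part of the answer) verifies that these satisfy both the stationarity condition $\sum_i(x_i-\hat\theta)+\xi^*=0$ and complementary slackness $\xi^*\hat\theta=0$, in a boundary case and in an interior case:

```python
import numpy as np

def kkt_solution(x):
    """Return (theta_hat, xi_star) for maximizing the augmented
    log-likelihood subject to theta >= 0."""
    xbar = x.mean()
    theta_hat = max(xbar, 0.0)            # case 1 if xbar > 0, case 2 otherwise
    xi_star = max(-len(x) * xbar, 0.0)    # multiplier is positive only on the boundary
    return theta_hat, xi_star

for xbar_true in (-1.2, 0.7):             # boundary case, then interior case
    x = np.array([xbar_true - 0.5, xbar_true, xbar_true + 0.5])
    theta_hat, xi_star = kkt_solution(x)
    # Stationarity: sum(x_i - theta_hat) + xi_star = 0.
    assert abs((x - theta_hat).sum() + xi_star) < 1e-12
    # Complementary slackness: xi_star * theta_hat = 0.
    assert xi_star * theta_hat == 0.0
```

In the boundary case the multiplier is $\xi^* = -n\bar x > 0$ and exactly cancels the gradient of the log-likelihood at $\theta = 0$.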
So, if the optimizer is $0$ what are we facing here?
If we are in "constraint type-a", i.e. we have been told that the sample comes from a population that has a non-negative mean, then with $\hat \theta =0$ chances are that the sample may not be representative of this population.
If we are in "constraint type-b", i.e. we had the belief that the population has a non-negative mean, with $\hat \theta =0$ this belief is questioned.
(This is essentially an alternative way to deal with prior beliefs, outside the formal Bayesian approach.)
Regarding the properties of the estimator, one should carefully distinguish this constrained estimation case, with the case where the true parameter lies on the boundary of the parameter space.
Instead of tedious derivations, simply invoke the invariance property of MLEs and then solve for $x$ using basic algebra. Note, however, that this approach leads to the same estimator you derived, i.e., $\hat{x}=-\log(2m/n -1)$.
So what to do? First, ignore estimation for the moment. Look at what the true value of $x$ would be if you knew the true value of $\theta$.
Suppose $\theta = 0$. Then, $e^{-x} = -1$, which implies that $x = - \log (-1)$. As you noted, this is not a real value.
So what about $\theta = 1/2$? We have that $e^{-x} = 0$, which is again problematic: $e^{-x} > 0$ for every real $x$, so no finite $x$ satisfies it.
The point here is that a couple of special cases based on known values already show that $x$ is undefined for many values of $\theta$. Your estimation is not wrong per se; the value of $x$ is simply not real for certain values of $\theta$.
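The excerpt does not spell out the relation between $x$ and $\theta$, but the two special cases above ($\theta = 0 \Rightarrow e^{-x} = -1$ and $\theta = 1/2 \Rightarrow e^{-x} = 0$) are consistent with $e^{-x} = 2\theta - 1$, which matches the quoted estimator $\hat x = -\log(2m/n - 1)$ once $\hat\theta = m/n$. Under that assumption, a small sketch (my own illustrative code) makes the undefined region explicit:

```python
import math

def x_hat(m, n):
    """Estimator x_hat = -log(2*m/n - 1), assuming theta_hat = m/n.

    Returns None when 2*m/n - 1 <= 0, i.e. when the estimate is not
    a real number (logarithm of a non-positive argument).
    """
    arg = 2.0 * m / n - 1.0
    if arg <= 0.0:
        return None
    return -math.log(arg)

n = 10
for m in (2, 5, 8, 10):
    print(m, x_hat(m, n))  # undefined for m = 2 and m = 5
```

The estimate is real only when $m/n > 1/2$; for $m/n \le 1/2$ the argument of the logarithm is non-positive and no real $\hat x$ exists, exactly as the special cases suggested.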