As @whuber pointed out, the most practical solution would be to change how you define your test. However, if there is some reason why you would prefer not to do so, I explain what would happen below.
To state my conclusions up front: for any reasonable choice of significance level, you can obtain a p-value for your hypothesis via a one-tailed "z-test" using the asymptotic normal distribution of $\bar X$ under the null (which is given by the central limit theorem).
To restate the problem, let $X_1,\dots,X_n \stackrel{iid}{\sim} \mathrm{Exp}(\theta)$, where $\theta$ is the mean, and consider the hypotheses
$$
H_0:\; \theta=1 \qquad H_1:\; \theta > 1
$$
So the likelihood is $$L(\theta\mid\mathbf{X})=\theta^{-n}\exp\bigg(-\theta^{-1}\sum_{i=1}^n X_i\bigg)$$ and the corresponding likelihood ratio test statistic is
$$
\lambda=-2\ln\bigg[\frac{\sup\{\,L(\theta\mid \mathbf{X}):\theta\in\Theta_0\,\}}{\sup\{\,L(\theta\mid \mathbf{X}):\theta\in\Theta\,\}}\bigg]
=-2\ln\bigg[\frac{L(\theta=1 \mid \mathbf{X})}{\sup\{\,L(\theta\mid \mathbf{X}):\theta\ge 1\,\}}\bigg]
=-2\ln\bigg[\frac{L(\theta=1 \mid \mathbf{X})}{L(\theta=\max(\bar X, 1) \mid \mathbf{X})}\bigg]
$$
Notice that my maximum likelihood estimate of $\theta$ is $\hat \theta = \max(\bar X,1)$, as opposed to just $\bar X$. This is because when you define the null and alternative hypotheses you are assuming that
$$
\Theta = \Theta_0 \cup \Theta_1 = \{1\} \cup (1,\infty)=[1,\infty)
$$
In layman's terms, it is impossible by your definition that $\theta<1$, so it would not make sense to use $\bar X$ as an estimate for $\theta$ when $\bar X < 1$. In that case, the value which yields the highest likelihood while still residing in the parameter space is $\hat \theta=1$.
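To make this concrete, here is a minimal Python sketch of the constrained MLE and the resulting statistic $\lambda$ (assuming the mean parameterization above; the function name and simulated data are mine):

```python
import numpy as np

def lrt_statistic(x):
    """Likelihood ratio statistic for H0: theta = 1 vs H1: theta > 1,
    where theta is the mean of the exponential distribution."""
    n = len(x)
    theta_hat = max(np.mean(x), 1.0)  # constrained MLE: max(x-bar, 1)

    def loglik(theta):
        # log L(theta | x) = -n*log(theta) - sum(x)/theta
        return -n * np.log(theta) - np.sum(x) / theta

    return -2.0 * (loglik(1.0) - loglik(theta_hat))

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.3, size=50)  # simulated data with true mean 1.3
print(lrt_statistic(x))                  # equals 0 whenever x-bar <= 1
```

Note that whenever $\bar X \le 1$ the constrained MLE is $\hat\theta = 1$ and $\lambda = 0$ exactly.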
Now the boundary issue you are referring to is well defined: under the null, $\lambda$ will not converge to a chi-squared distribution. This problem is well researched; in my answer here I provide many sources which discuss it.
In this specific case, it can be shown (using the central limit theorem) that the asymptotic distribution of $\hat \theta$ under the null is a censored normal distribution:
$$
p(\hat \theta \mid \theta=1)=\begin{cases} \phi(\hat \theta \mid \mu=1,\sigma=1/\sqrt{n}) & \text{if }\ \hat \theta > 1 \\
0.5\,\delta(\hat \theta-1) & \text{if }\ \hat \theta=1 \\
0 & \text{otherwise}\end{cases}
$$
where $\phi(x \mid \mu,\sigma)=\frac{1}{\sigma\sqrt{2\pi}}\exp\bigg(-\frac{(x-\mu)^2}{2\sigma^2} \bigg)$ is the Gaussian pdf, and $\delta(x)$ is the Dirac delta function. The limiting CDF is then
$$
P(\hat \Theta \le \hat \theta \mid \theta=1)=\begin{cases} \Phi(\hat \theta \mid \mu=1,\sigma=1/\sqrt{n}) & \text{if }\ \hat \theta > 1 \\
0.5 & \text{if }\ \hat \theta=1 \\
0 & \text{otherwise}\end{cases}
$$
where $\Phi(x \mid \mu,\sigma)$ is the Gaussian cdf.
Using this asymptotic distribution, the p-value for your hypothesis test is simply $1-P(\hat \Theta \le \hat \theta | \theta=1)$.
It should be noted that for any significance level less than $0.5$ (and why would you choose one larger?), using the above censored normal is equivalent to simply using the Gaussian distribution with pdf $\phi(\hat \theta \mid \mu=1,\sigma=1/\sqrt{n})$, which, as it happens, is the limiting distribution of $\bar X$.
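In code, this p-value reduces to a single normal tail probability; below is a minimal sketch (Python with `numpy`/`scipy`; the function name and simulated data are mine):

```python
import numpy as np
from scipy.stats import norm

def p_value(x):
    """One-tailed p-value from the asymptotic null distribution of theta-hat.
    Since Phi(1 | mu=1, sigma) = 0.5, this returns 0.5 at the boundary
    theta-hat = 1, matching the censored-normal CDF above."""
    n = len(x)
    theta_hat = max(np.mean(x), 1.0)
    # survival function: 1 - Phi(theta_hat | mu=1, sigma=1/sqrt(n))
    return norm.sf(theta_hat, loc=1.0, scale=1.0 / np.sqrt(n))

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.3, size=50)  # simulated data with true mean 1.3
print(p_value(x))                        # small when x-bar is well above 1
```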
Best Answer
The p.d.f. for a single $x_i$ is given as
$$ f(x| \theta) = \begin{cases} \frac{1}{\theta} & & \text{if } 0 \leq x \leq \theta \\ 0 & & \text{otherwise} \end{cases} $$ Let's call $\vec{x} = (x_1, ..., x_n)$.
The $n$ observations are i.i.d., so the likelihood of observing the $n$-vector $\vec{x} = (x_1, \dots, x_n)$ is the product of the component-wise densities. Ignoring the issue of support for the moment, note that this product can be written simply as a power:
$$ f(\vec{x}| \theta) = \prod_i^n \frac{1}{\theta} = \frac{1}{\theta^n} = \theta^{-n} $$
Next, we turn our attention to the support of this function. If any single component lies outside its interval of support $[0, \theta]$, then its contribution to the product is a factor of 0, so the whole product is zero. Therefore $f(\vec{x}\mid\theta)$ is nonzero only when all components lie inside $[0, \theta]$.
$$ f(\vec{x}| \theta) = \begin{cases} \theta^{-n} & & \text{if } \forall i, \ 0 \leq x_i \leq \theta \\ 0 & & \text{otherwise} \end{cases} $$
By definition, this is also our likelihood:
$$ \mathcal{L}(\theta; \vec{x}) = f(\vec{x}| \theta) = \begin{cases} \theta^{-n} & & \text{if } \forall i, \ 0 \leq x_i \leq \theta \\ 0 & & \text{otherwise} \end{cases} $$
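To make this concrete, here is a minimal Python sketch of this likelihood (the function name is mine):

```python
import numpy as np

def likelihood(theta, x):
    """L(theta; x) = theta^(-n) if all 0 <= x_i <= theta, else 0."""
    x = np.asarray(x)
    if np.all((0 <= x) & (x <= theta)):
        return theta ** (-len(x))
    return 0.0
```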
The MLE problem is to maximize $\mathcal{L}$ with respect to $\theta$. Because $\theta > 0$ (given in the title of the problem), $\theta^{-n} > 0$, so $0$ will never be the maximum. Thus, this is a constrained optimization problem:
$$ \hat{\theta} = \operatorname{argmax}_\theta \,\, \theta^{-n} \text{ s.t. } \forall i, \,\, 0 \leq x_i \leq \theta $$
This is easy to solve as a special case, so we don't need general machinery like the simplex method and can present a more elementary argument. Let $t = \max \{x_1,\dots,x_n\}$. The constraints are equivalent to $\theta \ge t$, so the feasible region is $[t, \infty)$. Suppose we have a candidate solution $\theta_1 = t + \epsilon$ for some $\epsilon > 0$, and let $\theta_2 = t + \epsilon/2$. Clearly both $\theta_1$ and $\theta_2$ are feasible. Furthermore, we have $\theta_2 < \theta_1 \implies \theta_2^{-n} > \theta_1^{-n}$. Therefore $\theta_1$ is not at the maximum. We conclude that the maximum cannot be at any interior point of the feasible region, and in particular cannot be at any $\theta$ strictly greater than $t$. Yet $t$ itself is feasible, so the maximum must be attained there. Therefore,
$$\hat{\theta} = \text{max} \,\, \{x_1,..., x_n\}$$
is the maximum likelihood estimator.
Note that if any observed $x_i$ is less than 0, then $\mathcal{L}$ is a constant 0 and the optimization problem has no unique solution.
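As a quick numerical check (a sketch reusing the hypothetical `likelihood` function above; the simulated data are mine), evaluating the likelihood over a grid of $\theta$ values confirms it is maximized at the sample maximum:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 5.0, size=100)  # sample from Uniform(0, theta) with theta = 5

grid = np.linspace(0.1, 10.0, 2000)  # candidate theta values
values = [likelihood(theta, x) for theta in grid]

print(np.max(x))                     # the MLE: sample maximum, just below 5
print(grid[np.argmax(values)])       # first grid point >= max(x), i.e. ~ the MLE
```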