The chi-squared limiting distribution is valid only for a special type of composite hypothesis: $H_0: \theta_1=\theta_1^0,\ldots,\theta_r=\theta_r^0$, with $\theta_{r+1},\ldots,\theta_k$ unspecified (that is, the first $r$ parameters are fixed and the rest are free), versus $H_a: \theta_1,\ldots,\theta_k$, which leaves all of them unspecified.
Your null hypothesis does not have that form, and it is easy to see that the limiting distribution is not chi-squared under the null. Suppose the true $p$ is small (less than 0.2) and $n$ is large. Then the maximum likelihood estimate will almost always be less than 0.2, both with and without the restriction, so your test statistic will almost always be 0! In fact, its limiting distribution is degenerate.
In general, likelihood-ratio tests are not convenient for one-sided alternatives, because the direction of the difference is obliterated by the squaring. However, the correct way to develop a one-sided likelihood-ratio test is to note that $p=0.2$ is the null value closest to the alternative, so we test $H_0: p=0.2$ versus $H_a: p > 0.2$. In this case, however, the value of interest is on the edge of the parameter space, so the limiting distribution is not chi-squared. In fact, for this case it is the mixture $0.5\, I(\{0\}) + 0.5\,\chi^2_1$, i.e. a point mass of $1/2$ at $0$ mixed with a $\chi^2_1$ distribution.
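That mixture result is easy to check by simulation. The sketch below (my own illustration, with an assumed sample size and replication count, not part of the original answer) simulates the one-sided LRT statistic under $H_0: p = 0.2$: roughly half of the statistics are exactly 0, and the exceedance rate of the usual $\chi^2_1$ critical value 3.841 is near $0.025$, not $0.05$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p0, reps = 1000, 0.2, 20000          # assumed simulation settings

x = rng.binomial(n, p0, size=reps)       # successes in each replication
p_hat = x / n                            # unrestricted MLE

def loglik(p, x, n):
    # Binomial log-likelihood for a proportion p.
    return x * np.log(p) + (n - x) * np.log(1 - p)

# One-sided restricted MLE over [p0, 1): max(p_hat, p0).
p_alt = np.maximum(p_hat, p0)
lam = 2 * (loglik(p_alt, x, n) - loglik(p0, x, n))

print("fraction exactly 0:", np.mean(lam < 1e-12))   # roughly 0.5
print("P(lam > 3.841):", np.mean(lam > 3.841))       # roughly 0.025
```

The exceedance probability of $3.841$ is about $0.5 \times 0.05 = 0.025$, exactly what the 50:50 mixture predicts.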
As @whuber pointed out, the most practical solution would be to change how you define your test. However, if there is some reason why you would prefer not to do so, I explain what would happen below.
To state my conclusion up front: for any reasonable choice of significance level, you can obtain a p-value for your hypothesis via a one-tailed "z-test" using the asymptotic normal distribution of $\bar X$ under the null (which is given by the central limit theorem).
To restate the problem, let $X_1,\ldots,X_n \stackrel{iid}{\sim} \mathrm{Exp}(\theta)$, parameterised by the mean $\theta$, and consider the hypotheses
$$
H_0:\; \theta=1 \quad \text{versus} \quad H_1:\; \theta > 1
$$
So the likelihood is $$L(\theta\mid\mathbf{X})=\theta^{-n}\exp\bigg(-\theta^{-1}\sum_{i=1}^n X_i\bigg)$$ and the corresponding likelihood ratio test statistic is
$$
\lambda=-2\ln\bigg[\frac{\sup\{\,L(\theta\mid \mathbf{X}):\theta\in\Theta_0\,\}}{\sup\{\,L(\theta\mid \mathbf{X}):\theta\in\Theta\,\}}\bigg]=
-2\ln\bigg[\frac{L(\theta=1 \mid \mathbf{X})}{\sup\{\,L(\theta\mid \mathbf{X}):\theta\ge 1\,\}}\bigg]
=
$$
$$
-2\ln\bigg[\frac{L(\theta=1 \mid \mathbf{X})}{L(\theta=\max(\bar X, 1) \mid \mathbf{X})}\bigg]
$$
Notice that the maximum likelihood estimate of $\theta$ here is $\hat \theta = \max(\bar X,1)$ rather than just $\bar X$. This is because when you define the null and alternative hypotheses you are assuming that
$$
\Theta = \Theta_0 \cup \Theta_1 = \{1\} \cup (1,\infty)=[1,\infty)
$$
In layman's terms, it is impossible by your definition that $\theta<1$, so it would not make sense to use $\bar X < 1$ as an estimate for $\theta$. In the case that $\bar X< 1$, the estimate which yields the highest likelihood while still residing in the parameter space is $\hat \theta=1$.
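The constrained MLE and the resulting statistic are simple to compute. Here is a minimal sketch (my own illustration, with an assumed sample size and a true mean of 1.3 chosen to lie in the alternative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.exponential(scale=1.3, size=n)   # assumed: true mean 1.3 > 1

def loglik(theta, x):
    # log L(theta | x) for the mean-parameterised exponential.
    return -len(x) * np.log(theta) - np.sum(x) / theta

theta_hat = max(np.mean(x), 1.0)          # MLE restricted to [1, inf)
lam = 2 * (loglik(theta_hat, x) - loglik(1.0, x))
print(theta_hat, lam)
```

Whenever $\bar X \le 1$ the restricted MLE equals 1 and $\lambda$ is exactly 0, which is the source of the point mass in the limiting distribution below.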
Now the boundary issue you are referring to is well defined: under the null, $\lambda$ will not converge to a chi-squared distribution. This problem is well researched, and in my answer here I provide several sources which discuss it.
In this specific case, it can be shown (using the central limit theorem) that the asymptotic distribution of $\hat \theta$ is a normal distribution censored below at 1:
$$
p(\hat \theta \mid \theta=1)=\begin{cases} \phi(\hat \theta \mid \mu=1,\sigma=1/\sqrt{n}) \;\;\mathrm{if}\;\;\hat \theta > 1 \\
0.5\times \delta(\hat \theta-1)\;\;\mathrm{if}\;\;\hat \theta=1 \\
0\;\;\mathrm{otherwise}\end{cases}
$$
where $\phi(x \mid \mu,\sigma)=\frac{1}{\sigma\sqrt{2\pi}}\exp\bigg(-\frac{(x-\mu)^2}{2\sigma^2} \bigg)$ is the Gaussian pdf, and $\delta(x)$ is the Dirac delta function. The limiting CDF is then;
$$
P(\hat \Theta \le \hat \theta \mid \theta=1)=\begin{cases} \Phi(\hat \theta \mid \mu=1,\sigma=1/\sqrt{n}) \;\;\mathrm{if}\;\;\hat \theta > 1 \\
0.5\;\;\mathrm{if}\;\;\hat \theta=1 \\
0\;\;\mathrm{otherwise}\end{cases}
$$
where $\Phi(x\mid \mu,\sigma)$ is the Gaussian cdf.
Using this asymptotic distribution, the p-value for your hypothesis test is simply $1-P(\hat \Theta \le \hat \theta | \theta=1)$.
It should be noted that for any significance level less than $0.5$ (and why would it not be?), using the above censored normal is equivalent to simply using the Gaussian distribution $\mathcal N(\mu=1,\sigma=1/\sqrt{n})$ which, as it happens, is the limiting distribution of $\bar X$.
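Putting the pieces together, the p-value computation described above needs only the standard normal CDF. A self-contained sketch (my own, using only the standard library; `p_value` is a hypothetical helper name):

```python
import math

def p_value(xbar, n):
    # theta_hat = max(xbar, 1); under H0 the limiting distribution of
    # xbar is N(1, 1/n), so use a one-tailed z-test on theta_hat.
    theta_hat = max(xbar, 1.0)
    z = (theta_hat - 1.0) * math.sqrt(n)
    # Standard normal CDF via the error function.
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(p_value(1.2, 100))   # xbar well above 1: small p-value (~0.023)
print(p_value(0.9, 100))   # xbar below 1: theta_hat = 1, p-value is 0.5
```

The second case illustrates the censoring: any $\bar X \le 1$ gives $\hat\theta = 1$ and a p-value of exactly $0.5$, consistent with the point mass in the limiting CDF.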
Let us consider an example: suppose we are testing $$H_0: p = 0.5\\H_1: p<0.5$$ and suppose that $\bar{x} = 0.4$. The likelihood is $$L(X, p) = p^{\sum X_i}(1-p)^{n - \sum X_i},$$ and maximizing it wrt $p$ subject to $p \le 0.5$ yields $\hat{p} = \bar{x} < 0.5$. Hence $$\Lambda = \frac{0.4^{\sum X_i}0.6^{n - \sum X_i}}{0.5^n} \ge k \iff \sum X_i \ln 0.4 + (n - \sum X_i)\ln 0.6 \ge k_1 \iff \sum X_i \le C,$$ where the last step follows because $\ln(0.4/0.6) < 0$. It remains to find a $C$ such that $\mathbb{P}\left(\sum X_i \le C \mid p=0.5\right) = \alpha$, which we can do using, for example, the CLT: under $H_0$, $$\frac{\sum X_i - np}{\sqrt{np(1-p)}} = \frac{2\sum X_i - n}{\sqrt{n}} \underset{H_0}{\approx} \mathcal{N}(0, 1),$$ so for $\alpha = 0.05$ $$\mathbb{P}\left(\sum X_i \le C\right) = \mathbb{P}\left(\frac{2 \sum X_i - n}{\sqrt{n}} \le \frac{2C - n}{\sqrt{n}}\right) = 0.05 \implies \frac{2C - n}{\sqrt{n}} = -1.64 \implies C = \frac{n - 1.64\sqrt{n}}{2}.$$
Thus, we reject the null if $\sum X_i \le \frac{n - 1.64\sqrt{n}}{2}$.
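The accuracy of the normal approximation is easy to check against the exact binomial distribution. The following sketch (my own, with an assumed $n = 400$) computes the critical value $C = (n - 1.64\sqrt{n})/2$ and the exact size of the test under $p = 0.5$:

```python
import math

n = 400                                   # assumed sample size
C = (n - 1.64 * math.sqrt(n)) / 2         # normal-approximation cutoff

# Exact P(S <= floor(C)) for S ~ Binomial(n, 0.5).
k = math.floor(C)
size = sum(math.comb(n, j) for j in range(k + 1)) / 2 ** n
print(C, size)   # the exact size should be close to alpha = 0.05
```

For moderate $n$ the exact size sits close to the nominal 0.05, which is what the CLT step promises.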