To my knowledge there is no general method for finding a Lyapunov function. In this case, however, one can solve the differential equations and use the solutions to construct one. Namely, $x_2$ is decoupled from $x_1$ and can be shown to have the solution
$$
x_2(t) = C_1\,e^{-t},
$$
where $C_1$ is a constant determined by the initial condition of $x_2$. Substituting this solution into the expression for $\dot{x}_1$ gives
$$
\dot{x}_1 = x_1 (C_1\,e^{-t} -1)
$$
which is a separable differential equation, namely
$$
\frac{dx_1}{x_1} = (C_1\,e^{-t} -1) dt.
$$
Integrating both sides gives
$$
\log(x_1) = -C_1\,e^{-t} -t+C_2.
$$
Solving for $x_1$ gives
\begin{align}
x_1(t) &= e^{-C_1\,e^{-t} -t+C_2}, \\ &= C_3\,e^{-C_1\,e^{-t} -t}, \\
&= C_3\,e^{-t}\,e^{-C_1\,e^{-t}},
\end{align}
or, using the definition of $x_2$, it can also be expressed as $x_1(t)=C_3\,e^{-t}\,e^{-x_2(t)}$. So the quantities $x_2$ and $x_1\,e^{x_2}$ both decay exponentially fast, which suggests the following Lyapunov function
$$
V(x) = x_2^2 + x_1^2\,e^{2\,x_2},
$$
whose derivative along trajectories of the system can be shown to be
$$
\dot{V}(x) = -2\,x_2^2 - 2\,x_1^2\,e^{2\,x_2}.
$$
I will leave proving that $V(x)$ is radially unbounded to you.
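As a sanity check, here is a short SymPy sketch that verifies the expression for $\dot{V}$. The system itself is not restated in this answer, so the dynamics $\dot{x}_1 = x_1\,(x_2 - 1)$, $\dot{x}_2 = -x_2$ below are the form implied by the substitution step above:
```python
# Symbolic verification of the claimed Lyapunov derivative with SymPy.
# Assumption: the system is x1' = x1*(x2 - 1), x2' = -x2, as implied by
# the substitution step in the derivation above.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f1 = x1 * (x2 - 1)   # assumed dynamics of x1
f2 = -x2             # assumed dynamics of x2

V = x2**2 + x1**2 * sp.exp(2 * x2)
Vdot = sp.diff(V, x1) * f1 + sp.diff(V, x2) * f2  # derivative along trajectories

claimed = -2 * x2**2 - 2 * x1**2 * sp.exp(2 * x2)
print(sp.simplify(Vdot - claimed))  # prints 0, so Vdot matches the claim
```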
Instead of solving for $Q$ and $P$ one could solve for $Q^{-1}$ and $P^{-1}$. This would make your boundary constraints linear and thus easier to solve. To avoid having to keep writing the inverse, define the new variables $\mathcal{Q} = Q^{-1}$ and $\mathcal{P} = P^{-1}$. For the dynamics of these new variables one can use the fact that $X\,X^{-1} = I$, so $\frac{d}{dt}(X\,X^{-1}) = \dot{X}\,X^{-1} + X\,\frac{d}{dt}(X^{-1}) = 0$, which solved for $\frac{d}{dt}(X^{-1})$ gives $\frac{d}{dt}(X^{-1}) = -X^{-1}\,\dot{X}\,X^{-1}$. Using this together with the original differential equations gives
\begin{align}
\dot{\mathcal{Q}} &= -\mathcal{Q}\,A - A^\top \mathcal{Q} + \mathcal{Q}\,B\,B^\top \mathcal{Q}, \\
\dot{\mathcal{P}} &= -\mathcal{P}\,A - A^\top \mathcal{P} - \mathcal{P}\,B\,B^\top \mathcal{P}.
\end{align}
It can be noted that the original differential equations are linear in $Q$ and $P$, but the ones in $\mathcal{Q}$ and $\mathcal{P}$ are not. So there is a trade-off between easier boundary conditions and more difficult dynamics.
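As a numerical illustration of this trade-off, the sketch below integrates both sets of dynamics with example data and checks that $\mathcal{Q}(t) = Q(t)^{-1}$. It assumes the original dynamics are $\dot{Q} = A\,Q + Q\,A^\top - B\,B^\top$ (and $\dot{P}$ with $+B\,B^\top$), the convention consistent with the equations above:
```python
# Sanity check that the inverse dynamics match, using SciPy. Assumption:
# the original dynamics are Q' = A Q + Q A^T - B B^T, the convention
# consistent with the stationary equation A Qbar + Qbar A^T = B B^T below.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-1.0, 0.5], [0.0, -2.0]])  # example Hurwitz matrix
B = np.array([[0.3], [0.3]])
Q0 = np.eye(2)
n, T = 2, 0.5  # short horizon so Q(t) stays invertible

def q_dot(t, q):
    Q = q.reshape(n, n)
    return (A @ Q + Q @ A.T - B @ B.T).ravel()

def q_inv_dot(t, q):
    Qi = q.reshape(n, n)
    return (-Qi @ A - A.T @ Qi + Qi @ B @ B.T @ Qi).ravel()

solQ = solve_ivp(q_dot, (0, T), Q0.ravel(), rtol=1e-10, atol=1e-12)
solQi = solve_ivp(q_inv_dot, (0, T), np.linalg.inv(Q0).ravel(),
                  rtol=1e-10, atol=1e-12)

QT = solQ.y[:, -1].reshape(n, n)
QiT = solQi.y[:, -1].reshape(n, n)
print(np.allclose(QiT, np.linalg.inv(QT), atol=1e-6))  # True
```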
Instead of making your boundary constraints easier to solve, you could also use an analytical solution to the differential equations. Namely, by solving for the stationary solutions and vectorizing, both differential equations can be transformed into an autonomous linear time-invariant system. The stationary solutions are denoted $\bar{Q}$ and $\bar{P}$ and can be found by solving $A\,\bar{Q} + \bar{Q}\,A^\top = B\,B^\top$ and $A\,\bar{P} + \bar{P}\,A^\top = -B\,B^\top$ respectively. These are Sylvester equations, which have a unique solution if the spectra of $A$ and $-A^\top$ are disjoint, i.e. if no two eigenvalues of $A$ sum to zero; this holds in particular when $A$ is Hurwitz. It can be noted that $\bar{Q} = -\bar{P}$ due to the opposite signs in front of the $B\,B^\top$ terms. By defining the intermediate variables $\hat{Q} = Q - \bar{Q}$ and $\hat{P} = P - \bar{P}$, their dynamics can be shown to be
\begin{align}
\dot{\hat{Q}} &= A\,\hat{Q} + \hat{Q}\,A^\top, \\
\dot{\hat{P}} &= A\,\hat{P} + \hat{P}\,A^\top.
\end{align}
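The stationary solutions can be computed directly with `scipy.linalg.solve_sylvester`; a minimal sketch with example data (`solve_sylvester(a, b, q)` solves $a\,x + x\,b = q$):
```python
# The stationary solutions as Sylvester equations, via SciPy.
import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[-1.0, 0.5], [0.0, -2.0]])  # example Hurwitz matrix
B = np.array([[0.3], [0.3]])

Q_bar = solve_sylvester(A, A.T, B @ B.T)     # A Qbar + Qbar A^T = B B^T
P_bar = solve_sylvester(A, A.T, -(B @ B.T))  # A Pbar + Pbar A^T = -B B^T

print(np.allclose(A @ Q_bar + Q_bar @ A.T, B @ B.T))  # True: stationarity
print(np.allclose(Q_bar, -P_bar))                     # True: Qbar = -Pbar
```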
By using vectorization and the Kronecker product those differential equations can also be written as
\begin{align}
\dot{x}_1 &= M\,x_1, \\
\dot{x}_2 &= M\,x_2,
\end{align}
with $x_1 = \text{vec}(\hat{Q})$, $x_2 = \text{vec}(\hat{P})$ and $M = I \otimes A + A \otimes I$. Their solutions can be written as $x_i(t) = e^{M\,t}\,x_i(0)$, with $e^{X}$ the matrix exponential of $X$. Therefore, the values of $Q(T)$ and $P(T)$ can be found with
\begin{align}
Q(T) &= \text{vec}^{-1}\!\left(e^{M\,T}\,\text{vec}(Q(0) - \bar{Q})\right) + \bar{Q}, \\
P(T) &= \text{vec}^{-1}\!\left(e^{M\,T}\,\text{vec}(P(0) - \bar{P})\right) + \bar{P}.
\end{align}
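A sketch of this closed-form propagation with example data, checked against direct numerical integration; it again assumes the convention $\dot{Q} = A\,Q + Q\,A^\top - B\,B^\top$. Note that $\text{vec}$ stacks columns, which corresponds to Fortran (column-major) order in NumPy:
```python
# Closed-form propagation of Q(T) via the matrix exponential, checked
# against direct numerical integration.
import numpy as np
from scipy.linalg import expm, solve_sylvester
from scipy.integrate import solve_ivp

A = np.array([[-1.0, 0.5], [0.0, -2.0]])  # example Hurwitz matrix
B = np.array([[0.3], [0.3]])
Q0 = np.eye(2)
n, T = 2, 1.0

Q_bar = solve_sylvester(A, A.T, B @ B.T)           # A Qbar + Qbar A^T = B B^T
M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))  # M = I ⊗ A + A ⊗ I

qT = expm(M * T) @ (Q0 - Q_bar).ravel(order='F')   # e^{MT} vec(Q(0) - Qbar)
QT_analytic = qT.reshape(n, n, order='F') + Q_bar

def q_dot(t, q):
    Q = q.reshape(n, n)
    return (A @ Q + Q @ A.T - B @ B.T).ravel()

sol = solve_ivp(q_dot, (0, T), Q0.ravel(), rtol=1e-10, atol=1e-12)
QT_numeric = sol.y[:, -1].reshape(n, n)
print(np.allclose(QT_analytic, QT_numeric, atol=1e-7))  # True
```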
It can be noted that such an analytical solution also exists for the dynamics of $\mathcal{Q}$ and $\mathcal{P}$, but in that case the intermediate variables would be related to them through inverses, $\hat{Q} = \mathcal{Q}^{-1} - \bar{\mathcal{Q}}^{-1}$ and $\hat{P} = \mathcal{P}^{-1} - \bar{\mathcal{P}}^{-1}$. However, that would make the boundary constraints nonlinear again, which defeats the purpose of that coordinate transformation.
Best Answer
Let us start with the expression $L(P):=A^TP+PA$, where $A$ is Hurwitz stable, and define $P^*=\int_0^\infty e^{A^Ts}Qe^{As}\,ds$, which is well-defined since $A$ is Hurwitz stable, i.e. $e^{As}\to0$ as $s\to \infty$. So, we are interested in solving for $P$ in $L(P)=-Q$. We will show that $P^*$ is a solution; its uniqueness follows from the invertibility argument below. Indeed,
$$ \begin{array}{rcl} L(P^*)&=&A^TP^*+P^*A\\ &=& A^T\displaystyle\int_0^\infty e^{A^Ts}Qe^{As}\,ds+\left(\displaystyle\int_0^\infty e^{A^Ts}Qe^{As}\,ds\right)A\\ &=& \displaystyle\int_0^\infty \left(\dfrac{d}{ds}e^{A^Ts}Qe^{As}\right)ds\\ &=& \left[e^{A^Ts}Qe^{As}\right]_0^\infty\\ &=& 0-Q\\ &=&-Q \end{array} $$ where we have used the fact that $\dfrac{d}{ds}e^{As}=Ae^{As}=e^{As}A$. Moreover, if $Q$ is positive (semi)definite, then so is $P^*$.
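A quick numerical check of this integral formula with example data, truncating the improper integral (justified since $A$ is Hurwitz, so the integrand decays exponentially) and comparing against `scipy.linalg.solve_continuous_lyapunov`, which solves $a\,x + x\,a^H = q$:
```python
# Check P* = integral_0^inf e^{A^T s} Q e^{A s} ds against SciPy's solver.
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov
from scipy.integrate import quad_vec

A = np.array([[-1.0, 0.5], [0.0, -2.0]])  # example Hurwitz matrix
Q = np.eye(2)

# Truncated improper integral; the integrand is negligible beyond s = 40.
P_star, _ = quad_vec(lambda s: expm(A.T * s) @ Q @ expm(A * s), 0, 40)

P_ref = solve_continuous_lyapunov(A.T, -Q)  # solves A^T P + P A = -Q
print(np.allclose(P_star, P_ref))                  # True
print(np.allclose(A.T @ P_star + P_star @ A, -Q))  # True
```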
The equation $L(P)=-Q$ can be solved for all matrices $Q$ provided that no two eigenvalues of $A$ sum to zero. This can be seen from the vectorized expression $(A^T\oplus A^T)p=-q$, where $p=\mathrm{vec}(P)$, $q=\mathrm{vec}(Q)$, $\mathrm{vec}$ is the vectorization operator and $\oplus$ is the Kronecker sum. The eigenvalues of $A^T\oplus A^T$ are the same as those of $A\oplus A$ and are given by $\lambda_i(A)+\lambda_j(A)$ for $i,j=1,\ldots,n$, where $\lambda_i$ denotes the $i$-th eigenvalue of the $n\times n$ matrix $A$. So, $A^T\oplus A^T$ is invertible if and only if $\lambda_i(A)+\lambda_j(A)\neq0$ for all $i,j$; this holds in particular when $A$ is Hurwitz, and it fails whenever $A$ has eigenvalues on the imaginary axis, since those come in conjugate pairs that sum to zero.
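A sketch of the vectorized solve with example data; note that $\mathrm{vec}$ stacks columns, so column-major order is used:
```python
# Solving the vectorized Lyapunov equation (A^T (+) A^T) p = -q directly.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 0.5], [0.0, -2.0]])  # example Hurwitz matrix
Q = np.eye(2)
n = 2

# Kronecker sum: A^T (+) A^T = A^T (x) I + I (x) A^T.
K = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
print(np.linalg.eigvals(K))  # the pairwise sums lambda_i(A) + lambda_j(A)

p = np.linalg.solve(K, -Q.ravel(order='F'))  # column-major vec
P = p.reshape(n, n, order='F')

print(np.allclose(A.T @ P + P @ A, -Q))                    # True
print(np.allclose(P, solve_continuous_lyapunov(A.T, -Q)))  # True
```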