There is no general analytic method, and for real-world problems in control theory you do not need an analytic solution.
Here is the solution suggested by Vasile Sima, on which I based the C++ code below.
/*
This file contains a continuous-time algebraic Riccati equation solver based on the explanations from Dr. Vasile Sima.
Further reference: Sima, Vasile. Algorithms for linear-quadratic optimization. Vol. 200. CRC Press, 1996.
*/
// A C++ implementation of a Continuous-time Algebraic Riccati Equation (CARE) solver based on Schur vectors.
// Solves the CARE
//     A'X + XA - X*B*R^(-1)*B'*X + Q = 0
// by the Schur vector method, where A and Q are n-by-n, B is n-by-m, and R is m-by-m real matrices, with Q and R symmetric and R positive definite.
// It is assumed that the pair (A,B) is stabilizable and the pair (C,A) is detectable, where Q = C'C and rank(C) = rank(Q).
// Main steps:
// 1. Construct the Hamiltonian matrix H, H = [ A -B*(R\B'); -Q -A' ] (in MATLAB notation).
// 2. Reduce H to a real Schur form S and accumulate the transformations in U, S = U'HU, U orthogonal.
// 3. Reorder the real Schur form S so that all n stable eigenvalues are moved to the leading part of S, accumulating the transformations in U.
// 4. Compute the unique positive-semidefinite stabilizing solution X, X = U(n+1:2n,1:n)*U(1:n,1:n)^(-1).
// 5. Symmetrize X, X = ( X + X' )/2.
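The five steps above can be sketched for the scalar case (n = m = 1), where the 2x2 Hamiltonian needs no library eigensolver; this is my own illustrative sketch, not Sima's code:

```cpp
#include <cassert>
#include <cmath>

// Scalar (n = m = 1) walk-through of the five steps. For a 2x2 Hamiltonian the
// "ordered Schur form" reduces to picking the stable eigenvalue, so no
// LAPACK/Eigen calls are needed; the function name is my own.
double care_scalar(double a, double b, double q, double r) {
    // Step 1: H = [ a  -b*b/r ; -q  -a ]  (the scalar Hamiltonian).
    // Steps 2-3: H has zero trace, so its eigenvalues are +mu and -mu with
    //            mu = sqrt(a^2 + q*b^2/r); "reordering" means selecting -mu.
    const double mu = std::sqrt(a * a + q * b * b / r);
    // Eigenvector for -mu, from row 1: (a + mu)*u1 - (b*b/r)*u2 = 0.
    const double u1 = b * b / r;
    const double u2 = a + mu;
    // Step 4: X = U21 * U11^{-1}; Step 5 (symmetrization) is trivial here.
    return u2 / u1;  // equals r*(a + mu)/b^2, the stabilizing solution
}
```

For a = b = q = r = 1 this gives X = 1 + sqrt(2), and the closed loop a - b*b*X/r = -sqrt(2) is stable, as expected.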
If you are using MATLAB, you can simply use the care, dare, and even lqg commands.
An ad hoc alternative is to use a genetic algorithm to solve these problems in C++ (solver, NLP demo) or MATLAB.
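To make the evolutionary idea concrete, here is a minimal sketch (my own, not the linked solvers): treat the squared CARE residual as a fitness function and minimize it with a degenerate (1+1) mutation scheme, shown for the scalar case.

```cpp
#include <cassert>
#include <random>

// Squared residual of the scalar CARE  2*a*p - p^2*b^2/r + q = 0,
// used as the fitness to be minimized.
double care_residual(double p, double a, double b, double q, double r) {
    const double res = 2.0 * a * p - p * p * b * b / r + q;
    return res * res;
}

// Degenerate genetic algorithm (population of one, mutation only) with a
// self-adapting step size. Illustrative only; a real GA keeps a population
// and handles matrix-valued unknowns.
double solve_care_random_search(double a, double b, double q, double r,
                                unsigned seed = 42, int iters = 200000) {
    std::mt19937 rng(seed);
    std::normal_distribution<double> mut(0.0, 1.0);
    double best = 1.0;                                  // initial guess
    double best_fit = care_residual(best, a, b, q, r);
    double step = 1.0;
    for (int i = 0; i < iters; ++i) {
        const double cand = best + step * mut(rng);     // mutate
        const double fit = care_residual(cand, a, b, q, r);
        if (fit < best_fit) { best = cand; best_fit = fit; step *= 1.1; }
        else                { step *= 0.999; }          // shrink on failure
    }
    return best;
}
```

Note that this can converge to either root of the scalar quadratic; selecting the stabilizing one would require an extra check on the closed-loop pole.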
Your question is incorrect to begin with, as the continuous-time ARE is
$\quad A^TP + P A - PBR^{-1}B^TP + Q = 0$
Hence, it is in just as intractable a form.
The LMI formulations of LQ control, in both discrete and continuous time, are typically written in terms of both the Riccati matrix and the feedback matrix.
Find minimum-trace $P$ such that
$ (A-BK)^T P (A-BK) - P + K^TRK + Q \preceq 0$
Multiply by $P^{-1}$ from the left and the right, define $Y = P^{-1}$ and $F = KP^{-1}$, and apply suitable Schur complements to arrive at a linear form.
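One possible linearized form after these steps (a sketch; the exact arrangement depends on which Schur complements are taken, and the $Q^{-1}$ block assumes $Q \succ 0$) is the LMI in the variables $Y \succ 0$ and $F$:

$$ \begin{bmatrix} Y & (AY-BF)^T & F^T & Y \\ AY-BF & Y & 0 & 0 \\ F & 0 & R^{-1} & 0 \\ Y & 0 & 0 & Q^{-1} \end{bmatrix} \succeq 0, $$

from which the feedback is recovered as $K = FY^{-1}$.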
Best Answer
The linked infinite-horizon linear quadratic regulator (LQR) problem can be generalized slightly to allow complex numbers, using
\begin{align} \min_{u(t)}&\, \frac{1}{2}\!\int_0^\infty x^\dagger(t)\,Q\,x(t) + u^\dagger(t)\,R\,u(t)\,dt, \\ \text{s.t.}\ &\ \dot{x}(t) = A\,x(t) + B\,u(t), \end{align}
with $x(t) \in \mathbb{C}^n$, $u(t) \in \mathbb{C}^m$, $A \in \mathbb{C}^{n \times n}$, $B \in \mathbb{C}^{n \times m}$, $Q \in \mathbb{C}^{n \times n}$, $R \in \mathbb{C}^{m \times m}$, such that $Q^\dagger = Q\succeq0$, $(A,Q)$ observable and $R^\dagger = R\succ0$. The constraints on $Q$ and $R$ ensure that the integrand of the cost function is nonnegative.
The linked solution can be obtained by using Pontryagin's maximum principle. Adapting this solution to the generalized complex infinite horizon LQR problem yields
$$ \mathcal{H}(x(t),u(t),\lambda(t)) = \frac{1}{2}\,(x^\dagger(t)\,Q\,x(t) + u^\dagger(t)\,R\,u(t)) + \lambda^\dagger(t)\,(A\,x(t) + B\,u(t)), $$
with
\begin{align} \dot{x}(t) &= \mathcal{H}_\lambda^\dagger(x(t),u(t),\lambda(t)) = A\,x(t) + B\,u(t), \\ \dot{\lambda}(t) &= -\mathcal{H}_x^\dagger(x(t),u(t),\lambda(t)) = -Q\,x(t) - A^\dagger\,\lambda(t), \\ 0 &= \mathcal{H}_u^\dagger(x(t),u(t),\lambda(t)) = R\,u(t) + B^\dagger\,\lambda(t). \end{align}
Solving the last equation for the control input yields $u(t) = -R^{-1} B^\dagger\,\lambda(t)$. Substituting this back into the expressions for the dynamics of the state and co-state and factoring $z(t) = \begin{bmatrix}x^\dagger(t) & \lambda^\dagger(t)\end{bmatrix}^\dagger$ out yields
$$ \dot{z}(t) = \underbrace{\begin{bmatrix} A & -B\,R^{-1} B^\dagger \\ -Q & -A^\dagger \end{bmatrix}}_H z(t). $$
The solution of the LQR problem requires that $z(t)\to0$ as $t\to\infty$. This can be achieved by choosing the initial condition of the co-state $\lambda(0)$ such that only stable modes are excited. Equivalently, $z(0)$ must lie in the span of the eigenvectors of $H$ whose associated eigenvalues all have a negative real part. Here the eigenvalue decomposition of $H$ is used, $H\,V = V\,\Lambda$, with $\Lambda\in\mathbb{C}^{2n\times2n}$ diagonal and $V\in\mathbb{C}^{2n\times2n}$, such that the $i$th diagonal entry of $\Lambda$ and the $i$th column of $V$ form an eigenvalue-eigenvector pair.
Analytically solving an eigenvalue problem is equivalent to finding the roots of a polynomial whose degree equals the dimension of the matrix. In general, only polynomials up to fourth order can be solved analytically. Since $H\in\mathbb{C}^{2n\times 2n}$, this implies that the eigenvalue decomposition of $H$ can be solved analytically only for $n=1$ or $n=2$.
It can be noted that $H$ is a Hamiltonian matrix, which has the property that if $\mu$ is an eigenvalue of $H$ then $-\mu^\dagger$ is as well. This reduces the number of unknowns by half, since the characteristic polynomial can be written as
$$ \det(\lambda\,I-H) = \prod_k (\lambda - \mu_k)\,(\lambda + \mu_k^\dagger) = \prod_k \left(\lambda^2 - 2\,j\,\mathcal{I}(\mu_k)\,\lambda - |\mu_k|^2\right), $$
with $j$ the imaginary unit and $\mathcal{I}(x)$ the imaginary component of $x$. So it might be possible to extend the analytic solutions also to the cases $n=3$ or $n=4$.
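The eigenvalue pairing is easy to check numerically for the $n=1$ complex case, where the characteristic polynomial is a quadratic; this sketch (names mine) computes both roots explicitly:

```cpp
#include <cassert>
#include <complex>

using cd = std::complex<double>;

// Eigenvalues of the n = 1 complex Hamiltonian
//   H = [ a  -g ; -q  -conj(a) ],  g = b*conj(b)/r,
// from the quadratic lambda^2 - tr(H)*lambda + det(H) = 0. Illustrates that if
// mu is an eigenvalue then -conj(mu) is the other one, as claimed above.
void hamiltonian_eigs(cd a, cd b, double q, double r, cd& lam1, cd& lam2) {
    const cd g = b * std::conj(b) / r;               // B R^{-1} B^dagger
    const cd tr = a - std::conj(a);                  // trace(H) = 2j*Im(a)
    const cd det = -a * std::conj(a) - g * q;        // det(H)
    const cd disc = std::sqrt(tr * tr - 4.0 * det);  // real and positive here
    lam1 = (tr + disc) / 2.0;
    lam2 = (tr - disc) / 2.0;                        // equals -conj(lam1)
}
```

Since the trace of $H$ is purely imaginary and the discriminant is real and positive for $q, r > 0$, the two roots indeed satisfy $\lambda_2 = -\lambda_1^\dagger$.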