Solution to an algebraic Riccati equation with complex matrices

control theory, matrix equations, optimal control

I am trying to find the analytical solution for the following Riccati equation:
$$
0 = F + W^\dagger P(t) + P(t) W + P(t)X P(t).
$$

In my particular problem I know that it has a solution. In this post a similar equation is solved by writing a Hamiltonian
$$
Z =
\begin{bmatrix}
W & X \\ -F & -W^\dagger
\end{bmatrix} = V\,\Lambda\,V^{-1}, \tag{8}
$$

except that in that case $W$ is real and instead of a dagger there is only a transpose. My problem is how to construct $V$. There it is said that $V$ is the matrix of eigenvalues of $Z$ (by which I assume the eigenvectors are meant, presumably the right eigenvectors arranged as columns, since $Z$ is not normal).
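
For concreteness, numerically I would construct $V$ from the right eigenvectors arranged as columns, e.g. as in the sketch below (placeholder matrices only, to fix the convention, assuming $Z$ is diagonalizable; this is not my actual problem):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2

# Placeholder matrices, only to fix the convention (not my actual problem).
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
F = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Z = np.block([[W, X], [-F, -W.conj().T]])

# np.linalg.eig returns the right eigenvectors as the *columns* of V,
# so Z = V diag(lam) V^{-1} whenever Z is diagonalizable.
lam, V = np.linalg.eig(Z)
print(np.allclose(Z, V @ np.diag(lam) @ np.linalg.inv(V)))  # True
```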

Another question is whether this method even generalizes to the case where $W$ is not real (the other matrices are positive). Is there a solution in this case?

Best Answer

The infinite horizon linear quadratic regulator (LQR) problem from the linked post can be generalized slightly to allow complex numbers using

\begin{align} \min_{u(t)}&\, \frac{1}{2}\!\int_0^\infty x^\dagger(t)\,Q\,x(t) + u^\dagger(t)\,R\,u(t)\,dt, \\ \text{s.t.}\ &\ \dot{x}(t) = A\,x(t) + B\,u(t), \end{align}

with $x(t) \in \mathbb{C}^n$, $u(t) \in \mathbb{C}^m$, $A \in \mathbb{C}^{n \times n}$, $B \in \mathbb{C}^{n \times m}$, $Q \in \mathbb{C}^{n \times n}$, $R \in \mathbb{C}^{m \times m}$, such that $Q^\dagger = Q\succeq0$, $(A,Q)$ observable and $R^\dagger = R\succ0$. The constraints on $Q$ and $R$ ensure that the integrand of the cost function is real and nonnegative.
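
As a side note, one way to generate matrices satisfying these constraints numerically is to build $Q$ and $R$ as Gram matrices. A minimal sketch (placeholder dimensions and random data, not tied to any particular system) could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2

# A generic complex system; no special structure assumed.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

# Hermitian positive semidefinite Q and Hermitian positive definite R,
# built as Gram matrices so the constraints hold by construction.
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = C.conj().T @ C
D = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
R = D.conj().T @ D + np.eye(m)

# Sanity checks: Hermitian symmetry and sign of the eigenvalues.
assert np.allclose(Q, Q.conj().T) and np.allclose(R, R.conj().T)
assert np.linalg.eigvalsh(Q).min() >= -1e-12
assert np.linalg.eigvalsh(R).min() > 0
```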

The linked solution can be obtained by using Pontryagin's maximum principle. Adapting this solution to the generalized complex infinite horizon LQR problem yields

$$ \mathcal{H}(x(t),u(t),\lambda(t)) = \frac{1}{2}\,(x^\dagger(t)\,Q\,x(t) + u^\dagger(t)\,R\,u(t)) + \lambda^\dagger(t)\,(A\,x(t) + B\,u(t)), $$

with

\begin{align} \dot{x}(t) &= \mathcal{H}_\lambda^\dagger(x(t),u(t),\lambda(t)) = A\,x(t) + B\,u(t), \\ \dot{\lambda}(t) &= -\mathcal{H}_x^\dagger(x(t),u(t),\lambda(t)) = -Q\,x(t) - A^\dagger\,\lambda(t), \\ 0 &= \mathcal{H}_u^\dagger(x(t),u(t),\lambda(t)) = R\,u(t) + B^\dagger\,\lambda(t). \end{align}

Solving the last equation for the control input yields $u(t) = -R^{-1} B^\dagger\,\lambda(t)$. Substituting this back into the dynamics of the state and co-state, and collecting them in $z(t) = \begin{bmatrix}x^\dagger(t) & \lambda^\dagger(t)\end{bmatrix}^\dagger$, yields

$$ \dot{z}(t) = \underbrace{\begin{bmatrix} A & -B\,R^{-1} B^\dagger \\ -Q & -A^\dagger \end{bmatrix}}_H z(t). $$

The solution of the LQR problem requires that $z(t) \to 0$ as $t\to\infty$. This can be achieved by choosing the initial condition of the co-state, $\lambda(0)$, such that only stable modes are excited. Equivalently, $z(0)$ has to lie in the span of the eigenvectors of $H$ whose associated eigenvalues have a negative real part. Here the eigenvalue decomposition of $H$ is used, $H\,V = V\,\Lambda$, with $\Lambda \in \mathbb{C}^{2n\times2n}$ diagonal and $V\in\mathbb{C}^{2n\times2n}$, such that the $i$th diagonal entry of $\Lambda$ and the $i$th column of $V$ form an eigenvalue-eigenvector pair.
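
The remaining step is then to partition the stable eigenvectors as $V_s = \begin{bmatrix} V_{11} \\ V_{21} \end{bmatrix}$ and take $P = V_{21}\,V_{11}^{-1}$, so that $\lambda(t) = P\,x(t)$ solves the Riccati equation. A minimal numerical sketch of the whole construction, assuming the identifications $W = A$, $X = -B\,R^{-1}B^\dagger$ and $F = Q$ (so that the question's $Z$ coincides with $H$) and that $H$ has no eigenvalues on the imaginary axis, could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 2

# Build a complex LQR instance; the question's matrices are recovered via
# W = A, X = -B R^{-1} B^dagger, F = Q (an assumption used here only to
# generate a consistent test problem).
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = C.conj().T @ C                       # Q = Q^dagger >= 0
R = np.eye(m)                            # R = R^dagger > 0

W = A
X = -B @ np.linalg.solve(R, B.conj().T)  # X = -B R^{-1} B^dagger
F = Q

# Hamiltonian matrix Z from the question (equal to H in the answer).
Z = np.block([[W, X], [-F, -W.conj().T]])

# Eigendecomposition; keep the n eigenvectors whose eigenvalues have Re < 0.
lam, V = np.linalg.eig(Z)
Vs = V[:, lam.real < 0]                  # stable invariant subspace, 2n x n
V11, V21 = Vs[:n, :], Vs[n:, :]

# Riccati solution from the stable subspace.
P = V21 @ np.linalg.inv(V11)

residual = F + W.conj().T @ P + P @ W + P @ X @ P
print(np.max(np.abs(residual)))          # should be ~0 (machine precision)
```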


Analytically solving an eigenvalue problem is equivalent to finding the roots of a polynomial whose degree equals the dimension of the matrix. In general it is only possible to solve polynomials up to fourth order analytically. This would imply that the eigenvalue decomposition of $H$ can only be solved analytically for $W,X,F\in\mathbb{C}^{n\times n}$ with $n=1$ or $n=2$, since $H\in\mathbb{C}^{2n\times 2n}$.
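
For example, for $n=1$ the question's Riccati equation reduces to a scalar quadratic (assuming a real scalar solution $P$ is sought, i.e. the Hermitian case):

$$ X\,P^2 + (W^\dagger + W)\,P + F = X\,P^2 + 2\,\mathcal{R}(W)\,P + F = 0 \quad\Rightarrow\quad P = \frac{-\mathcal{R}(W) \pm \sqrt{\mathcal{R}(W)^2 - X\,F}}{X}, $$

with $\mathcal{R}(x)$ the real part of $x$, and the sign chosen such that the closed loop $W + X\,P$ has a negative real part.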

It can be noted that $H$ is a Hamiltonian matrix, which has the property that if $\mu$ is an eigenvalue of $H$, then so is $-\mu^\dagger$. This would reduce the number of unknowns by half, since the characteristic polynomial can be written as

$$ \det(\lambda\,I-H) = \prod_k (\lambda - \mu_k)\,(\lambda + \mu_k^\dagger) = \prod_k \left(\lambda^2 - 2\,j\,\mathcal{I}(\mu_k)\,\lambda - |\mu_k|^2\right), $$

with $j$ the imaginary unit and $\mathcal{I}(x)$ the imaginary part of $x$. So it might be possible to extend the analytical solution also to the cases $n=3$ and $n=4$.
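
As a quick numerical illustration of this eigenvalue symmetry, the following sketch builds a generic complex Hamiltonian matrix (placeholder random blocks, with the Hermitian structure assumed above) and checks that its spectrum is invariant under $\mu \mapsto -\mu^\dagger$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# A generic complex Hamiltonian matrix [[A, S], [-Q, -A^dagger]] with S, Q
# Hermitian (placeholder data, only to illustrate the eigenvalue symmetry).
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = S + S.conj().T
Q = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = Q + Q.conj().T
H = np.block([[A, S], [-Q, -A.conj().T]])

mu = np.linalg.eigvals(H)
# For every eigenvalue mu_k, -conj(mu_k) should also (numerically) be one.
print(max(np.min(np.abs(mu + z.conj())) for z in mu))  # should be ~0
```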