Converse linear quadratic optimal control

control theory, matrix equations, optimal control, optimization

It is well known that for a linear time invariant system

$$
\dot{x} = A x + B u \tag{1}
$$

with $(A, B)$ controllable, there exists a static state feedback $u = -K x$ such that the cost function

$$
J = \int_0^{\infty} x^T Q x + u^T R u \, dt \tag{2}
$$

is minimized, assuming $Q \geq 0$ (positive semi-definite) and $R > 0$ (positive definite). The gain $K$ is obtained from the solution $P$ of the algebraic Riccati equation:

$$
\begin{align}
0 &= A^T P + P A - P B R^{-1} B^T P + Q \\
K &= R^{-1} B^T P \\
P &= P^T \geq 0
\end{align}
$$
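The forward direction can be sketched numerically with SciPy's continuous-time ARE solver; the double-integrator matrices below are illustrative placeholders, not part of the question:

```python
# Forward LQR for (1)-(2): solve the ARE, form K = R^{-1} B' P,
# and confirm the closed loop A - BK is Hurwitz.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # double integrator (example)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)   # Q >= 0
R = np.eye(1)   # R > 0

P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - PBR^{-1}B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)        # K = R^{-1} B' P

# All closed-loop eigenvalues should lie in the open left half-plane.
assert np.all(np.linalg.eigvals(A - B @ K).real < 0)
```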

known as the linear quadratic regulator (LQR). However, I wonder whether the converse also holds.

That is, given a stabilizing gain $K_s$ (such that $A - B K_s$ is Hurwitz), do there exist matrices $Q \geq 0$ and $R > 0$ such that $u = -K_s x$ minimizes $(2)$ subject to $(1)$? Or, put differently:

Question: Is every stabilizing linear state feedback optimal in some sense?

Best Answer

See the paper: Kalman, R. E. (1964). When is a linear control system optimal? Journal of Basic Engineering, 86(1), 51-60.

The answer is positive at least for a class of systems. As far as I remember, it is also positive for general LTI systems, but I cannot find a reference at the moment.
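For single-input systems, Kalman's paper gives a frequency-domain test: a stabilizing gain is optimal for some $Q \geq 0$ (with $R$ normalized to $1$) only if the return difference satisfies $|1 + K(j\omega I - A)^{-1}B| \geq 1$ for all $\omega$. A rough numerical check on a finite frequency grid (the system and gain below are assumed examples, and $K = [1, 2]$ happens to stabilize the double integrator) might look like:

```python
# Check Kalman's return-difference inequality on a frequency grid.
# Passing the check on a grid is evidence, not a proof, of optimality.
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 2.0]])   # candidate stabilizing gain (example)

omegas = np.logspace(-3, 3, 2000)
rd = [abs(1.0 + (K @ np.linalg.solve(1j * w * np.eye(2) - A, B)).item())
      for w in omegas]
print(min(rd) >= 1.0 - 1e-9)  # True: inequality holds on this grid
```

For this particular gain the return difference works out to $1 + 1/\omega^2 \geq 1$, so the inequality holds at every frequency.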

UPDATE: Every linear system with nondynamic feedback is optimal with respect to a quadratic performance index that includes a cross-product term between the state and control, see [R1].

If you do not allow for the cross-product term, then several necessary and sufficient conditions are known; see for example [R2] and the references therein.
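Without the cross-product term, one simple (if crude) test for a specific candidate pair is to solve the ARE for that $(Q, R)$ and compare the resulting gain with $K_s$: agreement certifies optimality for that pair, while disagreement only rules out that particular choice. The matrices below are illustrative assumptions; for the double integrator with $Q = I$, $R = 1$, the optimal gain is $[1, \sqrt{3}]$:

```python
# Test whether a given stabilizing gain Ks is the LQR gain for a
# chosen candidate pair (Q, R). This checks one pair only; the inverse
# problem in [R2] searches over all admissible (Q, R).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Ks = np.array([[1.0, np.sqrt(3.0)]])   # gain under test (example)

Q, R = np.eye(2), np.eye(1)            # candidate weights
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print(np.allclose(K, Ks, atol=1e-6))   # True: Ks is optimal for this (Q, R)
```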

[R1] Kreindler, E., & Jameson, A. (1972). Optimality of linear control systems. IEEE Transactions on Automatic Control, 17(3), 349-351.

[R2] Priess, M. C., Conway, R., Choi, J., Popovich, J. M., & Radcliffe, C. (2015). Solutions to the inverse LQR problem with application to biological systems analysis. IEEE Transactions on Control Systems Technology, 23(2), 770-777.
