[Math] the difference between optimal control and robust control

control theory, kalman filter, optimal control, optimization

What is the difference between optimal control and robust control?

I know that optimal control has these controllers:

  • LQR – State feedback controller
  • LQG – State feedback observer controller
  • LQGI – State feedback observer integrator controller
  • LQGI/LTR – State feedback observer integrator loop transfer recovery controller (for increased robustness)

And robust control has:

  • $H_{2}$ controller
  • $H_{\infty}$ controller

But what are they? When are they better than LQ controllers? Do the H-controllers include a Kalman filter? Are the H-controllers multivariable? Are they faster than LQ controllers?

Best Answer

There's a huge difference. Optimal control seeks to optimize a performance index over a span of time, while robust control seeks to optimize the stability and quality of the controller (its "robustness") given uncertainty in the plant model, feedback sensors, and actuators.

Optimal control assumes your model is perfect and optimizes a functional you provide. If your model is imperfect, your optimal controller is not necessarily optimal! It is also only optimal for the specific cost functional you provide! LQ optimal control is ONLY truly optimal for a completely linear plant (unlikely in practice) and a quadratic cost functional. Anything else and there's no rigorous claim to optimality.
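To make the LQ case concrete, here is a minimal sketch of computing an LQR state-feedback gain with SciPy. The double-integrator plant and the identity weights are illustrative assumptions, not from the answer; the point is that the gain is optimal only for this exact $(A, B, Q, R)$:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed example plant: a double integrator, x1' = x2, x2' = u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# The quadratic cost functional J = ∫ (xᵀQx + uᵀRu) dt — the
# "functional you provide"; here simply Q = I, R = 1 (assumed weights)
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the continuous algebraic Riccati equation, then form K = R⁻¹BᵀP
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

print(K)  # analytically K = [[1, √3]] for this plant and cost
print(np.linalg.eigvals(A - B @ K).real)  # closed loop x' = (A−BK)x is stable
```

Change the plant or the weights and the computed $K$ changes with them; nothing in this procedure accounts for $A$ or $B$ being wrong.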

Robust control assumes your model is imperfect. Suppose, for instance, some parameters in your model are believed to be in a certain range but are not known for sure. An $H_2$ or $H_{\infty}$ controller will decide which control signals are admissible based on the level of uncertainty in the core parameters. For example, if you have the plant $$ P(s) = \frac{1}{s+a} $$ but only know $a \in [b,c]$ for some given $b$ and $c$, a robust controller will clamp overly aggressive control signals that would risk pushing the pole at $-a$ into the right half-plane.
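The worst-case reasoning behind that example can be sketched in a few lines. For the plant $P(s) = 1/(s+a)$ under proportional feedback $u = -k\,y$, the closed-loop pole sits at $s = -(a+k)$, so a robust design must keep $a + k > 0$ for every admissible $a$, i.e. it must plan against the worst case $a = b$. The interval endpoints and gain below are illustrative assumptions:

```python
import numpy as np

# Assumed uncertainty interval a ∈ [b, c] and candidate gain k
b, c = -0.5, 2.0   # the true pole may even start slightly unstable
k = 1.0            # proportional feedback u = -k·y

# Closed-loop pole for each admissible a is s = -(a + k);
# the binding constraint is the largest (rightmost) pole over the interval
a_grid = np.linspace(b, c, 11)
worst_pole = max(-(a + k) for a in a_grid)
print(worst_pole)  # → -0.5: stable for EVERY a in [b, c]
```

With a smaller gain, say $k = 0.3$, the worst-case pole $-(b+k) = 0.2$ lands in the right half-plane, so that gain would be rejected even though it stabilizes the nominal plant; this worst-case-over-the-uncertainty-set viewpoint is the essence of what an $H_{\infty}$ synthesis formalizes.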
