[Math] What’s the reason why linear control works for nonlinear models

control-theory, linear-control, nonlinear-optimization, nonlinear-system, optimal-control

In control theory, we constantly use linear control for nonlinear models.

That makes sense when the only nonlinearity is a saturation limit. But what if our model is nonlinear like this:

$$M\ddot {x} + \text{sat} (B\dot {x}^2) + Kx = F $$

The model then contains two nonlinearities: the quadratic term and the saturation limit. The saturation limit is not a problem as long as the controller never drives the system into it, but the quadratic term is a serious problem.

To handle this, we linearize the equation.
But linearizing at a specific operating point means we can only control the system near that point; elsewhere, the fixed control law performs poorly.
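To make the linearization step concrete, here is a minimal numerical sketch for the model above, written as the state-space system $\dot{x}_1 = x_2$, $\dot{x}_2 = (F - \text{sat}(B x_2^2) - Kx_1)/M$. The parameter values and the saturation limit are illustrative assumptions, not from the question:

```python
import numpy as np

# Hypothetical parameters for M*xdd + sat(B*xd^2) + K*x = F
M, B, K = 1.0, 0.5, 4.0
SAT = 2.0  # assumed saturation limit

def sat(v, limit=SAT):
    return np.clip(v, -limit, limit)

def f(state, F):
    """Nonlinear state-space model: state = [x, xdot]."""
    x, xd = state
    xdd = (F - sat(B * xd**2) - K * x) / M
    return np.array([xd, xdd])

def linearize(f, x0, u0, eps=1e-6):
    """Numerical Jacobians A = df/dx, b = df/du at the operating point (x0, u0)."""
    n = len(x0)
    A = np.zeros((n, n))
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        A[:, i] = (f(x0 + d, u0) - f(x0 - d, u0)) / (2 * eps)
    b = (f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)
    return A, b

# At the origin, d/dxd of sat(B*xd^2) is zero, so both nonlinearities
# vanish from the linear model: it reduces to M*xdd + K*x = F.
A, b = linearize(f, np.array([0.0, 0.0]), 0.0)
print(A)  # approximately [[0, 1], [-K/M, 0]]
print(b)  # approximately [0, 1/M]
```

Note that the linearized model is only valid near the chosen operating point, which is exactly the limitation described above: a control law designed from $A$ and $b$ degrades as the state moves away from the origin.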

So my questions are:
Why do we use linear models to derive linear control laws for nonlinear systems, even when the model is highly nonlinear?

Isn't quadratic programming with a nonlinear state-space model the only option for doing optimal nonlinear control that really works for all systems?

Best Answer

TL;DR (short answer): If you want an inverted pendulum to balance itself, a linear controller can stabilize the system. If you want to land on the moon, you will have to use nonlinear control. Depending on the design requirements a linear controller may be sufficient, but for high-end applications you will most likely need nonlinear control design.


Long answer: Linear controllers simply work for many nonlinear plants. This is possible because many nonlinear systems can be described quite accurately by a linear approximation around a specific operating point. Control architectures such as gain scheduling are based on this principle. The downside of linearization is that the control effort is often inefficient and the performance of the system is not as good as it could be. One example comes from robotics, where the end effector (for example a cutting tool) has to follow a circular trajectory: a simple PID controller cannot provide sufficient quality for the reference-tracking task. Another example is time-optimal control: if you want to move a point mass from an initial position to the origin in minimal time while the input $u$ is constrained, the solution is a switching bang-bang control, which is nonlinear.
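The time-optimal point-mass example can be sketched directly. For the double integrator $\ddot{x} = u$ with $|u| \le 1$, the well-known minimum-time feedback law switches $u$ on the curve $x = -\tfrac{1}{2}\dot{x}|\dot{x}|$; the simulation parameters below are illustrative:

```python
import numpy as np

def bang_bang(x, xd):
    """Time-optimal control for the double integrator xdd = u, |u| <= 1.
    The switching curve is x = -0.5 * xd * |xd|; the control is +/-1
    depending on which side of the curve the state lies."""
    s = x + 0.5 * xd * abs(xd)
    if s > 0:
        return -1.0
    if s < 0:
        return 1.0
    return -np.sign(xd)  # on the curve: brake toward the origin

# Forward-Euler simulation from x = 1, xd = 0.
# Analytically, the minimum-time arrival at the origin is t = 2.
dt, x, xd = 1e-3, 1.0, 0.0
for _ in range(int(2.5 / dt)):
    u = bang_bang(x, xd)
    x, xd = x + dt * xd, xd + dt * u

print(x, xd)  # state ends up close to the origin
```

The resulting control signal only ever takes the extreme values $\pm 1$, which is why no linear (and therefore continuous-in-state) control law can be time-optimal here.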

At the same time, linear control theory is an almost completed field of research (only a few open questions remain), which means control designers know very well what they can achieve and how. Linear control theory is therefore a fairly universal toolbox for many control problems and can be applied easily.

Nonlinear control theory is still a very active field of research. Hence, there are only a few methods, such as exact feedback linearization, backstepping, and sliding mode control, that can be applied to reasonably general classes of nonlinear systems, and there are still nonlinear systems that cannot be controlled by any of these architectures. These methods also have drawbacks. Exact feedback linearization requires very precise knowledge of the system parameters, and it cancels even useful nonlinearities that would otherwise reduce the control effort. Backstepping and sliding mode control can be made very robust against plant uncertainties, but backstepping only works for systems in strict-feedback form, you have to come up with a control Lyapunov function for the first subsystem, and the resulting closed-loop dynamics are nonlinear and difficult to predict. Sliding mode control, in turn, exhibits chattering close to the switching surface.
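To illustrate both the appeal and the drawbacks of exact feedback linearization on the question's model: choose $F = \text{sat}(B\dot{x}^2) + Kx + Mv$ so that the closed loop becomes the linear system $\ddot{x} = v$, then pick $v$ by pole placement. The parameters and gains below are illustrative assumptions; note that the cancellation is exact only because the controller uses the exact same plant parameters, and that it also cancels the damping term, which was actually helping:

```python
import numpy as np

# Exact feedback linearization sketch for M*xdd + sat(B*xd^2) + K*x = F
M, B, K, SAT = 1.0, 0.5, 4.0, 2.0  # assumed, perfectly known parameters

def sat(v, limit=SAT):
    return np.clip(v, -limit, limit)

def control(x, xd, k1=4.0, k2=4.0):
    """Cancel the nonlinearity and stiffness, then impose the linear
    dynamics xdd = -k1*x - k2*xd (both closed-loop poles at -2)."""
    v = -k1 * x - k2 * xd
    return sat(B * xd**2) + K * x + M * v

# Forward-Euler simulation of the closed loop from x = 1, xd = 0
x, xd, dt = 1.0, 0.0, 1e-3
for _ in range(5000):  # 5 seconds
    F = control(x, xd)
    xdd = (F - sat(B * xd**2) - K * x) / M
    x, xd = x + dt * xd, xd + dt * xdd

print(x, xd)  # state decays toward the origin
```

If the controller's $B$ or $K$ differed from the plant's, the cancellation would be imperfect and residual nonlinear terms would remain in the closed loop, which is exactly the parameter-sensitivity drawback mentioned above.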
