[Math] In control theory, why do we linearize around the equilibrium for a nonlinear system


For example, in these notes:
[image: excerpt from lecture notes with the pendulum linearization example]

In the first example with the pendulum, they define the equilibrium as the point where the pendulum is at the vertical position (x = 0), with an angular velocity of 0 (x' = 0) and an input torque of 0 (u = 0). Why do we want to study the behaviour of a system at rest? Why not linearize somewhere else? Couldn't linearization be performed at a state where the system is not at rest, in case you wanted information about the system in such a configuration?

Best Answer

In general, you can linearize around any known solution. The idea is that once a solution $\theta_0(t)$ of the pendulum equation $I\theta''+Mgl\sin\theta = u$ is known, nearby solutions $\theta$ with the same input approximately obey a linear equation: writing $\theta(t) = \theta_0(t) + h(t)$ we get $$ I\theta''+Mgl\sin\theta \approx (I\theta_0''+Mgl\sin\theta_0) + Ih''+(Mgl\cos\theta_0)h, $$ which leads to the approximate linear equation $Ih''+(Mgl\cos\theta_0)h = 0$, because $I\theta_0''+Mgl\sin\theta_0 = u$.
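Here is a minimal numerical sketch of that claim, assuming illustrative parameter values ($I = M = l = 1$, $g = 9.81$) and the equilibrium $\theta_0 = 0$, $u = 0$: it integrates the full nonlinear pendulum next to the linearized perturbation equation $Ih'' + (Mgl\cos\theta_0)h = 0$ and compares the two for a small initial deviation.

```python
# Sketch: compare the nonlinear pendulum with its linearization about theta0 = 0.
# Parameter values are illustrative, not taken from the original notes.
import numpy as np
from scipy.integrate import solve_ivp

I, M, g, l = 1.0, 1.0, 9.81, 1.0   # illustrative parameters
u = 0.0                            # input torque held at its equilibrium value
th0 = 0.0                          # equilibrium angle we linearize around

def nonlinear(t, x):               # x = [theta, theta_dot]
    th, w = x
    return [w, (u - M * g * l * np.sin(th)) / I]

def linearized(t, x):              # x = [h, h_dot], perturbation about th0
    h, hd = x
    return [hd, -(M * g * l * np.cos(th0)) * h / I]

x0 = [0.1, 0.0]                    # small initial deviation from equilibrium
t = np.linspace(0, 5, 200)
sol_nl = solve_ivp(nonlinear, (0, 5), x0, t_eval=t)
sol_li = solve_ivp(linearized, (0, 5), x0, t_eval=t)

# For small deviations the two trajectories stay close.
print("max |theta_nonlinear - (th0 + h_linear)| =",
      np.abs(sol_nl.y[0] - (th0 + sol_li.y[0])).max())
```

Making the initial deviation larger (say 1.5 rad instead of 0.1) shows the approximation degrading, which is exactly the "nearby solutions" caveat in the derivation above.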

The catch is: do you know $\theta_0$ to begin with? An equilibrium solution is easy to find. Finding a generic solution... well, that's just the original problem.
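To make "an equilibrium solution is easy to find" concrete: setting $\theta_0' = \theta_0'' = 0$ reduces the differential equation to the algebraic condition $Mgl\sin\theta_0 = u$. A small symbolic sketch (symbol names are my own, not from the notes):

```python
# Finding the equilibria of I*theta'' + M*g*l*sin(theta) = u for constant u:
# with all derivatives set to zero, only M*g*l*sin(theta0) = u remains.
import sympy as sp

theta0, u = sp.symbols('theta0 u', real=True)
M, g, l = sp.symbols('M g l', positive=True)

equilibria = sp.solve(sp.Eq(M * g * l * sp.sin(theta0), u), theta0)
print(equilibria)   # solutions in terms of asin(u/(M*g*l))
```

By contrast, finding a generic time-varying solution $\theta_0(t)$ means solving the original nonlinear ODE, which is the problem we were trying to avoid.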

But you will occasionally see linearization along a non-constant periodic orbit (a limit cycle), or even along an arbitrary reference trajectory; in the control setting this is generally referred to as tracking control.
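In that case the same formula applies, but the coefficient $Mgl\cos\theta_0(t)$ now varies with time, so the perturbation obeys a linear time-varying (LTV) system. A brief sketch, assuming a made-up reference trajectory `theta_ref` and the same illustrative parameters as before:

```python
# Linearization along a non-constant reference: the perturbation h satisfies
# I*h'' + M*g*l*cos(theta_ref(t))*h = 0, i.e. an LTV system x' = A(t) x
# with x = [h, h'].  theta_ref is hypothetical, not from the original notes.
import numpy as np

I, M, g, l = 1.0, 1.0, 9.81, 1.0

def theta_ref(t):                  # hypothetical reference trajectory
    return 0.5 * np.sin(2 * np.pi * 0.2 * t)

def A(t):                          # time-varying state matrix of the perturbation system
    return np.array([[0.0, 1.0],
                     [-M * g * l * np.cos(theta_ref(t)) / I, 0.0]])

print(A(0.0))   # at t = 0, theta_ref = 0, so this matches the equilibrium linearization
print(A(1.0))   # along the trajectory the coefficients change with time
```

The price of linearizing along a trajectory instead of an equilibrium is that the resulting linear model is time-varying, and (as noted above) you must already know the reference solution and the input that generates it.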