TL;DR (short answer): If you want to balance an inverted pendulum, a linear controller is enough to stabilize the system. If you want to land on the moon, you will have to use nonlinear control. Depending on the design requirements a linear controller may be sufficient, but for high-end applications you will most likely need a nonlinear control design.
Long answer:
Linear controllers simply work for many nonlinear plants. This is possible because many nonlinear systems can be described quite accurately by a linear approximation around a specific operating point. Control architectures like gain scheduling are based on this principle. The downside of linearization is that the control effort is often inefficient and the performance of the system is not as good as it could be. An example from robotics is an end effector (for example a cutting tool) that has to follow a circular trajectory: a simple PID controller cannot provide sufficient quality for this reference tracking task. Another example is time-optimal control: if you want to move a point mass from an initial position to the origin in minimal time while the input $u$ is constrained, the solution is a switching bang-bang control, which is nonlinear.
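To make the bang-bang example concrete, here is a minimal simulation sketch in Python (illustrative values; the switching-curve law is the standard time-optimal solution for a double integrator $\dot{x}_1 = x_2$, $\dot{x}_2 = u$ with $|u| \le u_{max}$):

```python
import numpy as np

def bang_bang(x1, x2, umax=1.0):
    # Time-optimal switching function for the double integrator:
    # s = x1 + x2*|x2|/(2*umax); apply u = -umax*sign(s),
    # and u = -umax*sign(x2) once on the switching curve s = 0.
    s = x1 + x2 * abs(x2) / (2 * umax)
    if abs(s) > 1e-9:
        return -umax * np.sign(s)
    return -umax * np.sign(x2)

# Euler simulation from (x1, x2) = (1, 0); the analytic minimum time is 2 s
dt, x1, x2 = 1e-3, 1.0, 0.0
for _ in range(int(2.0 / dt)):
    u = bang_bang(x1, x2)
    x1, x2 = x1 + dt * x2, x2 + dt * u

print(x1, x2)  # both close to 0: the origin is reached in minimal time
```

Note the control only ever takes the values $\pm u_{max}$ and switches once, which no linear state feedback can reproduce.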
At the same time, linear control theory is almost a completed field of research (only a few open questions remain), which means control designers know very well what they can achieve and how. Hence, linear control theory is a fairly universal toolbox for many control problems and it can be applied easily.
Nonlinear control theory is still a very active field of research. Hence, there are only a few methods, like exact feedback linearization, backstepping and sliding mode control, that can be applied to more general classes of nonlinear systems, and there are still nonlinear systems that cannot be controlled by these architectures. These methods also have drawbacks. Exact feedback linearization requires very precise knowledge of the system parameters, and it cancels even useful nonlinearities that would otherwise reduce the control effort. Backstepping and sliding mode control can be made very robust against plant uncertainties. Backstepping, however, only works for systems in strict-feedback form, you have to come up with a control Lyapunov function for the first subsystem, and the resulting closed-loop dynamics are nonlinear and difficult to predict. Sliding mode control, on the other hand, shows chattering close to the switching surface.
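The chattering effect is easy to reproduce numerically. Below is a hedged sketch (illustrative plant and gains, not a general recipe) of a first-order sliding mode controller $u = -k\,\mathrm{sign}(x)$ applied to the uncertain plant $\dot{x} = a x + u$:

```python
import numpy as np

a, k, dt = 1.0, 3.0, 1e-3    # plant gain a is "unknown" to the controller; k bounds it
x, xs, us = 2.0, [], []
for _ in range(5000):
    u = -k * np.sign(x)      # discontinuous switching control
    x += dt * (a * x + u)    # Euler step of x' = a*x + u
    xs.append(x)
    us.append(u)

# The state reaches the sliding surface x = 0 and then chatters around it,
# while u keeps switching between +k and -k at the sampling rate:
print(abs(xs[-1]), max(us[-100:]) - min(us[-100:]))
```

The state converges despite the uncertain $a$, but the input toggles between $\pm k$ every few samples near the surface, which is exactly the chattering that wears out real actuators.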
For linear quadratic integral (LQI) control to work, the augmented system has to be stabilizable. This limitation can also be found in the MATLAB documentation of the lqi function.
A linear time-invariant (LTI) system is stabilizable if all its uncontrollable modes are stable. You can first check with the ctrb function whether there are uncontrollable modes:
% Plant matrices
A = [-1.34, 0.672, -12.9669, 9.775;
    -2.07, -3.275, 1.707, 0;
    4.405, 0.2345, -4.3911, 0;
    0, 1, 0.0713, 0];
B = [0, -3.0234;
    18.624, 24.11;
    14.073, -7.06;
    0, 0];
C = eye(4);
% Augmented system for LQI: integrator states on the tracking error
Aa = [A, zeros(4, 4); -C, zeros(4, 4)];
Ba = [B; zeros(4, 2)];
Ca = [C, zeros(4, 4)];
rank(ctrb(Aa, Ba))
This script gives the output
ans =
6
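If you don't have MATLAB at hand, the same rank check can be reproduced with NumPy (an illustrative translation of the script above; the controllability matrix is built by hand since NumPy has no ctrb):

```python
import numpy as np

A = np.array([[-1.34, 0.672, -12.9669, 9.775],
              [-2.07, -3.275, 1.707, 0],
              [4.405, 0.2345, -4.3911, 0],
              [0, 1, 0.0713, 0]])
B = np.array([[0, -3.0234],
              [18.624, 24.11],
              [14.073, -7.06],
              [0, 0]])
C = np.eye(4)

# Augmented system for LQI: integrator states on the tracking error
Aa = np.block([[A, np.zeros((4, 4))], [-C, np.zeros((4, 4))]])
Ba = np.vstack([B, np.zeros((4, 2))])

# Controllability matrix [Ba, Aa*Ba, ..., Aa^(n-1)*Ba]
n = Aa.shape[0]
Co = np.hstack([np.linalg.matrix_power(Aa, i) @ Ba for i in range(n)])
print(np.linalg.matrix_rank(Co))  # 6, i.e. two uncontrollable modes
```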
so your augmented system has two uncontrollable modes. You can also use the ctrbf function to get a Kalman (staircase) decomposition, which separates the controllable and uncontrollable parts: it finds an orthogonal similarity transform $T$ such that
$$
\bar{A}_a = T A_a T^T = \begin{bmatrix}
A_{a,uc} & 0 \\
A_{a,21} & A_{a,c}
\end{bmatrix}
$$
where $A_{a,c}$ is the controllable and $A_{a,uc}$ the uncontrollable portion of your augmented system matrix $A_a$. In code:
[Aa_bar, Ba_bar, Ca_bar, T, k] = ctrbf(Aa, Ba, Ca);
n_uc = size(Aa, 1) - sum(k); % Number of uncontrollable modes is 8 - 6 = 2
Aa_uc = Aa_bar(1:n_uc, 1:n_uc)
which outputs
Aa_uc =
1.0e-16 *
0.330254728851448 0.215513706491097
0.433511198605747 0.089609131250558
So $A_{a, uc}$ is (up to numerical precision) a zero matrix: its eigenvalues lie at the origin, so $A_{a, uc}$ is not a Hurwitz matrix. The uncontrollable modes are therefore not (asymptotically) stable, and the augmented system is not stabilizable.
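As a cross-check that avoids the Kalman decomposition entirely, the PBH (Popov-Belevitch-Hautus) test can be run in NumPy (an illustrative sketch; matrices copied from the script above): an eigenvalue $\lambda$ is stabilizable iff $\mathrm{rank}([A_a - \lambda I,\; B_a]) = n$ whenever $\mathrm{Re}(\lambda) \ge 0$.

```python
import numpy as np

A = np.array([[-1.34, 0.672, -12.9669, 9.775],
              [-2.07, -3.275, 1.707, 0],
              [4.405, 0.2345, -4.3911, 0],
              [0, 1, 0.0713, 0]])
B = np.array([[0, -3.0234],
              [18.624, 24.11],
              [14.073, -7.06],
              [0, 0]])
Aa = np.block([[A, np.zeros((4, 4))], [-np.eye(4), np.zeros((4, 4))]])
Ba = np.vstack([B, np.zeros((4, 2))])

# PBH test over the closed right half-plane eigenvalues (a small tolerance
# catches the numerically-zero integrator eigenvalues):
n = Aa.shape[0]
bad = [lam for lam in np.linalg.eigvals(Aa)
       if lam.real >= -1e-9
       and np.linalg.matrix_rank(np.hstack([Aa - lam * np.eye(n), Ba])) < n]
print(bad)  # eigenvalues at the origin fail the test -> not stabilizable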
Best Answer
LQR stands for linear quadratic regulator, where: linear refers to the linear dynamics of the system (which can be time-invariant or time-varying); quadratic refers to the cost function, an integral of a quadratic form, which the LQR minimizes; regulator refers to the goal of the control input, namely to bring the system state to zero.
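To make these three ingredients concrete, here is a minimal continuous-time LQR sketch in Python for a double integrator (illustrative $Q$ and $R$; in MATLAB you would simply call lqr). It computes the Riccati solution from the stable invariant subspace of the Hamiltonian matrix, the classical textbook route:

```python
import numpy as np

# Double integrator x' = Ax + Bu with quadratic cost J = int(x'Qx + u'Ru)dt
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Hamiltonian matrix; its stable invariant subspace [X1; X2] gives P = X2*X1^-1
H = np.block([[A, -B @ np.linalg.inv(R) @ B.T],
              [-Q, -A.T]])
w, V = np.linalg.eig(H)
stable = V[:, w.real < 0]
P = np.real(stable[2:] @ np.linalg.inv(stable[:2]))

K = np.linalg.inv(R) @ B.T @ P      # optimal state feedback u = -Kx
eigs = np.linalg.eigvals(A - B @ K)
print(K)          # approximately [[1, sqrt(3)]] for this Q, R
print(eigs.real)  # negative: the regulator stabilizes the system
```

For this particular $Q$ and $R$ the gain can also be derived by hand from the algebraic Riccati equation, which makes it a convenient sanity check.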
Since your system is not linear you can't directly use LQR for it. You would either have to linearize your model around an equilibrium point, or use another technique such as model predictive control (MPC). Linearization will in general only stabilize the system locally around the equilibrium point, while a nonlinear MPC problem may be non-convex and therefore potentially very hard to solve.
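For illustration, here is the linearization route for an inverted pendulum $\ddot{\theta} = (g/l)\sin\theta + u/(ml^2)$ around its upright equilibrium (hypothetical parameters $g$, $l$, $m$). Replacing $\sin\theta \approx \theta$ yields an LTI model you could hand to LQR, and its open-loop instability is exactly why the resulting controller only works locally:

```python
import numpy as np

# State x = [theta, theta'] measured from the upright position.
# Jacobian linearization at theta = 0: sin(theta) ~ theta.
g, l, m = 9.81, 1.0, 1.0                      # hypothetical parameters
A = np.array([[0.0, 1.0], [g / l, 0.0]])      # d/dx of the dynamics at x = 0
B = np.array([[0.0], [1.0 / (m * l**2)]])     # d/du of the dynamics

# One eigenvalue at +sqrt(g/l): the linearization is unstable, so any
# feedback designed on (A, B) is only valid near the upright equilibrium.
print(np.linalg.eigvals(A))
```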