Well, you're asking specifically how to use the lsim command and not about the linearization of your model, so I'm working from the assumption that your model is correctly linearized. Starting there, you would use the lsim command as follows:
lsim(sys,u,t,x0)
From the documentation:
The vector t specifies the time samples for the simulation ... and consists of regularly spaced time samples:
t = 0:dt:Tfinal;
Then,
The input u is an array having as many rows as time samples (length(t)) and as many columns as system inputs. For instance, if sys is a SISO system, then u is a t-by-1 vector. If sys has three inputs, then u is a t-by-3 array. Each row u(i,:) specifies the input value(s) at the time sample t(i).
And finally,
x0 is an initial condition for the system states. This syntax applies only when sys is a state-space model.
So, first define your system. Your question isn't in a state-space form that Matlab can accept, because your state and input vectors $\bf{x}$ and $\bf{u}$ have constants in them. I'm going to make this a little easier on myself by masking everything behind constants. Multiply everything out and you get:
$$
\left[\begin{array}{cc}
a & b \\
c & d \\
\end{array}\right]\left[\begin{array}{c}
x_1 + e \\
x_2 + f \\
\end{array}\right] =
\left[\begin{array}{c}
a(x_1 + e) + b(x_2 + f) \\
c(x_1 + e) + d(x_2 + f) \\
\end{array}\right]
$$
$$
\left[\begin{array}{c}
a(x_1 + e) + b(x_2 + f) \\
c(x_1 + e) + d(x_2 + f) \\
\end{array}\right] =
\left[\begin{array}{c}
a x_1 + ae + b x_2 + bf \\
c x_1 + ce + d x_2 + df \\
\end{array}\right] =
\left[\begin{array}{c}
a x_1 + b x_2 + ae + bf \\
c x_1 + d x_2 + ce + df \\
\end{array}\right]
$$
So, the tricky bit here might be to recognize that those last terms, $ae+bf$ and $ce+df$, are multiplied by one, so really you wind up with something like:
$$
\left[\begin{array}{c}
(a)x_1 + (b)x_2 + (ae + bf)1 \\
(c)x_1 + (d)x_2 + (ce + df)1 \\
\end{array}\right]
$$
Now, if you were to have a variable that didn't change over time, you could set its initial value to one and then use that. You do this by adding a state to your state matrix $\bf{A}$ and leaving the corresponding row for that state equal to zeros. Now the new state ($x_3$) doesn't change over time, and as long as you set its initial value to one, you are left with the form you need in proper state space form:
$$
\left[\begin{array}{c}
(a)x_1 + (b)x_2 + (ae + bf)x_3 \\
(c)x_1 + (d)x_2 + (ce + df)x_3 \\
\end{array}\right]
$$
You can easily separate that out:
$$
\left[\begin{array}{ccc}
a & b & ae + bf \\
c & d & ce + df \\
0 & 0 & 0 \\
\end{array}\right]\left[\begin{array}{c}
x_1 \\
x_2 \\
x_3 \\
\end{array}\right]
$$
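As a quick numerical sanity check of this augmented-state trick, here is a small Matlab sketch with made-up constants a through f (purely for illustration), showing that the zero row keeps $x_3$ constant and that the extra column carries the constant terms:

```matlab
% Made-up constants, purely for illustration
a = -1; b = 0.5; c = 0.2; d = -2; e = 3; f = -1;

% Augmented state matrix: third row of zeros means x3dot = 0,
% so x3 stays at its initial value of one for all time.
A = [a, b, a*e + b*f;
     c, d, c*e + d*f;
     0, 0, 0];

x0 = [0; 0; 1];      % x3 must start at one
xdot0 = A*x0         % equals the constant terms [a*e + b*f; c*e + d*f; 0]
```

Because $\dot{x}_3 = 0$ and $x_3(0) = 1$, we have $x_3(t) \equiv 1$, so the last column of A acts as a constant forcing term on the first two states.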
Now you need to be careful to include the constants you also have in your input vector:
$$
\left[\begin{array}{cc}
g & h \\
j & k \\
\end{array}\right]\left[\begin{array}{c}
u_1 + m \\
u_2 + n \\
\end{array}\right] =
\left[\begin{array}{c}
g(u_1 + m) + h(u_2 + n) \\
j(u_1 + m) + k(u_2 + n) \\
\end{array}\right]
$$
Same steps as before:
$$
\left[\begin{array}{c}
g(u_1 + m) + h(u_2 + n) \\
j(u_1 + m) + k(u_2 + n) \\
\end{array}\right] =
\left[\begin{array}{c}
g(u_1) + gm + h(u_2) + hn \\
j(u_1) + jm + k(u_2) + kn \\
\end{array}\right]
$$
$$ = \left[\begin{array}{c}
g(u_1) + h(u_2) + (gm + hn)x_3 \\
j(u_1) + k(u_2) + (jm + kn)x_3 \\
\end{array}\right]
$$
Put it all together:
$$
\left[\begin{array}{c}
\dot{x_1} \\
\dot{x_2} \\
\dot{x_3} \\
\end{array}\right] = \left[\begin{array}{ccc}
a & b & ae + bf + gm + hn \\
c & d & ce + df + jm + kn \\
0 & 0 & 0 \\
\end{array}\right]\left[\begin{array}{c}
x_1 \\
x_2 \\
x_3 \\
\end{array}\right] + \left[\begin{array}{cc}
g & h \\
j & k \\
0 & 0 \\
\end{array}\right]\left[\begin{array}{c}
u_1 \\
u_2 \\
\end{array}\right]
$$
This was probably the tricky part in getting your equations to the point of being able to use lsim. From here, everything else is straightforward:
- Determine your sample interval dt or your sample rate fs, and calculate the sample interval as dt = 1/fs;.
- Define your time vector: t = 0:dt:tFinal;.
- Define your initial conditions. If you want to start with $x_1$ and $x_2$ at zero, then you would use x0 = [0;0;1]; because, again, $x_3$ must be one.
- Define your inputs. If you want to step only $u_1$, then you would use u = [ones(length(t),1), zeros(length(t),1)];. Swap those columns to step $u_2$, or use whatever other combination you want.
- Define your matrices and then your system: A = [...]; B = [...]; etc., then sys = ss(A,B,C,D);, where A and B are as I've defined above and C and D select the outputs you want (for example, C = eye(3) and D = zeros(3,2) to output all three states).
- Invoke lsim: lsim(sys,u,t,x0);.
Done!
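The steps above can be collected into one script. All constants here are placeholders; swap in your own linearized values:

```matlab
% Placeholder constants -- replace with your linearized model's values
a = -1; b = 0.5; c = 0.2; d = -2;
e = 3;  f = -1;  g = 1;   h = 0;  j = 0; k = 1; m = 0.1; n = -0.2;

% Augmented state-space matrices, as derived above
A = [a, b, a*e + b*f + g*m + h*n;
     c, d, c*e + d*f + j*m + k*n;
     0, 0, 0];
B = [g, h;
     j, k;
     0, 0];
C = eye(3);            % output all three states (your choice)
D = zeros(3,2);

sys = ss(A,B,C,D);

fs = 100;              % sample rate [Hz]
dt = 1/fs;             % sample interval
tFinal = 10;
t  = 0:dt:tFinal;

x0 = [0; 0; 1];        % x3 must start at one
u  = [ones(length(t),1), zeros(length(t),1)];  % step u1 only

lsim(sys,u,t,x0);      % plots the simulated response
```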
Best Answer
As mentioned by Johan Löfberg, the gain you are interested in is defined as
$$ G(s) = K \frac{\prod (s - z_i)}{\prod (s - p_i)}. $$
For a SISO transfer function this $K$ can be found by dividing the coefficient of the highest power of $s$ in the numerator by the coefficient of the highest power of $s$ in the denominator. When a transfer function is obtained from a state space form
$$ G(s) = C\, (s\,I - A)^{-1} B + D $$
then the coefficient of the highest power of $s$ in the denominator should be equal to one, because the denominator is the determinant of $(s\,I - A)$. The numerator entries of $(s\,I - A)^{-1}$ always have a smaller order in $s$ than the determinant of $(s\,I - A)$. Therefore, if $D \neq 0$, the highest power of $s$ in the numerator matches the highest power of $s$ in the denominator. This then also implies that $K = D$.
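As a quick Matlab check (using a small made-up system, since your matrices aren't repeated here), ss2tf returns the numerator and denominator coefficients in descending powers of $s$, so the leading-coefficient ratio recovers $K$, and for $D \neq 0$ it equals $D$:

```matlab
% Made-up SISO system with D ~= 0, purely for illustration
A = [-2, 1; 0, -3];
B = [1; 1];
C = [1, 0];
D = 5;

[num, den] = ss2tf(A,B,C,D);   % coefficients in descending powers of s
K = num(1)/den(1)              % leading-coefficient ratio; equals D here
```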
If $D=0$ then it gets a little more complicated. I am not sure this is the most robust and efficient method, but it should work. Namely, if the state space model is transformed into the controllable canonical form, then extracting the coefficients of the numerator is straightforward: for a state space model in the controllable canonical form, the numerator coefficients equal $C$ when $D=0$. Therefore the gain $K$ equals the first non-zero entry of $C$, starting from the right. In order to convert a state space model into its controllable canonical form, the following similarity transformation can be used
$$ \vec{v} = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix} \begin{bmatrix} B & A\,B & \cdots & A^{n-1} B \end{bmatrix}^{-1}, $$
$$ T = \begin{bmatrix} \vec{v} \\ \vec{v}\,A \\ \vdots \\ \vec{v}\,A^{n-1} \end{bmatrix}. $$
For more detail see this question. The new $C$ matrix in the controllable canonical form is then given by
$$ C_\text{ctrb} = C\,T^{-1}. $$
It can be noted that this method only works if the system is controllable, since the inverse of the controllability matrix is used.
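A Matlab sketch of this transformation, assuming A, B, C are already defined, D = 0, and (A,B) is controllable so the inverse of the controllability matrix exists:

```matlab
% Sketch: gain K of a SISO system with D = 0 via the controllable
% canonical form. Assumes (A,B) is controllable.
n  = size(A,1);
Co = ctrb(A,B);                       % [B, A*B, ..., A^(n-1)*B]
v  = [zeros(1,n-1), 1] / Co;          % last row of inv(Co)
T  = zeros(n);
for i = 1:n
    T(i,:) = v * A^(i-1);             % rows v, v*A, ..., v*A^(n-1)
end
Cctrb = C / T;                        % C * inv(T)
K = Cctrb(find(Cctrb, 1, 'last'))     % first non-zero entry from the right
```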
When applying this method to your example state space model, one finds that the controllability matrix is not full rank. However, the observability matrix is full rank, so we will use the dual system instead of finding the controllable and observable part of the system. This changes the equations into
$$ \vec{v} = \begin{bmatrix} C \\ C\,A \\ \vdots \\ C\,A^{n-1} \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -2 & -2 \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ -\frac{1}{2} \end{bmatrix}, $$
$$ T = \begin{bmatrix} \vec{v} & A\,\vec{v} & \cdots & A^{n-1} \vec{v} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\frac{1}{2} & 2 \end{bmatrix}, $$
$$ C_\text{ctrb} = B^\top T^{-\top} = \begin{bmatrix} 2 & 1 \end{bmatrix}. $$
The first non-zero entry of $C_\text{ctrb}$, starting from the right, is $1$, and therefore $K=1$. If the state space model were both uncontrollable and unobservable, then the minimal state space realization would be required first for this method. In that case it might just be faster to evaluate $G(s)$ and look at the resulting numerator coefficients.
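That last alternative is short in Matlab: convert a minimal realization to a transfer function and read off the leading non-zero numerator coefficient (this assumes A, B, C, D already hold your state space model):

```matlab
% Assuming A, B, C, D already hold your state space model
G   = tf(minreal(ss(A,B,C,D)));   % minimal realization as a transfer function
num = G.Numerator{1};             % numerator coefficients, descending powers of s
den = G.Denominator{1};

% Leading non-zero numerator coefficient over leading denominator coefficient
K = num(find(num, 1, 'first')) / den(1)
```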