Rewriting Linear ODE for Control

Tags: control theory, ordinary differential equations

I'm working on a model for an engineering control project and I've run into the issue of having the control multiplied by the state variable in a differential equation. The simplified system looks something like:

$$ \begin{align} \dot{y}_1 &= y_2 \\ \dot{y}_2 &= \left(a + g(t)\right) y_1 + \left(b + c\,g(t)\right) y_2 \end{align} $$

Here $y_1$ and $y_2$ are state variables, $g(t)$ is the control, and $a, b, c$ are constants. Ideally I'd like to take a Laplace transform and work with a standard $\dot{x}=Ax+Bu$ system, but as it stands I have a term of the form $x \cdot u$. Linearization in the usual sense seems unnecessary, since the system is already linear in the state; it just has variable coefficients.
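For what it's worth, the structure can be written compactly as $\dot{x} = (A + g(t)\,N)\,x$, which makes the state-multiplied control explicit. A minimal sketch (the numeric values of $a, b, c$ and the constant control are placeholders of my own):

```python
import numpy as np

# The system written as x' = (A + g(t) * N) x:
# linear in x for a given g(t), but bilinear in (x, g) jointly,
# so it does not fit the x' = A x + B u form directly.
a, b, c = 1.0, 2.0, 3.0        # hypothetical constants, for illustration

A = np.array([[0.0, 1.0],
              [a,   b  ]])     # drift term
N = np.array([[0.0, 0.0],
              [1.0, c  ]])     # term multiplied by the control g(t)

def rhs(t, x, g):
    """x' = (A + g(t) N) x -- the control multiplies the state."""
    return (A + g(t) * N) @ x

g = lambda t: 0.5              # hypothetical constant control
x0 = np.array([1.0, 0.0])
print(rhs(0.0, x0, g))         # x1-coefficient in x2' is a + g = 1.5
```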

One thought I had was to take a Taylor Series approximation, but I wasn't sure if that could be applied directly in this context.

Edit: If I do a Taylor series expansion of $H(t) = g(t)\,y_1(t)$ around some point $d$, I get:

$H(t) \approx H(d) + H'(d)(t-d) = g(d)\,y_1(d) + \left(g'(d)\,y_1(d) + g(d)\,y_1'(d)\right)(t-d)$

This doesn't seem to help, since I've "lost" the control function altogether.

Best Answer

You have the system

$$ \begin{align} \dot{x}_1 &= x_2 \\ \dot{x}_2 &= (a + u) x_1 + (b + c\,u) x_2 \end{align} $$

At steady state $x_2 = 0$, so if you are interested in steady-state behavior it makes sense to control $x_1$. You could use the feedback-linearizing control

$$ u = \frac{k_1(x_{1,des} - x_1) - (b + k_2)\,x_2 - a x_1}{x_1 + c\,x_2} $$

This makes your system behave like

$$ \begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -k_1 & -k_2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 0 \\ k_1 \end{pmatrix} x_{1,des} $$

as long as $x_1 + c\,x_2 \neq 0$. This closed-loop system is linear, with transfer function

$$ \frac{k_1}{s^2 + k_2 s + k_1} $$

which is stable if $k_1 > 0$ and $k_2 > 0$.
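As a quick algebraic sanity check (a sketch of my own, using the question's form $\dot{x}_2 = (a+u)x_1 + (b + c\,u)x_2$, which requires the control to cancel the $b\,x_2$ term as well), symbolic substitution confirms that the control law produces exactly the desired linear dynamics:

```python
import sympy as sp

a, b, c, k1, k2 = sp.symbols('a b c k1 k2')
x1, x2, x1des = sp.symbols('x1 x2 x1des')

# Feedback-linearizing control for x2' = (a + u)*x1 + (b + c*u)*x2:
u = (k1*(x1des - x1) - (b + k2)*x2 - a*x1) / (x1 + c*x2)

x2dot = (a + u)*x1 + (b + c*u)*x2       # closed-loop x2 dynamics
target = -k1*x1 - k2*x2 + k1*x1des      # desired linear dynamics

print(sp.simplify(x2dot - target))      # expect 0
```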


We can test this. For example, take $a = 1$, $b = 2$, $c = 3$, $k_1 = 10$, $k_2 = 10$, with initial conditions $x_1(0) = 3$ and $x_2(0) = 0$. With the following reference trajectory we can check whether tracking is possible:

$$ x_{1,des} = \begin{cases} 3 & t \in[0, 5) \\ 4 & t \in[5, 15) \\ 5 & t \in[15, 25) \\ 4.5 & t \in[25, 35) \\ 4 & t \geq 35 \end{cases} $$
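A minimal simulation sketch (mine, not the original author's code), using a fixed-step RK4 integrator and the control law that cancels the full nonlinearity of the question's system (including the $b\,x_2$ term, with denominator $x_1 + c\,x_2$):

```python
import numpy as np

a, b, c = 1.0, 2.0, 3.0
k1, k2 = 10.0, 10.0

def x1des(t):
    """Piecewise-constant reference trajectory from above."""
    if t < 5:    return 3.0
    elif t < 15: return 4.0
    elif t < 25: return 5.0
    elif t < 35: return 4.5
    else:        return 4.0

def rhs(t, x):
    x1, x2 = x
    den = x1 + c * x2                    # must stay away from zero
    u = (k1 * (x1des(t) - x1) - (b + k2) * x2 - a * x1) / den
    return np.array([x2, (a + u) * x1 + (b + c * u) * x2])

# Fixed-step RK4 integration over t in [0, 45]
dt, x = 0.01, np.array([3.0, 0.0])
for i in range(int(45 / dt)):
    t = i * dt
    s1 = rhs(t, x)
    s2 = rhs(t + dt / 2, x + dt / 2 * s1)
    s3 = rhs(t + dt / 2, x + dt / 2 * s2)
    s4 = rhs(t + dt, x + dt * s3)
    x = x + dt / 6 * (s1 + 2 * s2 + 2 * s3 + s4)

print(x)  # x1 should have settled near the final reference value, 4
```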

Simulating gives the following results:

*(simulation plots omitted: the state trajectories and $x_1 + c\,x_2$ over time)*

You can see that the controller works: different steady-state values of $x_1$ can be reached. The last plot shows that $x_1 + c\,x_2$ remains far enough away from zero. This is something you have to ensure by choosing the $x_{1,des}$ trajectory and the parameters carefully.

It should work if you make only "small" steps in $x_{1,des}$ and wait until $x_2$ has decayed to zero before making the next step.
