I’m trying to test different optimization methods (including my own) by estimating parameters of a state space model. My initial idea was to include both the update rules and optimization methods in the ODE function, like below:
function dpdt = MyODE(t, p, u, y, Ts)
% p = [w11, w12, w21, w22, theta11, theta12, theta21, theta22]

% Map continuous time t to the index of the current sample (zero-order hold).
% The clamp keeps the index in range if a solver step lands slightly past
% the final sample instant.
adj_t = min(floor(t/Ts) + 1, numel(u));

dpdt = zeros(8, 1);
pn1 = 1;
pn2 = 2;
lambda = 0.1;   % learning rate

dpdt(1) = p(2);
dpdt(3) = p(4);
dpdt(2) = u(adj_t) - (p(2)*(pn1 + pn2) + p(1)*pn1*pn2);
dpdt(4) = u(adj_t) - (p(4)*(pn1 + pn2) + p(3)*pn1*pn2);
dpdt(5:end) = -lambda * (dot(p(5:end), p(1:4)) - y(adj_t)) .* p(1:4);   % gradient descent on theta
end
In this function, u, y, and Ts are passed-in parameters (captured by the anonymous function below). The model is continuous, but the input signal u and output signal y are sampled, so I need to map continuous time to sample indices. I call it with
[t, p] = ode23s(@(t, p) MyODE(t, p, u, y, Ts), tspan, p0);
But the problem is that the parameters I'm estimating are not converging. I'm sure the input and output data are fine, because I was able to identify the parameters successfully with functions from the System Identification Toolbox (idss, ssest, etc.). I suspect that the way I handle time interferes with the ODE solver's own step-size control. It would be great if anyone could give me some advice on this.
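One way to test that suspicion is to make the right-hand side continuous in t by interpolating the sampled signals instead of indexing them with floor(t/Ts); a piecewise-constant lookup introduces jumps that an adaptive step-size controller has to fight. A minimal sketch of the idea in Python/SciPy for illustration (same structure as MyODE above; the sampled signals u_s, y_s and the sample time are made up here):

```python
import numpy as np
from scipy.integrate import solve_ivp

Ts = 0.01                      # assumed sample time
t_s = np.arange(0, 10, Ts)     # sample instants
u_s = np.sin(t_s)              # hypothetical sampled input
y_s = np.cos(t_s)              # hypothetical sampled output
lam = 0.1                      # learning rate
pn1, pn2 = 1.0, 2.0

def rhs(t, p):
    # Linear interpolation makes u(t) and y(t) continuous in t, so the
    # solver's step-size control sees a smooth right-hand side.
    u = np.interp(t, t_s, u_s)
    y = np.interp(t, t_s, y_s)
    dpdt = np.zeros(8)
    dpdt[0] = p[1]
    dpdt[2] = p[3]
    dpdt[1] = u - (p[1]*(pn1 + pn2) + p[0]*pn1*pn2)
    dpdt[3] = u - (p[3]*(pn1 + pn2) + p[2]*pn1*pn2)
    dpdt[4:] = -lam * (np.dot(p[4:], p[:4]) - y) * p[:4]   # gradient step
    return dpdt

sol = solve_ivp(rhs, (0, 10), np.zeros(8), method='LSODA', rtol=1e-6)
```

The same substitution in MATLAB would be interp1(t_s, u_s, t) in place of u(adj_t).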
I am also aware that this way of testing optimization methods is not ideal. Several functions in the System Identification Toolbox (pem, idgrey, etc.) do this much better than what I have done. The problem is that I want to plot how the parameters change during the optimization process, and these functions do not expose the intermediate estimates (at least not in an easy way). If I wanted to extend the optimization routines used by pem or ssest, where should I get started?
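An alternative that sidesteps the toolbox internals entirely is to run the optimizer yourself on the prediction-error cost and record every iterate, then plot the recorded trajectory. A hedged sketch of the bookkeeping, using plain gradient descent on a least-squares fit of a made-up linear model y = X*theta (Python for illustration; the data and learning rate are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))          # hypothetical regressors
theta_true = np.array([1.5, -0.7])         # made-up true parameters
y = X @ theta_true + 0.01 * rng.standard_normal(200)

theta = np.zeros(2)
lr = 0.05
history = [theta.copy()]                   # parameter trajectory, for plotting
for _ in range(500):
    grad = X.T @ (X @ theta - y) / len(y)  # gradient of 0.5 * mean squared error
    theta -= lr * grad
    history.append(theta.copy())           # record the iterate before moving on

history = np.asarray(history)              # shape (501, 2): one row per iterate
```

Swapping in a different update rule (momentum, your own method, etc.) only changes the two lines inside the loop; the history logging stays the same.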
Best Answer