[Math] What are the differences between mathematical systems theory, Dynamical Systems, and Optimization and Control and how are they related to each other

control theory, dynamical systems

One of the research areas of the Systems and Controls group at the Georgia Institute of Technology is mathematical systems theory. It seems that Georgia Tech's electrical engineering department is the only one that studies mathematical systems theory.

On the other hand, the arXiv has categories like Dynamical Systems, Systems and Control, and Optimization and Control; some of the authors of papers in these categories are from electrical engineering departments.

What are the differences between mathematical systems theory, Dynamical Systems, Systems and Control, and Optimization and Control, and how are they related to each other? Which universities are well known for their research in these areas? Are these areas of applied math?

Best Answer

In general, any system of the form $$\dot x = f(x)$$ is a dynamical system. For example, the simple harmonic oscillator $\ddot x = -kx$ is a dynamical system. When the right-hand side is linear we call it a linear system. When the right-hand side takes the form $$ \dot x = f\left( {x,u} \right), $$ where $u$ is an input we can choose, we call it a controlled dynamical system (and here it ties in with control systems theory). Moreover, when the right-hand side of a controlled dynamical system is linear, the theory we have is rich and encompasses many methods (from classical transfer functions to advanced state-space methods). When the right-hand side has an explicit dependence on time, $\dot x = f(x,t,u)$, we have a non-autonomous system.
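To make the $\dot x = f(x)$ viewpoint concrete, here is a minimal sketch (not from the answer) that rewrites the harmonic oscillator as a first-order system with state $x = (\text{position}, \text{velocity})$ and integrates it with forward Euler; the function names and step size are my own choices.

```python
# Harmonic oscillator x'' = -k*x written as a first-order system
# x_dot = f(x), with state x = (position, velocity).
def f(x, k=1.0):
    pos, vel = x
    return (vel, -k * pos)  # x1' = x2, x2' = -k*x1

def simulate(f, x0, dt=0.01, steps=1000):
    """Crude forward-Euler integration; fine for illustration only."""
    x = x0
    traj = [x]
    for _ in range(steps):
        dx = f(x)
        x = tuple(xi + dt * dxi for xi, dxi in zip(x, dx))
        traj.append(x)
    return traj

traj = simulate(f, (1.0, 0.0))  # start at position 1, at rest
```

A controlled version would simply pass a chosen input $u$ into `f(x, u)` at each step. (In practice one would use a proper integrator such as `scipy.integrate.solve_ivp` rather than hand-rolled Euler.)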

When we search over the space of possible inputs $u$ under some cost functional or optimality constraints, the resulting theory is that of optimal control, which has ties to the calculus of variations and functional optimization.
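As a toy illustration of "searching over inputs under a cost functional" (this hypothetical problem and all names in it are mine, not from the answer): take scalar discrete-time dynamics $x_{k+1} = x_k + u_k$ with cost $J = \sum_k (x_k^2 + \rho u_k^2)$, and minimize $J$ over the input sequence by plain finite-difference gradient descent. This is a crude stand-in for the variational machinery the text mentions.

```python
# Toy optimal control: drive x from x0 toward 0 cheaply.
def cost(u, x0=1.0, rho=0.1):
    x, J = x0, 0.0
    for uk in u:
        J += x * x + rho * uk * uk  # running cost
        x = x + uk                  # dynamics x_{k+1} = x_k + u_k
    return J + x * x                # terminal cost

def optimize(u, lr=0.02, iters=500, eps=1e-6):
    """Finite-difference gradient descent over the input sequence."""
    u = list(u)
    for _ in range(iters):
        g = []
        for i in range(len(u)):
            up = u[:]
            up[i] += eps
            g.append((cost(up) - cost(u)) / eps)
        u = [ui - lr * gi for ui, gi in zip(u, g)]
    return u

u_star = optimize([0.0] * 5)  # doing nothing costs 6; u_star does much better
```

The optimized inputs are negative early on (pushing $x$ toward zero), and $\rho$ trades off control effort against state error, which is exactly the flavor of trade-off optimal control formalizes.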

So maybe one way to think about it is that the first case, $\dot x = f(x)$, is at the most abstract level the mathematician's domain, whereas with some additional structure (linearity, for example) added to the formulation, tools like transfer functions, Bode plots, etc. become useful and we move over into the engineer's domain. This is most apt when talking about linear systems. With a nonlinear dynamical system, the useful theory depends heavily on real and functional analysis (Lyapunov theory, zero dynamics, non-minimum-phase systems, etc.), is generally not approachable at the undergraduate level, and requires some mathematical sophistication.
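To show what the extra linear structure buys, here is a small sketch of a frequency-domain tool of the kind mentioned above. For the damped oscillator $\ddot x + 2\zeta\dot x + x = u$ (my normalization), the transfer function is $G(s) = 1/(s^2 + 2\zeta s + 1)$, and evaluating it at $s = j\omega$ gives the Bode magnitude:

```python
import math

def bode_mag_db(omega, zeta=0.1):
    """Bode magnitude of G(s) = 1/(s^2 + 2*zeta*s + 1) in decibels."""
    s = 1j * omega
    G = 1.0 / (s * s + 2 * zeta * s + 1.0)
    return 20.0 * math.log10(abs(G))
```

At low frequency the gain is 0 dB, and near the resonance $\omega = 1$ the gain peaks at $1/(2\zeta)$; none of this machinery is available for a general nonlinear $f(x)$, which is precisely where the harder analytic tools take over.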

Indeed, they are all related to each other because at their very heart they all deal with deterministic differential equations of some form or another. Mathematical systems theory tries to deal with dynamical systems at the most abstract level (with or without control), and this relates to bifurcation theory, chaos, etc., which, as you pointed out, is really the domain of applied mathematics. When we have an input-output relation, the systems we consider generally have some real-world application, so we move over to the engineering side and try to make things less abstract and more approachable.


I am not sure how to answer the question about research at universities because it depends on the kind of research. Are you talking about cutting-edge controls and dynamics? Then you have Google, Boston Dynamics, MIT, ETH Zurich, etc. working on very applied controls. Are you talking about systems theory research? That is much harder to answer, but there are applied mathematicians (often allied with electrical or mechanical engineering departments) all over the country chipping away at such research.