Jacobian of a system of equations including an ODE for Newton-Raphson

jacobian, linear-algebra, multivariable-calculus, newton-raphson, numerical-methods

I want to use the Newton-Raphson method to solve a system of equations and in order to do so, I need to calculate the Jacobian. The system given describes the stages (called $k$) of an implicit numerical method, which I want to solve for using Newton-Raphson:

$$0 = -\begin{pmatrix}k_1\\k_2\end{pmatrix} + f\left(\begin{pmatrix}y_{n1}\\y_{n2}\end{pmatrix} + h \left( B_{11} \begin{pmatrix}k_1\\k_2\end{pmatrix} + B_{12} \begin{pmatrix}k_3\\k_4\end{pmatrix} \right) \right) \\
0 = -\begin{pmatrix}k_3\\k_4\end{pmatrix} + f\left(\begin{pmatrix}y_{n1}\\y_{n2}\end{pmatrix} + h \left( B_{21} \begin{pmatrix}k_1\\k_2\end{pmatrix} + B_{22} \begin{pmatrix}k_3\\k_4\end{pmatrix} \right) \right)$$

where the ODE $f$ is the Van der Pol oscillator $\ddot{x}(t) - \mu (1-x^2(t)) \dot{x}(t) + x(t) = 0$,
$y_{n}$ are approximations computed by the numerical method (always known), $h$ is the stepsize (scalar) and $B_{ij}$ are also constant scalars (entries of a 2×2 matrix).

To solve for the four entries of the vector $k$ numerically, the system's Jacobian is required, and I have not been able to work it out correctly. I expect it to look something like this:

$ J = -I_4 + h B J_f$

where $I_4$ denotes the identity matrix of dimension 4, $h$ and some matrix form of $B$ might be needed due to the chain rule (I am really not sure), and $J_f$ is the Jacobian of the vdP oscillator:

$$ J_f = \left[
\begin{matrix}
0 & 1 \\
-2 \mu x \dot{x}-1 & \mu(1-x^2) \\
\end{matrix}
\right] $$
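As a sanity check, the Jacobian above can be verified against finite differences. A minimal NumPy sketch (the OP works in MATLAB; the value $\mu = 1$ and the test point are assumptions chosen for illustration):

```python
import numpy as np

MU = 1.0  # assumed value of the van der Pol parameter

def f(y, mu=MU):
    """Van der Pol oscillator as a first-order system, y = (x, xdot)."""
    x, xd = y
    return np.array([xd, mu * (1.0 - x**2) * xd - x])

def J_f(y, mu=MU):
    """Analytic Jacobian of f, matching the 2x2 matrix above."""
    x, xd = y
    return np.array([[0.0, 1.0],
                     [-2.0 * mu * x * xd - 1.0, mu * (1.0 - x**2)]])

# Central finite-difference check at an arbitrary point.
y0 = np.array([0.7, -0.3])
eps = 1e-7
J_fd = np.column_stack([(f(y0 + eps * e) - f(y0 - eps * e)) / (2 * eps)
                        for e in np.eye(2)])
print(np.max(np.abs(J_fd - J_f(y0))))  # agreement to finite-difference accuracy
```

The same check translates directly to MATLAB and is a cheap way to rule out sign or indexing mistakes before debugging the full 4×4 system.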

I am mostly unsure concerning four things:

  1. Do I apply the chain rule?
  2. What is the argument of $J_f$?
  3. How do I get the correct dimensions? ($J$ should be 4×4, I think.) Perhaps by some product of $B$ and $J_f$?
  4. I need to plug my current guess into the Jacobian somewhere, so the four entries of the vector $k$ must appear in it.

In case it helps: I use MATLAB to run the whole numerical method and also to compute the Newton-Raphson inside the method.

Best Answer

The general idea of Newton-like methods is that you approximate the equation by some linearization. This linearization does not need to be very exact to still get reasonably fast convergence. Here, simply iterating the stage equations as a fixed-point iteration gives linear convergence with a contraction factor of $O(h)$. If the linearization is also $O(h)$-accurate, the resulting Newton-like iteration is still linearly convergent, but now with a factor of $O(h^2)$. Depending on how the initial guess, the predictor, is computed, this can drastically reduce the number of corrector steps needed to push the error below the truncation error of the method step.
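The $O(h)$ contraction of the plain fixed-point iteration can be observed numerically. A sketch in NumPy, where the 2-stage Gauss–Legendre coefficients, $\mu = 1$, $h$, and $y_n$ are all assumptions for illustration (the question does not specify $B$):

```python
import numpy as np

MU = 1.0  # assumed van der Pol parameter

def f(y):
    x, xd = y
    return np.array([xd, MU * (1.0 - x**2) * xd - x])

# Assumed coefficients: 2-stage Gauss-Legendre Butcher matrix.
B = np.array([[0.25, 0.25 - np.sqrt(3) / 6],
              [0.25 + np.sqrt(3) / 6, 0.25]])
y_n = np.array([0.7, -0.3])  # assumed current state
h = 0.05                     # assumed stepsize

k = np.tile(f(y_n), (2, 1))  # predictor: k_i = f(y_n)
errs = []
for _ in range(5):
    # Fixed-point sweep: k_i <- f(y_n + h * sum_j B_ij k_j)
    k_new = np.array([f(y_n + h * (B[i, 0] * k[0] + B[i, 1] * k[1]))
                      for i in range(2)])
    errs.append(np.max(np.abs(k_new - k)))
    k = k_new

ratios = [errs[i + 1] / errs[i] for i in range(len(errs) - 1)]
print(ratios)  # contraction factors of order h
```

The successive-iterate differences shrink by a roughly constant factor proportional to $h$, which is exactly the linear convergence described above.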

$\newcommand{\D}{\mathit{\Delta}}$ So if $J$ is the Jacobian of $f$ at $y_n$, you can decompose $f(y+\D y)=f(y)+J\,\D y+R(\D y)$, where $R$ collects the superlinear remainder. For the given 2-stage method this gives the system
$$ \vec k_1-hJ(B_{11}\vec k_1+B_{12}\vec k_2)=f(\vec y_n)+R(h(B_{11}\vec k_1+B_{12}\vec k_2)) \\ \vec k_2-hJ(B_{21}\vec k_1+B_{22}\vec k_2)=f(\vec y_n)+R(h(B_{21}\vec k_1+B_{22}\vec k_2)) $$
In an iteration step, the left side is a linear system for the new values, while the right side is evaluated at the old values. This form demonstrates that the right side is a constant plus small superlinear corrections; in practical computation the right side is evaluated as $f(y)+R(\D y^{old})=f(y+\D y^{old})-J\,\D y^{old}$, so each step solves
\begin{align} \pmatrix{I_2-hB_{11}J&-hB_{12}J\\-hB_{21}J&I_2-hB_{22}J} \pmatrix{\vec k_1\\\vec k_2} = \pmatrix{f(y+\D_1 y^{old})-J\,\D_1 y^{old}\\f(y+\D_2 y^{old})-J\,\D_2 y^{old}} \end{align}
Using a Kronecker product, the matrix on the left can also be written as $I_4-h\,B\otimes J$.
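The iteration above can be sketched in a few lines of NumPy; the same structure carries over to MATLAB via `kron`. The Butcher matrix $B$ (2-stage Gauss–Legendre here), $\mu = 1$, $h$, and $y_n$ are assumptions for illustration:

```python
import numpy as np

MU = 1.0  # assumed van der Pol parameter

def f(y):
    x, xd = y
    return np.array([xd, MU * (1.0 - x**2) * xd - x])

def J_f(y):
    x, xd = y
    return np.array([[0.0, 1.0],
                     [-2.0 * MU * x * xd - 1.0, MU * (1.0 - x**2)]])

# Assumed data: 2-stage Gauss-Legendre B, current state, stepsize.
B = np.array([[0.25, 0.25 - np.sqrt(3) / 6],
              [0.25 + np.sqrt(3) / 6, 0.25]])
y_n = np.array([0.7, -0.3])
h = 0.05

J = J_f(y_n)                       # Jacobian frozen at y_n
M = np.eye(4) - h * np.kron(B, J)  # the matrix I_4 - h*(B kron J)

k = np.tile(f(y_n), 2)             # stacked (k1; k2), predictor k_i = f(y_n)
for _ in range(6):
    # Delta_i y = h * sum_j B_ij k_j, stacked into one 4-vector
    Dy = h * np.kron(B, np.eye(2)) @ k
    rhs = np.concatenate([f(y_n + Dy[:2]) - J @ Dy[:2],
                          f(y_n + Dy[2:]) - J @ Dy[2:]])
    k = np.linalg.solve(M, rhs)

# Residual of the original nonlinear stage equations.
Dy = h * np.kron(B, np.eye(2)) @ k
res = np.concatenate([f(y_n + Dy[:2]) - k[:2],
                      f(y_n + Dy[2:]) - k[2:]])
print(np.max(np.abs(res)))  # small after a few corrector steps
```

Note that $M$ depends only on $y_n$, not on the current guess $k$, so it (or its LU factorization) can be formed once per time step and reused across all corrector iterations.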
