Riemann Problem for Linear Hyperbolic Systems

Tags: cauchy-problem, hyperbolic-equations, numerical-methods, partial-differential-equations

I am following LeVeque's text "Numerical Methods for Conservation Laws" so I will be following his notation.

Suppose we are solving
$$u_t + Au_x = 0,$$
where $A \in \mathbb{R}^{m \times m}$ is a constant matrix. Diagonalizing $A = R \Lambda R^{-1}$ and setting $v = R^{-1}u$ decouples the system into $m$ scalar advection equations, so the solution is
$$u(x,t) = \sum_{p=1}^m v_p(x-\lambda_pt, 0)r_p$$
where $r_p$ is the $p$th eigenvector.
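To make the formula concrete, here is a minimal NumPy sketch (my own illustration, not from the text, with a made-up $2\times 2$ matrix and initial data) that evaluates the solution by advecting the characteristic variables:

```python
import numpy as np

# A made-up 2x2 hyperbolic system: real, distinct eigenvalues, so A = R Lam R^{-1}.
A = np.array([[0.0, 4.0],
              [1.0, 0.0]])
lam, R = np.linalg.eig(A)          # eigenvalues lambda_p, eigenvectors r_p (columns of R)

def u0(x):
    """Some smooth initial data u(x, 0), chosen arbitrarily for illustration."""
    return np.array([np.exp(-x**2), np.sin(x)])

def u(x, t):
    """Evaluate u(x,t) = sum_p v_p(x - lambda_p t, 0) r_p, where v = R^{-1} u."""
    acc = np.zeros(2)
    for p in range(2):
        # v_p(x,0) is the p-th component of R^{-1} u(x,0); it is simply advected
        # along the characteristic with speed lambda_p.
        v_p0 = np.linalg.solve(R, u0(x - lam[p] * t))[p]
        acc += v_p0 * R[:, p]
    return acc

print(u(0.3, 0.1))
```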

Now for the actual Riemann problem, suppose we have our original PDE along with the initial data:
$$u(x,0) = \begin{cases} u_l &\quad x<0 \\ u_r &\quad x>0\end{cases}$$

The text claims we can decompose $u_l$ and $u_r$ as
$$u_l = \sum_{p=1}^m \alpha_p r_p, \qquad u_r = \sum_{p=1}^m \beta_p r_p.$$
My first question: why are we able to do this?

This leads to
$$v_p(x,0) = \begin{cases} \alpha_p &\quad x<0 \\ \beta_p &\quad x>0\end{cases}$$
and so
$$v_p(x,t) = \begin{cases} \alpha_p &\quad x-\lambda_pt<0 \\ \beta_p &\quad x-\lambda_pt>0\end{cases}$$
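For concreteness, here is a small NumPy sketch of this decomposition (my own illustration with a made-up $2\times 2$ matrix and made-up left/right states, not from the text):

```python
import numpy as np

# Made-up 2x2 hyperbolic system and Riemann states, purely for illustration.
A = np.array([[0.0, 4.0],
              [1.0, 0.0]])
lam, R = np.linalg.eig(A)          # eigenvalues lambda_p, eigenvectors r_p (columns of R)

u_l = np.array([2.0, 1.0])
u_r = np.array([0.0, -1.0])

# The columns of R form a basis, so the expansion coefficients are unique:
alpha = np.linalg.solve(R, u_l)    # u_l = sum_p alpha_p r_p
beta  = np.linalg.solve(R, u_r)    # u_r = sum_p beta_p r_p

def v(p, x, t):
    """Piecewise-constant characteristic variable v_p(x,t)."""
    return alpha[p] if x - lam[p] * t < 0 else beta[p]

# Reassemble u(x,t) = sum_p v_p(x,t) r_p at a sample point:
x, t = 0.5, 1.0
print(sum(v(p, x, t) * R[:, p] for p in range(2)))
```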

Next, he lets $P(x,t)$ be the maximum value of $p$ for which $x-\lambda_pt > 0$ so that
$$u(x,t) = \sum_{p=1}^{P(x,t)} \beta_p r_p + \sum_{p=P(x,t)+1}^m \alpha_p r_p.$$
My second question: what is the motivation behind $P$ here? I am unable to see its role.

Next, he says the solution is discontinuous across the $p$th characteristic, with the jump given by
$$[u] = (\beta_p - \alpha_p)r_p.$$
My third question: I assume this is because we are jumping from one side of the discontinuity in the Riemann solution to the other; is this true?

Lastly, since $f(u) = Au$, we have
$$[f] = A[u] = (\beta_p - \alpha_p)Ar_p = (\beta_p - \alpha_p)\lambda_p r_p = \lambda_p[u],$$
so that the solution $u(x,t)$ can be written in terms of these jumps as
$$u(x,t) = u_l + \sum_{\lambda_p < x/t} (\beta_p - \alpha_p)r_p = u_r - \sum_{\lambda_p \geq x/t} (\beta_p - \alpha_p)r_p$$
My fourth question: I am unclear on how he derives the above expression for $u(x,t)$ from the jump condition. How does one imply the other?
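As a numerical sanity check on these relations (my own, reusing the made-up $2\times 2$ example from above, not from the text), one can verify that $A[u] = \lambda_p[u]$ for each $p$ and that the two expressions for $u(x,t)$ agree:

```python
import numpy as np

# Made-up 2x2 example with eigenvalues sorted in ascending order.
A = np.array([[0.0, 4.0],
              [1.0, 0.0]])
lam, R = np.linalg.eig(A)
order = np.argsort(lam)
lam, R = lam[order], R[:, order]

u_l, u_r = np.array([2.0, 1.0]), np.array([0.0, -1.0])
alpha, beta = np.linalg.solve(R, u_l), np.linalg.solve(R, u_r)

for p in range(2):
    jump = (beta[p] - alpha[p]) * R[:, p]
    assert np.allclose(A @ jump, lam[p] * jump)      # [f] = A[u] = lambda_p [u]

x, t = 0.5, 1.0
crossed = lam < x / t                                 # waves already passed (x,t)
u_from_left  = u_l + R[:, crossed]  @ (beta - alpha)[crossed]
u_from_right = u_r - R[:, ~crossed] @ (beta - alpha)[~crossed]
assert np.allclose(u_from_left, u_from_right)
print(u_from_left)
```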

Thanks!

Best Answer

1.) Because the vectors $r_p$ form a basis of $\mathbb{R}^m$: they are the eigenvectors (the columns of $R$) from the eigendecomposition $A = R\Lambda R^{-1}$, and $R$ is invertible, so every vector can be expanded uniquely in them.
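Equivalently, since $R$ is invertible, the coefficient vectors are obtained directly from the data:
$$\alpha = R^{-1}u_l, \qquad \beta = R^{-1}u_r.$$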

2.) Hyperbolicity means that $A$ has real eigenvalues and a complete set of eigenvectors, and it is assumed that the eigenvalues are sorted in ascending order, $\lambda_1 \le \dots \le \lambda_m$. Then $P(x,t)$ separates the characteristics (and their discontinuities) lying to the left of the point $(x,t)$, those with $\lambda_p < x/t$, from those lying to its right; it tells you which waves have already crossed $(x,t)$ and therefore carry $\beta_p$, and which still carry $\alpha_p$.
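As a small illustration (a made-up $2\times 2$ example, not from the text), $P$ is simply the count of characteristics that the point $(x,t)$ has already crossed:

```python
import numpy as np

# Made-up 2x2 example, eigenvalues sorted ascending as assumed above.
A = np.array([[0.0, 4.0],
              [1.0, 0.0]])
lam, R = np.linalg.eig(A)
order = np.argsort(lam)
lam, R = lam[order], R[:, order]

u_l, u_r = np.array([2.0, 1.0]), np.array([0.0, -1.0])
alpha, beta = np.linalg.solve(R, u_l), np.linalg.solve(R, u_r)

def u(x, t):
    # P(x,t): number of characteristics with x - lambda_p * t > 0; since the
    # eigenvalues are sorted, these are exactly p = 1, ..., P.
    P = int(np.sum(x - lam * t > 0))
    # Waves already crossed carry beta_p, the remaining ones still carry alpha_p.
    return R[:, :P] @ beta[:P] + R[:, P:] @ alpha[P:]

print(u(0.5, 1.0))   # a point between the two characteristics x = -2t and x = 2t
```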

3.) Yes.

4.) This is just arithmetic manipulation with some geometric interpretation. \begin{align} u(x,t) &= \sum_{p=1}^{P(x,t)} \beta_p r_p + \sum_{p=P(x,t)+1}^m \alpha_p r_p \\ &=\sum_{p=1}^{P(x,t)} (\beta_p-\alpha_p) r_p + \sum_{p=1}^m \alpha_p r_p \\ &=\sum_{p:\,x-\lambda_pt>0} (\beta_p-\alpha_p) r_p + u_l \end{align}
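The expression in terms of $u_r$ follows the same way, this time removing the waves that have not yet been crossed: \begin{align} u(x,t) &= \sum_{p=1}^{P(x,t)} \beta_p r_p + \sum_{p=P(x,t)+1}^m \alpha_p r_p \\ &= \sum_{p=1}^m \beta_p r_p - \sum_{p=P(x,t)+1}^m (\beta_p-\alpha_p) r_p \\ &= u_r - \sum_{p:\,x-\lambda_pt\le 0} (\beta_p-\alpha_p) r_p. \end{align}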