Section 3.1.1. “Complete Integrals” from Evans PDE book

Tags: linear-algebra, partial-differential-equations, proof-explanation, proof-verification

A question about an observation from pp. 92-93 of the book {1} named in the title.
With

$$
F(Du,u,x) = 0 \tag{1}
$$

we denote a nonlinear first-order PDE. Here $u$ is a scalar function of the vector variable $x$, and $Du = (u_{x_1}, \ldots, u_{x_n})$ is its gradient.

From the book

NOTATION. We write
$$(D_a u, D^2_{xa}u) =
\begin{pmatrix}
u_{a_1} & u_{x_1a_1} &\dots & u_{x_n a_1} \\
\vdots & \vdots & \ddots & \vdots \\
u_{a_n} & u_{x_1a_n} &\dots & u_{x_n a_n}
\end{pmatrix}_{n \times (n+1)} \tag{2}
$$

where $u(x;a)$ is a solution of $(1)$ parametrized by $a \in A \subset \mathbb{R}^n$.

DEFINITION. A $C^2$ function $u = u(x;a)$ is called a complete integral in $U\times A$ provided

  • (i) $\quad u(x;a)$ solves $(1)$ for each $a \in A$
  • (ii) $\quad \text{rank}(D_a u, D^2_{xa}u) = n \quad (x \in U, a \in A)$
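
For concreteness, here is a quick sanity check of the two conditions on a toy example of my own (not from the book): for $F(Du,u,x) = u - x\cdot Du$, the family $u(x;a) = a\cdot x$ solves $(1)$ for every $a\in\mathbb{R}^n$, and the matrix $(D_a u, D^2_{xa}u) = (x \mid I_n)$ from $(2)$ has rank $n$.

```python
import numpy as np

# Toy complete integral (my example, not Evans's):
# F(Du, u, x) = u - x . Du = 0 is solved by u(x; a) = a . x for every a,
# and per (2) the n x (n+1) matrix (D_a u, D^2_{xa} u) = (x | I_n) has rank n.
n = 3
rng = np.random.default_rng(0)
x, a = rng.normal(size=n), rng.normal(size=n)

Du = a                          # u_{x_i} = a_i
Dau = x                         # u_{a_j} = x_j
Dxau = np.eye(n)                # u_{x_i a_j} = delta_{ij}

print(abs(a @ x - x @ Du) < 1e-12)           # condition (i): F(Du, u, x) = 0
M = np.column_stack([Dau, Dxau])             # the matrix (2)
print(np.linalg.matrix_rank(M) == n)         # condition (ii): rank n
```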

After this definition we have

Interpretation. Condition (ii) ensures $u(x;a)$ "depends on all the $n$ independent parameters $a_1,\ldots,a_n$". To see this suppose $B \subset \mathbb{R}^{n-1}$ is open, and for each $b \in B$ assume $v = v(x;b)$, $(x \in U)$ is a solution of $(1)$. Suppose also there exists a $C^1$ mapping $\psi : A \to B$, $\psi = (\psi^1,\ldots,\psi^{n-1})$ such that
$$
u(x;a) = v(x;\psi(a)) \quad (x \in U, a \in A) \tag{3}
$$

That is, we are supposing the function $u(x;a)$ "really depends only on the $n-1$ parameters $b_1,\ldots, b_{n-1}$". But then
$$
u_{x_i a_j}(x;a) = \sum_{k=1}^{n-1} v_{x_i b_k}(x;\psi(a)) \psi_{a_j}^k(a) \quad (i,j = 1,\ldots, n) \tag{*}
$$

Consequently
$$
\det(D_{xa}^2 u) = \sum_{k_1,\ldots, k_n = 1}^{n-1} v_{x_1 b_{k_1}} \ldots v_{x_n b_{k_n}} \det \begin{pmatrix}
\psi_{a_1}^{k_1} & \ldots & \psi_{a_n}^{k_1} \\
& \ddots & \\
\psi_{a_1}^{k_n} & \ldots & \psi_{a_n}^{k_n}
\end{pmatrix} \tag{**}
$$
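
As a sanity check, the expansion $(**)$ can be tested numerically, with random matrices standing in for the derivatives of $v$ and $\psi$ (a sketch of my own: for an inner dimension $m = n$ the same expansion reproduces the determinant exactly, while for $m = n-1$, the case at hand, both sides vanish):

```python
import itertools
import numpy as np

# Check of the expansion behind (**): for any factorization D = V @ Psi,
# with V of shape (n, m) and Psi of shape (m, n),
#   det(D) = sum_{k_1..k_n} V[0,k_1]...V[n-1,k_n] * det(Psi[[k_1,...,k_n], :]).
def expansion(V, Psi):
    n, m = V.shape
    total = 0.0
    for ks in itertools.product(range(m), repeat=n):
        coeff = np.prod([V[i, ks[i]] for i in range(n)])  # v_{x_1 b_{k_1}} ... v_{x_n b_{k_n}}
        total += coeff * np.linalg.det(Psi[list(ks), :])  # det B(k_1, ..., k_n)
    return total

rng = np.random.default_rng(1)
n = 3

V, Psi = rng.normal(size=(n, n)), rng.normal(size=(n, n))
print(np.linalg.det(V @ Psi), expansion(V, Psi))   # agree: identity holds for m = n

V, Psi = rng.normal(size=(n, n - 1)), rng.normal(size=(n - 1, n))
print(np.linalg.det(V @ Psi), expansion(V, Psi))   # both ~0: the case (**), m = n - 1
```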

Still, I don't see how $(**)$ follows from $(*)$. To me, $(*)$ gives the $(i,j)$ entry of the product of two matrices, namely

$$
V(x;a) =
\begin{pmatrix}
v_{x_1 b_1}(x;\psi(a)) & \ldots & v_{x_1 b_{n-1}}(x;\psi(a)) \\
\vdots & \ddots & \vdots \\
v_{x_n b_1}(x;\psi(a)) & \ldots & v_{x_n b_{n-1}}(x;\psi(a))
\end{pmatrix}
$$

and
$$
\Psi(a) =
\begin{pmatrix}
\psi_{a_1}^1(a) & \ldots & \psi_{a_n}^{1}(a) \\
\vdots & \ddots & \vdots \\
\psi_{a_1}^{n-1}(a) & \ldots & \psi_{a_n}^{n-1}(a)
\end{pmatrix}
$$

So

$$
D^2_{xa} u = V(x;a) \Psi(a)
$$

But from this I cannot derive the determinant formula, because the matrices are rectangular rather than square, so I cannot apply the Binet rule for the determinant of a product.

Any suggestions?

Also, continuing with the chapter, it seems that in order to use this method I'd first need to find a complete integral, from which I can then generate other solutions.

So at this point the question is: should I use the method of characteristics, described later in the chapter, to find a complete integral first?

{1} L. C. Evans, Partial Differential Equations, 2nd ed., American Mathematical Society, 2010.

Update: I'd still like to understand where $(**)$ comes from; however, I've found a workaround that doesn't require any explicit computation.

I can use the dimension theorem (rank-nullity) to reach the same conclusion, since I can regard $V(x;a)$ and $\Psi(a)$ as linear maps.

Since $V(x;a) : \mathbb{R}^{n-1} \to \mathbb{R}^n$ and $\Psi(a) : \mathbb{R}^n \to \mathbb{R}^{n-1}$, we must have $\operatorname{rank}(V(x;a)) \leq n-1$ and $\operatorname{rank}(\Psi(a)) \leq n-1$; more specifically, $\dim(\ker \Psi(a)) \geq 1$, i.e. an entire subspace of dimension at least $1$ is mapped to $0$. This gives $\dim(\ker(V(x;a)\Psi(a))) \geq 1$, so $\operatorname{rank}(V(x;a)\Psi(a)) \leq n-1$, which in turn yields $\operatorname{rank}(D^2_{xa}u) \leq n-1$. Since $D^2_{xa}u : \mathbb{R}^n \to \mathbb{R}^n$, its determinant must be $0$.
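
A quick numerical illustration of this rank argument (a sketch of my own, with random matrices standing in for $V(x;a)$ and $\Psi(a)$):

```python
import numpy as np

# Rank-nullity check: the product of an n x (n-1) and an (n-1) x n matrix
# has rank at most n-1, hence zero determinant.
n = 5
rng = np.random.default_rng(2)
V = rng.normal(size=(n, n - 1))     # stands in for V(x;a):  R^{n-1} -> R^n
Psi = rng.normal(size=(n - 1, n))   # stands in for Psi(a):  R^n  -> R^{n-1}

D = V @ Psi                         # stands in for D^2_{xa} u
print(np.linalg.matrix_rank(D))     # at most n - 1 (here: 4)
print(np.linalg.det(D))             # ~0 up to floating-point round-off
```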

I think this argument works fine, but Evans uses computations like $(**)$ in the book, so understanding where it comes from might make my life easier in the future.

Best Answer

For a matrix $A=(a_{ij})_{i,j=1,\dots,n}$, the determinant is
$$ \det A = \epsilon_{i_1\dots i_n} a_{1i_1}\dots a_{n i_n} $$
where Einstein summation (see also the Levi-Civita symbol) is used for the $n$ indices $i_1,\dots, i_n$. Substituting $A = D^2_{xa} u$, with
$$ a_{ij} = u_{x_i a_j} = \sum_{k=1}^{n-1} v_{x_i b_k} \psi_{a_j}^k, $$
we obtain (relabelling the dummy variable $k$ in each term)
\begin{align}
\det D^2_{xa} u &= \epsilon_{i_1\dots i_n} a_{1i_1}\dots a_{n i_n} \\
&= \epsilon_{i_1\dots i_n} \sum_{k_1=1}^{n-1}\color{blue}{v_{x_1 b_{k_1}} }\color{red}{\psi_{a_{i_1}}^{k_1}} \dots \sum_{k_n=1}^{n-1}\color{blue}{v_{x_n b_{k_n}}} \color{red}{\psi_{a_{i_n}}^{k_n}} \\
&= \sum_{k_1,\dots,k_n=1}^{n-1} \epsilon_{i_1\dots i_n} \color{blue}{v_{x_1 b_{k_1}} \dots v_{x_n b_{k_n}}} \color{red}{\psi_{a_{i_1}}^{k_1} \dots \psi_{a_{i_n}}^{k_n}} \\
&= \sum_{k_1,\dots,k_n=1}^{n-1} v_{x_1 b_{k_1}} \dots v_{x_n b_{k_n}} \, \epsilon_{i_1\dots i_n} \psi_{a_{i_1}}^{k_1} \dots \psi_{a_{i_n}}^{k_n} \\
&= \sum_{k_1,\dots,k_n = 1}^{n-1} v_{x_1 b_{k_1}} \dots v_{x_n b_{k_n}} \det B(k_1,\dots,k_n),
\end{align}
where $B(k_1,\dots,k_n) = (b_{IJ})_{I,J=1,\dots,n}$ is the matrix with entries
$$ b_{IJ} = \psi_{a_J}^{k_I}, $$
which is precisely formula $(**)$. It is perhaps worth noting that these manipulations are valid because the implicit sums handle all the "vectoriness", so we need only care about multiplication of real numbers, which is commutative.
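
To close the loop: since each $k_I$ ranges only over $\{1,\dots,n-1\}$, by the pigeonhole principle at least two of the $n$ indices $k_1,\dots,k_n$ must coincide, so $B(k_1,\dots,k_n)$ has two equal rows and every determinant in $(**)$ vanishes. Hence $\det(D^2_{xa}u) = 0$, consistent with the rank-nullity argument in the update. A quick numerical check (random stand-in for the Jacobian of $\psi$):

```python
import itertools
import numpy as np

# Pigeonhole punchline: each k_I ranges over {1, ..., n-1}, so among the n
# indices (k_1, ..., k_n) two must coincide; then B(k_1, ..., k_n) has two
# equal rows and every determinant appearing in (**) is zero.
n = 3
rng = np.random.default_rng(3)
Psi = rng.normal(size=(n - 1, n))      # Psi[k, j] = psi^{k+1}_{a_{j+1}}

dets = [np.linalg.det(Psi[list(ks), :])
        for ks in itertools.product(range(n - 1), repeat=n)]
print(max(abs(d) for d in dets))       # ~0: every single term vanishes
```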
