[Math] reference for linear algebra books that teach reverse Hermite method for symmetric matrices

linear-algebra, quadratic-forms, reference-request

May 11, 2019. Evidently the original method should be attributed to Lagrange in 1759. I had been confused; Hermite is much more recent.

January 13, 2016: a book that does this was mentioned in a question today, Linear Algebra Done Wrong by Sergei Treil. He calls it non-orthogonal diagonalization of a quadratic form, and calls his first method completion of squares (section 2.2.1, pages 201-202). In section 2.2.2, pages 202-205, he describes this method, calling it diagonalization using row/column operations.

The method I mean is useful for symmetric matrices with integer, or at least rational, entries. It diagonalizes but does NOT orthogonally diagonalize. In the direction I do it, I usually call it Hermite reduction or Hermite's method; at the end, I need to find the inverse of my matrix (which usually has determinant one, so that is not so bad). This other method produces an answer directly, a cookbook method not conceptually different from row reduction of matrices, especially from using row reduction to find an inverse. It is very similar to Gauss reduction for positive binary quadratic forms, just allowing rational coefficients in the elementary matrices used; Gauss stuck with integers.

The method is mostly Gauss reduction, which was intended for positive binary forms. We deal with two variables (row/column pairs) at a time. As long as one of the two diagonal entries is nonzero there is no trouble and no choices to be made.
We start with a symmetric matrix $A_0.$ At each step, call it step $n,$ we use some elementary matrix $E_n,$ the same kind as in row reduction, such that $A_n = E_n^T A_{n-1} E_n$ has one fewer pair of nonzero off-diagonal entries. We also begin with $P_0=I,$ then at each step take $P_n=P_{n-1}E_n.$ Eventually we reach some $n=N$ such that $A_N=D$ is diagonal and $P_N=P,$ with $P^T A P = D$ by construction. Oh, also by construction, $P$ has determinant $1.$
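The loop just described can be sketched in a few lines of Python; this is my own illustration, not taken from any of the books discussed, and `fractions.Fraction` keeps the arithmetic exact for rational entries:

```python
from fractions import Fraction

def mul(X, Y):
    """Matrix product of two lists-of-lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def T(X):
    """Transpose."""
    return [list(row) for row in zip(*X)]

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def congruence_diagonalize(A):
    """Return (D, P) with P^T A P = D diagonal; det(P) = 1 by construction."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    P = identity(n)
    for i in range(n):
        if A[i][i] == 0:
            # zero pivot: add a multiple of a later row/column pair to
            # make the i,i entry nonzero (the lower triangular E of the text)
            for j in range(i + 1, n):
                if A[i][j] != 0:
                    # t = 1 works unless 2*A[i][j] + A[j][j] happens to
                    # vanish, in which case t = -1 does
                    t = Fraction(1) if 2 * A[i][j] + A[j][j] != 0 else Fraction(-1)
                    E = identity(n)
                    E[j][i] = t
                    A, P = mul(T(E), mul(A, E)), mul(P, E)
                    break
        if A[i][i] == 0:
            continue  # this row/column pair is already all zero
        for j in range(i + 1, n):
            if A[i][j] != 0:
                # upper triangular E clearing the i,j / j,i pair
                E = identity(n)
                E[i][j] = -A[i][j] / A[i][i]
                A, P = mul(T(E), mul(A, E)), mul(P, E)
    return A, P

# the first worked example below:
D, P = congruence_diagonalize([[2, 3], [3, 4]])
# D == [[2, 0], [0, -1/2]],  P == [[1, -3/2], [0, 1]]
```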

I JUST PUT AN EXAMPLE AT Find the transitional matrix that would transform this form to a diagonal form. It is not yet typeset; it is input and output from gp-pari and should not be too difficult to read. Indeed, one may copy the individual commands into gp-pari and see how it progresses. I also put a 4 by 4 answer, the final answer typeset and otherwise gp-pari output, at Given a $4\times 4$ symmetric matrix, is there an efficient way to find its eigenvalues and diagonalize it?

Let me go through the two examples; the second involves a choice because we get a zero diagonal element at one point.

First:

Let $$A = \left(\begin{array}{cc} 2&3 \\ 3&4 \end{array}\right) \in
M_2(\mathbb{R})$$

Find $P$ such that $P^TAP = D$ where $D$ is a diagonal matrix.

So here's the solution:

$$(A \mid I) = \left(\begin{array}{cc|cc} 2&3&1&0\\ 3&4&0&1 \end{array}\right) \sim \left(\begin{array}{cc|cc} 2&0&1&-3/2\\ 0&-1/2&0&1 \end{array}\right)$$

Therefore, $$P = \left(\begin{array}{cc} 1&-3/2\\ 0&1 \end{array}\right) \\ P^TAP = \left(\begin{array}{cc} 2&0\\ 0&-1/2 \end{array}\right) $$
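Just to double-check that claimed $P$ with exact arithmetic (plain Python, nothing beyond the standard library; my own check, not part of the original solution):

```python
from fractions import Fraction

def mul(X, Y):
    """Matrix product of two lists-of-lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[Fraction(2), Fraction(3)], [Fraction(3), Fraction(4)]]
P = [[Fraction(1), Fraction(-3, 2)], [Fraction(0), Fraction(1)]]
PT = [list(row) for row in zip(*P)]   # P transposed
D = mul(PT, mul(A, P))                # P^T A P
assert D == [[2, 0], [0, Fraction(-1, 2)]]
```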

So, this one was just Gauss reduction, allowing a rational off-diagonal entry in my $E_1$ in order to force the $1,2$ and $2,1$ pair of positions to become zero. As long as the upper left of the two diagonal coefficients is nonzero, we may take our $E_n$ to be upper triangular. If we are faced with a zero diagonal entry in the first row/diagonal position that possesses any nonzero (therefore off-diagonal) entries, we need an extra step to force a nonzero diagonal element.

So, let's do the ever popular form $2xy$ this way.
$$ A = A_0 =
\left(
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right)
$$

As both diagonal entries are zero, switching rows/columns 1 and 2 would still give $0$ in the 1,1 position. We do not like that. Instead, we take
a lower triangular $E_1,$ here
$$ E_1 =
\left(
\begin{array}{cc}
1 & 0 \\
1 & 1
\end{array}
\right)
$$

The way I am numbering the matrices, this gives
$$ A_1 = E_1^T A E_1 =
\left(
\begin{array}{cc}
2 & 1 \\
1 & 0
\end{array}
\right),
$$

also $$ P_1 = E_1. $$ Next, we go back to the more common upper triangular elementary matrices, with
$$ E_2 =
\left(
\begin{array}{cc}
1 & -\frac{1}{2} \\
0 & 1
\end{array}
\right).
$$

$$ D= A_2 = E_2^T A_1 E_2 =
\left(
\begin{array}{cc}
2 & 0 \\
0 & -\frac{1}{2}
\end{array}
\right),
$$

also

$$ P = P_2 = P_1 E_2 = E_1 E_2 =
\left(
\begin{array}{cc}
1 & -\frac{1}{2} \\
1 & \frac{1}{2}
\end{array}
\right).
$$

Note that, from $A_1 = E_1^T A E_1 $ and $D= A_2 = E_2^T A_1 E_2$ we indeed have
$$\color{red}{
D= A_2 = E_2^T (E_1^T A E_1) E_2 = E_2^T E_1^T A E_1 E_2 = (E_1 E_2)^T A (E_1 E_2)}
$$

which is why $P = E_1 E_2.$

The solution manual that has this would use "augmented" matrices, 2 by 4, not recording the individual $E_i,$ just the $A_i$ augmented by $P_i.$ At least, given how I am numbering things, this is how I prefer to write such a summary; it may be slightly different for the examples in the other question:

$$ (A_0|P_0) = \left(\begin{array}{cc|cc} 0&1&1&0\\ 1&0&0&1 \end{array}\right)$$
$$ \mapsto (A_1|P_1) = \left(\begin{array}{cc|cc} 2&1&1&0\\ 1&0&1&1 \end{array}\right)$$
$$ \mapsto (A_2|P_2) = \left(\begin{array}{cc|cc} 2&0&1&-\frac{1}{2}\\ 0&-\frac{1}{2}&1&\frac{1}{2} \end{array}\right)$$
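The same bookkeeping for $2xy$, written out step by step in plain Python (my own transcription of the two steps above, using exact fractions):

```python
from fractions import Fraction

def mul(X, Y):
    """Matrix product of two lists-of-lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def T(X):
    """Transpose."""
    return [list(row) for row in zip(*X)]

A0 = [[Fraction(0), Fraction(1)], [Fraction(1), Fraction(0)]]
P0 = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]

# step 1: lower triangular E_1 creates a nonzero 1,1 entry
E1 = [[Fraction(1), Fraction(0)], [Fraction(1), Fraction(1)]]
A1 = mul(T(E1), mul(A0, E1))   # [[2, 1], [1, 0]]
P1 = mul(P0, E1)               # [[1, 0], [1, 1]]

# step 2: upper triangular E_2 clears the off-diagonal pair
E2 = [[Fraction(1), Fraction(-1, 2)], [Fraction(0), Fraction(1)]]
A2 = mul(T(E2), mul(A1, E2))   # [[2, 0], [0, -1/2]]
P2 = mul(P1, E2)               # [[1, -1/2], [1, 1/2]]
```

The pairs `(A1, P1)` and `(A2, P2)` are exactly the augmented rows $(A_1|P_1)$ and $(A_2|P_2)$ displayed above.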
I have been seeing this method lately, but do not know any book that teaches it (or in what language such a book might be). It would appear to be a book about matrix theory or linear algebra; it may never mention quadratic forms, hard to predict. Or it may do quadratic forms over the reals, as is pretty common, and ignore the case of integer coefficients. I suspect nobody on MSE has taught this method; perhaps it is a recent book.

Here are recent occurrences, apparently two by the same person, then two by another. To find others, look for my answers that use the phrase Hermite reduction. One of the latter is answered my way, just called repeated completing the square, which is exactly right.

Finding $P$ such that $P^TAP$ is a diagonal matrix

Diagonalize a symmetric matrix

Find the transitional matrix that would transform this form to a diagonal form.

Diagonalizing a particular $3\times3$-matrix.

Very similar to the method in a Schaum's outline as seen in this answer: Given a $4\times 4$ symmetric matrix, is there an efficient way to find its eigenvalues and diagonalize it?

Indeed, here is the image uploaded by el.Salvador there:

[image: excerpt of the method from the Schaum's outline]

Best Answer

I think I have the energy today to fill in the details of this png image of a calculation

[image: the calculation from that question]

from this question: Finding $P$ such that $P^TAP$ is a diagonal matrix


$$ P^T H P = D $$ $$\left( \begin{array}{rrr} 1 & 0 & 0 \\ - 4 & 1 & 1 \\ 0 & - \frac{ 1 }{ 2 } & \frac{ 1 }{ 2 } \\ \end{array} \right) \left( \begin{array}{rrr} 1 & 2 & 2 \\ 2 & 4 & 8 \\ 2 & 8 & 4 \\ \end{array} \right) \left( \begin{array}{rrr} 1 & - 4 & 0 \\ 0 & 1 & - \frac{ 1 }{ 2 } \\ 0 & 1 & \frac{ 1 }{ 2 } \\ \end{array} \right) = \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 8 & 0 \\ 0 & 0 & - 2 \\ \end{array} \right) $$ $$ Q^T D Q = H $$ $$\left( \begin{array}{rrr} 1 & 0 & 0 \\ 2 & \frac{ 1 }{ 2 } & - 1 \\ 2 & \frac{ 1 }{ 2 } & 1 \\ \end{array} \right) \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 8 & 0 \\ 0 & 0 & - 2 \\ \end{array} \right) \left( \begin{array}{rrr} 1 & 2 & 2 \\ 0 & \frac{ 1 }{ 2 } & \frac{ 1 }{ 2 } \\ 0 & - 1 & 1 \\ \end{array} \right) = \left( \begin{array}{rrr} 1 & 2 & 2 \\ 2 & 4 & 8 \\ 2 & 8 & 4 \\ \end{array} \right) $$

$$ H = \left( \begin{array}{rrr} 1 & 2 & 2 \\ 2 & 4 & 8 \\ 2 & 8 & 4 \\ \end{array} \right) $$ $$ D_0 = H $$

$$ E_j^T D_{j-1} E_j = D_j $$ $$ P_{j-1} E_j = P_j $$ $$ E_j^{-1} Q_{j-1} = Q_j $$ $$ P_j Q_j = Q_j P_j = I $$ $$ P_j^T H P_j = D_j $$ $$ Q_j^T D_j Q_j = H $$



$$ E_{1} = \left( \begin{array}{rrr} 1 & - 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) $$ $$ P_{1} = \left( \begin{array}{rrr} 1 & - 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; Q_{1} = \left( \begin{array}{rrr} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; D_{1} = \left( \begin{array}{rrr} 1 & 0 & 2 \\ 0 & 0 & 4 \\ 2 & 4 & 4 \\ \end{array} \right) $$


$$ E_{2} = \left( \begin{array}{rrr} 1 & 0 & - 2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) $$ $$ P_{2} = \left( \begin{array}{rrr} 1 & - 2 & - 2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; Q_{2} = \left( \begin{array}{rrr} 1 & 2 & 2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; D_{2} = \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 0 & 4 \\ 0 & 4 & 0 \\ \end{array} \right) $$


$$ E_{3} = \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \\ \end{array} \right) $$ $$ P_{3} = \left( \begin{array}{rrr} 1 & - 4 & - 2 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \\ \end{array} \right) , \; \; \; Q_{3} = \left( \begin{array}{rrr} 1 & 2 & 2 \\ 0 & 1 & 0 \\ 0 & - 1 & 1 \\ \end{array} \right) , \; \; \; D_{3} = \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 8 & 4 \\ 0 & 4 & 0 \\ \end{array} \right) $$


$$ E_{4} = \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & - \frac{ 1 }{ 2 } \\ 0 & 0 & 1 \\ \end{array} \right) $$ $$ P_{4} = \left( \begin{array}{rrr} 1 & - 4 & 0 \\ 0 & 1 & - \frac{ 1 }{ 2 } \\ 0 & 1 & \frac{ 1 }{ 2 } \\ \end{array} \right) , \; \; \; Q_{4} = \left( \begin{array}{rrr} 1 & 2 & 2 \\ 0 & \frac{ 1 }{ 2 } & \frac{ 1 }{ 2 } \\ 0 & - 1 & 1 \\ \end{array} \right) , \; \; \; D_{4} = \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 8 & 0 \\ 0 & 0 & - 2 \\ \end{array} \right) $$
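To confirm the chain above with exact arithmetic (my own check in plain Python, not part of the quoted calculation): the product $E_1 E_2 E_3 E_4$ should reproduce $P_4,$ congruence by it should give $D_4,$ and the listed $Q_4$ should be its inverse.

```python
from fractions import Fraction

def mul(X, Y):
    """Matrix product of two lists-of-lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def T(X):
    """Transpose."""
    return [list(row) for row in zip(*X)]

F = Fraction
H  = [[F(1), F(2), F(2)], [F(2), F(4), F(8)], [F(2), F(8), F(4)]]
E1 = [[F(1), F(-2), F(0)], [F(0), F(1), F(0)], [F(0), F(0), F(1)]]
E2 = [[F(1), F(0), F(-2)], [F(0), F(1), F(0)], [F(0), F(0), F(1)]]
E3 = [[F(1), F(0), F(0)], [F(0), F(1), F(0)], [F(0), F(1), F(1)]]
E4 = [[F(1), F(0), F(0)], [F(0), F(1), F(-1, 2)], [F(0), F(0), F(1)]]

P = mul(mul(E1, E2), mul(E3, E4))   # should equal P_4 above
D = mul(T(P), mul(H, P))            # should equal D_4 = diag(1, 8, -2)
Q = [[F(1), F(2), F(2)], [F(0), F(1, 2), F(1, 2)], [F(0), F(-1), F(1)]]

assert P == [[1, -4, 0], [0, 1, F(-1, 2)], [0, 1, F(1, 2)]]
assert D == [[1, 0, 0], [0, 8, 0], [0, 0, -2]]
assert mul(P, Q) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # Q = P^{-1}
```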


