Linear Algebra – Characterizing Real Symmetric Matrix as A = XX^T – YY^T

algorithms, eigenvalues-eigenvectors, linear-algebra, matrices

In my personal research, in a quest to better understand the subject, I have noticed something concerning the Cholesky factorization of symmetric matrices. Everything I have read states that a real symmetric matrix has such a factorization only if it is positive definite.

Assume first that the matrix admits an LU factorization (a lower triangular matrix times an upper triangular one), which is possible for "most" matrices (it requires all leading principal minors to be non-zero). Writing $A=LU$ opens the door to the Cholesky factorization by inserting a diagonal matrix chosen to make the diagonals of the lower and upper factors equal: $$A=L \underbrace{DD^{-1}}_I U$$
And from the original symmetry of $A$, the two resulting factors are magically transposes of each other.

Now I do notice that if the matrix $A$ is not positive definite (and I do not mind forcing it to be full rank as well), then $U$ has negative entries along its diagonal, and as such $D$ needs complex values to work. I am fully aware that if such a $D$ is chosen then $A=W^TW \ne W^*W$; in other words, $A$ is not factored as a matrix and its conjugate transpose, but the formula $A = XX^T - YY^T$ does not need it to be. Since each column of $LD$ is either purely real or purely imaginary (if this is not obvious, perhaps try some simple examples first), the columns can further be ordered into the form
$$LD = \pmatrix{X & iY}$$
which then gives (with the matching row ordering of the other factor):
$$A=(LD)(LD)^T=\pmatrix{ X & iY} \pmatrix{X^T \\ iY^T}=XX^T – YY^T$$
which is the title formula of my question. This seems like a natural separation into the "positive part" and the "negative part", since every entry in the final form is strictly real. It splits the symmetric matrix into a positive semidefinite symmetric piece and a negative semidefinite symmetric piece.
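To make the construction above concrete, here is a minimal numerical sketch in Python/NumPy (my own illustration, not part of the question): it computes $L$ and the pivots by plain Gaussian elimination without pivoting, scales column $k$ by $\sqrt{|d_k|}$, and sorts the columns into $X$ and $Y$ by the sign of the pivot. The function name `split_via_ldl` and the test matrix are my own choices; in practice `scipy.linalg.ldl` would be the more robust route.

```python
import numpy as np

def split_via_ldl(A):
    """Return real (X, Y) with A = X @ X.T - Y @ Y.T, built from A = L D L^T.

    Assumes A is symmetric and full rank, and eliminates without pivoting,
    so every leading principal minor must be non-zero.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L, S = np.eye(n), A.copy()
    for k in range(n - 1):
        L[k+1:, k] = S[k+1:, k] / S[k, k]
        S[k+1:, k+1:] -= np.outer(L[k+1:, k], S[k, k+1:])
    d = np.diag(S)                   # the pivots: the diagonal of D
    W = L * np.sqrt(np.abs(d))       # scale column k by sqrt(|d_k|)
    return W[:, d > 0], W[:, d < 0]  # "real" columns vs. "imaginary" columns

A = np.array([[4.0, 2.0, 0.0],
              [2.0, -3.0, 1.0],
              [0.0, 1.0, 5.0]])      # symmetric and indefinite
X, Y = split_via_ldl(A)
print(np.allclose(A, X @ X.T - Y @ Y.T))   # True
```

Note that $X$ and $Y$ here are genuinely real: the factor $i$ from the negative pivots cancels in $(iY)(iY)^T = -YY^T$, exactly as in the derivation.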

I have not seen this before, and as I stated in the first paragraph, I always hear that the Cholesky factorization just does not exist unless the matrix is positive definite.

Setting aside for now the cases that have no LU factorization: is this formula known at all? Is there an error in my argument that it exists? I would be interested in any specific, good examples or references. (If your answer is "search for such-and-such term," please keep it short, as I have already done much searching; if your suggestion is new to me, then I would like a good explanation along with the short pointer.)

I will post this question and not look at answers for 6 hours from posting (work time), so feel free to take your time with your response, and thank you for reading this far.


EDIT

What interests me about the equation is the possibility of separating the negative eigenvalues from the positive ones. The answers so far point out the obvious separation, but I like that this formula does not compute eigenvalues first. It looks to me as though it splits the eigenvalue problem into two smaller sets, a sort of binary search on all eigenvalues at once, at just the computational cost of an LU factorization.

It also looks like a good candidate for a norm. This may actually reduce to another norm, or need adjustment (such as adding a power of 2 or something similar), but consider:
$$\sqrt{|XX^T|+|YY^T|}$$

I submit, without proof for the moment, that this is a norm, since it is the natural extension of the "complex square root" of the matrix $A$ as $\pmatrix{ X & iY}$.
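As a data point on the norm question, here is a quick numerical check of my own, with two assumptions the post does not fix: $|\cdot|$ is taken to be the Frobenius norm, and $X$, $Y$ come from the eigenvalue-based split. Under those assumptions, with the "power 2" adjustment mentioned above, the quantity collapses to the ordinary Frobenius norm of $A$, because $|XX^T|_F^2$ is the sum of $\lambda_i^2$ over the positive eigenvalues and $|YY^T|_F^2$ is the same sum over the negative ones:

```python
import numpy as np

# Assumptions (mine, not the post's): |.| is the Frobenius norm, and the
# split comes from the eigendecomposition, not from the LDL^T construction.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B + B.T                              # random symmetric test matrix

lam, U = np.linalg.eigh(A)
X = U[:, lam > 0] * np.sqrt(lam[lam > 0])
Y = U[:, lam < 0] * np.sqrt(-lam[lam < 0])

candidate = np.sqrt(np.linalg.norm(X @ X.T, 'fro')**2
                    + np.linalg.norm(Y @ Y.T, 'fro')**2)
print(np.isclose(candidate, np.linalg.norm(A, 'fro')))   # True
```

So under these particular assumptions the candidate does reduce to a known norm, as suspected.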

These all seem like logical directions to me, and I would like to know whether any of this looks familiar to anyone, as if it has been done before.


EDIT #2
I see now that $X^TY=\mathbf 0$ is necessary for the Cholesky form to coincide with the eigenvalue form of this equation, and this is patently untrue of the Cholesky factoring. I was hoping the separation into two positive semidefinite symmetric matrices was unique, since then I would be guaranteed that it was the eigen-decomposition. The Cholesky approach finds a unique one within its range of solutions, but I now see it is not the same one.
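A small numerical check of this observation (my own illustration; I read the condition as $X^TY=\mathbf 0$, i.e. the column spaces of the two factors being orthogonal): the eigenvalue-based split satisfies it, while the LDL$^T$-based split from the question generally does not.

```python
import numpy as np

A = np.array([[4.0, 2.0, 0.0],
              [2.0, -3.0, 1.0],
              [0.0, 1.0, 5.0]])      # symmetric and indefinite
n = A.shape[0]

# Eigenvalue-based split: columns lie in orthogonal eigenspaces.
lam, U = np.linalg.eigh(A)
Xe = U[:, lam > 0] * np.sqrt(lam[lam > 0])
Ye = U[:, lam < 0] * np.sqrt(-lam[lam < 0])

# LDL^T-based split via plain Gaussian elimination without pivoting.
L, S = np.eye(n), A.copy()
for k in range(n - 1):
    L[k+1:, k] = S[k+1:, k] / S[k, k]
    S[k+1:, k+1:] -= np.outer(L[k+1:, k], S[k, k+1:])
d = np.diag(S)                       # the pivots (diagonal of D)
W = L * np.sqrt(np.abs(d))
Xc, Yc = W[:, d > 0], W[:, d < 0]

print(np.allclose(Xe.T @ Ye, 0))               # True: eigenspaces orthogonal
print(np.allclose(A, Xc @ Xc.T - Yc @ Yc.T))   # True: the split is valid
print(np.allclose(Xc.T @ Yc, 0))               # False: the parts overlap
```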

I will be crediting user dineshdileep within a few days if no better answer comes along.

Best Answer

Let us take a real symmetric $N \times N$ matrix $A$; it can have positive, zero, and negative eigenvalues. For the moment, assume all of its eigenvalues are non-zero, i.e. either positive or negative. Let $u_i$ be its eigenvectors and $\lambda_i$ its eigenvalues, ordered so that $\lambda_{1}\geq \lambda_{2}\geq\cdots\geq \lambda_{r}>0$ and $0>\lambda_{r+1} \geq \lambda_{r+2} \geq \cdots \geq \lambda_{N}$. So $A$ has $r$ positive eigenvalues and $N-r$ negative eigenvalues. We can write

\begin{align} A &=\sum_{i=1}^{N}\lambda_{i}u_{i}u_{i}^{T} \\ \nonumber &= \sum_{i=1}^{r}\lambda_{i}u_{i}u_{i}^{T} - \sum_{i=r+1}^{N}|\lambda_{i}|u_{i}u_{i}^{T} \\ \nonumber &= A_1 - A_2 \end{align} where \begin{align} A_1 &= \sum_{i=1}^{r}\lambda_{i}u_{i}u_{i}^{T} \\ \nonumber A_2 &= \sum_{i=r+1}^{N}|\lambda_{i}|u_{i}u_{i}^{T} \end{align} Here I used the fact that $\lambda_{i}=-|\lambda_{i}|$ whenever $\lambda_{i}<0$. Now $A_1$ and $A_2$ are positive semidefinite matrices, since each has only non-negative eigenvalues. Hence they admit Cholesky-type factorizations, say $A_1=XX^T$ and $A_2=YY^T$, so that $A=XX^T-YY^T$. I hope this answers your question. I believe this holds even if $A$ has zero eigenvalues.
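A direct translation of this construction into Python/NumPy (a sketch of my own, not part of the answer): split the spectrum of a symmetric $A$ into positive and negative parts, form $A_1$ and $A_2$, and note that taking the scaled eigenvector columns as $X$ and $Y$ already gives the factors, with no separate Cholesky step needed.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B + B.T                               # random symmetric (indefinite) A

lam, U = np.linalg.eigh(A)                # A = sum_i lam_i u_i u_i^T
pos, neg = lam > 0, lam < 0

A1 = (U[:, pos] * lam[pos]) @ U[:, pos].T        # sum over positive lam_i
A2 = (U[:, neg] * (-lam[neg])) @ U[:, neg].T     # sum over |lam_i|, lam_i < 0

X = U[:, pos] * np.sqrt(lam[pos])         # so that A1 = X X^T
Y = U[:, neg] * np.sqrt(-lam[neg])        # so that A2 = Y Y^T

print(np.allclose(A, A1 - A2))            # True
print(np.allclose(A, X @ X.T - Y @ Y.T))  # True
```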