Can every square matrix be expressed as a unique product of a symmetric and an orthogonal matrix?

eigenvalues-eigenvectors, intuition, linear algebra, orthogonal matrices, symmetric matrices

Background

My undergrad education in linear algebra ended with eigenvalues, symmetric matrices, and orthogonal matrices. Now I'm trying to gain an intuitive understanding of singular values, and I have reached a new perspective on square matrices, described below. However, in case I have some misconceptions or am misleading myself, I would like to test it.

Questions

Can every square matrix be expressed (uniquely?) as a product:

$$A=SR$$

where

  • $S$ is a symmetric matrix (and therefore has orthogonal eigenspaces, like an inertia tensor or covariance matrix, i.e. it can be diagonalized by an orthogonal matrix)
  • $R$ is an orthogonal matrix (and therefore is a distance-preserving rotation/reflection)

If yes, is this decomposition unique?

I like this perspective because:

  • A symmetric matrix and an orthogonal matrix both act in ways with simple geometric interpretations.
  • It makes evident what singular values are. (In general they are the absolute values of the eigenvalues of $S$; when $S$ is chosen positive semidefinite, they are exactly its eigenvalues.)
  • The geometric effect of the transpose becomes more apparent: $A^T=R^{-1}S$. So does its relationship to the inverse, $A^{-1}=R^{-1}S^{-1}$, since inverting a symmetric matrix has a simple geometric meaning (each eigenvalue is inverted). A one-line check of both identities is given below the list.
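
A minimal check of both identities, using only $S^T = S$ and $R^T = R^{-1}$:

$$A^T = (SR)^T = R^T S^T = R^{-1}S, \qquad A^{-1} = (SR)^{-1} = R^{-1}S^{-1}$$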

Any guidance or comment on a shortcoming, caveat, or misconception you see in my understanding would be appreciated, as would any pointer to related concepts you think would be useful to be aware of before I internalize this perspective.

Best Answer

Sure, use the singular value decomposition: if $A=U\Sigma V^\top$ for $U,V$ orthogonal and $\Sigma$ diagonal, then we can rearrange this to get $A = U\Sigma U^\top \cdot U V^\top$, where $U \Sigma U^\top$ is symmetric (its spectral decomposition is given explicitly), and $U V^\top$ is orthogonal as a product of orthogonal matrices. As for uniqueness: if $S$ is only required to be symmetric, the factorization is not unique (already $I = I \cdot I = (-I)(-I)$); requiring $S$ to be positive semidefinite gives the polar decomposition, which determines $S$ uniquely, and $R$ as well whenever $A$ is invertible.
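
For concreteness, here is a minimal NumPy sketch of this construction; the test matrix and its size are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))  # an arbitrary square test matrix

# SVD: A = U @ diag(sigma) @ Vt, with U and Vt orthogonal
U, sigma, Vt = np.linalg.svd(A)

S = U @ np.diag(sigma) @ U.T  # symmetric (in fact positive semidefinite)
R = U @ Vt                    # orthogonal, as a product of orthogonal matrices

assert np.allclose(S, S.T)              # S is symmetric
assert np.allclose(R @ R.T, np.eye(4))  # R is orthogonal
assert np.allclose(S @ R, A)            # A = S R
# the eigenvalues of S are exactly the singular values of A
assert np.allclose(np.linalg.eigvalsh(S), np.sort(sigma))
```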

In general, the geometric intuition you describe is already evident from $U \Sigma V^\top$ without the need for this trick; $A \vec{x}$ is just the following (verified in the sketch after the list):

  1. Project $\vec{x}$ onto the columns of $V$ to get $\vec{\pi}=V^\top \vec{x}$.

  2. Stretch each entry $\pi_i$ of the result by the singular value $\sigma_i$.

  3. Use the resulting entries as coordinates to form a linear combination of the columns of $U$.
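
A minimal NumPy sketch of these three steps (the random matrix and vector are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

U, sigma, Vt = np.linalg.svd(A)

pi = Vt @ x             # 1. coordinates of x with respect to the columns of V
stretched = sigma * pi  # 2. stretch the i-th entry by sigma_i
y = U @ stretched       # 3. linear combination of the columns of U

assert np.allclose(y, A @ x)  # same result as applying A directly
```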
