[Math] Prove That the Gram Matrix Determines Vectors Up to Isometry / Multiplication by Unitary Matrix

isometry, linear-algebra, positive-semidefinite, quadratic-forms, unitary-matrices

According to http://mathworld.wolfram.com/GramMatrix.html, the Gram matrix determines a set of vectors up to isometry. I'm trying to prove this statement.

More specifically, let $A, B \in \mathbb{R}^{m \times n}$. Here, I'm thinking of $A$ and $B$ as stackings of $m$ row vectors, each row vector living in $\mathbb{R}^{n}$. Given that $AA^T = BB^T$, I want to show there exists an orthogonal matrix $P$ such that $P a_i = b_i$ for $i=1,\dots,m$, where $a_i$ (resp. $b_i$) denotes the $i$-th row of $A$ (resp. $B$).
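A quick NumPy sanity check of this setup, if it helps (the sizes, the seed, and the construction of $B$ by rotating the rows of $A$ are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 6  # arbitrary sizes with m <= n

A = rng.standard_normal((m, n))        # m row vectors in R^n
G = A @ A.T                            # Gram matrix: G[i, j] = <a_i, a_j>

# Rotate every row by the same orthogonal map P (QR of a random matrix).
P, _ = np.linalg.qr(rng.standard_normal((n, n)))
B = A @ P.T                            # row i of B is P a_i

print(np.allclose(G, B @ B.T))         # True: the Gram matrix is unchanged
```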

Here's what I've tried. Let $A = U_1 \Sigma_1 V_1^T$ and $B = U_2 \Sigma_2 V_2^T$ be the SVDs of $A$ and $B$. Then, since the columns of $U_1$ form an orthonormal eigenbasis of $AA^T$, and likewise for $U_2$, there must exist an orthogonal matrix $P$ such that $U_2 = P U_1$; write $U := U_1$, so $U_2 = PU$. Furthermore, since $\sigma_i(A)^2 = \lambda_i(AA^T)$ and $AA^T = BB^T$, we have $\Sigma_1 = \Sigma_2 =: \Sigma$. So, since $A V_1 = U \Sigma$, plugging into $B$'s SVD we get $B = P A V_1 V_2^T$.

This is where I got stuck, as I don't know how to deal with the $V_1 V_2^T$ on the right. I believe $V_1$ and $V_2$ will be orthonormal bases for the same subspace, but I don't think they will be the same basis. I could also make the same argument for the $V_i$'s as for the $U_i$'s and say there is some orthogonal $Q$ which maps between $V_1$ and $V_2$, leaving something like $B = P A Q$.
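This stuck point is easy to see numerically. In the sketch below (again with arbitrary sizes and seed), $B$ is built from $A$ by an orthogonal change on the right: the singular values match, but the right singular vectors do not:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 6

A = rng.standard_normal((m, n))
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
B = A @ Q                                    # then A A^T = B B^T

U1, s1, Vh1 = np.linalg.svd(A, full_matrices=False)
U2, s2, Vh2 = np.linalg.svd(B, full_matrices=False)

print(np.allclose(s1, s2))                   # True: Sigma_1 = Sigma_2
print(np.allclose(Vh1, Vh2))                 # False: V_1 and V_2 differ
print(np.allclose(Vh1 @ Vh2.T, np.eye(m)))   # False: not the same basis
```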

Best Answer

What you want to show is that if $A$, $B$ are $m\times n$ then
$$A\cdot A^T= B\cdot B^T$$ if and only if there exists an $n\times n$ matrix $U$ with $U\cdot U^T = I_n$ (that is, $U$ orthogonal) such that $B = A \cdot U$.

The implication $\Leftarrow$ is easy but instructive:

$$B \cdot B^T = A U U^T A^T = A (U U^T) A^T = A \cdot I_n \cdot A^T = A \cdot A^T$$
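As a quick numerical mirror of this identity (arbitrary sizes and seed, with $U$ built by QR factorization of a random matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 5

A = rng.standard_normal((m, n))
U, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthogonal: U U^T = I_n

B = A @ U
print(np.allclose(B @ B.T, A @ A.T))              # True, matching the display
```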

Let's do $\Rightarrow$. We'll use your idea with the SVD. To keep the notation light, assume $m \le n$ (the case $m > n$ is analogous) and take the thin SVDs \begin{eqnarray*} A = U_1 \Sigma_1 V_1^T \\ B = U_2 \Sigma_2 V_2^T \end{eqnarray*} where $U_1, U_2 \in \mathbb{R}^{m \times m}$ are orthogonal, $\Sigma_1, \Sigma_2 \in \mathbb{R}^{m \times m}$ are diagonal, and $V_1, V_2 \in \mathbb{R}^{n \times m}$ have orthonormal columns, so that $V_i^T V_i = I_m$. Then \begin{eqnarray*} AA^T = U_1 \Sigma_1 V_1^T\cdot V_1 \Sigma_1 U_1^T= U_1 \Sigma_1^2 U_1^T\\ BB^T = U_2 \Sigma_2 V_2^T\cdot V_2 \Sigma_2 U_2^T= U_2 \Sigma_2^2 U_2^T \end{eqnarray*} Let's recall that every positive semidefinite matrix has a unique positive semidefinite square root. Here \begin{eqnarray*} \sqrt{AA^T} = U_1 \Sigma_1 U_1^T \\ \sqrt{BB^T} = U_2 \Sigma_2 U_2^T \end{eqnarray*} We have $AA^T = BB^T$, so $\sqrt{AA^T} = \sqrt{BB^T}$, that is, $$U_1 \Sigma_1 U_1^T = U_2 \Sigma_2 U_2^T$$ We are now essentially done: \begin{eqnarray*} B = U_2 \Sigma_2 V_2^T= U_2 \Sigma_2 U_2^T \cdot U_2 V_2^T = U_1 \Sigma_1 U_1^T \cdot (U_2 V_2^T) = \\ = U_1 \Sigma_1 V_1^T \cdot (V_1 U_1^T) \cdot (U_2 V_2^T) = A \cdot U \end{eqnarray*} with $U := V_1 U_1^T U_2 V_2^T$, where in the last step we inserted $V_1^T V_1 = I_m$ after $\Sigma_1$.

One caveat: with the thin factors, $U U^T = V_1 V_1^T$, the orthogonal projection onto the row space of $A$, which equals $I_n$ only when $m = n$. To get a genuinely orthogonal $U$ when $m < n$, complete $V_1$ and $V_2$ to orthogonal matrices $[V_1 \; V_1^\perp], [V_2 \; V_2^\perp] \in \mathbb{R}^{n \times n}$ and set $U := V_1 U_1^T U_2 V_2^T + V_1^\perp (V_2^\perp)^T$; the new term changes nothing in $A \cdot U$ because $A V_1^\perp = 0$, and now $U U^T = V_1 V_1^T + V_1^\perp (V_1^\perp)^T = I_n$.
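If it helps, here is a NumPy sketch of the whole construction (sizes, seed, and the way $B$ is generated are arbitrary; variable names mirror the proof, and the last step carries out the completion of $V_1, V_2$ described above):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 4, 7

A = rng.standard_normal((m, n))
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
B = A @ Q                                    # guarantees A A^T = B B^T

# Full SVDs: Vh1, Vh2 are n x n; their first m rows give the thin V_1^T, V_2^T.
U1, s1, Vh1 = np.linalg.svd(A)
U2, s2, Vh2 = np.linalg.svd(B)

# Thin construction: U = V_1 U_1^T U_2 V_2^T (a partial isometry when m < n).
U_thin = Vh1[:m].T @ U1.T @ U2 @ Vh2[:m]
print(np.allclose(A @ U_thin, B))            # True: B = A U already holds

# Completion: add V_1^perp (V_2^perp)^T to make U genuinely orthogonal.
U_orth = U_thin + Vh1[m:].T @ Vh2[m:]
print(np.allclose(A @ U_orth, B))                  # True: A V_1^perp = 0
print(np.allclose(U_orth @ U_orth.T, np.eye(n)))   # True: U U^T = I_n
```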

Really this is about the polar decomposition $A = \sqrt{A A^T} \cdot W_1$, and similarly for $B$.
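For completeness, a small sketch of this left polar decomposition, built by hand from the thin SVD rather than via any library routine (assuming $m \le n$; variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 3, 5

A = rng.standard_normal((m, n))

# Left polar decomposition from the thin SVD: A = P W, with
# P = sqrt(A A^T) positive semidefinite and W having orthonormal rows.
U, s, Vh = np.linalg.svd(A, full_matrices=False)
P = U @ np.diag(s) @ U.T                # sqrt(A A^T)
W = U @ Vh                              # W W^T = I_m

print(np.allclose(P @ W, A))            # True: A = sqrt(A A^T) W
print(np.allclose(W @ W.T, np.eye(m)))  # True
```

With these factors, the $U$ of the proof is exactly $W_1^T W_2$ (extended to a square orthogonal matrix when $m < n$).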
