[Math] Matrix identity involving vec and Kronecker product, or a norm bound

linear algebra

Consider two matrices $A$ and $B$ of the same size. Let $\operatorname{vec}(A)$ denote the operator that maps $A$ to a column vector by stacking its columns. Is there a way to express $\operatorname{vec}(A) \operatorname{vec}(B)^T$ in terms of the Kronecker (tensor) product of the original matrices?

More interestingly, can we give an upper bound on the operator norm of $\operatorname{vec}(A) \operatorname{vec}(B)^T$?

Best Answer

Let $$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, ~~~~ B = \begin{pmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{nn} \end{pmatrix}. $$ So we have $$\begin{align} \operatorname{vec}(A) \operatorname{vec}(B)^T &= \begin{pmatrix} a_{11} \\ \vdots \\ a_{n1} \\ a_{12} \\ \vdots \\ a_{n2} \\ \vdots \\ a_{1n} \\ \vdots \\ a_{nn} \end{pmatrix} \begin{pmatrix} b_{11} & \cdots & b_{n1} & b_{12} & \cdots & b_{n2} & \cdots & b_{1n} & \cdots & b_{nn} \end{pmatrix} \\ &= \begin{pmatrix} a_{11}b_{11} & a_{11}b_{21} & \cdots & a_{11}b_{nn} \\ a_{21}b_{11} & a_{21}b_{21} & \cdots & a_{21}b_{nn} \\ \vdots & \vdots & \ddots & \vdots \\ a_{nn}b_{11} & a_{nn}b_{21} & \cdots & a_{nn}b_{nn} \end{pmatrix} \end{align}$$
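As a quick sanity check (not part of the original answer), the construction above can be reproduced with NumPy, implementing $\operatorname{vec}$ as column-major flattening:

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

def vec(M):
    """Stack the columns of M into a single vector (column-major order)."""
    return M.flatten(order="F")

# The outer product vec(A) vec(B)^T is an n^2 x n^2 matrix whose rows
# are a_{ij} * vec(B)^T, with (i, j) running down the columns of A.
M = np.outer(vec(A), vec(B))
assert M.shape == (n**2, n**2)
assert np.allclose(M[0], A[0, 0] * vec(B))  # first row: a_{11} vec(B)^T
assert np.allclose(M[1], A[1, 0] * vec(B))  # second row: a_{21} vec(B)^T
```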

We see the rows of this $n^2 \times n^2$ matrix are $a_{ij} \operatorname{vec}(B)^T$. Taking a look at $$A \otimes B = \begin{pmatrix} a_{11} b_{11} & a_{11} b_{12} & \cdots & a_{11} b_{1n} & \cdots & \cdots & a_{1n} b_{11} & a_{1n} b_{12} & \cdots & a_{1n} b_{1n} \\ a_{11} b_{21} & a_{11} b_{22} & \cdots & a_{11} b_{2n} & \cdots & \cdots & a_{1n} b_{21} & a_{1n} b_{22} & \cdots & a_{1n} b_{2n} \\ \vdots & \vdots & \ddots & \vdots & & & \vdots & \vdots & \ddots & \vdots \\ a_{11} b_{n1} & a_{11} b_{n2} & \cdots & a_{11} b_{nn} & \cdots & \cdots & a_{1n} b_{n1} & a_{1n} b_{n2} & \cdots & a_{1n} b_{nn} \\ \vdots & \vdots & & \vdots & \ddots & & \vdots & \vdots & & \vdots \\ \vdots & \vdots & & \vdots & & \ddots & \vdots & \vdots & & \vdots \\ a_{n1} b_{11} & a_{n1} b_{12} & \cdots & a_{n1} b_{1n} & \cdots & \cdots & a_{nn} b_{11} & a_{nn} b_{12} & \cdots & a_{nn} b_{1n} \\ a_{n1} b_{21} & a_{n1} b_{22} & \cdots & a_{n1} b_{2n} & \cdots & \cdots & a_{nn} b_{21} & a_{nn} b_{22} & \cdots & a_{nn} b_{2n} \\ \vdots & \vdots & \ddots & \vdots & & & \vdots & \vdots & \ddots & \vdots \\ a_{n1} b_{n1} & a_{n1} b_{n2} & \cdots & a_{n1} b_{nn} & \cdots & \cdots & a_{nn} b_{n1} & a_{nn} b_{n2} & \cdots & a_{nn} b_{nn} \end{pmatrix}$$ we see that $a_{ij}\operatorname{vec}(B)^T$ is the transposed vectorization of the $n \times n$ submatrix at position $i, j$ in the $n^2 \times n^2$ matrix $A \otimes B$.

In other words, the rows of $\operatorname{vec}(A)\operatorname{vec}(B)^T$ are the transposed vectorizations of the $n \times n$ submatrices of $A \otimes B$ taken top to bottom, then left to right.
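This row-to-block correspondence can be verified numerically; the sketch below (assuming NumPy, with `vec` as column-major flattening) compares each row of $\operatorname{vec}(A)\operatorname{vec}(B)^T$ against the vectorized blocks of `np.kron(A, B)`:

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

vec = lambda M: M.flatten(order="F")
M = np.outer(vec(A), vec(B))   # vec(A) vec(B)^T
K = np.kron(A, B)              # Kronecker product

# Row j*n + i of M (0-based) is the transposed vectorization of the
# n x n block of K at block position (i, j): blocks taken top to
# bottom, then left to right, matching the order of entries in vec(A).
for j in range(n):
    for i in range(n):
        block = K[i*n:(i+1)*n, j*n:(j+1)*n]
        assert np.allclose(M[j*n + i], vec(block))
```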

In particular, $\operatorname{vec}(A)\operatorname{vec}(B)^T$ and $A \otimes B$ contain the same elements, just in different places, so their Frobenius norms are equal: $$\|\operatorname{vec}(A)\operatorname{vec}(B)^T\|_F = \|A \otimes B\|_F = \|A\|_F \|B\|_F.$$
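A minimal numerical check of the Frobenius-norm claim (assuming NumPy, `vec` as column-major flattening):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

vec = lambda M: M.flatten(order="F")
M = np.outer(vec(A), vec(B))
K = np.kron(A, B)

# Same entries in different places: equal Frobenius norms,
# both equal to ||A||_F * ||B||_F.
fro = lambda X: np.linalg.norm(X, "fro")
assert np.isclose(fro(M), fro(K))
assert np.isclose(fro(M), fro(A) * fro(B))
```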

The norm $\|\operatorname{vec}(A)\operatorname{vec}(B)^T\|_\infty$ is the greatest absolute row sum of the matrix. The absolute sum of the row indexed by $(i, j)$ is $$|a_{ij}| \sum_{k, l} |b_{kl}|,$$ so $$\|\operatorname{vec}(A)\operatorname{vec}(B)^T\|_\infty = \max_{i,j} |a_{ij}| \sum_{k, l} |b_{kl}|.$$ A similar argument works for $\|\operatorname{vec}(A)\operatorname{vec}(B)^T\|_1$, the greatest absolute column sum, which gives $\max_{k,l} |b_{kl}| \sum_{i, j} |a_{ij}|$.
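Both row-sum and column-sum formulas can be sanity-checked with NumPy (a sketch, with `vec` as column-major flattening; `np.linalg.norm` with `ord=np.inf` and `ord=1` computes exactly these induced norms):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

vec = lambda M: M.flatten(order="F")
M = np.outer(vec(A), vec(B))

# Infinity norm: greatest absolute row sum = max|a_ij| * sum|b_kl|.
assert np.isclose(np.linalg.norm(M, np.inf), np.abs(A).max() * np.abs(B).sum())
# 1-norm: greatest absolute column sum = max|b_kl| * sum|a_ij|.
assert np.isclose(np.linalg.norm(M, 1), np.abs(B).max() * np.abs(A).sum())
```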

$\operatorname{vec}(A)\operatorname{vec}(B)^T$ and $A \otimes B$ will in general have different ranks: for nonzero $A$ and $B$, $\operatorname{vec}(A)\operatorname{vec}(B)^T$ has rank one, while $A \otimes B$ has rank $\operatorname{rank}(A)\operatorname{rank}(B)$.
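A quick check of the rank statements (a sketch assuming NumPy; random Gaussian matrices are generically full rank, so the Kronecker product here has rank $n^2$):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))   # generically rank n
B = rng.standard_normal((n, n))   # generically rank n

vec = lambda M: M.flatten(order="F")
M = np.outer(vec(A), vec(B))

# The outer product has rank one; the Kronecker product has
# rank(A) * rank(B).
assert np.linalg.matrix_rank(M) == 1
assert (np.linalg.matrix_rank(np.kron(A, B))
        == np.linalg.matrix_rank(A) * np.linalg.matrix_rank(B))
```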

Since $\operatorname{vec}(A)\operatorname{vec}(B)^T$ has rank one, it has only one nonzero singular value, which determines its spectral norm. Indeed, we have the singular value decomposition $$\operatorname{vec}(A)\operatorname{vec}(B)^T = \|\operatorname{vec}(A)\| \|\operatorname{vec}(B)\| \frac{\operatorname{vec}(A)}{\|\operatorname{vec}(A)\| }\frac{\operatorname{vec}(B)^T}{\|\operatorname{vec}(B)\| },$$ so the single singular value, and hence the spectral norm, is $\|\operatorname{vec}(A)\| \|\operatorname{vec}(B)\|$. Note that $\|\operatorname{vec}(A)\|$ equals the Frobenius norm of $A$, which is the square root of the sum of the squares of the singular values of $A$.
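The spectral-norm formula can be confirmed numerically as well (a sketch assuming NumPy; `np.linalg.norm(M, 2)` returns the largest singular value):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

vec = lambda M: M.flatten(order="F")
M = np.outer(vec(A), vec(B))

# The single nonzero singular value of the rank-one outer product u v^T
# is ||u|| * ||v||, and ||vec(A)|| is just the Frobenius norm of A.
spectral = np.linalg.norm(M, 2)
assert np.isclose(spectral, np.linalg.norm(vec(A)) * np.linalg.norm(vec(B)))
assert np.isclose(spectral, np.linalg.norm(A, "fro") * np.linalg.norm(B, "fro"))
```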
