[Math] Making MATLAB svd robust to transpose operation

linear-algebra, na.numerical-analysis

I'm playing with MATLAB's svd function to compute the SVD of

X = [ 1     4     7    10
      2     5     8    11
      3     6     9    12 ]

When I type [U1, ~, ~] = svd(X), I get

U1 =

 -0.5045    0.7608    0.4082
 -0.5745    0.0571   -0.8165
 -0.6445   -0.6465    0.4082

But when I compute the SVD of the transpose of X with [~, ~, U2] = svd(X'), I get

U2 =

  0.5045    0.7608    0.4082
  0.5745    0.0571   -0.8165
  0.6445   -0.6465    0.4082

The first singular vectors point in opposite directions, but the others are the same. I know that the SVD is not unique, and the solution is correct because the first columns of V1 and V2 point in opposite directions as well. Still, I would expect MATLAB to return the same answer. I thought about adding a post-processing step that checks each pair of singular vectors and flips their directions to make them consistent under transposition, but I couldn't find a mathematically reasonable way of doing it.
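For illustration, the kind of ad hoc post-processing I have in mind would look like the sketch below: choose the sign of each pair so that the largest-magnitude entry of the stacked vector [u_k; v_k] is positive. Since transposing X only swaps the roles of u_k and v_k, the stacked vector is merely reordered and the convention picks the same sign. Still, this feels arbitrary rather than mathematically motivated (and pairs belonging to repeated singular values remain ambiguous anyway):

[U, S, V] = svd(X);
for k = 1:min(size(X))
    w = [U(:,k); V(:,k)];      % stack the k-th left/right singular vector pair
    [~, idx] = max(abs(w));    % locate the largest-magnitude entry of the pair
    if w(idx) < 0              % make that entry positive by flipping both vectors
        U(:,k) = -U(:,k);
        V(:,k) = -V(:,k);
    end
end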

Do you know why MATLAB (or LAPACK, since MATLAB uses LAPACK internally) computes it this way? Do you have any suggestions on how to make it consistent? Thanks.

Update: The reason I ask is that I wanted to see what happens if I apply higher-order singular value decomposition (HOSVD) to 2-dimensional matrices. In theory, it holds. Using the same mathematical notation, the SVD is formulated as follows:

Every $\mathbf{X} \in \mathbb{C}^{I_1 \times I_2}$ can be written as the product

$$\mathbf{X} = \mathbf{U}^{(1)} \cdot \mathbf{S} \cdot \mathbf{U}^{(2)^H} = \mathbf{S} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)}$$

in which

$\mathbf{U}^{(1)} = \left[ \begin{array}{cccc} U_{1}^{(1)} & U_{2}^{(1)} & \dots & U_{I_1}^{(1)} \end{array} \right] \in \mathbb{C}^{I_1\times I_1}$ is unitary.

$\mathbf{U}^{(2)} = \left[ \begin{array}{cccc} U_{1}^{(2)} & U_{2}^{(2)} & \dots & U_{I_2}^{(2)} \end{array} \right] \in \mathbb{C}^{I_2\times I_2}$ is unitary.

$\mathbf{S} \in \mathbb{C}^{I_1\times I_2}$ has the following properties:

  • pseudodiagonality: $\newcommand{\diag}{\mathop{\mathrm{diag}}} \mathbf{S} = \diag\left(\ \sigma_1, \sigma_2, \dots, \sigma_{\min(I_1,I_2)} \right)$

  • ordering: $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_{\min(I_1, I_2)} \geq 0$
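For concreteness, both the factorization and its behaviour under transposition can be checked numerically on the example matrix (a quick sanity check, nothing more):

X = [1 4 7 10; 2 5 8 11; 3 6 9 12];
[U1, S1, V1] = svd(X);
norm(X - U1*S1*V1')     % zero up to roundoff: X = U * S * V'
[W, S2, Z] = svd(X');   % transposing swaps the factors: W takes V1's role, Z takes U1's
norm(S1 - S2')          % the singular values are identical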

So, to compute it, I found the left and right singular vectors as if they were independent of each other, because I want to extend this to more than two dimensions later. To compute them independently, I calculated the eigenvectors of $\mathbf{XX'}$ and $\mathbf{X'X}$ as follows:

% Eigenvectors of X*X' are the left singular vectors of X.
[U{1}, ~] = eig(X*X');
U{1} = fliplr(U{1});   % reverse columns so the dominant vector comes first
% Eigenvectors of X'*X are the right singular vectors of X.
[U{2}, ~] = eig(X'*X);
U{2} = fliplr(U{2});
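(A caveat: eig makes no general promise about eigenvalue ordering; the ascending order that fliplr assumes is only what it typically produces for symmetric input. Sorting explicitly is safer; a sketch:)

% Sort eigenvectors by descending eigenvalue instead of assuming an order.
[V, D] = eig(X*X');
[~, order] = sort(diag(D), 'descend');
U{1} = V(:, order);
[V, D] = eig(X'*X);
[~, order] = sort(diag(D), 'descend');
U{2} = V(:, order);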

Either way, the vectors come out as I described above. Actually, it doesn't affect what I need, but I would like the singular values to be positive; with the left and right singular vectors paired up like this, some of them can become negative.
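One post-processing step that would at least restore nonnegative singular values (my own sketch, given U{1} and U{2} computed independently as above): form the core matrix S = U{1}' * X * U{2}, and wherever its diagonal is negative, flip the corresponding column of U{2}, which flips that column of the core:

S = U{1}' * X * U{2};              % core matrix; its diagonal holds signed singular values
for k = 1:min(size(S))
    if S(k,k) < 0
        U{2}(:,k) = -U{2}(:,k);    % flipping the k-th right singular vector ...
        S(:,k)    = -S(:,k);       % ... flips the k-th column of the core
    end
end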

Update 2: Alternatively, I used the svd() function as below, and it gives similar results. With the 'econ' option, only min(nDim1, nDim2) singular vectors are returned, so I set the remaining columns to zero vectors. The economy-size option (i.e., computing only the leading singular vectors) makes this practical when higher-order arrays [n-way arrays] are unfolded into matrices and the SVD is applied to the unfoldings.

% Economy-size SVD returns only min(size(X)) left singular vectors;
% pad each factor with zero columns to make it square.
[U{1}, ~, ~] = svd(X, 'econ');
U{1} = [U{1} zeros(size(U{1},1), size(U{1},1)-size(U{1},2))];
[U{2}, ~, ~] = svd(X', 'econ');
U{2} = [U{2} zeros(size(U{2},1), size(U{2},1)-size(U{2},2))];
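One caveat worth flagging (my own observation, not essential here): the zero columns make the padded factors non-unitary, whereas the HOSVD formulation above asks for unitary matrices. If unitarity matters, the missing columns can instead be completed to an orthonormal basis, e.g. via a full QR factorization; a sketch for the first factor:

[U1e, ~, ~] = svd(X, 'econ');
[Q, ~] = qr(U1e);                        % full QR: the trailing columns of Q
U{1} = [U1e, Q(:, size(U1e,2)+1:end)];   % span the orthogonal complement of U1e
% (analogously for U{2}, starting from svd(X', 'econ'))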

Best Answer

You may be interested in the following article, which addresses this issue:

R. Bro, E. Acar and T. G. Kolda. Resolving the sign ambiguity in the singular value decomposition. Journal of Chemometrics 22(2):135-140, February 2008. (doi:10.1002/cem.1122)

The MATLAB code can be downloaded here:

http://www.mathworks.com/matlabcentral/fileexchange/22118-sign-correction-in-svd-and-pca
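Very roughly, the method orients each singular vector so that it agrees in sign with the majority of the data it represents, weighting each data vector by its squared projection. A heavily simplified sketch of that idea (not the paper's exact algorithm, which also deflates the other singular components and reconciles conflicting left/right votes; use the linked code for the real thing):

[U, S, V] = svd(X);
for k = 1:min(size(X))
    cols = U(:,k)' * X;       % projections of the columns of X onto u_k
    rows = (X * V(:,k))';     % projections of the rows of X onto v_k
    s = sum(sign(cols).*cols.^2) + sum(sign(rows).*rows.^2);
    if s < 0                  % the majority points the other way: flip the pair
        U(:,k) = -U(:,k);
        V(:,k) = -V(:,k);
    end
end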

Best wishes, Tammy Kolda
