Aside stating the obvious: `eig` gives the results in ascending order while `svd` gives them in descending order; the singular values (and singular vectors, obviously) from `svd` are dissimilar to the eigenvalues and eigenvectors from `eig` because your matrix `ingredients` is not symmetric to start with. To paraphrase wikipedia a bit: "When $X$ is a normal and positive semi-definite matrix, the decomposition $X = UDU^*$ is also a singular value decomposition", not otherwise. ($U$ here being the eigenvectors of $XX^T$.)
So, for example, if you did something like:
rng(0,'twister')   % just set the seed
Q = random('normal', 0, 1, 5);
X = Q' * Q;        % so X is symmetric PSD
[U,S,V] = svd(X);
[A,B]   = eig(X);
max(abs( diag(S) - fliplr(diag(B)')' ))   % eig is ascending, so flip it around
% ans = 7.1054e-15  % AKA equal up to numerical precision.
you would find that `svd` and `eig` do give you back the same results. Whereas before, exactly because the matrix `ingredients` was not at least PSD (or even square, for that matter), well... you didn't get the same results. :)
Just to state it another way: $X = U\Sigma V^*$ practically translates into $X = \sum_{i=1}^{r} \sigma_i u_i v_i^T$ ($r$ being the rank of $X$), which itself means that you are (pretty awesomely) allowed to write $X v_i = \sigma_i u_i$. Clearly, to get back to the eigen-decomposition $X u_i = \lambda_i u_i$ you first need $u_i = v_i$ for all $i$; something that non-normal matrices do not guarantee. As a final note: the small numerical differences are due to `eig` and `svd` having different algorithms working in the background; a variant of the QR algorithm for `svd` and a (usually) generalized Schur decomposition for `eig`.
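If you want to see that numerically, a minimal sketch (the seed and the size here are arbitrary; any non-symmetric matrix will do):

rng(1,'twister')              % just set the seed
X = random('normal',0,1,4);   % 4x4 and (almost surely) non-normal
[U,S,V] = svd(X);
norm(X*V - U*S)   % ~machine precision: X v_i = sigma_i u_i always holds
norm(X*U - U*S)   % generally far from zero: u_i ~= v_i, no eigen-relation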
Specific to your problem, what you want is something akin to:
load hald;
[u,s,v] = svd(ingredients);
sigma   = ingredients' * ingredients;   % plain inner-product matrix, no centring
lambda  = eig(sigma);
max(abs( diag(s) - fliplr(sqrt(lambda)')' ))
% ans = 5.6843e-14
As you see, this has nothing to do with centring your data to have mean $0$ at this point; the matrix `ingredients` is not centred.
Now if you use the covariance matrix (and not a simple inner-product matrix as I did) you will have to centre your data. Let's say that `ingredients2` is your zero-meaned sample:
ingredients2 = ingredients - repmat(mean(ingredients), 13,1);
Then indeed you need the normalization by $1/(n-1)$:
[u,s,v] = svd(ingredients2);
sigma   = cov(ingredients);   % cov centres the data internally, so either matrix works
lambda  = eig(sigma);
max(abs( diag(s) - fliplr(sqrt(lambda*12)')' ))   % n = 13, so multiply by n-1 = 12
% ans = 4.7962e-14
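(And if the `cov(ingredients)` line above looks like cheating, a quick sanity check, using the `ingredients2` defined earlier:)

norm( cov(ingredients) - (ingredients2' * ingredients2)/12 )
% ~0: cov subtracts the column means and divides by n-1 = 12 itself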
So yeah, it is the centring now. I was a bit misleading originally because I worked with the notion of PSD matrices rather than covariance matrices. The answer before the editing was fine: it addressed exactly why your eigen-decomposition did not fit your singular value decomposition. With the editing I show why your singular value decomposition did not fit the eigen-decomposition. Clearly one can view the same problem in two different ways. :D
Best Answer
You can legitimately perform SVD on a matrix that has some negative values. Here's an example in R:
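(A minimal sketch of such an example; the entries below are arbitrary, any matrix with some negative values will do.)

A <- matrix(c( 1, -2,  3,
              -4,  5, -6), nrow = 2, byrow = TRUE)  # note the negative entries
s <- svd(A)
s$d                             # the singular values; all non-negative
s$u %*% diag(s$d) %*% t(s$v)    # recovers A up to numerical precision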
That doesn't necessarily mean it won't do what you want if your matrix happens to be all positive, either.
Note that the singular values (the diagonal of $\Sigma$ in $A = U\Sigma V^T$, which is $S$ in your notation) should always be non-negative. The vector `d` in the R example above contains that diagonal for the example; since $\Sigma$ is diagonal with the singular values as its only non-zero entries, everything in it will be non-negative.

Perhaps you should say more about what you're trying to do and why. It seems difficult to give much helpful advice with what you have said so far.