MATLAB: Is det(x) better than rcond(x) in determining non-singularity here


The code below demonstrates the following problem:
Analytically, the matrix x is NOT singular [1]. MATLAB believes it is singular due to the limits of floating point precision. The documentation says that using det(x) to determine whether a matrix is singular is poor practice, and that using rcond(x) (for instance) is better [2]. However, in this case det(x) yields the analytical value of the determinant. Why does det(x) accurately determine that x is not singular while rcond(x) does not?
a = -1.3589e-10;
b = -1.7108e-9;
c = 12.5893;
d = -1e11;
x = [a b; c d];
analytical_determinant = a*d - b*c; %[3]
matlab_determinant = det(x);
eigens = eig(x);
eigen_determinant = eigens(1)*eigens(2); %[4]
rcond_result = rcond(x);
disp('Let''s try to take an inverse:')
inv(x);
fprintf('rcond result: %f\nanalytical det: %f\ndet(x) result: %f\ndeterminant from eig: %f\n',rcond_result,analytical_determinant,matlab_determinant,eigen_determinant);
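(A side note: rcond_result is so small that the %f format above prints it as 0.000000; reprinting with %g, a minor variation on the fprintf above, makes the actual magnitudes visible.)
fprintf('rcond: %g\nanalytical det: %g\ndet(x): %g\ndet from eig: %g\n', ...
    rcond_result, analytical_determinant, matlab_determinant, eigen_determinant)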

Best Answer

Let me suggest that you misunderstand det, and why it is a terribly poor tool to use to learn if a matrix is singular or not. Yes, we are taught to use det by teachers, in school. Hey it works there, so why not use it in practice? The problem is those same teachers stopped short of explaining the real issues. I've often said that the use of det is a bad meme, one that just keeps propagating forever. One person learns about it, so they teach others. That it is a bad idea? Nobody ever told them. So det lives on - IT'S ALIVE! (With my apologies to the movie where that line was used. Thanks Mel.)
a = -1.3589e-10;
b = -1.7108e-9;
c = 12.5893;
d = -1e11;
x = [a b; c d];
So, is the 2x2 matrix singular or not? It is NUMERICALLY singular: in double precision floating point arithmetic it is risky to rely on such a matrix to solve a system of linear equations, even though an inverse exists in theory.
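To see what rcond is reporting for that 2x2, here is a quick check (a sketch of my own, using the explicit inverse rather than the condition estimator rcond actually uses):
x = [-1.3589e-10 -1.7108e-9; 12.5893 -1e11];
kappa = norm(x,1)*norm(inv(x),1)  % 1-norm condition number, on the order of 1e21
                                  % (inv will warn that x is close to singular - that is the point)
1/kappa                           % roughly what rcond(x) returns
eps                               % 2.2204e-16, the size of double precision roundoff
That reciprocal condition number sits many orders of magnitude below eps, which is exactly what rcond means by numerically singular.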
A big problem with det is that it is sensitive to scaling. What result would indicate to you that a matrix is singular based on the determinant? Remember that a numerically computed determinant will virtually NEVER be exactly zero. For example, consider this matrix, A:
A = rand(4,5);
A = [A;sum(A,1)];
A
A =
0.031833 0.82346 0.034446 0.7952 0.64631
0.27692 0.69483 0.43874 0.18687 0.70936
0.046171 0.3171 0.38156 0.48976 0.75469
0.097132 0.95022 0.76552 0.44559 0.27603
0.45206 2.7856 1.6203 1.9174 2.3864
det(A)
ans =
3.0549e-18
Is A singular? It darn tootin is! The last row of A is the sum of the other 4 rows. But wait a second, det(A) is not returned as zero. Wwwwhhhhaaaatttt happened?
And, if you claim that 3e-18 is close enough to zero, then just multiply A by 10000.
det(10000*A)
ans =
5005.1
Gosh, 10000*A is surely not singular based on what det just told us. But I thought that we decided A was singular? Mathematically, A is indeed singular. We know that from theory. Theory would never lie, would it?
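The scale sensitivity is no accident either: for an n-by-n matrix, det(k*A) = k^n*det(A), so multiplying a 5x5 matrix by 10000 multiplies its determinant by 1e20. Here is a sketch of my own (same construction as above, so the random values will differ) showing that the other tools simply do not care about scale:
A = rand(4,5);  A = [A; sum(A,1)];  % last row is again the sum of the other four
det(A)          % tiny, since A is singular up to rounding error
det(1e4*A)      % roughly 1e20 times larger, yet A is no less singular
rank(A)         % 4, not 5: rank sees the dependent row
rank(1e4*A)     % still 4
rcond(A)        % essentially zero
rcond(1e4*A)    % still essentially zero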
Or try this:
A = eye(1200);
det(A)
ans =
1
Hey, great! Det knows that an identity matrix is not singular! But then, what about this?
det(2*A)
ans =
Inf
det(A/2)
ans =
0
Hmm. While I am sure you know that A is remarkably well-conditioned for solving ANY system of equations, or for computing a matrix inverse for that matter, the same applies to A/2 as well as 2*A. So why does det tell us such a confusing story on such simple matrices?
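If you want to see where the Inf and the 0 come from, the arithmetic alone tells the story (a small aside of mine):
n = 1200;
2^n      % Inf: det(2*eye(n)) = 2^n, far beyond realmax (about 1.8e308)
2^(-n)   % 0:   det(eye(n)/2) = 2^(-n), below the smallest subnormal double (about 4.9e-324)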
Or, how about this matrix? Let's see, I think I recall that magic squares of even order are all singular matrices. It is certainly true for magic(4), at least.
A = magic(4)
A =
16 2 3 13
5 11 10 8
9 7 6 12
4 14 15 1
rank(A)
ans =
3
cond(A)
ans =
8.148e+16
So rank and cond both think that A is singular. Since A is composed purely of integers, its determinant must be an exact integer. The symbolic toolbox can compute it for us exactly.
det(sym(A))
ans =
0
det(A)
ans =
-1.4495e-12
But det fails here on A itself. Why? Because det does not compute large determinants by the cofactor expansion you learned in school; that would take far too long. Instead, det uses linear algebra, a floating point LU factorization, to compute the determinant. So, if we were to look at the internal code for det, it would look something like this:
% factorize, then multiply the pivots on the diagonal of U
% (the sign of the row permutation is neglected in this sketch)
[~,U] = lu(A);
prod(diag(U))
ans =
-1.4495e-12
Here we can compute the determinant of a matrix using no more than a call to LU. The LU factorization is an O(n^3) operation in terms of flops, whereas computing the determinant by cofactor expansion, as you were taught in school, is an O(factorial(n)) operation. The floating point operations inside that factorization introduce tiny rounding errors, which is why det(A) came back as -1.4495e-12 instead of the exact zero the symbolic computation gave us. det probably uses the high school formula for a 2x2 or 3x3 determinant as special cases, but it should use LU for matrices of size 4x4 or larger. The end result is that det is not something you can even trust to compute the determinant you so very much desire - to see if your matrix is singular.
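That special-case behavior is, as far as I can tell, also why det "worked" on your original 2x2: a 2-by-2 determinant takes only a handful of floating point operations, so essentially no roundoff accumulates. A sketch of my own tying it back to your example:
a = -1.3589e-10; b = -1.7108e-9; c = 12.5893; d = -1e11;
x = [a b; c d];
a*d - b*c    % the textbook 2x2 formula
det(x)       % agrees to within rounding error
rcond(x)     % still essentially zero: an accurate determinant does not make x usable
An accurate determinant is not an informative one; the conditioning is what decides whether you can trust anything you compute with x.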
The problem is that det is NOT a good tool to identify whether a matrix is well-posed for such a problem. Yes, again, I know your teacher said that, mathematically, if a matrix is singular then the determinant is zero. But in practice det is a bad tool for identifying when a matrix is singular, because we cannot trust what it reports. Just because a tool works in theory does not make it a good tool for practical use. And here, practical use involves floating point arithmetic. So whether a computed determinant is near zero or not, it still does not tell you if the matrix was actually singular.
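If you want to see both failure modes at once, here is a pair of matrices of my own choosing (not from the question):
B = 1e-5*eye(10);        % perfectly conditioned, yet with a tiny determinant
det(B)                   % about 1e-50, "nearly zero", but B is nowhere near singular
rcond(B)                 % 1 (up to rounding): as well conditioned as it gets
C = diag([1e15 1e-15]);  % condition number around 1e30
det(C)                   % essentially 1, nowhere near zero
rcond(C)                 % about 1e-30: numerically singular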
Instead, we tell you to rely on tools like cond, rcond, svd, and rank to identify singular matrices, and to avoid det like the plague on mathematics it rightfully is. (There are some circumstances where it is arguably appropriate to use det. However, testing for singularity is a problem where you should essentially NEVER use det.)
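And applied to the matrix from the question, those tools all tell the same story (a closing sketch of mine; run it to see the exact numbers on your machine):
x = [-1.3589e-10 -1.7108e-9; 12.5893 -1e11];
rank(x)    % 1: the tiny singular value falls below rank's scale-aware tolerance
cond(x)    % around 1e21
svd(x)     % one singular value near 1e11, the other near 1e-10
rank returning 1 for an analytically nonsingular 2x2 is the same message rcond was sending all along: in double precision, x behaves like a singular matrix.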