MATLAB: Specify a tolerance for backslash operation or linsolve while A is non-square? (QR solve)
Tags: MATLAB, matrix, matrix manipulation
Hi, I need to find the solution of Ax = b.
Since my A is non-square, I need to use QR, which is built into both the \ and linsolve commands. I need to specify the accuracy (tolerance for error). Could someone help, please? Thanks!
Best Answer
What do you mean by "tolerance"?
A QR solution has no need for a tolerance, nor does linsolve, as they are not iterative methods.
Anyway, you cannot specify the desired accuracy of the solution of a linear system like this, solved using QR. Just wanting a more accurate solution is not relevant. Either the solution exists and is unique, or there are infinitely many solutions, all of which are equally good (or poor), or there is no exact solution, and the least-squares solution that qr/linsolve returns is entirely valid.
Perhaps you have a singular system, or one that is nearly singular. In that case, a tolerance is still not really relevant. Yes, you could decide which singular values to discard, in the case of a pinv-style solution. But that is not a "tolerance" in the way tolerances are traditionally used. And if your system is singular, then again, tolerances have no real meaning.
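If a singular-value cutoff is what you had in mind, note that pinv does accept a second argument for exactly that purpose. A minimal sketch (the matrix here is a made-up, nearly rank-deficient example):

```matlab
% pinv's second argument is a singular-value cutoff, not an accuracy
% tolerance: singular values of A below tol are treated as exactly zero.
A = [1 2; 2 4.0000001; 3 6];   % nearly rank-deficient 3x2 example
b = [1; 2; 3];
x = pinv(A, 1e-6) * b;         % discard singular values below 1e-6
```

That cutoff changes which directions of A are considered numerically zero; it does not make the least-squares fit itself any more accurate.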
As far as your matrix being non-square, that just means (assuming you have an over-determined system) that you will never be able to achieve an exact solution. (Well, not entirely true. It IS possible that an over-determined system has an exact solution. But that is terribly rare.) So if no exact solution exists, then specifying a tolerance is again a meaningless goal.
If your problem is non-square in the sense that it is under-determined, then again, a tolerance is irrelevant. A solution may exist, or not. If one solution does exist, then there will be infinitely many solutions, all equally good. A tolerance is meaningless here.
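For an over-determined system, backslash already returns the least-squares solution via QR. There is no tolerance to set, but you can always check the residual yourself to see how good that solution is. A sketch with made-up data:

```matlab
% Over-determined system: 10 equations, 3 unknowns.
A = rand(10, 3);
b = rand(10, 1);
x = A \ b;             % QR-based least-squares solution
res = norm(A*x - b);   % residual norm: how close A*x gets to b
```

The residual is generally nonzero here, and no setting can reduce it further; x is already the minimizer.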
I think that perhaps you are confused, thinking that the solution of a linear system is like that of a nonlinear system, where you will employ convergence tolerances, etc. Or, perhaps you think that simply by cranking down on a tolerance, you can always gain an arbitrarily good solution. (Many people seem to think that. They are flat out wrong.) Or perhaps you are thinking that a solution like that provided by linsolve/qr is like an iterative solver, like that from lsqr.
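If an iterative solver with a convergence tolerance really is what you want, lsqr is the tool. Its tolerance controls when the iteration stops (on the relative residual), not the accuracy of the underlying least-squares problem. A sketch, with a made-up sparse rectangular system:

```matlab
% lsqr(A, b, tol, maxit): iterate until the relative residual
% norm(b - A*x)/norm(b) drops below tol, or maxit iterations pass.
A = sprand(500, 100, 0.05);   % made-up 500x100 sparse system
b = rand(500, 1);
tol = 1e-8;                   % stopping tolerance, NOT solution accuracy
maxit = 200;
x = lsqr(A, b, tol, maxit);
```

Even with tol set very small, lsqr can only converge to the least-squares solution; it cannot make an inconsistent system consistent.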
If this has not cleared things up, then I will suggest you need to explain CLEARLY what you want to do, and why you think that a tolerance is a meaningful thing to ask for here. As well, you need to explain why it is that you think what I have said above does not apply.
You can define anything you want to be sparse if you so desire. So sparse(ones(1000)) will produce a sparse matrix. ;-)
But seriously, this matrix really is reasonably sparse. Just read what is shown on the page you direct to!
We see that it is shown to be an 1899x1899 square matrix. There are 20296 non-zeros in the matrix.
20296/1899
ans =
10.688
So on average, roughly 10.7 non-zeros per row of a matrix, where each row has 1899 elements.
20296/1899/1899
ans =
0.0056281
Total density of around 0.6% non-zeros. Is that matrix sparse? Well, yes, it is. Not massively so, but it is sparse. Since I don't have that matrix, I'll do a little memory comparison on a random one with the same density.
As = sprand(1899,1899,0.0056);
Af = full(As);
whos As Af
Name      Size         Bytes       Class     Attributes
Af        1899x1899    28849608    double
As        1899x1899    337568      double    sparse
So we see that storing the matrix as sparse gives me a decrease in memory of almost a factor of 100. (Sparse matrix storage also requires us to store the locations of the non-zero elements, so it is not quite as efficient as we might like.)
b = rand(1899,1);
timeit(@() Af*b)
ans =
0.0023482
timeit(@() As*b)
ans =
6.1858e-05
And working with the matrix in a matrix multiply does give significant savings.
Much of the time, when you work with a sparse matrix, you will perform a decomposition of some sort. Odds are that will not produce a speed increase here, because the matrix itself is not a nicely, tightly banded matrix. I think much of the time, people think of a sparse matrix in the form of something like a tridiagonal matrix. A matrix factorization of a tridiagonal matrix will produce no fill-in at all. On the matrix you show, if we tried to do a Cholesky or LU factorization, for example, the result will be a virtually completely full triangular matrix.
tic,[L,U] = lu(Af);toc
Elapsed time is 0.205769 seconds.
tic,[L,U] = lu(As);toc
Elapsed time is 3.032499 seconds.
As you see, not all computations on such a matrix will see a gain in speed. Here, the factors end up as nearly full triangular matrices. So treating that matrix as sparse, IF you intend to compute a decomposition of it, will be a time loss, because the overhead of the sparse algorithms is too costly.
nnz(L)
ans =
1177842
nnz(U)
ans =
1208969
numel(L)
ans =
3606201
That is to be expected on this matrix. It does not mean the original was not sparse. It is indeed sparse. Just perhaps not as massively sparse as you may want a matrix to be. Sometimes sparse must be measured by the eye of the beholder, in proper context.
We understand what a matrix multiply means. :) But it sounds like you don't appreciate the use of sparse matrices in MATLAB. Just because your matrix has zero elements in it does not make it a matrix stored in sparse form.
If your sparse matrix is indeed stored in sparse format, then MATLAB will AUTOMATICALLY use highly efficient multiplication.
A = sprand(1000,1000,0.005);
B = sprand(1000,1000,0.005);
Af = full(A);
Bf = full(B);
whos A B Af Bf
Name      Size         Bytes      Class     Attributes
A         1000x1000    87656      double    sparse
Af        1000x1000    8000000    double
B         1000x1000    87832      double    sparse
Bf        1000x1000    8000000    double
But sparse matrices are not only there to save space. MATLAB does use the known sparsity in a multiplication.
timeit(@() Af*Bf)
ans =
0.078021563737
timeit(@() A*B)
ans =
0.000990737737
If you want something faster than the already fast multiplication built into MATLAB that works with sparse matrices, then, no. Not gonna happen.