MATLAB: Is there any better way for doing this specific “matrix multiplication”

Tags: matrix multiplication, sparse

What I want to calculate is as follows:
clc, clear all
na = 1000;
nb = 500;
B = sprandn(na, nb, 0.1);
C = sprandn(na, nb, 0.1);
tic;
for i = 1:500
    a = rand(na, 1);
    results = (a.*B)'*C;
end
toc;
I think you will find it very simple.
But it takes quite a long time to finish these calculations. (In my original code, the matrices are much bigger and many more for-loop iterations have to be run.)
As you can see, the vector {a} changes in every iteration, whereas the matrices [B] and [C] are constant inside the for-loop.
Still, [B] and [C] are multiplied by the vector {a} every single time through the loop.
I'd like to pre-calculate a matrix from [B] and [C] before starting the for-loop, to avoid multiplying [B] and [C] by the vector {a} repeatedly.
But I have no idea how.

Best Answer

Full matrices are faster here, because your matrices are simply not sparse enough: you gain nothing from storing them as sparse.
na = 1000;
nb = 500;
B = sprandn(na, nb, 0.1);
C = sprandn(na, nb, 0.1);
a = rand(na, 1);   % one instance of the vector, so the snippets below run stand-alone
So 10% non-zero? Are ya kidding us? Those matrices are not even usably sparse. Just use full matrices. When you multiply them, fill-in will create matrices that have virtually no zeros anyway. So you will be storing the matrices as sparse, but they will really be full matrices. So no gain from being sparse, yet all the drawbacks of sparse storage.
Drawbacks???? Yes. Drawbacks. If you store a matrix with essentially no zeros in it as sparse, it will take longer to work with, and it will use MORE memory.
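To see that fill-in concretely, here is a quick check I would add (same sizes as above; the exact byte counts will vary on your machine): the product comes out essentially dense, and the sparse copy of the result uses roughly twice the memory of a full copy.
R = (a.*B)'*C;               % result stays sparse, because B and C are sparse
nnz(R)/numel(R)              % density of the result: very close to 1
Rfull = full(R);
wS = whos('R'); wF = whos('Rfull');
[wS.bytes wF.bytes]          % sparse storage of a nearly-full matrix costs more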
Anyway, next, let's look at what you are trying to optimize. First, DON'T use tic and toc. Use timeit. No loop needed.
You want to optimize the product (a.*B)'*C.
Again, the way to test whatever you are doing is to use timeit.
timeit uses a loop internally. It deals with all the things it needs to do to give the best estimate of the time required. For example, the first few times you call any function will take just a wee bit longer. (Sometimes called warmup, because the function needs to get into cache.)
timeit(@() (a.*B)'*C)
ans =
0.022081
BB = full(B);
CC = full(C);
timeit(@() (a.*BB)'*CC)
ans =
0.01559
So, it is faster to just work with full matrices. B and C are not very large, nor, indeed, very sparse. Sparse is just a mirage here.
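If you keep your original loop, that conversion is a one-time cost paid before the loop. A minimal sketch of the reworked loop (my addition, nothing else changed):
BB = full(B);                % convert once, outside the loop
CC = full(C);
for i = 1:500
    a = rand(na, 1);
    results = (a.*BB)'*CC;   % identical product, now in full arithmetic
end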
Next, is there a more efficient way to compute (a.*B)'*C? a is a column vector, so it is implicitly expanded: every row of B is scaled by the corresponding element of a. However, you can actually gain a little if you create a as a sparse diagonal matrix instead.
aa = spdiags(rand(na,1),0,na,na);
timeit(@() (aa*BB)'*CC)
ans =
0.014551
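(A sanity check I added: the aa above was built from a fresh rand vector, so it does not reproduce the earlier product. Build it from the same vector a and the two formulations agree up to round-off.)
aa = spdiags(a, 0, na, na);              % sparse diagonal holding the same a
norm((a.*BB)'*CC - (aa*BB)'*CC, 'fro')   % essentially zero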
But the time goes back up if you use the sparse versions of B and C.
timeit(@() (aa*B)'*C)
ans =
0.020147
The very funny thing is, if you are purely interested in speed here, the matrices that you created as sparse should really have been full. And for speed, the only thing you wanted to be sparse was a vector that SHOULD have been a sparse diagonal matrix.
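Putting it all together for your loop, here is a sketch (my addition, not a measured benchmark; building the sparse diagonal every iteration has its own small cost, so check it with timeit at your real sizes):
na = 1000;
nb = 500;
BB = full(sprandn(na, nb, 0.1));          % create once, store full
CC = full(sprandn(na, nb, 0.1));
for i = 1:500
    aa = spdiags(rand(na,1), 0, na, na);  % the only sparse object: a diagonal
    results = (aa*BB)'*CC;
end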