S=spconvert(dlmread('data.csv',','));
whos S:   400,000×400,000 double sparse, 1.5GB
nnz(S):   100,000,000
numel(S): 1.6e11

S_2 = S*S;
whos S_2:   400,000×400,000 double sparse, 112GB
nnz(S_2):   7e9 ????
numel(S_2): 1.6e11
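A quick sanity check on the numbers above (a sketch, assuming MATLAB's roughly 16 bytes per stored nonzero in a sparse double matrix: an 8-byte value plus an 8-byte row index, ignoring the small column-pointer array) suggests the `whos` sizes are consistent with the reported `nnz` values, so this is real fill-in rather than a reporting glitch:

```python
# Approximate MATLAB sparse-double storage cost: 8-byte value + 8-byte
# row index per nonzero (column pointers add comparatively little).
BYTES_PER_NZ = 16

s_bytes = 100_000_000 * BYTES_PER_NZ     # nnz(S)   = 1e8
s2_bytes = 7_000_000_000 * BYTES_PER_NZ  # nnz(S_2) = 7e9

print(s_bytes / 2**30)   # ~1.49 GiB, matching the ~1.5GB reported for S
print(s2_bytes / 1e9)    # 112.0 GB, matching the 112GB reported for S_2
```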
The original matrix S is sparse and strictly upper triangular, so I expected the number of non-zero elements to shrink every time it is multiplied by itself. The data for S is imported from a CSV file that has integer indices and values to 6 d.p. Is this likely to be a floating-point precision issue from multiplying a value that is 0 to 6 d.p. by itself? I can't replicate the problem with small matrices, so it must be something in my data. I know the matrix is big, but running this is no problem on a 512GB RAM machine.
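For what it's worth, the small-matrix tests may simply not have hit a fill-in pattern. Here is a minimal sketch (a hypothetical 7×7 example in plain Python, no toolboxes) of a strictly upper-triangular matrix whose square has *more* nonzeros than the original, even though repeated multiplication must eventually reach zero (nilpotency): three rows feed one "hub" column, which in turn feeds three columns, so squaring produces 3×3 product entries from 3+3 inputs.

```python
# Strictly upper-triangular 7x7 matrix: rows 0,1,2 -> column 3 -> columns 4,5,6.
n = 7
A = [[0] * n for _ in range(n)]
for i in (0, 1, 2):
    A[i][3] = 1          # three entries above the diagonal
for j in (4, 5, 6):
    A[3][j] = 1          # three more entries above the diagonal

def matmul(X, Y):
    m = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def nnz(X):
    return sum(1 for row in X for v in row if v != 0)

A2 = matmul(A, A)
A3 = matmul(A2, A)
print(nnz(A), nnz(A2), nnz(A3))   # 6 9 0: nnz grows before collapsing to zero
```

So a jump from 1e8 to 7e9 nonzeros does not by itself contradict strict upper-triangularity; the nnz of a single product depends on the sparsity pattern, not just on nilpotency.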
Best Answer