I have some simple code for testing parfor with my local profile (4 cores):
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% code 1
matlabpool open 4   % or 2, or 1
tic;
parfor i = 1:30
    res = 0;
    for n = 1:3000000
        res = res + sin(n) + cos(n);
    end
    A(i) = res;
end
toc;
matlabpool close
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% code 2
tic;
for i = 1:30
    res = 0;
    for n = 1:3000000
        res = res + sin(n) + cos(n);
    end
    A(i) = res;
end
toc;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
I executed code 1 with 8, 4, 2, and 1 labs, and I also executed code 2. The results are:
code 1 - 8 labs (4 cores with hyperthreading) --> 15 sec
code 1 - 4 labs --> 22 sec
code 1 - 2 labs --> 35 sec
code 1 - 1 lab  --> 65 sec
code 2 (serial) --> 18 sec
Given these results, it seems better to use code 2 and release all the other cores (especially if you also count the time needed to run 'matlabpool open' and 'matlabpool close'). I have read this: http://www.mathworks.co.uk/matlabcentral/answers/44734-there-is-aproblem-in-parfor
but in my case the execution time is much longer than the setup time of the parallel mechanism.
If there is nothing wrong with my results, my main question is: when is it better to use parfor?
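For what it's worth, one way to judge the overhead fairly is to time the pool startup, the parfor loop, and the serial loop separately. A minimal sketch, assuming the same pre-R2013b matlabpool API used above (newer releases would use parpool / delete(gcp('nocreate')) instead):

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% sketch: time setup, parallel loop, and serial loop separately
tic; matlabpool open 4; tSetup = toc;   % one-off pool startup cost

tic;
parfor i = 1:30
    res = 0;
    for n = 1:3000000
        res = res + sin(n) + cos(n);
    end
    A(i) = res;
end
tPar = toc;                             % parallel loop time only

matlabpool close

tic;
for i = 1:30
    res = 0;
    for n = 1:3000000
        res = res + sin(n) + cos(n);
    end
    B(i) = res;
end
tSer = toc;                             % serial loop time

fprintf('setup %.1f s, parfor %.1f s, serial %.1f s\n', tSetup, tPar, tSer);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

This makes it explicit whether the gap you measured comes from the pool startup or from per-iteration scheduling/communication overhead inside the parfor loop itself.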
Best Answer