Dear community, I want to use the GPU in my system to accelerate matrix multiplication. The list of GPU-enabled functions shows that "mtimes" is supported, so it should work with gpuArrays.
My system is:
MATLAB R2016b
Visual Studio 2013
CUDA 9.0
GTX 1070
I start by defining two matrices as gpuArrays:
>> gDexMat = gpuArray(DexMat);
>> gx_PD = gpuArray(x_PD);
>> whos gDexMat gx_PD
  Name          Size                Bytes  Class       Attributes

  gDexMat       200000x1000             4  gpuArray
  gx_PD         1000x200000             4  gpuArray
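For reference, here is a self-contained version of the setup; the random matrices are only stand-ins of the same sizes, since the actual DexMat and x_PD are not shown here, and single precision is an assumption:

% Stand-in data of the same sizes as my real matrices (single precision
% is an assumption; the actual class of DexMat and x_PD may differ).
DexMat = rand(200000, 1000, 'single');
x_PD   = rand(1000, 200000, 'single');

% Transfer both operands to GPU memory
gDexMat = gpuArray(DexMat);
gx_PD   = gpuArray(x_PD);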
I then perform the multiplication, gather the data, and time how long it takes:
>> tic; gIQMat = gx_PD*gDexMat; IQMat = gather(gIQMat); toc   % doing the demodulation
Elapsed time is 0.528126 seconds.
>> tic; gIQMat = gx_PD*gDexMat; IQMat = gather(gIQMat); toc
Elapsed time is 0.414626 seconds.
>> tic; gIQMat = gx_PD*gDexMat; IQMat = gather(gIQMat); toc
Elapsed time is 0.405255 seconds.
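A side note on methodology: since gpuArray operations run asynchronously, tic/toc here also measures the gather transfer, which is what forces synchronization. To time the multiplication on its own, something like this sketch should work (gputimeit and wait(gpuDevice) take care of synchronizing with the device; the names f, tMul, tMul2 are just illustrative):

% Time only the matrix multiplication, excluding the host transfer.
% gputimeit runs the function several times and synchronizes properly.
f = @() gx_PD * gDexMat;
tMul = gputimeit(f);

% Alternative: synchronize explicitly before stopping the clock.
tic;
gIQMat = gx_PD * gDexMat;
wait(gpuDevice);             % block until the kernel has finished
tMul2 = toc;
IQMat = gather(gIQMat);      % device-to-host transfer, timed separately if needed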
In this example, the execution time is fairly constant. However, right after I start MATLAB, the execution can take extremely long, up to 45 seconds. Even the very first call:
>> gDexMat = gpuArray(DexMat);
can sometimes take up to a minute.
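My only guess so far is that the very first GPU call triggers one-time device initialization. If that is the cause, a warm-up at session start should move the cost out of the actual computation; this is just a sketch, I have not verified that it helps:

% Warm-up sketch (assumption: the one-off delay is first-use device
% initialization, not the multiplication itself).
gpuDevice(1);                            % select and initialize GPU #1 once

% A tiny dummy operation to trigger any remaining one-time setup:
d = rand(10, 'gpuArray') * rand(10, 'gpuArray');
wait(gpuDevice);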
Does anybody have any ideas why this is happening?
Best, Ludwig