MATLAB: Strange sparse matrix error on GPU

Tags: gpu, maximum variable size, memory

I have a sparse matrix with a relatively small (16 MB) memory footprint:
>> whos A
  Name         Size               Kilobytes  Class     Attributes
  A       100000x100000           16406      double    sparse
When I send it to the GPU and add the scalar 0, I get a puzzling error:
>> gpuArray(A)+sparse(0);
Error using +
Maximum variable size allowed on the device is exceeded.
I really cannot see why this operation would cause any excessively large memory allocation. This is MATLAB R2018a, with the GPU described below. Does anyone have insight into why this occurs?
>> gpuDevice
ans =
CUDADevice with properties:
Name: 'GeForce GTX TITAN X'
Index: 1
ComputeCapability: '5.2'
SupportsDouble: 1
DriverVersion: 9.2000
ToolkitVersion: 9
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 1.2885e+10
AvailableMemory: 1.2259e+10
MultiprocessorCount: 24
ClockRateKHz: 1076000
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 0
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1

Best Answer

The maximum number of elements in a gpuArray is intmax('int32') (2,147,483,647). Adding a scalar to a sparse matrix produces a full result, because in general every implicit zero entry would change. So here the scalar addition forces your sparse matrix to become a full 1e5-by-1e5 array, which has 1e10 elements and exceeds the limit. (In this particular case, because the scalar being added is exactly zero, the operation could in theory be a no-op, but it is not special-cased.)
>> (1e5*1e5) <= intmax('int32')
ans =
logical
0
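One way around it, sketched below under the assumption that the scalar really is zero (as in the question): skip the addition entirely, since the result is mathematically unchanged and the matrix stays sparse on the GPU. The matrix A here is a stand-in built with sprand, not the asker's actual data.

% The element count a full result would need, vs. the gpuArray limit:
maxElems  = double(intmax('int32'));   % 2147483647
fullElems = 1e5 * 1e5;                 % 1e10 elements if A densifies

A = sprand(1e5, 1e5, 1e-6);            % example sparse matrix (~1e4 nonzeros)
G = gpuArray(A);                       % sparse gpuArray, small footprint
c = 0;
if c == 0
    B = G;                             % equal to G + c; stays sparse, no error
else
    % A nonzero scalar add genuinely densifies the matrix, so the
    % computation needs restructuring (e.g. keep the scalar term
    % implicit in later operations) rather than forming the full array.
    error('G + c would need %g elements; limit is %g.', fullElems, maxElems);
end

For a nonzero scalar there is no free lunch: the full 1e5-by-1e5 double result would be about 80 GB, so the usual fix is to restructure the algorithm so the rank-deficient "plus a constant" term never has to be materialized.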