Hi. It sounds like you're using the same card for both graphics and computation - this will put quite severe limitations on the amount of memory you have available.
1. The simplest way is to gather() your gpuArray back to the CPU and call whos. This reports the space the array occupies in bytes (excluding a small amount of header overhead). The arithmetic is simple: numel(X) * 8 bytes for double-precision arrays (the default type), or numel(X) * 16 if the array is complex.
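As a minimal sketch (the array size here is illustrative):

```matlab
% Estimate the memory footprint of a gpuArray by gathering it to the CPU.
G = gpuArray.eye(1000);        % 1000-by-1000 double identity on the GPU
X = gather(G);                 % copy back to host memory
whos X                         % Bytes column shows numel(X) * 8 for real doubles

% The same figure, computed directly:
bytesReal    = numel(X) * 8;   % real double precision
bytesComplex = numel(X) * 16;  % if the array were complex
```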
2. There is no other way to get an Out of Memory error when calling gpuArray.eye(). Other functions can run out of memory because they need to create intermediate arrays as part of the computation, but even then the error is only thrown when an allocation requests more memory than is available on the device.
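You can see this directly, and check the free device memory beforehand with gpuDevice (the size n below is illustrative, chosen to exceed a typical card's memory):

```matlab
% Check free device memory, then attempt an allocation that may exceed it.
d = gpuDevice;
fprintf('Free GPU memory: %.2f GB\n', d.FreeMemory / 1e9);

n = 50000;                     % illustrative: n*n*8 bytes = ~20 GB of doubles
try
    A = gpuArray.eye(n);       % a single allocation of n*n*8 bytes
catch err
    disp(err.message)          % out-of-memory if the device cannot hold it
end
```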