Received from Keith Brown on Wed, Nov 25, 2015 at 03:47:32PM EST:
Here is the context.
Here a.T is a view and doesn't take up much memory.
If I did
np.dot(a.T.copy(), a) it would use more memory.
This is part of my memory-saving quest on the GPU; I'm trying to find
ways around it...
If you tell skcuda.linalg.dot() to treat its first argument as transposed, you
don't need to copy the matrix:
skcuda.linalg.dot(a, a, 'T', 'N')
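The same view-versus-copy behavior can be seen on the CPU with plain NumPy, which is a minimal sketch of why the transpose flag saves memory (the skcuda call above computes the same product a^T * a without ever materializing the transpose):

```python
import numpy as np

a = np.random.rand(1000, 500)

# a.T is a view: it shares a's buffer, so no extra allocation happens.
assert a.T.base is a

# a.T.copy() materializes the transpose into a fresh buffer the same
# size as a, doubling the memory footprint of the operand.
b = a.T.copy()
assert b.base is None
assert b.nbytes == a.nbytes

# Both expressions compute the identical product a^T @ a; only the
# second required the extra copy.
r1 = np.dot(a.T, a)
r2 = np.dot(b, a)
assert np.allclose(r1, r2)
```

Passing 'T' as the transa argument to skcuda.linalg.dot() pushes the transpose into the underlying GEMM call on the GPU in the same way, so the copy is never needed.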
Bionet Group | Neurokernel Project