[PyCUDA] PyCuda 3x slower than nvcc
mantihor at gmail.com
Wed Apr 4 01:51:12 PDT 2012
Perhaps you are not using streams, and are therefore suffering from Python
overhead (which would otherwise be more or less masked by asynchronous
kernel execution). This depends, of course, on how long your
kernels take to execute on average.
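As a rough sketch of what I mean (this requires a CUDA-capable GPU and pycuda to run; the kernel, sizes, and stream count here are made up for illustration, not taken from your code):

```python
import numpy as np
import pycuda.autoinit
import pycuda.driver as drv
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void scale(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}
""")
scale = mod.get_function("scale")

n = 1 << 20
streams = [drv.Stream() for _ in range(4)]
host = [drv.pagelocked_empty(n, np.float32) for _ in range(4)]
dev = [drv.mem_alloc(h.nbytes) for h in host]

for h in host:
    h[:] = np.random.rand(n).astype(np.float32)

# Queue copies and launches on separate streams. The *_async calls and the
# kernel launch return immediately, so the Python overhead of issuing the
# next piece of work overlaps with GPU work already in flight.
for h, d, s in zip(host, dev, streams):
    drv.memcpy_htod_async(d, h, stream=s)
    scale(d, np.int32(n),
          block=(256, 1, 1), grid=((n + 255) // 256, 1), stream=s)
    drv.memcpy_dtoh_async(h, d, stream=s)

for s in streams:
    s.synchronize()
```

Without streams, every launch is followed (implicitly or explicitly) by a synchronization, and the per-call Python overhead adds up; with short kernels that overhead can easily dominate, which would explain a 3x gap against a plain nvcc-compiled driver.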
Also, your declarations do not look good to me. Allocating a 100k-element
array on the stack is almost certainly a bad idea (I think the compiler
spills such arrays to local memory, which physically resides in off-chip
device memory, because there are nowhere near enough registers to hold
them, and you may end up using memory very inefficiently).
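The usual fix is to allocate the scratch space once on the host (with cudaMalloc, or pycuda's mem_alloc) and have each thread index its own slice, rather than declaring the arrays inside the device function. A CUDA sketch, reusing the names from your snippet (the kernel name, parameters, and one-fit-per-thread layout are my assumptions):

```cuda
#define lenP 6
#define nPoints 100000

// residu and Jacobian are preallocated on the host, sized
// nFits * nPoints and nFits * nPoints * lenP respectively,
// instead of living on each thread's stack.
__global__ void fitAll(const float *y, float *residu, float *Jacobian,
                       float *p, int nFits)
{
    int fit = blockIdx.x * blockDim.x + threadIdx.x;
    if (fit >= nFits) return;

    // Each thread works on its own slice of the global scratch buffers.
    float *myResidu   = residu   + (size_t)fit * nPoints;
    float *myJacobian = Jacobian + (size_t)fit * (size_t)nPoints * lenP;

    // ... run the Gauss fit using myResidu / myJacobian as before ...
}
```

The small lenP-sized arrays (pNew, b, A, B) are fine to keep as locals; it is only the nPoints-sized ones that are a problem.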
On Wed, Apr 4, 2012 at 6:39 PM, Michiel Bruinink
<Michiel.Bruinink at mapperlithography.com> wrote:
> I have written a Cuda program that calculates lots of Gauss fits. When I use
> that same program with PyCuda, the time it takes to do the calculations is
> almost 3x the time it takes with nvcc.
> With nvcc it takes 380 ms and with PyCuda it takes 1110 ms, while the
> outcome of the calculations is the same.
> There is no difference in the device code, because I use the same file for
> the device code in both cases.
> How is this possible?
> Does anybody have an idea?
> I am not sure, but could it have something to do with array declarations
> inside a device function?
> #define lenP 6
> #define nPoints 100000
> __device__ void someFunction()
> {
>     float residu[nPoints], newResidu[nPoints], pNew[lenP], b[lenP];
>     float A[lenP*lenP], Jacobian[nPoints*lenP], B[lenP*lenP];
>     /* ... */
> }
> PyCUDA mailing list
> PyCUDA at tiker.net