Hi, Bryan --- Thanks for the pointer. Just to make sure I understand what's
going on, from the CUDA documentation, the correct sequence seems to be:
(1) Initialize the runtime, which creates a context.
(2) Attach PyCUDA to this runtime-created context.
(3) Run kernel code.
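(For step (1), my understanding is that the runtime initializes lazily, so any
runtime API call from ctypes should create the context. A sketch of what I mean
is below --- the library name and the cudaFree(0) idiom are my assumptions, and
I haven't verified this on my setup:)

```python
# Sketch of step (1): the CUDA runtime initializes lazily, so calling
# any runtime API function forces it to create its context.
# cudaFree(0) is the usual no-op idiom for this.
import ctypes

# Library name is platform-dependent (libcudart.so on Linux) -- an
# assumption on my part; adjust for your platform/CUDA install.
cudart = ctypes.CDLL("libcudart.so")
cudart.cudaFree(None)  # no-op call that triggers runtime/context creation
```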
Is this right? If so, is there something about PyCUDA that makes it
difficult to do this, or is it just a matter of setting up and tearing down
the driver instance in the right order? (Another thing I am not totally
clear on is what PyCUDA's lifecycle is for managing contexts. Does it leave
a single context up as long as the Python process runs, or does it tear them
down as soon as there aren't any references to gpuarray instances?)
I admit to being something of a novice with PyCUDA and CUDA generally, so
sorry for pestering the list.
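For my own notes, here is how I read the workaround you describe below --- a
sketch modeled on what pycuda.autoinit does, with the atexit handler swapped as
you suggest. Untested on my end, and the device index is just a placeholder:

```python
# Manual version of pycuda.autoinit, but registering context.detach
# instead of context.pop at exit, per the workaround described below.
import atexit
import pycuda.driver as cuda

cuda.init()                      # initialize the CUDA driver API
device = cuda.Device(0)          # first GPU; adjust the index as needed
context = device.make_context()  # create and push a new context

# pycuda.autoinit would register context.pop here; registering
# context.detach instead is the workaround, so the context isn't
# torn down out from under the ctypes-wrapped runtime libraries.
atexit.register(context.detach)
```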
On Sun, Apr 18, 2010 at 10:17 AM, Bryan Catanzaro <bryan.catanzaro(a)gmail.com> wrote:
> It's not a solution, but the workaround I've been using is to use
> context.detach() rather than context.pop() at the end of the
> computation. If you look at pycuda.autoinit, you can see what needs
> to be done to initialize a CUDA context. Do the same thing manually,
> just change the function you register with atexit to be
> context.detach, and the errors should go away.
> This is just a workaround, though, and doesn't solve the underlying
> problem.
> - bryan
> On Apr 18, 2010, at 6:09 AM, Louis Theran <theran(a)temple.edu> wrote:
> >> I'm trying to mix some ctypes wrapped CUDA runtime C libraries with
> >> PyCUDA.
> >> Even using CUDA 3.0, I am getting errors because PyCUDA seems to be
> >> trying to push/pop the current context.
> >> From the docs, it's not quite clear what I have to do in this
> >> regard. Any help from those of you successfully doing this would be
> >> much appreciated.
> >> Thanks!
> > _______________________________________________
> > PyCUDA mailing list
> > PyCUDA(a)host304.hostmonster.com
> > http://host304.hostmonster.com/mailman/listinfo/pycuda_tiker.net