In my case, I want host CUDA code to be able to allocate memory on the device, and then
hand ownership of that memory over to PyCUDA. Hence, there is no "base"
GPUArray that should be pointed to, and PyCUDA should free the memory as normal.
I think the assertion that any GPUArray constructed from an explicit gpudata pointer
must be a view of another GPUArray does not actually hold in this situation. So for now,
I have simply removed the assertion, and things are working as expected. =)
On Dec 4, 2009, at 11:48 AM, Michael Rule wrote:
I just did this ...
my solution was to malloc a large GPUArray, then construct pointers to within this
region. GPUArrays made from these pointers pass in the master array as the base.
I think what I was trying to do is better served by slice notation, however. Previously I
was not certain what a slice of a GPUArray actually was, so I was handling things at the
raw pointer level myself.
On Fri, Dec 4, 2009 at 2:43 PM, Andreas Klöckner <lists(a)informa.tiker.net> wrote:
On Friday 04 December 2009, you wrote:
Hi Andreas -
As part of my work to get host CUDA functions compilable through PyCUDA,
I've needed to come up with a mechanism for creating GPUArrays from device
pointers produced by host functions. I've been able to do that, but only
by commenting out an assertion in gpuarray.py on line 89:

    if gpudata is not None:
        self.gpudata = gpudata
        assert base is not None   # <----
What is GPUArray.base for, and if we really need it, what data is expected
there? I can't find GPUArray.base being used anywhere in the PyCUDA codebase.
Same as numpy's ndarray.base--its sole purpose is to establish lifetime dependencies.
Say you create an array A that's a view of another array B (a slice, say).
Then you'd very much like B to not be destroyed before A.
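The same mechanism can be seen in numpy itself, which needs no GPU to demonstrate:

```python
import numpy as np

b = np.arange(10)
a = b[2:5]          # a is a view into b's buffer, not a copy

# a.base records the lifetime dependency: b must outlive a.
assert a.base is b

# Writes through the view land in the original buffer.
a[0] = 99
assert b[2] == 99
```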
PyCUDA mailing list