On Monday, 21 December 2009, Nicholas S-A wrote:
> I have been loading my datasets within Python, but found that this was
> too slow. I rewrote in C and now I am trying to get the C and Python
> integrated. I have a function which I am calling using ctypes which
> returns a CUdeviceptr, and I want it to return a DeviceAllocation
> instead. Is this possible (to make a DeviceAllocation from a
> CUdeviceptr)?
If you use Boost.Python to do your C/Python integration, this could be easily
achieved. PyCUDA would need to install its header file src/cpp/cuda.hpp. Then
you could simply "return new cuda::device_allocation(address);" and off you
go. Let me know if you want that, I can add it with relative ease.
> I couldn't find a good way to do it from the Python side, and there
> doesn't appear to be a way from the C/C++ side either.
The Python side can be made to work, too, if you prefer.
> If not, is there another way? A CUdeviceptr is just an int; can I pass
> it as an int to my CUDA kernel and then cast it to a pointer?
You certainly can. The usual caveats about getting the right integer type
apply.
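To illustrate the integer-type caveat, here is a minimal ctypes sketch. It uses libc's malloc as a stand-in for any C function that hands back a pointer-sized value (your CUDA wrapper would be declared the same way); the point is that ctypes assumes a 32-bit int return type unless told otherwise, which silently truncates 64-bit addresses:

```python
import ctypes
import ctypes.util

# Stand-in library: with your own C code, load your shared object instead.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# ctypes defaults every return value to a C int (32 bits). On a 64-bit
# platform that would truncate the address, so declare the real type.
libc.malloc.restype = ctypes.c_void_p
libc.free.argtypes = [ctypes.c_void_p]

addr = libc.malloc(64)   # comes back as a plain Python int
print(isinstance(addr, int), addr != 0)
libc.free(addr)
```

With the restype set correctly, the address survives the round trip as an ordinary Python int, which you can then hand on to whatever expects the raw pointer value.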
> I don't know if the address is passed back as a correct device address;
> do I have to run cuMemHostGetDevicePointer() first, or does that return
> the address in the host memory space?