[PyCUDA] CUDA 3.0 & 64-bit host code
lists at informa.tiker.net
Mon Nov 23 20:31:38 PST 2009
On Montag 23 November 2009, Bryan Catanzaro wrote:
> I built 64-bit versions of Boost and PyCUDA on Mac OS X Snow Leopard, as
> well as the 64-bit Python interpreter supplied by Apple, as well as the
> CUDA 3.0 beta. Everything built fine, but when I ran pycuda.autoinit, I
> got an interesting CUDA error, which PyCUDA reported as "pointer is
> 64-bit". I'm wondering - is it impossible to use a 64-bit host program
> with a 32-bit GPU program under CUDA 3.0?
First, I'm not sure I fully understand what's going on. You can indeed compile
GPU code to match a 32-bit ABI on a 64-bit machine (nvcc --machine 32 ...). Is
that what you're doing? If so, why? (Normally, nvcc will default to your
host's ABI. By and large, this changes struct alignment rules and pointer
sizes.)
If you're not doing anything special to get 32-bit GPU code, then your GPU
code should end up matching your host ABI. Or maybe nvcc draws the wrong
conclusion, or builds a fat binary, or something along those lines, and we
need to specify the --machine flag explicitly.
I also remember wondering what the error message referred to when I added it.
I'm totally not sure. Which routine throws it?