Greetings. Is CUFFT unusable from PyCUDA for the same reasons as
CUBLAS? My application doesn't need CUBLAS, just CUFFT (for
pre-processing). If I can't use CUFFT from PyCUDA, I'll just do the
pre-processing on the host (slower...) and do the rest with PyCUDA :)
Greetings, I write to ask for guidance with a problem I'm encountering
when running test_driver.py. I've followed the excellent PyCUDA and
Boost installation instructions, with GCC 4.1.2 (which caused a minor
hiccup) and Python 2.5.2 provided by the Sage package, and everything
has compiled fine so far. Upon running test_driver.py (via "sage
-python test_driver.py"), I get the following error:
Traceback (most recent call last):
  File "test_driver.py", line 3, in <module>
  line 1, in <module>
    import pycuda.gpuarray as gpuarray
  line 3, in <module>
    import pycuda.elementwise as elementwise
  line 1, in <module>
    import pycuda.driver as drv
  line 1, in <module>
    from _driver import *
undefined symbol: Py_InitModule4
and I am unable to find any reference connecting CUDA/PyCUDA to this
Py_InitModule4 symbol. Any assistance is greatly appreciated, thank you!
The question is: what does "blindly copying bytes" mean when you have a numpy
array? The builtin bytes() function seems to support both F-contiguous and
C-contiguous layouts, depending on the current state of the numpy array.
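For what it's worth, a small numpy sketch of what a "blind" byte copy yields for the two layouts. (This uses ndarray.tobytes(order="A") rather than the builtin bytes(), since order="A" dumps the buffer in whichever order it is currently stored; the array values are arbitrary.)

```python
import numpy as np

# The same logical 2x2 array stored in the two memory layouts
a_c = np.array([[1, 2], [3, 4]], dtype=np.int8, order="C")
a_f = np.asfortranarray(a_c)

# order="A" copies the buffer as it sits in memory -- a "blind" byte copy:
print(a_c.tobytes(order="A"))  # b'\x01\x02\x03\x04' (row-major: 1,2,3,4)
print(a_f.tobytes(order="A"))  # b'\x01\x03\x02\x04' (column-major: 1,3,2,4)

# The flags record which layout the buffer currently has:
print(a_c.flags["C_CONTIGUOUS"], a_f.flags["F_CONTIGUOUS"])  # True True
```

So the bytes you get really do depend on the array's current state, which is exactly why copying them without checking the layout is risky.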
By the way, C++ is not actually supported, just some of its features.
On Sun, Mar 1, 2009 at 23:56, Andreas Klöckner <lists(a)informa.tiker.net> wrote:
> On Sonntag 01 März 2009, you wrote:
> > > 1) Careful, terminology trap. F-contiguous != contiguous (what you call
> > > nonlinear) != C-contiguous.
> > CUDA is C, no? Is "F" something other than Fortran?
> Nicely set up. :) True, "F" is Fortran. But that's just a convenient name.
> Call them column- and row-major if you wish. There's good reason for both,
> even in a pure C program.
> > I think there should be some sort of check.
> I already knew *your* opinion. :)
:) I don't know how appropriate it would be, but perhaps you could post the
question on the nvidia cuda forums?
> > But maybe all of this can wait
> > for your codepy python-aware C structures? I'd like to see some tutorial
> > examples on that when you have time.
> Not sure how codepy would help here. I'll try to find time to write a
I guess I didn't see the row-major and column-major argument above -- I did
notice that the "erroneous" arrays before ndarray.copy() were transposed. On
further thought, if one had Python representations of C data structures,
arrays could be annotated as row-major or column-major (with C indexing --
last index most contiguous -- by default). Then, instead of doing memcpy,
perhaps a programmer could name a struct field (which would be an array in
this case) and copy to that, or (as I am now) copy to an array of fixed-size
On Sonntag 01 März 2009, Peter Waller wrote:
> I sorted the problem, but I was setting up some build scripts for Gentoo,
> so that it could use its package management system, rather than just
> polluting my system. The failure I was getting was of the form "Can't
> import blah from pytools", which happened because pytools wasn't installed:
> I didn't see it in the prerequisites listed here
Check out virtualenv--it keeps your system Python tidy without forcing you to
mess with complicated packaging tools (unless you want to, that is! :).
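A sketch of that workflow (the environment name "pycuda-env" is just an example; older Pythons use the separate virtualenv script, while newer Pythons ship the equivalent venv module used here):

```shell
# Create an isolated environment so packages don't touch the system Python.
python3 -m venv pycuda-env
. pycuda-env/bin/activate
# Anything installed now lands inside pycuda-env, not system site-packages,
# e.g.: pip install pytools pycuda
python -c "import sys; print(sys.prefix)"  # prints a path inside pycuda-env
```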
PS: Please use the mailing list for support in the future--for the sake of the archives.
On Sonntag 01 März 2009, you wrote:
> PyCuda looks very good - I'm excited to use it. But I found that I am
> missing pytools, and I see no mention of it anywhere--at least not in
> the documentation or on your home page.
What kind of failure do you get? (Please answer this even if the text below
solves the problem for you.)
Pytools is a rather behind-your-back dependency of PyCuda. If you don't have
it already, the 'setup.py' script should automatically go and get it for you.
If that doesn't work, you can get it from its Python package index page, here: