Setup issues (patch)
by Bryan Catanzaro
Hi -
In setting up a new PyCUDA installation recently from Git I came across two
issues, and made a patch that fixed them on my system.
Problem #1: The CURAND module requires linking to libcurand.so, which is
found in the directory with the CUDA Runtime; on my system that is a
different default location (/usr/local/cuda/lib64) than the directory with
the CUDA driver (/usr/lib). To address this, I added options to specify:
- The name of the CUDA Runtime library (with a default).
- The directory where the CUDA Runtime is found.
I also amended the description of how to compile _curand with the location
of the CUDA Runtime directory. Besides allowing the _curand.so module that
comes with PyCUDA to be built properly, this change also benefits projects
that use Codepy to generate CUDA runtime code, as the Copperhead project
does, since the CUDA Runtime information then lives in the aksetup-defaults
file along with the other CUDA configuration information.
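For illustration, the extra configuration might end up in siteconf.py
looking roughly like this (variable names and the default library name are
my guesses at the patch's intent, not its exact spellings; check
configure.py --help for the real options):

```python
# Hypothetical siteconf.py fragment: the driver and the Runtime/CURAND
# libraries live in different directories, so each gets its own setting.
# All names below are illustrative, not PyCUDA's exact spellings.
CUDA_ROOT = "/usr/local/cuda"
CUDADRV_LIB_DIR = ["/usr/lib"]              # libcuda.so (driver)
CUDART_LIB_DIR = ["/usr/local/cuda/lib64"]  # libcudart.so, libcurand.so
CUDART_LIBNAME = ["cudart"]                 # default Runtime library name
```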
Problem #2: The USE_SHIPPED_BOOST option defaults to True. If you're using
configure.py to set up your siteconf.py, this is a problem: omitting
--use-shipped-boost leaves the default of True in place, so the shipped
Boost is used anyway. There is therefore no way to use a system Boost
without manually editing siteconf.py after running configure. The easy fix
is to change the default to False.
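The pitfall generalizes beyond aksetup: any boolean option whose flag can
only switch it on, combined with a default of True, is effectively
unchangeable. A small argparse sketch (the flag name mirrors the report;
PyCUDA's configure.py actually uses aksetup's own option handling, not
argparse) shows the difference the default makes:

```python
import argparse

# With default=True and a flag that can only set True, the option can
# never be turned off from the command line.
broken = argparse.ArgumentParser()
broken.add_argument("--use-shipped-boost", dest="use_shipped_boost",
                    action="store_true", default=True)
assert broken.parse_args([]).use_shipped_boost is True  # no way to get False

# With default=False, omitting the flag selects the system Boost.
fixed = argparse.ArgumentParser()
fixed.add_argument("--use-shipped-boost", dest="use_shipped_boost",
                   action="store_true", default=False)
print(fixed.parse_args([]).use_shipped_boost)                       # False
print(fixed.parse_args(["--use-shipped-boost"]).use_shipped_boost)  # True
```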
You can see the changes at
https://github.com/BryanCatanzaro/catanzaro.pycuda/commit/46a756851195d1c...
Thanks,
bryan
Undefined symbol in _curand.so
by Bogdan Opanchuk
Hello,
There is a problem with the current PyCUDA version (most recent commit
from the repo). On my Ubuntu 10.04 x64 with Python 2.6 and CUDA 4.0, after
'git submodule update', compilation, and installation, _curand cannot be
imported:
>>> import pycuda._curand
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: /usr/local/lib/python2.6/dist-packages/pycuda-2011.1-py2.6-linux-x86_64.egg/pycuda/_curand.so:
undefined symbol:
_ZNK5boost6python7objects21py_function_impl_base9max_arityEv
This, in turn, leads to inability to import pycuda.curandom:
try:
    import pycuda._curand as _curand  # <--- this fails
except ImportError:
    def get_curand_version():  # <--- now this function returns None
        return None
else:
    get_curand_version = _curand.get_curand_version

if get_curand_version() >= (3, 2, 0):  # <--- function returns None, so
                                       # 'direction_vector_set' stays undefined
    direction_vector_set = _curand.direction_vector_set
    _get_direction_vectors = _curand._get_direction_vectors
    ...

def generate_direction_vectors(count,
        direction=direction_vector_set.VECTOR_32):  # <--- module import fails,
                                # because 'direction_vector_set' is undefined
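To make the failure mode concrete, here is a GPU-free sketch (the module
name "_no_such_curand" is invented purely to force the ImportError path):
the default argument is evaluated at function-definition time, so the
undefined name raises NameError while the module body is still executing.
The extra "is not None" check makes the sketch run under Python 3; in
Python 2, None >= (3, 2, 0) is simply False.

```python
failed = False
try:
    import _no_such_curand as _curand  # stands in for pycuda._curand
except ImportError:
    def get_curand_version():
        return None  # the fallback from curandom.py

ver = get_curand_version()
if ver is not None and ver >= (3, 2, 0):  # guard skipped when ver is None
    direction_vector_set = _curand.direction_vector_set

try:
    # Default arguments are evaluated when "def" runs, i.e. at import time,
    # so the undefined name blows up here, not when the function is called.
    def generate_direction_vectors(count,
                                   direction=direction_vector_set.VECTOR_32):
        pass
except NameError as exc:
    failed = True
    print("import would fail:", exc)
```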
Best regards,
Bogdan
Kernel calls and types
by Irwin Zaid
Hi all,
I have a quick question about how to handle different types in kernel
calls from PyCUDA...
Currently, I have a Python package that uses PyCUDA to manage GPU arrays
and call some custom CUDA kernels. For many of the functions in the
package, I allow the user to specify whether the output will be
single-precision or double-precision values. So far, this has required
me to write two versions of each CUDA kernel, which pretty much boils
down to one with the "float" type and one with the "double" type, though
sometimes I use PyCUDA's double texref.
Anyway, I was wondering if there is a better way to provide this
functionality? In normal CUDA code, this could be done with templates,
but that doesn't seem to be an option here. I know metaprogramming is a
solution, but I'd like to avoid that as it seems like an unnecessarily
large solution to a tiny problem. (And I don't want to require
additional dependencies like jinja2, mako, etc...)
Am I simply stuck maintaining two kernels that are nearly identical?
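For what it's worth, a middle ground between a full template engine and
maintaining duplicate kernels is plain string substitution with the
standard library's string.Template, which adds no dependencies. The sketch
below (the "scale" kernel and helper names are made up for illustration)
only builds the source text; passing the result to
pycuda.compiler.SourceModule would work the same way for either scalar
type:

```python
from string import Template

import numpy as np

# One kernel source, parameterized on the scalar type. "scale" is an
# illustrative kernel, not one from the original post.
_kernel_tpl = Template("""
__global__ void scale(${scalar} *x, ${scalar} a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= a;
}
""")

_scalar_for_dtype = {np.float32: "float", np.float64: "double"}

def kernel_source(dtype):
    """Return CUDA source specialized for float32 or float64."""
    return _kernel_tpl.substitute(scalar=_scalar_for_dtype[dtype])

print(kernel_source(np.float64))
```

Compiling the result once per requested dtype (and caching the compiled
module) leaves a single copy of the kernel text to maintain.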
Thanks for your help!
Best,
Irwin
Unable to Run demo
by Ted Kord
Hi
I just installed pyCUDA using:
python setup.py build
sudo python setup.py install
Afterwards, I tried to run demo.py but I get:
/Users/tedkord/CUDA/PyCUDA/pycuda-0.94.2/examples/demo.py in <module>()
2
3 import pycuda.driver as cuda
----> 4 import pycuda.autoinit
5 from pycuda.compiler import SourceModule
6
/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pycuda-0.94.2-py2.6-macosx-10.6-x86_64.egg/pycuda/autoinit.py
in <module>()
5
6 from pycuda.tools import make_default_context
----> 7 context = make_default_context()
8 device = context.get_device()
9
/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pycuda-0.94.2-py2.6-macosx-10.6-x86_64.egg/pycuda/tools.pyc
in make_default_context()
215
216 raise RuntimeError("make_default_context() wasn't able to
create a context "
--> 217 "on any of the %d detected devices" % ndevices)
218
219
RuntimeError: make_default_context() wasn't able to create a context on any
of the 2 detected devices
WARNING: Failure executing file: <demo.py>
--
Best regards,
Theodore
texrefs keyword
by Anthony LaTorre
I noticed in the test source code that kernel functions are called with the
keyword *texrefs* containing a list of texture references; my code seems to
be unaffected whether I include this keyword argument or not. What is the
*texrefs* keyword argument used for?
Thanks,
Tony