hi,
I've been endlessly trying to install PyCUDA on a Red Hat machine, but
to no avail. It would be much appreciated if I could get some help.
I am able to get past the configure part of the installation, but when I
run "make", the problem occurs. Here is my siteconf.py file:
BOOST_INC_DIR = ['/usr/local/include/boost/']
BOOST_LIB_DIR = ['/usr/lib']
BOOST_COMPILER = 'gcc4.1.2'
BOOST_PYTHON_LIBNAME = ['boost_python']
BOOST_THREAD_LIBNAME = ['boost_thread']
CUDA_TRACE = False
CUDA_ROOT = '/usr/local/cuda/'
CUDA_ENABLE_GL = False
CUDADRV_LIB_DIR = ['/usr/lib']
CUDADRV_LIBNAME = ['cuda']
CXXFLAGS = ['-DBOOST_PYTHON_NO_PY_SIGNATURES']
LDFLAGS = []
I believe I built Boost with gcc version 4.1.2.
The error I'm getting is:
/usr/local/include/boost/type_traits/remove_const.hpp:61: instantiated
from ‘boost::remove_const<<unnamed>::pooled_host_allocation>’
/usr/local/include/boost/python/object/pointer_holder.hpp:127:
instantiated from ‘void* boost::python::objects::pointer_holder<Pointer,
Value>::holds(boost::python::type_info, bool) [with Pointer =
std::auto_ptr<<unnamed>::pooled_host_allocation>, Value =
<unnamed>::pooled_host_allocation]’
src/wrapper/mempool.cpp:278: instantiated from here
/usr/local/include/boost/type_traits/detail/cv_traits_impl.hpp:38: internal
compiler error: in make_rtl_for_nonlocal_decl, at cp/decl.c:5067
I only included the end of the output; if you want the entire thing, let me know. But the
error seems to point to a gcc problem. I've read through
your archives, but that doesn't seem to solve this problem.
If someone could shed some light on this issue, I would very much appreciate it.
thanks
-nhieu
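If this is indeed a compiler problem (gcc 4.1.x is known to hit internal compiler errors on heavily templated code such as Boost.Python), the first thing to pin down is the exact compiler version in use. A minimal sketch, assuming a "gcc --version"-style banner captured elsewhere (the helper name is mine, not PyCUDA API):

```python
# Illustrative helper: parse the version triple out of a "gcc --version"
# banner (e.g. captured via subprocess) so it can be compared against
# known-problematic releases.
import re

def parse_gcc_version(banner):
    """Extract (major, minor, patch) from a gcc --version banner line."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", banner)
    return tuple(int(x) for x in m.groups()) if m else None

# The version reported above:
print(parse_gcc_version("gcc (GCC) 4.1.2"))  # (4, 1, 2)
```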
Hi everybody,
I am new to PyCUDA. I just installed everything on Windows XP and, from the installation log, I think I did it properly. However, when I try to run the test files provided with PyCUDA, I get this error:
Traceback (most recent call last):
File "C:\PyCuda\test\test_gpuarray.py", line 2, in <module>
import pycuda.autoinit
File "C:\PyCuda\pycuda\autoinit.py", line 1, in <module>
import pycuda.driver as cuda
File "C:\PyCuda\pycuda\driver.py", line 1, in <module>
from _driver import *
ImportError: No module named _driver
How can I solve it?
Thanks, and sorry for the newbieness of this post.
den3b
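For reference, an ImportError like this usually means the compiled _driver extension (a .pyd file on Windows) was never built or installed next to the pure-Python modules. A minimal sketch for checking, assuming a standard package layout (the helper is illustrative, not PyCUDA API):

```python
# Illustrative check: list any compiled-extension files named like the
# missing module inside the installed package directory.
import os

def find_extension(package_dir, name="_driver"):
    """Return compiled-extension filenames in package_dir matching name."""
    suffixes = (".pyd", ".so", ".dll")
    return sorted(f for f in os.listdir(package_dir)
                  if f.startswith(name) and f.endswith(suffixes))
```

If this returns an empty list for the C:\PyCuda\pycuda directory from the traceback, the build never produced the extension; if the file is there, a dependent CUDA DLL missing from PATH is the usual suspect.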
Hello,
I'm using the Ubuntu 9.10 64-bit distro on an NVIDIA-based laptop (Sony F11).
CUDA 3.0 has been installed and works perfectly.
I have tried to install the "stable" pycuda-0.93.tar.gz using the instructions
available on your web site (including the gcc downgrade), without success.
I have also tried to install the "git" version.
In both cases the error looks the same (see below for the details). I have
tried all the hints available online, without success.
Where is the mistake?
Do you suggest concentrating on 0.93, git, or something else?
Do I need to use another distro? 10.04 is incompatible with the NVIDIA CUDA
driver, so I have stepped back to 9.10 (as suggested by NVIDIA).
I'm really green on Python, so I'm feeling lost. PyCUDA looks great. I'm
working on thermonuclear fusion simulation (www.iter.org) using a custom MHD
code: PyCUDA could simplify the development A LOT.
Thanks in advance for your help
Simone Mannori - ENEA Brasimone - INRIA Rocquencourt
www.scicos.org - www.scicoslab.org
//++++++++++++++++++++++++++++++++++++++++++++++
simone@vaio:~/svn/pycuda/pycuda$ ./configure.py
Scanning installed packages
Setuptools installation detected at /home/simone/svn/pycuda/pycuda
Non-egg installation
Removing elements out of the way...
Already patched.
/home/simone/svn/pycuda/pycuda/setuptools-0.6c9-py2.6.egg-info already
patched.
Extracting in /tmp/tmpF6gEju
Now working in /tmp/tmpF6gEju/distribute-0.6.4
Building a Distribute egg in /home/simone/svn/pycuda/pycuda
Traceback (most recent call last):
File "setup.py", line 142, in <module>
scripts = scripts,
File "/usr/lib/python2.6/distutils/core.py", line 113, in setup
_setup_distribution = dist = klass(attrs)
File "/tmp/tmpF6gEju/distribute-0.6.4/setuptools/dist.py", line 224, in
__init__
_Distribution.__init__(self,attrs)
File "/usr/lib/python2.6/distutils/dist.py", line 270, in __init__
self.finalize_options()
File "/tmp/tmpF6gEju/distribute-0.6.4/setuptools/dist.py", line 257, in
finalize_options
ep.load()(self, ep.name, value)
File "/tmp/tmpF6gEju/distribute-0.6.4/pkg_resources.py", line 1922, in
load
raise ImportError("%r has no %r attribute" % (entry,attr))
ImportError: <module 'setuptools.dist' from
'/tmp/tmpF6gEju/distribute-0.6.4/setuptools/dist.py'> has no
'check_packages' attribute
/home/simone/svn/pycuda/pycuda/setuptools-0.6c9-py2.6.egg-info already
exists
Traceback (most recent call last):
File "./configure.py", line 3, in <module>
from aksetup_helper import configure_frontend
File "/home/simone/svn/pycuda/pycuda/aksetup_helper.py", line 3, in
<module>
distribute_setup.use_setuptools()
File "/home/simone/svn/pycuda/pycuda/distribute_setup.py", line 139, in
use_setuptools
return _do_download(version, download_base, to_dir, download_delay)
File "/home/simone/svn/pycuda/pycuda/distribute_setup.py", line 120, in
_do_download
egg = _build_egg(tarball, to_dir)
File "/home/simone/svn/pycuda/pycuda/distribute_setup.py", line 112, in
_build_egg
raise IOError('Could not build the egg.')
IOError: Could not build the egg.
simone@vaio:~/svn/pycuda/pycuda$
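As a diagnostic (not a fix): the traceback shows distribute_setup.py failing while bootstrapping Distribute over an already-present setuptools egg, so it can help to report which setuptools flavour is importable before running configure.py. A sketch, with an illustrative helper name:

```python
# Illustrative: report the importable setuptools, if any, so version
# conflicts with the Distribute bootstrap become visible.
def installed_setuptools():
    """Return (version, path) of the importable setuptools, or None."""
    try:
        import setuptools
    except ImportError:
        return None
    return (getattr(setuptools, "__version__", "unknown"),
            getattr(setuptools, "__file__", "unknown"))

print(installed_setuptools())
```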
Hey,
thanks, I came across your post to the mailing list, but at first I thought it was a different issue.
Now test_driver.py is running, and my own test code also seems to run OK. However, given the imprecision issues with gpuarrays, I suppose I'd better not use them on Fermis right now?
++
Peter
On Jul 21, 2010, at 1:16 PM, Julien Cornebise wrote:
> Hi Peter
>
> I just had the same problem last week:
> http://pycuda.2962900.n2.nabble.com/PyCUDA-test-driver-py-problem-td5276874…
>
> Updating to the latest git commit will solve your context-related
> issues, due to the Fermi architecture not accepting unaligned accesses
> (thanks again, Andreas, for the fix!).
> Besides, try the examples (obtained via
> examples/download-examples-from-wiki); most of them should work, even
> with the 0.94rc instead of the git version.
>
> However, the precision issues remain.
>
> Julien
>
> On Wed, Jul 21, 2010 at 4:42 AM, Peter Schmidtke <pschmidtke(a)ub.edu> wrote:
>> Dear PyCuda mailing list readers,
>>
>> I just finished the installation of pycuda 0.94rc on a Centos 5.5 64 bit machine using gcc4.1.2 and boost 1.39 and a fresh manual python2.7 install. The machine is a blade holding two Tesla C2050 cards.
>>
>> I went through all possible install problems on such a rigid system as CentOS, but I finally got it installed, thanks to the wiki and a few mailing list posts.
>>
>> I tried to run the PyCUDA tests in the test directory, but already with test_driver.py I ran into some trouble. Find the errors and stdout attached to this mail. test_math.py runs fine, and test_gpuarray.py issues a few precision errors.
>> Has someone experienced similar problems on other GPUs, or is it related to the new Fermi architecture?
>>
>> Thanks in advance for your help.
>>
>>
>>
>> Peter Schmidtke
>>
>> -----------------
>> PhD Student
>> Department of Physical Chemistry
>> School of Pharmacy
>> University of Barcelona
>> Barcelona, Spain
>>
>>
>> _______________________________________________
>> PyCUDA mailing list
>> PyCUDA(a)tiker.net
>> http://lists.tiker.net/listinfo/pycuda
>>
>>
Peter Schmidtke
-----------------
PhD Student
Department of Physical Chemistry
School of Pharmacy
University of Barcelona
Barcelona, Spain
Hi All -
I needed to slice a GPUArray and then pass the gpudata of the resulting slice to a CUDA kernel expecting a pointer.
The current slicing logic in pycuda.gpuarray.GPUArray calculates the gpudata of the slice as an integer, which causes problems if I try to pass it as a pointer to a CUDA Kernel, due to the type mismatch between int and pointer.
I've solved this problem for myself by changing a couple of things:
1. Adding a constructor to cuda.hpp/device_allocation: device_allocation(CUdeviceptr devptr, bool valid). This allows me to create a device_allocation object which will not be freed upon destruction. Obviously, the resulting pointer from a slicing operation that constructs a view of a gpuarray should never be freed.
2. Changing the constructor on wrap_cudadrv.cpp/DeviceAllocation from py::no_init to py::init<CUdeviceptr, bool>(), which allows me to instantiate a DeviceAllocation object from within python. Andreas, I have the feeling you won't like exposing this, but it was a quick solution that worked for me.
3. Changing the way the new gpudata is calculated in pycuda.gpuarray.GPUArray.__getitem__(), to create an "invalid" DeviceAllocation object that will not be freed upon destruction:
gpudata=drv.DeviceAllocation(int(self.gpudata) + start*self.dtype.itemsize, False)
I've attached the patch, in case it's useful.
- bryan
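For context, a sketch of the address arithmetic the change relies on (the helper name is mine, not PyCUDA API): the device pointer of view[start:] for a contiguous 1-D array is the base address plus start * itemsize, which is what the __getitem__ line above computes before wrapping it in the non-owning DeviceAllocation:

```python
# Illustrative pointer arithmetic for a contiguous 1-D slice; base_ptr is
# an integer device address, as int(self.gpudata) is in the patch.
import numpy as np

def sliced_devptr(base_ptr, start, dtype):
    """Device address of the element at index `start`."""
    return int(base_ptr) + start * np.dtype(dtype).itemsize

print(sliced_devptr(1000, 3, np.float32))  # 1012
```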
Hi,
I'm modifying Theano to allow it to use the code generated by PyCUDA.
While doing so, I needed two modifications to PyCUDA.
1) elemwise1.patch: This modification allows passing the block and grid
to the function generated by ElementwiseKernel. If they are not provided,
it continues as before.
2) tools1.patch: recognize the npy_[u]int[8,16,32,64] and
npy_float[32,64] data types.
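As a rough illustration of what tools1.patch adds (the exact spellings in the actual patch may differ), recognizing the npy_* names amounts to mapping them onto C scalar types:

```python
# Illustrative mapping from numpy's npy_* type names to C scalar types;
# the table in the real patch may differ in detail.
def npy_to_ctype(name):
    table = {
        "npy_int8": "signed char",   "npy_uint8": "unsigned char",
        "npy_int16": "short",        "npy_uint16": "unsigned short",
        "npy_int32": "int",          "npy_uint32": "unsigned int",
        "npy_int64": "long long",    "npy_uint64": "unsigned long long",
        "npy_float32": "float",      "npy_float64": "double",
    }
    return table[name]

print(npy_to_ctype("npy_float32"))  # float
```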
Do you have any questions/comments about those patches?
I don't use the gpuarray class that is passed to the PyCUDA functions; I
modified mine to mimic its interface. While doing so, I saw that you
use the attributes size and mem_size, which seem to always have the same
value. Is that true? If so, why both?
thanks
Frédéric Bastien
Is it now possible to use 64-bit CUDA on Mac OS X 10.6 with PyCUDA?
With CUDA 3.1, I can build and run the PyCUDA tests fine with a 32-bit Python
and Boost. The NVIDIA SDK examples build as 32-bit by default and work. But
with a 64-bit Python and Boost, I can build, but almost all tests fail.
My siteconf.py (with MacPorts Python 2.6 and Boost 1.42, PyCUDA from git)
and the output of 'python test_driver.py' are attached:
BOOST_INC_DIR = ['/opt/local/include']
BOOST_LIB_DIR = ['/opt/local/lib']
BOOST_COMPILER = 'gcc42'
BOOST_PYTHON_LIBNAME = ['boost_python-mt']
BOOST_THREAD_LIBNAME = ['boost_thread-mt']
CUDA_TRACE = False
CUDA_ROOT = '/usr/local/cuda/'
CUDA_ENABLE_GL = False
CUDADRV_LIB_DIR = []
CUDADRV_LIBNAME = ['cuda']
CXXFLAGS = []
LDFLAGS = ['-L/opt/local/Library/Frameworks/Python.framework/Versions/Current/lib']
CXXFLAGS = ['-arch', 'x86_64', '-m64']
LDFLAGS = ['-arch', 'x86_64', '-m64']
CXXFLAGS.extend(['-isysroot', '/Developer/SDKs/MacOSX10.6.sdk'])
LDFLAGS.extend(['-isysroot', '/Developer/SDKs/MacOSX10.6.sdk'])
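With a mixed 32-/64-bit toolchain, the first thing worth confirming is the word size of the interpreter that will load the extension, since the -arch flags above must agree with it. A quick check:

```python
# Pointer size of the running interpreter, in bits: 32 or 64.  The -arch
# flags used to build the extension must match this value.
import struct

def python_bits():
    return struct.calcsize("P") * 8

print(python_bits())
```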
CC'd to the list.
I'm sure I've missed something, but here goes:
1) compiler invocation line:
From the siteconf.py file: BOOST_COMPILER = 'gcc44'
2) all errors leading up to (and including) the first header-not-found error:
As an attempted fix, I tried running the git submodule init/update and
received a new error, still related to headers.
[root@skynet-linux1 pycuda]# make install
ctags -R src || true
/usr/bin/python setup.py install
Scanning installed packages
Setuptools installation detected at /home/vfulco/Downloads/pycuda
Non-egg installation
Removing elements out of the way...
Already patched.
/home/vfulco/Downloads/pycuda/setuptools-0.6c9-py2.6.egg-info already patched.
Extracting in /tmp/tmpIUn1In
Now working in /tmp/tmpIUn1In/distribute-0.6.4
Building a Distribute egg in /home/vfulco/Downloads/pycuda
/home/vfulco/Downloads/pycuda/setuptools-0.6c9-py2.6.egg-info already exists
bpl-subset/bpl_subset /boost/ python .hpp
*** Error occurred in plausibility checking for path of Boost Python library.
*** Error occurred in plausibility checking for path of Boost Thread library.
/usr/local/cuda /bin/ nvcc
/usr/local/cuda/include / cuda .h
/usr/lib64 / lib cuda .so
/usr/lib64/python2.6/distutils/dist.py:266: UserWarning: Unknown
distribution option: 'install_requires'
warnings.warn(msg)
running install
running build
running build_py
running build_ext
running install_lib
running install_data
running install_egg_info
Removing /usr/lib64/python2.6/site-packages/pycuda-0.94rc-py2.6.egg-info
Writing /usr/lib64/python2.6/site-packages/pycuda-0.94rc-py2.6.egg-info
#
3) Also, what does your /usr/include/boost look like?
From the siteconf.py file: BOOST_INC_DIR = ['/usr/include/boost']
whereis boost
boost: /usr/include/boost
[root@skynet-linux1 pycuda]# yum info boost
Installed Packages
Name : boost
Arch : x86_64
Version : 1.39.0
Release : 9.fc12
Size : 0.0
Repo : installed
From repo : updates
Summary : The Boost C++ Libraries
URL : http://www.boost.org/
License : Boost
I should have mentioned I'm running on a 470 card.
Best, V.
--
Vince Fulco, CFA, CAIA
612.424.5477 (universal)
vfulco1(a)gmail.com
A posse ad esse non valet consequentia
“the possibility does not necessarily lead to materialization”
Hey PyCUDA gang,
I'm doing a clean install of 0.94rc after a previously working 0.93, i.e.
I wiped out all PyCUDA site-packages in the Python 2.6 directory prior to
the new install. 'make' raises errors about not being able to find
the Boost headers, i.e. '/usr/include/boost /boost/ python .hpp'. My
siteconf.py file indicates BOOST_INC_DIR = ['/usr/include/boost'], the same
as before and as posted previously to this list; I changed gcc42 to gcc44
in BOOST_COMPILER, and that is it for changes. The setup.py file indicates
it attaches a subpath like so:
# BOOST_INC_DIR/boost/python.hpp
if 'BOOST_INC_DIR' in sc_vars:
    verify_path(
        description="Boost headers",
        paths=sc_vars['BOOST_INC_DIR'],
        subpaths=['/boost/'],
        names=['python'],
        extensions=['.hpp'])
but I can't seem to find where the additional '/boost/' component is being
attached, if at all, as indicated in the error message when the setup.py path
search executes. And my Boost header is one level up,
at '/usr/include/boost/python.hpp'. I tried removing the subpath line
completely; the error then repeats with '/usr/include/python.hpp',
suggesting the path is not being modified further.
Running on F12 with Boost 1.39.0, CUDA 3.1, gcc 4.4.4. TIA, Vince.
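Putting the quoted check together: judging from the spaced-out path in the error message, verify_path concatenates path + subpath + name + extension, so with BOOST_INC_DIR = ['/usr/include/boost'] it probes '/usr/include/boost/boost/python.hpp', which does not exist on this system; pointing BOOST_INC_DIR at '/usr/include' would make it probe '/usr/include/boost/python.hpp' instead. A sketch of that concatenation (helper name is mine):

```python
# Illustrative reconstruction of the probe that the setup.py snippet
# quoted above builds from the siteconf values.
def probed_path(inc_dir, subpath="/boost/", name="python", ext=".hpp"):
    return inc_dir + subpath + name + ext

print(probed_path("/usr/include/boost"))  # /usr/include/boost/boost/python.hpp
print(probed_path("/usr/include"))        # /usr/include/boost/python.hpp
```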
--
Vince Fulco, CFA, CAIA
612.424.5477 (universal)
vfulco1(a)gmail.com
A posse ad esse non valet consequentia
“the possibility does not necessarily lead to materialization”
Hi,
I would like to make use of your sparse conjugate gradient solver to speed
up some Python code I am working on; however, I am having no end of trouble
getting it to work. Can somebody please explain to me exactly what I need to
do to get the conjugate gradient solver in PyCUDA working?
Regards,
David Reynolds