Hi everybody,
I am new to PyCUDA. I just installed everything on Windows XP and, judging from the installation log, the install went through properly. However, when I try to run the test files provided with PyCUDA I get this error:
Traceback (most recent call last):
File "C:\PyCuda\test\test_gpuarray.py", line 2, in <module>
import pycuda.autoinit
File "C:\PyCuda\pycuda\autoinit.py", line 1, in <module>
import pycuda.driver as cuda
File "C:\PyCuda\pycuda\driver.py", line 1, in <module>
from _driver import *
ImportError: No module named _driver
How can I solve this?
Thanks, and sorry for the newbie question.
den3b
Is it now possible to use 64-bit CUDA on Mac OS X 10.6 with pycuda?
With CUDA 3.1, I can build and run the pycuda tests fine with a 32-bit Python
and Boost. The Nvidia SDK examples build as 32-bit by default and work. With
a 64-bit Python and Boost, the build succeeds, but almost all tests fail.
My siteconfig.py (with MacPorts Python 2.6, Boost 1.42, and pycuda from git)
and the output of 'python test_driver.py' are attached:
BOOST_INC_DIR = ['/opt/local/include']
BOOST_LIB_DIR = ['/opt/local/lib']
BOOST_COMPILER = 'gcc42'
BOOST_PYTHON_LIBNAME = ['boost_python-mt']
BOOST_THREAD_LIBNAME = ['boost_thread-mt']
CUDA_TRACE = False
CUDA_ROOT = '/usr/local/cuda/'
CUDA_ENABLE_GL = False
CUDADRV_LIB_DIR = []
CUDADRV_LIBNAME = ['cuda']
CXXFLAGS = []
LDFLAGS = ['-L/opt/local/Library/Frameworks/Python.framework/Versions/Current/lib']
CXXFLAGS = ['-arch', 'x86_64', '-m64']
LDFLAGS = ['-arch', 'x86_64', '-m64']
CXXFLAGS.extend(['-isysroot', '/Developer/SDKs/MacOSX10.6.sdk'])
LDFLAGS.extend(['-isysroot', '/Developer/SDKs/MacOSX10.6.sdk'])
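For reference, a quick generic check (not from the original post) to confirm that the interpreter you are building against really is 64-bit:

    # Hedged sketch: a generic bitness check, not part of pycuda itself.
    import platform
    import struct

    print(struct.calcsize("P") * 8)  # pointer size in bits: 64 for a 64-bit build
    print(platform.architecture())   # e.g. ('64bit', '')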
750W - the machine was built specifically as a "higher-end gaming
machine" (by a consumer-level PC supplier), though it was bought for the
office for physics simulations. i3 2.9GHz CPU, 4GB RAM, GTX 480 card,
no other hardware.
The machine has been open since Monday evening; I've been measuring
temperatures (48 degrees C at idle, 82 degrees C under 95% load for 30
minutes) and nothing seems out of the ordinary.
I can run the Nvidia SDK sample programs (e.g. particles, smoke,
waves), and as long as they start, the machine is fine. The problem is
when one of them freezes on start-up (just as my pyCUDA programs can
freeze on start-up). It happens whether the machine is freshly booted
or has been under stress for an hour; there's no obvious pattern to
when it will freeze.
I've switched back to my older 9800GT machine, which has the 197.45
drivers; I'll upgrade it to the latest drivers shortly to see whether
that causes instability.
Right now I'm trying to finish some logical-operator patches for pyCUDA
before I'm done with this client for the week.
i.
On 29 June 2010 14:40, Bryan Catanzaro <bryan.catanzaro(a)gmail.com> wrote:
> What kind of power supply do you have?
>
> - bryan
>
> On Jun 29, 2010, at 1:43 AM, Ian Ozsvald <ian(a)ianozsvald.com> wrote:
>
>> Does anyone here successfully run WinXP with a GTX 480 (or 470) and
>> CUDA 3.0/3.1?
>>
>> I upgraded from a 9800GT to the GTX 480 last week (with a whole new
>> machine) and the new machine is very unstable; I'm trying to identify
>> whether it is a hardware issue or the relatively recent drivers from
>> Nvidia. On Friday it hung twice; yesterday it hung 12 times.
>>
>> By 'hung' I mean that when I run the CUDA test programs (from the
>> SDK), about 1 start in 5 causes the machine to hang as the program
>> starts. With pyCUDA it seems to occur more frequently (though maybe
>> that's chance) - it happens with the test programs (e.g.
>> dump_properties.py, test_gpuarray.py) and with my mandelbrot.py demo.
>> All programs seem to cause the crash, but only at start-up; if one
>> runs OK, it keeps running fine.
>>
>> I'm hunting around on the web but I'm not finding any similar
>> problems. This occurs with the latest NVidia drivers (257.21) and CUDA
>> 3.0, also with CUDA 3.1 (in fact the machine seems less stable with
>> CUDA 3.1 - I'm about to downgrade to confirm this).
>>
>> Has anyone seen these kinds of symptoms before?
>>
>> Cheers,
>> Ian.
>>
>> --
>> Ian Ozsvald (A.I. researcher, screencaster)
>> ian(a)IanOzsvald.com
>>
>> http://IanOzsvald.com
>> http://morconsulting.com/
>> http://TheScreencastingHandbook.com
>> http://ProCasts.co.uk/examples.html
>> http://twitter.com/ianozsvald
>>
>
--
Ian Ozsvald (A.I. researcher, screencaster)
ian(a)IanOzsvald.com
http://IanOzsvald.com
http://morconsulting.com/
http://TheScreencastingHandbook.com
http://ProCasts.co.uk/examples.html
http://twitter.com/ianozsvald
Andreas, I'm attaching two patches.
0001 removes the #warning lines in cuda.hpp that make MSVC (2008 on WinXP) fail.
0002 adds GPUArray comparisons for == != < > <= >=.
Assuming you're happy with the patches, I can contribute an updated
Mandelbrot.py in which a reasonable speed-up is obtained using pure
Python/GPUArray (numpy-like) operators rather than having to implement
a pure .cu solution. This GPUArray version sits between the numpy (CPU)
speed-up and the pure .cu version. It'll make a good demo for
pure-Python folk (like my boss); a sketch of the style is below.
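For illustration, a minimal sketch of that style (hypothetical array contents; the comparison assumes patch 0002 is applied, and this is not the actual Mandelbrot.py):

    import numpy as np
    import pycuda.autoinit
    import pycuda.gpuarray as gpuarray

    # Elementwise arithmetic runs on the GPU, numpy-style; no .cu file.
    x = gpuarray.to_gpu(np.linspace(-2.0, 2.0, 1024).astype(np.float32))
    y = x * x + 0.25

    # A comparison like those patch 0002 adds yields a device-side mask.
    mask = y > 1.0
    print(mask.get().sum())  # count of elements that exceeded 1.0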
The above is tested on WinXP 32-bit with pyCUDA 0.94 RC (latest), CUDA 3.0
and a 9800GT. The original Mandelbrot solution was developed on my new
(but crashy) WinXP 32-bit box with pyCUDA 0.94 RC (latest), CUDA 3.1 and
a GTX 480.
i.
--
Ian Ozsvald (A.I. researcher, screencaster)
ian(a)IanOzsvald.com
http://IanOzsvald.com
http://morconsulting.com/
http://TheScreencastingHandbook.com
http://ProCasts.co.uk/examples.html
http://twitter.com/ianozsvald
A pre-alpha release of PyCULA, which provides support for CULA, the port of
LAPACK to CUDA, is available from:
http://math.temple.edu/research/geometry/PyCULA/
The main features are:
* ctypes/numpy bindings for the parts of LAPACK supported in CULA
* A PyCUDA/GPUArray interface for the CULA device functions
* Mixing PyCUDA kernel code with LAPACK calls
This is a preview with some rough edges, but it is already useful for
applications that require LAPACK functionality.
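For a flavour of the ctypes/numpy binding style, here is a hedged sketch against the standard CULA C entry points (culaInitialize/culaSgesv/culaShutdown); the library name/path is an assumption, and this is illustrative rather than PyCULA's actual interface:

    import ctypes
    import numpy as np

    cula = ctypes.cdll.LoadLibrary("libcula.so")  # assumed library name
    cula.culaInitialize()

    # Solve A x = b via the LAPACK-style single-precision gesv.
    n = 4
    a = np.asfortranarray(np.random.rand(n, n).astype(np.float32))
    b = np.asfortranarray(np.random.rand(n, 1).astype(np.float32))
    ipiv = np.zeros(n, dtype=np.int32)

    status = cula.culaSgesv(
        n, 1,
        a.ctypes.data_as(ctypes.POINTER(ctypes.c_float)), n,
        ipiv.ctypes.data_as(ctypes.POINTER(ctypes.c_int)),
        b.ctypes.data_as(ctypes.POINTER(ctypes.c_float)), n)
    assert status == 0  # 0 is culaNoError
    print(b)            # the solution x overwrites b, LAPACK-style

    cula.culaShutdown()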
--
Louis Theran
Research Assistant Professor
Math Department, Temple University
http://math.temple.edu/~theran/
+1.215.204.3974
(Resent without attachment - sorry)
Is it now possible to use 64-bit CUDA on Mac OS X 10.6 with pycuda?
With CUDA 3.1, I can build and run the pycuda tests fine with a 32-bit Python
and Boost. The Nvidia SDK examples build as 32-bit by default and work. With
a 64-bit Python and Boost, the build succeeds, but almost all tests fail.
My siteconfig.py (with MacPorts Python 2.6, Boost 1.42, and pycuda from git)
and the output of 'python test_driver.py' are in the prior email.
Does anyone here successfully run WinXP with a GTX 480 (or 470) and
CUDA 3.0/3.1?
I upgraded from a 9800GT to the GTX 480 last week (with a whole new
machine) and the new machine is very unstable; I'm trying to identify
whether it is a hardware issue or the relatively recent drivers from
Nvidia. On Friday it hung twice; yesterday it hung 12 times.
By 'hung' I mean that when I run the CUDA test programs (from the
SDK), about 1 start in 5 causes the machine to hang as the program
starts. With pyCUDA it seems to occur more frequently (though maybe
that's chance) - it happens with the test programs (e.g.
dump_properties.py, test_gpuarray.py) and with my mandelbrot.py demo.
All programs seem to cause the crash, but only at start-up; if one
runs OK, it keeps running fine.
I'm hunting around on the web but I'm not finding any similar
problems. This occurs with the latest NVidia drivers (257.21) and CUDA
3.0, also with CUDA 3.1 (in fact the machine seems less stable with
CUDA 3.1 - I'm about to downgrade to confirm this).
Has anyone seen these kinds of symptoms before?
Cheers,
Ian.
--
Ian Ozsvald (A.I. researcher, screencaster)
ian(a)IanOzsvald.com
http://IanOzsvald.com
http://morconsulting.com/
http://TheScreencastingHandbook.com
http://ProCasts.co.uk/examples.html
http://twitter.com/ianozsvald
This follows up on the recent report by Vu Nguyen that cuda.hpp breaks
with MSVC:
> c:\pycuda-0.94rc\src\cpp\cuda.hpp(32) :
> fatal error C1021: invalid preprocessor command 'warning'
> error: command '"C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"'
> failed with exit status 2
I've just moved to a new WinXP machine with a GTX 480 (woot!) card and
CUDA 3.0. Upon compilation I hit the same problem. The issue is that:
#if (CUDA_VERSION == 3000)
in cuda.hpp is now active (before, I had CUDA 2.3 IIRC), but MSVC
doesn't accept #warning directives; instead it needs:
#pragma warning
http://msdn.microsoft.com/en-us/library/2c8f766e%28VS.71%29.aspx
It looks like the best solution is to use a compiler-specific test:
#ifdef _MSC_VER
http://www.mobydisk.com/softdev/techinfo/cpptips.html#cpptip3
to switch between a "#pragma warning" block and the original "#warning" block.
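For illustration, a hedged sketch of what that test could look like (the message text here is made up; note that MSVC emits build-time text via #pragma message, so the real patch may differ in detail):

    #if (CUDA_VERSION == 3000)
      #ifdef _MSC_VER
        /* MSVC has no #warning; #pragma message prints a build note. */
        #pragma message("pycuda: compiling against CUDA 3.0")
      #else
        #warning "pycuda: compiling against CUDA 3.0"
      #endif
    #endif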
If anyone knows of a better solution, say so now; otherwise I'll make a
simple patch early next week.
i.
ps. I've also updated the "Windows Installation" wiki page with current
version info and dependency package names.
--
Ian Ozsvald (A.I. researcher, screencaster)
ian(a)IanOzsvald.com
http://IanOzsvald.com
http://morconsulting.com/
http://TheScreencastingHandbook.com
http://ProCasts.co.uk/examples.html
http://twitter.com/ianozsvald
Hi Lev,
Yes, we are doing this in my project. We had the same issue and found a
workaround. We have actually been writing to the mailing list about this
issue all weekend, and I think you will find some relevant code in my
postings (gerald) showing the exact workaround we are using. If you
can't find them I can repost, but they ought to be in the digest.
We are working to finish a Python module covering the entire CULA package,
including device wrappers, and we are using pyCUDA for much of the device
wrapper framework. I am hoping to have this ready soon; we are just
tightening up a few issues and writing the testing routines now.
Best of Luck,
Garrett Wright
Temple University ~ Mathematics