Mike,
Mike McFarlane <mike.mcfarlane(a)iproov.com> writes:
>> I'm using Numpy 1.4.1. Thanks for your help.
First, please keep the list cc'd. That makes answers searchable.
Second, could you try upgrading that? You don't even need to mess with
your system to do so--just use a virtualenv.
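A rough sketch of what I mean (paths and version numbers are only
illustrative; PyCUDA then needs to be installed into that environment as
well):

$ virtualenv ~/pycuda-env
$ source ~/pycuda-env/bin/activate
$ pip install --upgrade numpy
$ pip install pycuda

Then run the tests from inside the activated environment.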
Andreas
Mike McFarlane <mike.mcfarlane(a)iproov.com> writes:
> Hi
>
> I've installed pycuda following
> http://wiki.tiker.net/PyCuda/Installation/Linux
>
> When I try to run test/test_driver.py it fails many tests, mainly with
> 'TypeError: 'numpy.ndarray' does not have the buffer interface'. The output
> is below for test_driver.py and the initial make.
>
> Can anyone explain what is wrong please?
>
> Thanks, and apologies if this mailing list isn't the right place for such
> Qs.
It is. What's your version of numpy?
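You can check it from a Python prompt:

>>> import numpy
>>> numpy.__version__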
Andreas
Daniel Pineo <daniel(a)pineo.net> writes:
> The instructions to build and install PyCUDA on Windows have been removed
> from the wiki. There was a very detailed set of instructions that I had
> previously used to successfully install. Can these still be found
> somewhere? Is there a way to restore them to the wiki?
I try to catch vandalism of this nature and undo it right away,
but this one seems to have slipped through the cracks. Sorry about that;
it should be back up now.
Andreas
Eric Larson wrote:
> Hey Kevin,
>
> Not sure about the CUDA limitations; I'll let others speak to that...
>
> But in developing the mne-python CUDA filtering code, IIRC the primary
> limitation was (by far) transferring the data to and from the GPU. The FFT
> computations themselves were a fraction of the total time. I suspect using
> multiple jobs won't help CUDA filtering very much since the jobs would
> presumably compete for the same memory bandwidth, but I would love to be
> wrong about this. If it works better, it would be great to open an
> mne-python issue for it, as we are always looking for speedups :)
>
> Cheers,
> Eric
> On Nov 1, 2014 7:21 PM, "kjs" <bfb(a)riseup.net> wrote:
>
>> Hello,
>>
>> I have written an MPI routine in Python that sends jobs to N worker
>> processes. The root process handles file IO and the workers do
>> computation. In the worker processes, calls are made to the CUDA-enabled
>> GPU to do FFTs.
>>
>> Is it safe to have N processes potentially making calls to the same GPU
>> at the same time? I have not made any amendments to the cuda code[0],
>> and have little knowledge of what could possibly go wrong.
>>
>> Thanks much,
>> Kevin
>>
>> [0] I am using python-mne with cuda enabled to call scikits.cuda.fft
>> https://github.com/mne-tools/mne-python/blob/master/mne/cuda.py
>>
Thanks Andreas, this is good to know. I noticed that even though pycuda
is currently only using one of the two GPUs, that GPU is only ever at ~35%
memory and ~22% processing utilization. This could be related to Eric's
observation that the PCIe x16 bus bandwidth reaches capacity while the
GPU is pushing out FFT'ed arrays quickly, which would leave room for only
one or two arrays on the GPU at the same time.
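To get a rough feel for where the time goes I could time the transfers
against a simple on-device computation, along these lines (just a sketch;
the array size is arbitrary and the element-wise operation only stands in
for the FFT):

import time
import numpy as np
import pycuda.autoinit   # creates a context on the default device
import pycuda.driver as drv
import pycuda.gpuarray as gpuarray

a = np.random.randn(4 * 1024 * 1024).astype(np.float32)

t0 = time.time()
a_gpu = gpuarray.to_gpu(a)          # host -> device copy
drv.Context.synchronize()
t_htod = time.time() - t0

t0 = time.time()
b_gpu = 2 * a_gpu + 1               # stand-in for the on-device FFT
drv.Context.synchronize()
t_compute = time.time() - t0

t0 = time.time()
b = b_gpu.get()                     # device -> host copy
t_dtoh = time.time() - t0

print("htod %.4f s  compute %.4f s  dtoh %.4f s" % (t_htod, t_compute, t_dtoh))

If the copies dominate, that would fit the bandwidth explanation.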
From what I have seen, using CUDA speeds up my FFTs roughly 2x, though the
workers also do many other computations on the CPU. The worst case is
that all N workers try to send data to the GPU at the same time.
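For what it's worth, my understanding is that the safe pattern is for each
worker process to create and tear down its own context, roughly like this
(a sketch, not the actual mne-python code; device 0 and run_ffts are
placeholders):

import pycuda.driver as cuda

def worker(task):
    # Each worker process owns its own context on the (shared) device.
    cuda.init()
    ctx = cuda.Device(0).make_context()
    try:
        return run_ffts(task)   # placeholder for the per-worker GPU work
    finally:
        ctx.pop()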
-Kevin
On Sat, 2014-11-01 at 17:01 -0700, Eric Larson wrote:
> Before I left work on Friday I checked to ensure that the packages in
> Synaptic were all NVIDIA version 331 (I think). Even if the numbers
> are right in Synaptic, I suppose there could still be a version
> mismatch somewhere, so I'll check "dmesg" specifically on Monday.
>
>
> FWIW I am pretty sure that the CUDA version jumped from 5.5 (or
> 5.0?) in 14.04 to CUDA 6.0 in 14.10. But I assume that PyCUDA is
> designed (and tested) to be forward-compatible, so I doubt that's the
> problem.
It depends on how Ubuntu builds PyCUDA.
Check the pycuda package dependencies:
$ apt-cache show python-pycuda
and see whether there is a dependency on any package
with 5.5 in its name, or only on 6.0 packages.
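You can also ask PyCUDA itself which CUDA version it was built against
(assuming the module still imports):

>>> import pycuda.driver as drv
>>> drv.get_version()

and compare that with the driver that 14.10 installed.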
Regards
--
Tomasz Rybak GPG/PGP key ID: 2AD5 9860
Fingerprint A481 824E 7DD3 9C0E C40A 488E C654 FB33 2AD5 9860
http://member.acm.org/~tomaszrybak
Eric Larson <larson.eric.d(a)gmail.com> writes:
> PyCUDA worked perfectly on Ubuntu 14.04, but after upgrade to 14.10 I get
> the following in both Python 2 and Python 3:
>
>>>> import pycuda.autoinit
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File
> "/home/larsoner/.local/lib/python2.7/site-packages/pycuda-2014.1-py2.7-linux-x86_64.egg/pycuda/autoinit.py",
> line 4, in <module>
> cuda.init()
> pycuda._driver.Error: cuInit failed: unknown
>
> This is the same on latest `master` (shown above) and using the version in
> the Ubuntu repos. I can manually compile and run at least this example
> (devicequery):
>
> http://www.cac.cornell.edu/vw/gpu/example_submit.aspx
>
> and my system is using the proprietary NVIDIA drivers, so CUDA appears to
> be configured properly. Has anyone else experienced this? I've reported the
> bug here as well:
>
> https://bugs.launchpad.net/ubuntu/+source/pycuda/+bug/1388217
Check whether 'dmesg' reveals anything. You might have a mismatch between
libcuda.so and your driver. (If so, good job, Ubuntu.)
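For example (assuming the proprietary driver is loaded):

$ dmesg | grep -i nvidia
$ cat /proc/driver/nvidia/version
$ ldconfig -p | grep libcuda

If the driver version reported in /proc and the libcuda.so on your library
path disagree, that is your mismatch.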
Andreas