Hello all,
I finally bit the bullet and got radix sort working in PyOpenCL :)
It's also improved over the SDK example because it does keys and values,
mostly thanks to my advisor.
Additionally, this sort will handle an array of any size, as long as that size is a power
of 2. The shipped example does not allow arrays smaller than 32768, but
I've hooked up their naive scan to handle all smaller sizes.
https://github.com/enjalot/adventures_in_opencl/tree/master/experiments/rad…
All you really need are radix.py, RadixSort.cl and Scan_b.cl.
Some simple tests are at the bottom of radix.py.
I hammered this out because I need it for a project, so it's not all that clean,
and I didn't add support for sorting on keys only (although it wouldn't take
much to add that, and I intend to once I need the functionality).
Hopefully this helps someone else out there. I'll also be
porting it using my own OpenCL C++ wrappers to include in my fluid
simulation library at some point.
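If your array length is not a power of two, one option is to pad the keys up to the next power of two before sorting and slice the padding off afterwards. A rough sketch of that below; the sorter call at the end is hypothetical, the real entry point is whatever radix.py exposes:

import numpy as np

def pad_to_pow2(keys, values):
    # Pad integer keys (and matching values) up to the next power of two.
    # The padding keys get the maximum representable value, so an ascending
    # sort pushes them to the end; slice off the first n entries afterwards.
    n = len(keys)
    padded_n = 1 << max(n - 1, 0).bit_length()
    if padded_n == n:
        return keys, values, n
    keys_p = np.empty(padded_n, dtype=keys.dtype)
    keys_p.fill(np.iinfo(keys.dtype).max)
    keys_p[:n] = keys
    vals_p = np.zeros(padded_n, dtype=values.dtype)
    vals_p[:n] = values
    return keys_p, vals_p, n

# keys_p, vals_p, n = pad_to_pow2(keys, values)
# sorter.sort(keys_p, vals_p)   # hypothetical call; see radix.py for the real API
# sorted_keys, sorted_vals = keys_p[:n], vals_p[:n]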
I also began looking at AMD's radix sort from their SPH tutorial, but they use
local atomics, which are not supported on my 9600M.
--
Ian Johnson
http://enja.org
Hi,
I'm trying to access GL textures from pyopencl. Here is my test program:
import sys, os, pygame
from OpenGL.GL import *
sys.path.append("extern/pyopencl/build/lib.linux-x86_64-2.6")
import pyopencl

pygame.init()
screen = pygame.display.set_mode((1024, 768), pygame.HWSURFACE | pygame.OPENGL | pygame.DOUBLEBUF)

if pyopencl.have_gl():
    context = pyopencl.create_some_context()
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, None)
    cltex = pyopencl.GLTexture(context, pyopencl.mem_flags.READ_ONLY, GL_TEXTURE_2D, 0, tex, 2)
It fails with error:
Traceback (most recent call last):
File "cl.py", line 14, in <module>
cltex = pyopencl.GLTexture(context, pyopencl.mem_flags.READ_ONLY,
GL_TEXTURE_2D, 0, tex, 2)
pyopencl.LogicError: clCreateFromGLTexture2D failed: invalid context
I thought that the problem might be in pyopencl's context creation,
which doesn't take the GL context into account. I tried to fix it by
adding appropriate CL_GL_CONTEXT_KHR, CL_GLX_DISPLAY_KHR and
CL_CONTEXT_PLATFORM props to the context, but then I got another error
"pyopencl.LogicError: clCreateFromGLTexture2D failed: invalid value". I
can run kernels just fine with my setup, but this GL stuff won't work.
What am I doing wrong?
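For reference, the GL interop examples that ship with PyOpenCL build the context with pyopencl.tools.get_gl_sharing_context_properties() instead of hand-assembling the property list. A minimal sketch (untested on this exact setup):

import pyopencl
from pyopencl.tools import get_gl_sharing_context_properties

# The CL context has to be created after the GL context exists (pygame has
# set one up at this point), with the GL sharing properties passed in
# explicitly rather than using create_some_context().
platform = pyopencl.get_platforms()[0]
context = pyopencl.Context(
    properties=[(pyopencl.context_properties.PLATFORM, platform)]
               + get_gl_sharing_context_properties())

# ...then create the texture and wrap it as above:
# cltex = pyopencl.GLTexture(context, pyopencl.mem_flags.READ_ONLY,
#                            GL_TEXTURE_2D, 0, tex, 2)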
Hi
We have recently acquired an NVIDIA Tesla S2050 device for image processing
at work.
I have used PyOpenCL before on my laptop and know the basics. After
successfully installing PyOpenCL on the host server (which hosts the S2050),
I get the following result when trying to create a context:
Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyopencl as cl
>>> cl.create_some_context()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.6/dist-packages/pyopencl/__init__.py", line
327, in create_some_context
platforms = get_platforms()
pyopencl.LogicError: clGetPlatformIDs failed: invalid/unknown error code
As a sanity check I installed PyCUDA as well on the same machine and again I
get an error:
Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pycuda.autoinit
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File
"/usr/local/lib/python2.6/dist-packages/pycuda-0.94.2-py2.6-linux-x86_64.egg/pycuda/autoinit.py",
line 4, in <module>
cuda.init()
pycuda._driver.RuntimeError: cuInit failed: no device
Does anyone know what else I can try, other than installing the latest
NVIDIA development drivers and the latest CUDA toolkit (which I have already done)?
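In case it helps with suggestions, this is the quick check I can run on the box (a sketch; the /dev/nvidia* part assumes the usual Linux driver layout, where missing device nodes on a headless server are a common cause of "no device" errors):

import glob
import pyopencl as cl

# Headless boxes sometimes lack the /dev/nvidia* nodes until nvidia-smi
# or an init script creates them.
print "device nodes:", glob.glob("/dev/nvidia*")

try:
    for platform in cl.get_platforms():
        print platform.name, [dev.name for dev in platform.get_devices()]
except cl.LogicError as e:
    print "clGetPlatformIDs still failing:", e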
Riaan
Hi all,
I'm considering replacing all transfer functions in PyOpenCL with a
single new one, documented here:
http://documen.tician.de/pyopencl/runtime.html#pyopencl.enqueue_copy
The old transfer functions would continue to work for the foreseeable
future, but print deprecation warnings.
A prototype implementation of this is now in git. I'd be happy to hear
your comments.
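To give a rough idea of how it reads in practice (a sketch; see the linked documentation for the full set of overloads):

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.arange(16, dtype=np.float32)
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=a.nbytes)

# One entry point for both directions: destination first, then source.
cl.enqueue_copy(queue, buf, a)        # host -> device (was enqueue_write_buffer)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, buf)   # device -> host (was enqueue_read_buffer)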
Andreas
For the wrapping around the C++ core, I think I will use Rcpp,
which seems to make wrapping C(++) code quite easy.
It is not the R part I need pointers for, but rather which C++
files would be best to begin with.
For example, which files would be required if I wanted to put a vector
of integers on the GPU, do a simple calculation (kernel function) and
retrieve the results? That flow is sketched below in PyOpenCL terms.
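Here it is with the standard PyOpenCL API; the R wrapper would need to expose the same steps through the C++ layer:

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# 1. Put a vector of integers on the GPU.
host_vec = np.arange(1024, dtype=np.int32)
mf = cl.mem_flags
dev_vec = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=host_vec)

# 2. Build and run a simple kernel on it.
prg = cl.Program(ctx, """
    __kernel void double_it(__global int *a)
    {
        int gid = get_global_id(0);
        a[gid] = 2 * a[gid];
    }
    """).build()
prg.double_it(queue, host_vec.shape, None, dev_vec)

# 3. Retrieve the results.
result = np.empty_like(host_vec)
cl.enqueue_copy(queue, result, dev_vec)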
And of course if anyone wants to help out, they are very welcome.
Kind regards,
Willem
On Thu, May 26, 2011 at 10:44, Bernard Spies <bfspies(a)gmail.com> wrote:
> Hi Willem,
> One place to look at is perhaps: http://www.rseek.org/
> which searches all the R-documentation and mailing-list archives.
> That might be a good starting point.
> cheers,
> Bernard
>
> --
> Bernard Spies
> +27 (0) 84 580 5409
> bfspies (at) gmail.com
>
> On 26 May 2011 10:30, Riaan van den Dool <riaanvddool(a)gmail.com> wrote:
>>
>> Sounds interesting!
>>
>> Riaan
>>
>> On Wed, May 25, 2011 at 3:50 PM, Willem Ligtenberg
>> <willem.ligtenberg(a)openanalytics.eu> wrote:
>>>
>>> Hi,
>>>
>>> I want to bring the PyOpenCL goodness to another programming language
>>> (R).
>>> If I understand it correctly, PyOpenCL uses a small C++ layer before
>>> communicating with OpenCL.
>>> What I would like to do is change the Python wrapping around this C++
>>> layer into an R wrapping.
>>> Do you have any pointers on how best to proceed?
>>>
>>> Kind regards,
>>>
>>> Willem Ligtenberg
>>>
>>> _______________________________________________
>>> PyOpenCL mailing list
>>> PyOpenCL(a)tiker.net
>>> http://lists.tiker.net/listinfo/pyopencl
On Thu, 26 May 2011 12:03:51 +0200 (CEST), kevin(a)kbullmann.de wrote:
>
> I am running PyOpenCL 2011.1beta, the version from PyPI, on a Windows 7 x64 machine with the AMD APP toolkit.
> I'm experimenting with syncing 2 kernels in 2 queues from the same context and encountered the following behaviour:
>
> I get an event from enqueueing the first kernel into its queue. Using event.wait() I can wait for kernel1 to be finished before enqueueing kernel2; this works fine.
> Now I got the idea of testing roughly how long kernel1 takes to complete relative to the host program, and wrote something like this:
>
> while(event1.command_execution_status != pyopencl.event_info.COMMAND_EXECUTION_STATUS.COMPLETE):
> counter+=1
>
> This led to the program getting stuck in the while-loop. I am new to
> OpenCL and PyOpenCL, so maybe I am wrong about how to use the event
> object, but can somebody explain to me why this loop doesn't work?
I think this ought to work. What implementation? Can you send test code?
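Something self-contained along these lines would make it easy to check (a sketch, untested; note that it compares the status against pyopencl.command_execution_status.COMPLETE, i.e. the status value, rather than a pyopencl.event_info constant):

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(10**6).astype(np.float32)
mf = cl.mem_flags
buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=a)

prg = cl.Program(ctx, """
    __kernel void twice(__global float *a)
    { a[get_global_id(0)] = 2 * a[get_global_id(0)]; }
    """).build()

event1 = prg.twice(queue, a.shape, None, buf)
queue.flush()

# Busy-wait on the execution status; COMPLETE comes from
# pyopencl.command_execution_status (CL_COMPLETE), while pyopencl.event_info
# holds the clGetEventInfo parameter names, not status values.
counter = 0
while event1.command_execution_status != cl.command_execution_status.COMPLETE:
    counter += 1
print "spins before completion:", counter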
Andreas
I am running PyOpenCL 2011.1beta, the version from PyPI, on a Windows 7 x64 machine with the AMD APP toolkit.
I'm experimenting with syncing 2 kernels in 2 queues from the same context and encountered the following behaviour:
I get an event from enqueueing the first kernel into its queue. Using event.wait() I can wait for kernel1 to be finished before enqueueing kernel2; this works fine.
Now I got the idea of testing roughly how long kernel1 takes to complete relative to the host program, and wrote something like this:
while(event1.command_execution_status != pyopencl.event_info.COMMAND_EXECUTION_STATUS.COMPLETE):
counter+=1
This led to the program getting stuck in the while-loop.
I am new to OpenCL and PyOpenCL, so maybe I am wrong about how to use the event object, but can somebody explain to me why this loop doesn't work?
Hi,
I want to bring the PyOpenCL goodness to another programming language (R).
If I understand it correctly, PyOpenCL uses a small C++ layer before
communicating with OpenCL.
What I would like to do is change the Python wrapping around this C++
layer into an R wrapping.
Do you have any pointers on how best to proceed?
Kind regards,
Willem Ligtenberg
On Sun, May 22, 2011 at 11:33 AM, Nicolas Pinto <nicolas.pinto(a)gmail.com> wrote:
>> Also, is there a public issue tracker for PyCUDA and PyOpenCL? I
>> personally find keeping track of e-mail threads to be a nightmare. I
>> hate to be a pain but like everyone else I'm +1 on putting these
>> projects on GitHub.
>
> +1 on (re-)opening the issue tracker on github:
> https://github.com/inducer/pyopencl
> ;-)
> N
BTW, as near as I can tell there are across-the-board exception-reporting
problems with boost.python in 64-bit Python on OS X (both PyCUDA and
PyOpenCL). I am trying to reinstall 32-bit Python here so I can see
if the same problems occur with 32-bit...
OT, but: there are some other bug reports I'd like to make about
PyCUDA... an issue tracker would be nice for that ;) Here's an example:
passing scalar values (e.g. ints) to a PyCUDA kernel *only* works
right now when they are subclasses of numpy.number. This is extremely
annoying in practice.
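To illustrate, a sketch of what currently has to be done (the kernel itself is just a toy example):

import numpy as np
import pycuda.autoinit
import pycuda.driver as drv
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void scale(float *a, int n, float factor)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    if (i < n)
        a[i] = factor * a[i];
}
""")
scale = mod.get_function("scale")

a = np.ones(256, dtype=np.float32)

# Plain Python ints/floats are not accepted as kernel arguments right now;
# every scalar has to be wrapped in a numpy scalar type matching the C
# signature (numpy scalars are subclasses of numpy.number).
scale(drv.InOut(a), np.int32(a.size), np.float32(3.0),
      block=(256, 1, 1), grid=(1, 1))

print a[:4]   # [ 3.  3.  3.  3.]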
Also, PyCUDA git HEAD does not currently build in 64-bit mode on OS X
due to a linker error (surely having something to do with the fat CUDA
binaries released earlier this year; they used to be 32-bit only).
-W