Hi,
I'm trying to access GL textures from pyopencl. Here is my test program:
import sys, os, pygame
from OpenGL.GL import *
sys.path.append("extern/pyopencl/build/lib.linux-x86_64-2.6")
import pyopencl
pygame.init()
screen = pygame.display.set_mode((1024, 768),
        pygame.HWSURFACE | pygame.OPENGL | pygame.DOUBLEBUF)
if pyopencl.have_gl():
    context = pyopencl.create_some_context()
tex = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, tex)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_RGBA,
        GL_UNSIGNED_BYTE, None)
cltex = pyopencl.GLTexture(context, pyopencl.mem_flags.READ_ONLY,
        GL_TEXTURE_2D, 0, tex, 2)
It fails with error:
Traceback (most recent call last):
  File "cl.py", line 14, in <module>
    cltex = pyopencl.GLTexture(context, pyopencl.mem_flags.READ_ONLY,
        GL_TEXTURE_2D, 0, tex, 2)
pyopencl.LogicError: clCreateFromGLTexture2D failed: invalid context
I thought that the problem might be in pyopencl's context creation,
which doesn't take the GL context into account. I tried to fix it by
adding appropriate CL_GL_CONTEXT_KHR, CL_GLX_DISPLAY_KHR and
CL_CONTEXT_PLATFORM props to the context, but then I got another error
"pyopencl.LogicError: clCreateFromGLTexture2D failed: invalid value". I
can run kernels just fine with my setup, but this GL stuff won't work.
What am I doing wrong?
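For reference, the property list those three keys form is just a zero-terminated sequence of (key, value) pairs. Here is a plain-Python sketch of it (the enum values are copied from the Khronos cl.h/cl_gl.h headers; the function name is mine, for illustration only):

```python
# Sketch of the GL-sharing context properties discussed above.
# Enum values taken from the Khronos cl.h / cl_gl.h headers.
CL_CONTEXT_PLATFORM = 0x1084
CL_GL_CONTEXT_KHR = 0x2008
CL_GLX_DISPLAY_KHR = 0x200A

def glx_sharing_props(platform_id, glx_context, glx_display):
    # clCreateContext expects a zero-terminated list of (key, value) pairs.
    return [
        CL_CONTEXT_PLATFORM, platform_id,
        CL_GL_CONTEXT_KHR, glx_context,
        CL_GLX_DISPLAY_KHR, glx_display,
        0,
    ]
```

Recent pyopencl versions also provide pyopencl.tools.get_gl_sharing_context_properties(), which builds the pyopencl-level equivalent of this list for the running platform, so hand-rolling the values is normally unnecessary.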
Hi Olivier,
First of all, sorry for the long delay in replying, and thanks for your
message. Also, please note that I have cc'd the pyopencl mailing list--I
hope that's ok with you.
On Wed, 26 Jan 2011 11:29:37 +0100, Olivier Chafik <olivier.chafik(a)gmail.com> wrote:
> I'm Olivier Chafik, the author of JavaCL / ScalaCL / OpenCL4Java (
> http://javacl.googlecode.com/, http://scalacl.googlecode.com/)
>
> [snip]
>
> Now that we have bindings for many languages and technologies, I feel like
> some amount of convergence wouldn't hurt. Convergence of APIs on the Java
> side has briefly been raised in the past, but that's precisely where everyone
> tries to distinguish himself from his neighbour, and convergence of APIs
> between languages just does not make sense, so I was thinking more about the
> OpenCL kernel side.
>
> Long story short, the latest JavaCL snapshots support #include
> directives in the hosted code that refer to files in the Java classpath, and
> even support the syntax ((CLProgram)someProgram).addInclude("
> http://path.to/some/include/base") to resolve remote includes.
> This is done through basic #include parsing in the sources, eager include
> resolution + caching + adding -I compiler arguments.
>
> You can see a demo of these includes in my latest Image Transform Kernels
> Editor:
> http://code.google.com/p/javacl/wiki/SamplesAndDemos#Image_Transform_Kernel…
>
> The result is that this mechanism allows for cleaner composition and
> reusability of OpenCL code. From a Java deployment perspective, adding an
> OpenCL dependency amounts to adding a Java dependency that only contains OpenCL
> files in its resources. All this still being standard (pure OpenCL, with
> relative file includes) and language-agnostic.
>
> I've started grouping some OpenCL code in "LibCL", a JavaCL sub-project
> that's 'includable' by any JavaCL program:
> http://code.google.com/p/nativelibs4java/source/browse/trunk/libraries/Open…
>
> I'd be thrilled if you accepted to join forces to help build a standard
> OpenCL on that model (hehe, and I didn't say there was anything good to keep
> from this first attempt, it's just an embryonic bunch of quick refactors +
> dirty naive ports from BSD code).
>
> Now of course I thought, why stop at this productivity gain? We could add
> template engines that would make it possible to create meta programs and
> work around the C / C preprocessor limitations (I saw you had similar
> discussions on your mailing list recently and you used Mako).
> This is where it becomes dangerous, because choosing a template engine makes
> people write code that's not OpenCL anymore, and won't work across multiple
> bindings libraries (including the raw C bindings). I believe OpenCL is not
> big enough yet, we still have a chance to avoid such fragmentation on the
> kernel source front... Now that said, I don't know what templating engines
> are available in Python, Java and others, so this might just not be possible
> at all... (if *only* Apple could have chosen C++ over C!)
>
> Anyway, sorry for this lengthy email... Please let me know what you think
> about this if you have a minute or two :-)
A collaboration like this certainly sounds like a good idea.
If we insist that such a collaboration centers around shared code, that
would depend on a common templating technology.
According to
https://secure.wikimedia.org/wikipedia/en/wiki/Template_engine_(web)
that limits us to StringTemplate [1] and Mustache [2,3]. If you know of
any other candidates, please point them out. Unfortunately, Mustache's
absence of actual logic makes many things that are simple in Mako (such
as unrolling a loop based on a count) rather difficult; StringTemplate
less so.
[1] http://www.stringtemplate.org/
[2] http://mustache.github.com/
[3] http://mustache.github.com/mustache.5.html
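To make the "unrolling a loop based on a count" case concrete: in Mako it is a '% for' block in the kernel template. The sketch below emulates what such a template renders to, in plain Python, so it needs no templating engine; the kernel fragment itself is made up for illustration:

```python
def unroll_accumulate(n):
    # What a Mako template like
    #   % for i in range(n):
    #   acc += src[gid * ${n} + ${i}];
    #   % endfor
    # would render for a given n, emitted with plain string formatting.
    return "\n".join(
        "acc += src[gid * %d + %d];" % (n, i) for i in range(n))
```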
In any case, since potentially significant bits of (non-shared) host
code need to be written for each (shared) kernel, I think that explicit
kernel code sharing might be a bit too heavy-handed, and that we can
probably get by with occasional mutual 'manual' stealing.
All of pyopencl's code will show up in this directory:
http://git.tiker.net/pyopencl.git/tree/HEAD:/pyopencl
If you can point us to a permanent place where your code is available,
then I'd say a mutually beneficial future of theft is assured. :)
Thanks again for being in touch,
Andreas
Hi Riaan,
On Fri, 28 Jan 2011 14:49:10 +0200, Riaan van den Dool <riaanvddool(a)gmail.com> wrote:
> Would it be possible at all to create a version of PyOpenCL that does not
> link to and does not need any proprietary drivers? Maybe via a make option.
>
> I think it will help adoption of PyOpenCL if code written using
> pyopencl 'always runs', even if no GPU is available.
>
> If there is already such a version, please excuse my ignorance and point me
> in the right direction.
I'll answer with a collection of a few facts:
- It is possible to use OpenCL on the CPU only, by way of AMD's OpenCL
implementation.
http://developer.amd.com/gpu/AMDAPPSDK/Pages/default.aspx
This will work even if no GPU is available. (Apple's Snow Leopard and
later also always include a CPU implementation.)
- No open-source implementation of OpenCL currently exists, but there is
one in the works here:
http://cgit.freedesktop.org/mesa/clover/tree/
- Making PyOpenCL work even if no CL implementation is available is
explicitly not one of the project's goals. This would amount to
building a CL implementation, which would best be accomplished in a
separate project.
- The mailing list for PyOpenCL is pyopencl(a)tiker.net, not
pycuda(a)tiker.net.
HTH,
Andreas
Hi
I run pyopencl on 64-bit Linux 2.6.35 with the CUDA toolkit 3.2.
When trying to allocate Contexts from different processes with pyopencl
0.92beta, I got "out of host memory" errors on creating new Contexts,
and a segmentation fault on the running Context.
The same code with pyopencl 2011.1beta3 worked flawlessly, solving my
problem. Sorry if this issue is already known; I hope this helps others
avoid it.
Cheers,
Jonathan.
On 1/19/2011 9:19 AM, Andreas Kloeckner wrote:
> Hi Christoph,
> On Tue, 18 Jan 2011 22:34:58 -0800, Christoph Gohlke<cgohlke(a)uci.edu> wrote:
>> I'm still getting the compiler error:
>>
>> pyopencl-121e1d6\src\wrapper\wrap_cl.hpp(733) : error C2664:
>> 'std::vector<_Ty>
>> ::push_back' : cannot convert parameter 1 from 'HANDLE' to 'const
>> cl_context_properties&'
>> with
>> [
>> _Ty=cl_context_properties
>> ]
>> Reason: cannot convert from 'HANDLE' to 'const
>> cl_context_properties'
>> There is no context in which this conversion is possible
>
> Can you try the version that's in git now (which is basically what you
> proposed initially)? Thanks for your patience!
>
> Andreas
>
Thank you, that version works.
Christoph
I am using Windows 7/64, Python 2.6, VS2008, Boost 1.44, an AMD 5870, and I
cannot get the GL interop demo to work. I get either of the two following messages:
C:\Users\Keith\Desktop\pyopencl-0.92\examples>python gl_interop_demo.py
Traceback (most recent call last):
  File "gl_interop_demo.py", line 72, in <module>
    initialize()
  File "gl_interop_demo.py", line 33, in initialize
    ctx = cl.Context(properties=props)
pyopencl.LogicError: Context failed: invalid gl sharegroup reference number
C:\Users\Keith\Desktop\pyopencl-0.92\examples>python gl_interop_demo.py
Traceback (most recent call last):
  File "gl_interop_demo.py", line 72, in <module>
    initialize()
  File "gl_interop_demo.py", line 33, in initialize
    ctx = cl.Context(properties=props)
OverflowError: long int too large to convert to int
Anyone know what I am doing wrong?
--Keith
Hi all,
I've made a small, backward-compatible change to the CL Array API:
In places where the context can be inferred (such as when a queue is
passed at the same time), passing it is now deprecated and can simply
be omitted. This is obviously not only convenient, but also
logical. If this is a bad idea for some reason I can't see, now would be
a good time to let me know. :)
The code for this is in git as of a few minutes ago.
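The inference rule is simple enough to state as code. A toy model — class and function names here are illustrative, not pyopencl's internals:

```python
class Context:
    pass

class CommandQueue:
    # In pyopencl, a queue is created for a particular context and keeps
    # a reference to it; that reference is what makes inference possible.
    def __init__(self, context):
        self.context = context

def resolve_context(context=None, queue=None):
    # If no context is given, infer it from the queue; if both are
    # given, they must agree.
    if context is None:
        if queue is None:
            raise ValueError("need a context, or a queue to infer it from")
        return queue.context
    if queue is not None and queue.context is not context:
        raise ValueError("context and queue.context disagree")
    return context
```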
Andreas
Hi Andreas,
> -----------------------------------------------------------------------
>
> 1. Make it a 'hard' dependency, installed by setup.py.
>
> 2. Make it a 'soft' dependency--the reduction code will fail on import
> with a usable error message alerting the user that he needs to install
> Mako.
>
> 3. Try to eliminate the dependency on any particular templating engine
> by rewriting the code.
>
> -----------------------------------------------------------------------
>
I would prefer option 2.
Installing an additional package such as Mako is not a big deal if you
administer a single-user system. But if you run a cluster with many
users and a large number of requests for new software, or if you lack
administrative privileges and have to install things as a user,
additional dependencies make life harder.
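Option 2 usually amounts to little more than a guarded import. A generic sketch — the helper name and message wording are mine, not pyopencl's:

```python
def soft_import(modname, hint):
    # Import a module, turning a missing soft dependency into an
    # actionable error message instead of a bare ImportError later on.
    try:
        return __import__(modname)
    except ImportError:
        raise ImportError("'%s' is required for this feature; %s"
                          % (modname, hint))

# e.g.: mako = soft_import("mako", "install it with 'pip install Mako'")
```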
Happy New Year,
Jan
Hello.
I have just tried to compile PyOpenCL
90bf8d282c58fb9c3a14aa0ded537dbe4dd7a68f
from 2011-01-18, and it fails with this error:
gcc -pthread -fno-strict-aliasing -fwrapv -Wall -O3 -DNDEBUG -g -O2
-fPIC -DPYGPU_PACKAGE=pyopencl -DHAVE_GL=1 -Isrc/cpp
-I/usr/include/nvidia-current
-I/usr/lib/pymodules/python2.5/numpy/core/include
-I/usr/include/python2.5 -c src/wrapper/wrap_cl.cpp -o
build/temp.linux-x86_64-2.5/src/wrapper/wrap_cl.o
In file included from src/wrapper/wrap_cl.cpp:1:
src/wrapper/wrap_cl.hpp: In function ‘boost::python::handle<_object>
pyopencl::get_mem_obj_host_array(boost::python::api::object,
boost::python::api::object, boost::python::api::object,
boost::python::api::object)’:
src/wrapper/wrap_cl.hpp:3117: warning: comparison between signed and
unsigned integer expressions
src/wrapper/wrap_cl.cpp: In function ‘void init_module__cl()’:
src/wrapper/wrap_cl.cpp:21: error: ‘pyopencl_expose_mempool’ was not
declared in this scope
error: command 'gcc' failed with exit status 1
The attached patch fixes this problem.
--
Tomasz Rybak <bogomips(a)post.pl> GPG/PGP key ID: 2AD5 9860
Fingerprint A481 824E 7DD3 9C0E C40A 488E C654 FB33 2AD5 9860
http://member.acm.org/~tomaszrybak
Hi Bartek,
On 01/10/11 19:27, Bartek Kochan wrote:
> Could you provide full platform details, please?
We have QS22 blades running Fedora Core 9. IBM provides an OpenCL SDK
(technology preview). I used IBM XL C for OpenCL, V0.2 (technology
preview), Version 00.02.0000.0001, and Python 2.5.1.
> Are there a lot of modifications you have to apply to compile it?
I had to specify the name of the OpenCL library:
./configure.py --cl-libname=IBMOpenCL --cl-inc-dir=/usr/include
That was all I had to do.
Jan