On Mon, 20 Jun 2011 09:40:02 -0400, Frédéric Bastien <nouiz@nouiz.org> wrote:
> compyte is not ready for public use. The file structure will change
> later, when we isolate the dependency on the Python library.
> See more comments inline in your post.
> 2011/6/18 Bogdan Opanchuk <mantihor@gmail.com>:
> > I finally have the time to contribute something to compyte, so I had a
> > look at its sources. As far as I understand, at the moment it has:
> > - sources for GPU platform-dependent memory operations (malloc()/free()/...)
> > - sources for the array class, which uses the abstract API of these operations
> > - some high-level Python code like scan.py with generalized kernels
> > So I have a few questions about this layout:
> > 1. It does not have its own setup script; is it supposed to be a part
> > of PyCuda/PyOpenCL and get compiled with them, or is it just a
> > temporary solution?
> Currently there is no good build system for this project, as you saw.
> What I have in mind is that it should compile/install itself rather
> than ask the other projects to do so. But this is not done yet. If you
> want and have the time to do it, that would be a good contribution.
I actually disagree with that--I don't particularly see why compyte
shouldn't use the surrounding project's (i.e. PyCUDA/PyOpenCL's)
distutils scripts. No need to waste time maintaining another build
infrastructure if the package is likely only installed as part of a
larger package anyway. (But see below.)
> > In the former case, the second question:
> > 2. Why was it decided to keep low-level memory operations in compyte?
> > They require platform-specific makefiles (and the one currently
> > committed to the repo is quite specific and belongs to Frederic, as I
> > understand from the paths inside). The only reason I can see is to
> > keep the memory operations API inside a single module, but in this case
> > we will have to copy specialized building code from the setup scripts of
> > PyCuda/PyOpenCL, which, I think, is a more serious violation of DRY.
> > The memory API is small and unlikely to change much; we can create
> > separate modules in PyCuda/PyOpenCL and pass pointers to memory
> > functions to compyte using capsules.
> compyte should be usable by other tools than PyCUDA and PyOpenCL.
If you care to make it so, sure--but I don't see the point. This would only
make sense if the package weren't Python-only--but I thought we decided
it would be. But if someone is using GPUs/CL compute devices with
Python, he/she's very likely (IMO) to be using one of PyCUDA or
PyOpenCL. What am I missing? Note again that this is separate from
being able to exchange data with other packages--but IMO it's sufficient
in this case to just throw a header file somewhere which enables
whichever outside package to access the data.
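To make the header-file route concrete: an outside package only needs the struct layout the header describes. A hypothetical sketch with ctypes -- every field name and the layout itself are invented for illustration, not compyte's actual struct:

```python
import ctypes

class GpuArrayView(ctypes.Structure):
    """Mirror of a hypothetical exported C struct describing the data.
    Any package that agrees on this layout can reinterpret a raw
    pointer to it without linking against compyte."""
    _fields_ = [
        ("data",    ctypes.c_void_p),               # device pointer
        ("nd",      ctypes.c_int),                  # number of dimensions
        ("shape",   ctypes.POINTER(ctypes.c_long)), # per-dimension sizes
        ("strides", ctypes.POINTER(ctypes.c_long)), # per-dimension strides
    ]

# The consumer never calls into compyte; it just reads the fields:
view = GpuArrayView(data=0, nd=2)
```

That is all "data exchange" requires; no shared build system or Python dependency is implied.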
> So it needs to have them. I suppose that Andreas will remove them from
> PyCUDA/PyOpenCL and call the compyte version when it is ready.
I personally used to see compyte as a common infrastructure from which
PyCUDA and PyOpenCL can derive actually working array classes. In any
case, array functionality will not be removed from either
package. Whatever you do with the existing array types will continue to
work. More functionality will become available with time.
> > 3. Moreover, we can export some simple memory API in each of
> > PyCuda/PyOpenCL (something like an opaque Buffer object and memory
> > functions that use it, like it's done in PyOpenCL) for people who want
> > some fine tuning and do not want to use our general ndarray-like
> > object. In fact, compyte developers are such people too. There can be
> > some problems, of course, if you are inclined to write the ndarray module
> > in C (is it really necessary?), but they are, of course, solvable.
> Someone who just wants a buffer could allocate a simple vector and use
> it as he wants. Do you see a problem with that? Did I miss something?
> They won't need to use the functions that we provide.
I actually do see value in providing something like Numpy's buffer
interface, but aimed at CL/CUDA.
> What problem do you see with having the base object and functions in C?
> One of the goals of compyte is to be usable by people who don't use Python.
Once again, you mean--be able to exchange data with non-Python packages?
That makes sense. Trying to provide an array object for any- and
everybody is a) too hard and b) not likely to work.
> So we need something in C. The first phase that I'm doing is to port the
> Theano code that is in C. But we don't plan to do everything in C. In fact,
> functionality not in Theano will probably be in Python-generated code
> to ease development.
Beyond data access and localized speed hotspots, I don't see much of a
need for C.
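For the record, "Python-generated code" here usually means building kernel source strings in Python (e.g. specialized per element type) and handing them to the runtime compiler, as PyCUDA/PyOpenCL users commonly do. A sketch -- the template and function names are illustrative:

```python
# Generate CUDA C source for an axpy kernel, specialized per C type.
AXPY_TEMPLATE = """
__global__ void axpy(%(ctype)s a, %(ctype)s *x, %(ctype)s *y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}
"""

def make_axpy_source(ctype="float"):
    """Return kernel source specialized for 'ctype' (e.g. "float",
    "double"); the string would then go to the runtime compiler."""
    return AXPY_TEMPLATE % {"ctype": ctype}

src = make_axpy_source("double")
```

Since the generation happens at runtime in Python, only the hotspot itself ends up in C.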