Sam <rexx765(a)mailinator.com> writes:
I was wondering whether anyone has looked at the clBLAS and clFFT
libraries AMD recently open-sourced. Would it be possible to have these
inside PyOpenCL, or should they be a separate wrapper on top of it, like
pyfft? Also, are they vendor agnostic? The reason I ask is that I'm not
much good at OpenCL, but I'm okay at Python, and it would be awesome to
be able to do cl.math.matrix.multiply(A,B) and have it work. PyOpenCL is
awesome, but it's still difficult to do work like training a neural net
on it. Currently most Python GPU neural-net libraries are written in/for
CUDA (Theano, cudamat, etc.); this is probably because of the lack of a
good BLAS for OpenCL. So is clBLAS a candidate for a Python wrapper?
I saw that they've released those libraries, and making them
accessible/easily usable with PyOpenCL arrays would of course be super
cool. My initial inclination would be to have a separate wrapper for
them--that's one reason why I introduced the interoperability features.
But some manner of integration would of course be desirable, and if
that turns out to be doable only from within PyOpenCL, then I'm
willing to change my opinion. I'd also be more than happy to
support whoever takes on the project of making that wrapper.
To answer your second question, I'm not sure how vendor-agnostic they
are; I haven't played with them. Given where they're from, my
expectation would be that they run very well on AMD hardware and perhaps
at least tolerably on Nvidia.