So all you've got is disk I/O and OpenCL calls. You only need 1 process and
On 17 Aug 2016 19:03, "Marcos Paulo Rocha" <markao01(a)gmail.com> wrote:
In an application I'm working on, the program is written in the form of a
graph. An example of part of one application is below:
[image: Inline image 1]
Each node of the graph is a different process. When a node receives all of
its inputs, it is ready to start. This way, concurrency occurs naturally
between some nodes of the graph.
One of the goals of the library I'm working on is to make it easy to
develop applications with the behavior shown in the image. The idea is
that the nodes CP_IN, CP_OUT, and EX_KERNEL can be used to abstract the
copy, kernel setup, and invocation away from the end user.
I hope I have now made my goals clear and that you can help me solve this
problem.
On Wed, Aug 17, 2016 at 1:35 PM, Andreas Kloeckner <
Marcos Paulo Rocha <markao01(a)gmail.com>
Thanks for the reply, Andreas.
Andreas, I need to access PyOpenCL objects in another process because I'm
working on a dataflow library and I would like to make copy and kernel
calls in parallel. I'm doing asynchronous copies, and the process
responsible for executing the kernel needs to receive the copy's event
object and the buffer to set as a kernel parameter. So is there another
way to achieve this behavior without using pickle?
I'm not sure what goal the different processes achieve here. To do
concurrent copy and kernel invocations, all you need is two different
command queues (from a single thread, even). Just submit the copies to one
and the kernel to the other. They'll run in parallel if the hardware is
capable of doing that.