> So, I was wondering if in "normal" conditions (I'm using the same
> context and just one stream for both launches) f2 is launched only
> after f1 has ended
Yes — launches issued to the same stream execute in order on the device.
Note, though, that the f2 launch can be *queued* before f1 is finished:
kernel launches are asynchronous with respect to the host, so the Python
call returns immediately.
Tomi Pieviläinen
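To see the in-order guarantee and host-side asynchrony together, here is a minimal, self-contained sketch (it assumes a working CUDA device and PyCUDA; the trivial kernel bodies are invented for illustration, only the f1/f2 names are carried over from the original snippet). f2 reads what f1 wrote, and an explicit synchronize blocks the host until both kernels are done:

```python
# Sketch only: requires a CUDA-capable device and the pycuda package.
import numpy as np
import pycuda.autoinit          # creates a context on the default device
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# Hypothetical kernels standing in for the original f1/f2.
mod = SourceModule("""
__global__ void f1(float *res1)
{ res1[threadIdx.x] = (float)threadIdx.x; }

__global__ void f2(float *res1, float *res2)
{ res2[threadIdx.x] = 2.0f * res1[threadIdx.x]; }
""")
f1 = mod.get_function("f1")
f2 = mod.get_function("f2")

n = 16
res1 = drv.mem_alloc(n * np.float32().nbytes)  # device-only intermediate
res2 = np.zeros(n, dtype=np.float32)

# Both launches go to the same (default) stream, so the device runs
# f1 to completion before starting f2 -- even though both Python calls
# return right away.
f1(res1, block=(n, 1, 1), grid=(1, 1))
f2(res1, drv.Out(res2), block=(n, 1, 1), grid=(1, 1))

# Block the host until all queued work has finished before using res2.
drv.Context.synchronize()
print(res2)   # each element is 2 * its index
```

The `drv.Out(res2)` wrapper also copies the result back after the launch, so in practice the copy itself already serializes against f2; the explicit `Context.synchronize()` just makes the ordering point unmistakable.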
Hi there. I have two kernels; the second must be launched only after the
first has finished its computations, as it uses the results computed by
the first.
My code looks something like:
----- Something ------
from pycuda.compiler import SourceModule

src = SourceModule(cudaCode)
f1 = src.get_function("f1")
f2 = src.get_function("f2")
f1(args1, res1, block=..., grid=...)
f2(args2, res2, block=..., grid=...)
---- Something else -----
where res1 is contained in args2
So, I was wondering whether, in "normal" conditions (I'm using the same
context and just one stream for both launches), f2 is launched only
after f1 has ended, or whether it's possible (maybe because of the CUDA
scheduler) that it's launched before f1 is done, in which case I should
look for a way to prevent this.
Thanks in advance, Leandro Demarco.
I have installed PyCUDA on my campus cluster under my local directory ~/.local/lib/python2.7/python/site-packages/pycuda-2012.1-py2.7-linux-x86_64
How can other users use this package? How should the environment be set up for this?
By the way, we use Ubuntu 12.04 LTS.
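One common approach (a sketch; the /shared path below is hypothetical, and it assumes the install directory is readable by other users, e.g. after copying or symlinking it out of your home directory) is to point everyone's PYTHONPATH at the shared location:

```shell
# Hypothetical shared location -- adjust to wherever the PyCUDA
# install actually lives on your cluster.
PYCUDA_DIR=/shared/apps/pycuda-2012.1-py2.7-linux-x86_64

# Prepend it to PYTHONPATH so 'import pycuda' finds this install.
export PYTHONPATH="$PYCUDA_DIR${PYTHONPATH:+:$PYTHONPATH}"

echo "$PYTHONPATH"
```

Placing the two lines in a script under /etc/profile.d/ (or in each user's ~/.bashrc) makes the setting apply at login for everyone.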