黄 瓒 <dreaming_hz(a)hotmail.com> writes:
@inducer<https://github.com/inducer> THANK YOU for providing PyCUDA.
Since cudaMalloc can be time-consuming, and it seems that even slicing would involve such an operation
in PyCUDA, are there any tricks to avoid frequent GPU memory operations in PyCUDA?
Slicing a GPUArray involves no allocations; it creates a view of the
existing device memory. Where allocations are genuinely needed, PyCUDA
includes a memory pool which can help avoid redundant allocation.
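For illustration, here is a minimal sketch of both points (a hedged example, not from the original thread; it assumes a working CUDA installation and is untested here):

```python
import numpy as np
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.gpuarray as gpuarray
from pycuda.tools import DeviceMemoryPool

# Allocations routed through the pool are recycled on free, so
# repeated same-sized allocations avoid hitting cudaMalloc each time.
pool = DeviceMemoryPool()

a = gpuarray.to_gpu(np.arange(1024, dtype=np.float32),
                    allocator=pool.allocate)

# Slicing creates a view onto a's device buffer:
# no new allocation, no device-to-device copy.
b = a[100:200]

# Temporaries in a loop reuse pooled memory instead of
# triggering a fresh cudaMalloc on every iteration.
for _ in range(10):
    tmp = gpuarray.empty(1024, np.float32, allocator=pool.allocate)
    # ... use tmp; its memory returns to the pool when freed ...
```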