Hi Neil,
On Monday 02 November 2009, Neil Pilgrim wrote:
> Just wondering if there is any scope for mapping between host and device
> data which is in vector formats? I couldn't see anything obvious in the
> documentation, but for example if you had (in OpenCL, from memory):
>
>     float4 v = (float4)(1, 2, 3, 4);
>
> then could this somehow be mapped/copied into host memory, or do all
> vector variables exist only in device memory?
You can easily mimic the vector types' memory layout in numpy. Just make sure
that the stride of the relevant dimension (see myarray.strides, which is given
in bytes) is the smallest one. The example you give:
> I ask since in python I have something like:
>
>     self.r = zeros((num_particles, 3))
should already work perfectly for a double3 (if such a thing existed in
vanilla CL).
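
To illustrate the stride check above, here is a minimal numpy sketch (the
array name and particle count are made up for the example): a C-contiguous
(N, 4) float32 array has the components of each vector adjacent in memory,
which matches the layout of an array of OpenCL float4 values, and the
last-axis stride is the smallest one.

```python
import numpy as np

num_particles = 1024  # hypothetical size
r = np.zeros((num_particles, 4), dtype=np.float32)

# strides are reported in bytes: 4 bytes between the components of one
# vector, 16 bytes between consecutive vectors -- float4-compatible
print(r.strides)        # (16, 4)
print(min(r.strides))   # 4, i.e. the last dimension has the smallest stride
```

If the smallest stride ended up on another axis (e.g. after a transpose),
copying to a contiguous array with np.ascontiguousarray would restore this
layout before transferring to the device.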
> Of course, this partly assumes that vector operations are even
> worthwhile at the current stage of development... any anecdotal or other
> evidence, anyone?
None, sorry. In CUDA, the answer depends on a number of factors, too.
Andreas