I think it might be simpler than that.
There are two items to take care of in the dict:
1. We have a keys list, [("key1", offset_index), ...]. Here offset_index stands in for the pointer: it is the byte offset of the value.
2. The dict data itself is then laid out as one contiguous block of memory.
Since I plan to use it from every thread, it should be stored in memory common to all threads.
The reason for my query is that I am not quite sure how to access these
things once I have passed them down. But I can experiment and report back
if no one has done this before.
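To make the access question concrete, here is a host-side sketch (plain numpy, no pyopencl) of the lookup I imagine the kernel doing over the flattened buffers: a binary search over sorted, fixed-width key rows. The names (MAX_WORD, pad, lookup) and the 64-byte width are my own placeholders, not an existing API:

```python
# Host-side simulation of the kernel lookup over flattened buffers:
# key_arr holds sorted keys padded to a fixed width; offsets holds
# the byte offset of each value. MAX_WORD and all names are assumed.
import numpy as np

MAX_WORD = 64  # assumed fixed key width (largest expected word)

def pad(word):
    """Zero-pad a word to MAX_WORD bytes (what the kernel compares)."""
    buf = np.zeros(MAX_WORD, dtype=np.uint8)
    b = word.encode("utf-8")[:MAX_WORD]
    buf[: len(b)] = np.frombuffer(b, dtype=np.uint8)
    return buf

def lookup(key_arr, offsets, word):
    """Binary search the sorted key table; return value offset or -1."""
    target = tuple(pad(word))
    lo, hi = 0, len(key_arr)
    while lo < hi:
        mid = (lo + hi) // 2
        if tuple(key_arr[mid]) < target:
            lo = mid + 1
        else:
            hi = mid
    if lo < len(key_arr) and tuple(key_arr[lo]) == target:
        return int(offsets[lo])
    return -1

# Tiny example table: two sorted keys with value offsets 0 and 7.
key_arr = np.stack([pad("cat"), pad("dog")])
offsets = np.array([0, 7], dtype=np.int64)
```

Here lookup(key_arr, offsets, "dog") returns 7, and a missing word returns -1; the same loop should translate almost line for line into the kernel once the buffers are on the device.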
The other problem, not related to the dict but to NLP in general, is that it
uses a lot of strings, resulting in variable-length words, which the GPU does
not like. My plan is to allocate space for the largest possible word, viz. 64 chars.
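A sketch of the flattening I have in mind, using numpy on the host (pack_dict and MAX_WORD are placeholder names of mine, not an existing API): keys go into a fixed-width table, values into one packed blob, and an offsets array stands in for the pointers:

```python
# Flatten a Python dict into GPU-friendly arrays: a fixed-width key
# table, an offsets array (replacing pointers), and one value blob.
# pack_dict and MAX_WORD are placeholder names, not an existing API.
import numpy as np

MAX_WORD = 64  # space for the largest possible word

def pack_dict(d):
    keys = sorted(d)  # sorted so the device can binary-search later
    key_arr = np.zeros((len(keys), MAX_WORD), dtype=np.uint8)
    offsets = np.zeros(len(keys), dtype=np.int64)
    blob = bytearray()
    for i, k in enumerate(keys):
        kb = k.encode("utf-8")[:MAX_WORD]
        key_arr[i, : len(kb)] = np.frombuffer(kb, dtype=np.uint8)
        offsets[i] = len(blob)  # byte offset of this key's value
        blob += d[k].encode("utf-8") + b"\0"  # NUL-terminated value
    return key_arr, offsets, np.frombuffer(bytes(blob), dtype=np.uint8)

key_arr, offsets, values = pack_dict({"cat": "feline", "dog": "canine"})
```

The three results are plain numpy buffers, so they could be shipped to the device with ordinary pyopencl transfers; nothing on the device ever sees a Python object.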
On Monday, May 27, 2013, Andreas Kloeckner wrote:
>> Can someone please show me how to pass a dict down to the GPU and access
>> it with C code? I have a 100 MByte NLP dict I would like to access.
>
> That'll require a bit more work than just saying 'transfer this', for a
> number of reasons.
>
> - First, dicts (and generally most Python data structures) are very
>   pointer-heavy. But the GPU has a distinct memory space, and thus
>   pointers are invalidated when transferring.
> - Next, dicts rely on the Python runtime system, which is available on
>   the host but not on the CL device (GPU).
> - Finally, and most fundamentally, GPUs like data structures that are
>   compatible with data-parallel computing; dicts don't quite fit the bill.
>
> But you might be able to build something using this recently added
>
> Hope that helps,