Actually, I was wrong about GPU emulators not being available to the general
public. It turns out that NV does actually provide an emulator for CUDA.
It's still slow as mud, but it's at least a possibility.
On Mon, Jan 18, 2010 at 7:01 PM, David Garcia <david.rigel(a)gmail.com> wrote:
Short answer: Use a real GPU.
Real GPU emulators running on a CPU are /slow/. Think orders of magnitude
slower than the real thing. And this is assuming you had access to a real
GPU emulator, which is unlikely unless you work for AMD/NVidia/Intel/etc.
If you use anything other than a proper GPU emulator, then the performance
metrics will not have anything to do with the real performance you would get
on a GPU. You may as well throw dice to decide which algorithm is faster.
Finally, the best implementation of an algorithm on one GPU may not be the
best on a different model of GPU, especially if they come from different
vendors.
On Mon, Jan 18, 2010 at 6:43 AM, Neal Becker <ndbecker2(a)gmail.com> wrote:
I'm interested in doing some investigation into what performance I could
obtain by implementing various signal processing algorithms on a GPU. Any
suggestion on an environment I could set up to try this? I'm thinking of a
simulation environment running under Linux, without using a real GPU.
PyOpenCL mailing list