On Sat, 28 Jan 2012 18:21:29 -0500, Thomas Wiecki <Thomas_Wiecki(a)brown.edu> wrote:
> I am currently revisiting this but having some problems with the
> random number generator.
>
> generator.generators_per_block is 512 on my card, so I initialize 512
> generators, but I see that some of them don't produce random numbers
> when sampling from them. The problem shows up in subtle ways (mainly
> the distribution is not correct, or all the numbers are the same) once
> I sample from more than 300-350 generators (it is always the last ones
> that are affected), but everything is fine when using e.g. 128. So it
> seems I can only use a smaller number of generators than what the card
> says I should be able to use.
>
> Any idea on why that might be or how to investigate this further?
Mysterious. What generator is this using? XORWOW? Tomasz, any ideas?
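If it is the stock XORWOW generator from pycuda.curandom, a quick check along
these lines (just a sketch, assuming gen_uniform is what you end up calling)
might help narrow down whether the generator setup or the sampling code is at
fault:

import numpy as np
import pycuda.autoinit  # noqa: creates a context on the first device
from pycuda import curandom

# one generator object; generators_per_block reports how many curand states
# the implementation sets up per block
gen = curandom.XORWOWRandomNumberGenerator()
print("generators_per_block = %d" % gen.generators_per_block)

# draw a largish uniform sample; a stuck or mis-seeded generator tends to
# show up as a skewed mean/std or as many repeated values
sample = gen.gen_uniform((4096, 1024), np.float32).get()
print("mean %.4f  std %.4f  unique %d"
      % (sample.mean(), sample.std(), len(np.unique(sample))))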
Andreas
Hi All,
I have been fighting with the pycuda installation for several days now.
I have been programming in CUDA for several years, and now I want to give
pycuda a try. I did everything according to the instructions, but the
'make' step keeps complaining about CUDA_ROOT and nvcc...
kk@oktan:/usr/lib/python2.6/pycuda$ sudo -E echo $CUDA_ROOT
/usr/local/cuda
kk@oktan:/usr/lib/python2.6/pycuda$ sudo -E echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/cuda/bin
kk@oktan:/usr/lib/python2.6/pycuda$ nvcc
nvcc fatal : No input files specified; use option --help for more
information
kk@oktan:/usr/lib/python2.6/pycuda$ sudo -E make -j 4
ctags -R src || true
/usr/bin/python setup.py build
*** CUDA_ROOT not set, and nvcc not in path. Giving up.
Traceback (most recent call last):
  File "setup.py", line 232, in <module>
    main()
  File "setup.py", line 77, in main
    conf = get_config(get_config_schema())
  File "setup.py", line 29, in get_config_schema
    sys.exit(1)
NameError: global name 'sys' is not defined
make: *** [all] Error 1
I googled but found no solution to this.
Thanks for any advice... yes, I am a python newbie (but I do program in it
a bit).
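In case it helps to narrow this down: a tiny check like the one below (just a
sketch, saved as a hypothetical check_env.py) should show what environment the
build actually receives, since 'sudo -E echo $CUDA_ROOT' only prints my own
shell's value. I have also read that running
'python configure.py --cuda-root=/usr/local/cuda' first records the path in
siteconf.py so the build no longer depends on the environment, but I have not
verified that.

import os

# run this both plainly and via "sudo -E python check_env.py"; on many systems
# sudo's secure_path resets PATH even with -E, which would explain why nvcc is
# not found during the build
print("CUDA_ROOT = %s" % os.environ.get("CUDA_ROOT"))
print("PATH      = %s" % os.environ.get("PATH"))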
Cheers,
KK
Great, thanks!
Andreas
On Tue, 17 Jan 2012 08:07:19 +0100, Thomas Wiecki <Thomas_Wiecki(a)brown.edu> wrote:
> Not following that list, but I hope it got resolved.
>
> This works fine now for me. Thanks!
>
> Thomas
>
> On Tue, Jan 17, 2012 at 2:31 AM, Andreas Kloeckner
> <lists(a)informa.tiker.net> wrote:
> > On Sat, 14 Jan 2012 12:45:12 +0100, Thomas Wiecki <Thomas_Wiecki(a)brown.edu> wrote:
> >> On Fri, Jan 13, 2012 at 10:13 PM, Andreas Kloeckner
> >> <lists(a)informa.tiker.net> wrote:
> >> > You mean uintp (not uintp32), right? I've made that fix in compyte. Can
> >> > you please verify? (requires a submodule update, fixed in both PyOpenCL
> >> > and PyCUDA)
> >>
> >> Yes, that's a typo.
> >>
> >> > I was a bit unsure what C type to map this to, but decided in favor of
> >> > uintptr_t, even though that requires the user to have stdint.h included,
> >> > which none of the other types do. Hope that's ok, but I am open to
> >> > suggestions.
> >>
> >> The current fix doesn't work for me:
> >> CompileError: nvcc compilation of /tmp/tmp2ru5rp/kernel.cu failed
> >> [command: nvcc --cubin -arch sm_11
> >> -I/usr/local/lib/python2.7/dist-packages/pycuda-2011.2.2-py2.7-linux-i686.egg/pycuda/../include/pycuda
> >> kernel.cu]
> >> [stderr:
> >> kernel.cu(7): error: identifier "uintptr_t" is undefined
> >
> > Can you try again now? Sorry for the wait. If you follow the PyOpenCL
> > list, you'll know what held me up. :(
> >
> > Andreas
>
Hi,
Has anyone run into this error when building pycuda?
The following is the error description:
thread.obj : error LNK2019: unresolved external symbol
"void __cdecl pycudaboost::tss_cleanup_implemented(void)"
(?tss_cleanup_implemented@pycudaboost@@YAXXZ)
referenced in function
"void __cdecl pycudaboost::`anonymous namespace'::create_current_thread_tls_key(void)"
(?create_current_thread_tls_key@?A0x33f885c9@pycudaboost@@YAXXZ)
Waiting online for help...
Thanks.
CiCi
-----
A Chinese student
Further to this, it appears that the context lifetime management doesn't
properly tie the creation of a CUDA device context to its deletion. I'm
thinking of a scenario where a user can create multiple windows with OpenGL
viewports in them, each with its own OpenGL context and corresponding CUDA
context. Since PyCUDA is using a global stack to maintain these contexts,
creating windows and destroying them in arbitrary order will probably do
something funky.
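For what it's worth, the per-window handling I have in mind is roughly the
following (a sketch only; the callback names and window_id are made up, and it
assumes pycuda.gl.make_context plus explicit push/pop):

import pycuda.driver as cuda
import pycuda.gl

cuda.init()
dev = cuda.Device(0)
contexts = {}  # one GL-enabled CUDA context per window

def on_window_created(window_id):
    # the window's own GL context must be current here; make_context leaves
    # the new CUDA context on top of the global stack, so pop it right away
    ctx = pycuda.gl.make_context(dev)
    ctx.pop()
    contexts[window_id] = ctx

def on_window_draw(window_id):
    # make this window's context current only for the duration of its work
    ctx = contexts[window_id]
    ctx.push()
    try:
        pass  # launch kernels / touch mapped GL buffers here
    finally:
        cuda.Context.pop()

# teardown is the part that worries me: destroying windows in arbitrary order
# means the contexts do not come off the global stack in the order they went on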
-Mark
On Wed, Feb 29, 2012 at 11:28 AM, Mark Wiebe <mwwiebe(a)gmail.com> wrote:
> I'd like to create a VBO using OpenGL, then manipulate it in PyCUDA as a
> gpuarray. Is this possible? I looked a bit into how I would set the
> deallocation policy, which needs to use OpenGL calls to free the VBO, but
> gpuarray seems to hardcode CUDA. I would expect it to work like the NumPy
> ndarray.base object, which owns the memory used by the ndarray. Is this
> possible?
>
> Thanks,
> Mark
>
I'd like to create a VBO using OpenGL, then manipulate it in PyCUDA as a
gpuarray. Is this possible? I looked a bit into how I would set the
deallocation policy, which needs to use OpenGL calls to free the VBO, but
gpuarray seems to hardcode CUDA. I would expect it to work like the NumPy
ndarray.base object, which owns the memory used by the ndarray. Is this
possible?
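Roughly what I was hoping to write, to make the question concrete (a sketch;
vbo stands for a buffer id created elsewhere with the usual GL calls, and I am
assuming pycuda.gl's RegisteredBuffer/map interface with a GL-enabled context
already current):

import numpy as np
import pycuda.gl
from pycuda import gpuarray

reg_buf = pycuda.gl.RegisteredBuffer(vbo)   # register the GL VBO with CUDA

mapping = reg_buf.map()                     # map it for CUDA access
ptr, size = mapping.device_ptr_and_size()

# wrap the mapped pointer in a GPUArray; the array does not own the memory,
# so the mapping has to stay alive for as long as the array is in use
n = size // np.dtype(np.float32).itemsize
vbo_array = gpuarray.GPUArray((n,), np.float32, gpudata=ptr)

vbo_array.fill(1.0)                         # manipulate the VBO from CUDA

mapping.unmap()                             # hand the buffer back to OpenGL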
Thanks,
Mark
On Fri, 24 Feb 2012 11:36:56 -0600, Alexander Pourshalchi <spourshalchi(a)gmail.com> wrote:
> I was having trouble installing PyCUDA on my windows 7 machine. I am
> running Python (x,y) with numpy, matplotlib and pyqt4 already installed and
> I was hoping there was an install version to just add PyCUDA instead of
> installing all of the modules I want to use individually.
Have you tried using Christoph Gohlke's binaries?
http://www.lfd.uci.edu/~gohlke/pythonlibs/
Andreas
I was having trouble installing PyCUDA on my windows 7 machine. I am
running Python (x,y) with numpy, matplotlib and pyqt4 already installed and
I was hoping there was an install version to just add PyCUDA instead of
installing all of the modules I want to use individually.
Thanks,
Alexander Pourshalchi
Hi all,
This must be pretty obvious but I haven't been able to find the answer in
the documentation...
What is the preferred way to set the size of the shared memory of a
function using the prepare* interface?
I am using 2011.2.2 and receiving the following warning:
DeprecationWarning: setting the shared memory size in Function.prepare is
deprecated
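From poking around, my current guess (just a sketch, with a made-up kernel) is
that the size now goes to call time, as a shared_size keyword of prepared_call
along these lines, but I would like to confirm that this is the intended way:

import numpy as np
import pycuda.autoinit  # noqa: creates a context
from pycuda import gpuarray
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void scale(float *x, float factor)
{
    extern __shared__ float buf[];   // dynamic shared memory, sized at launch
    int i = threadIdx.x;
    buf[i] = x[i] * factor;
    __syncthreads();
    x[i] = buf[i];
}
""")

func = mod.get_function("scale")
func.prepare("Pf")                   # argument types only; no shared size here

x_gpu = gpuarray.to_gpu(np.arange(256, dtype=np.float32))
func.prepared_call((1, 1), (256, 1, 1),                 # grid, block
                   x_gpu.gpudata, np.float32(2.0),
                   shared_size=256 * np.dtype(np.float32).itemsize)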
Thanks!
Jesse