Hi there!
I am currently chasing a very weird bug in my code: the following code
consistently crashes on Kepler-class GPUs (tested on a Tesla K40 and
on a GTX 780), but runs fine on my Fermi-class notebook GPU:
import numpy as np
import pycuda.autoinit
from pycuda import gpuarray
from pycuda.driver import Stream
from scikits.cuda.cublas import cublasSgemm
import scikits.cuda.autoinit
from scikits.cuda.misc import _global_cublas_handle as handle

for _ in range(3):  # the crash only shows up on the third pass
    n = 131
    s = slice(128, n)  # the last 3 rows of b
    X = gpuarray.to_gpu(np.random.randn(n, 2483).astype(np.float32))
    a = gpuarray.empty((X.shape[1], 3), dtype=np.float32)
    c = gpuarray.empty((a.shape[0], X.shape[1]), dtype=np.float32)
    b = gpuarray.empty_like(X)  # left uninitialized; only the call matters

    # column-major sgemm with swapped operands, i.e. row-major c = a.dot(b[s])
    m, n, k = a.shape[0], b[s].shape[1], a.shape[1]
    lda, ldb, ldc = m, k, m
    cublasSgemm(handle, 'n', 'n', m, n, k, 1.0, b[s].gpudata, lda,
                a.gpudata, ldb, 0.0, c.gpudata, ldc)

stream = Stream()
stream.synchronize()
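(In case the parameter gymnastics look odd: the swapped operands are the
usual trick for getting a row-major product out of column-major cuBLAS. A
tiny numpy illustration of the identity, with small stand-in shapes of my
own:

import numpy as np
# Column-major BLAS sees each row-major buffer as its transpose, so passing
# (b[s], a) computes b[s].T . a.T = (a . b[s]).T, and reading the result
# buffer back row-major undoes that final transpose.
a_h = np.random.randn(4, 3).astype(np.float32)  # stand-in for a
b_h = np.random.randn(3, 5).astype(np.float32)  # stand-in for b[s]
assert np.allclose(b_h.T.dot(a_h.T).T, a_h.dot(b_h))
)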
The errors I'm getting are:
Traceback (most recent call last):
  File "<stdin>", line 22, in <module>
pycuda._driver.LogicError: cuStreamSynchronize failed: invalid/unknown error code
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuStreamDestroy failed: invalid/unknown error code
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: invalid/unknown error code
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: invalid/unknown error code
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: invalid/unknown error code
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: invalid/unknown error code
The Stream business at the end of the code is only there to surface the
error (by forcing an error check); copies to or from the device would
trigger the same failure.
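In place of the Stream lines, either of these plain PyCUDA calls surfaces
it just the same:

import pycuda.driver as drv
drv.Context.synchronize()  # explicit context-wide synchronization
# or a device-to-host copy, which synchronizes implicitly:
c_host = c.get()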
The bug is extremely weird, especially since:
* the constants used seem to matter: if I change n to 132, the error goes
away, and if I change the 2nd dimension of X from 2483 to 100, it goes
away as well (see the parametrised sketch after this list)
* the order of the allocations matters: if I allocate 'b' before 'c', the
error goes away
* the for-loop is necessary, i.e. the error only occurs on the third pass
through the loop
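To make these knobs easy to toggle, here is a parametrised variant of the
repro (a sketch; the function name and defaults are mine, and it reuses
the imports and handle from above):

def trigger(n=131, width=2483, iterations=3):
    """Defaults crash on Kepler; e.g. trigger(n=132) or trigger(width=100)
    run fine, matching the observations above."""
    for _ in range(iterations):
        s = slice(128, n)
        X = gpuarray.to_gpu(np.random.randn(n, width).astype(np.float32))
        a = gpuarray.empty((X.shape[1], 3), dtype=np.float32)
        c = gpuarray.empty((a.shape[0], X.shape[1]), dtype=np.float32)
        b = gpuarray.empty_like(X)
        m, nn, k = a.shape[0], b[s].shape[1], a.shape[1]
        lda, ldb, ldc = m, k, m
        cublasSgemm(handle, 'n', 'n', m, nn, k, 1.0, b[s].gpudata, lda,
                    a.gpudata, ldb, 0.0, c.gpudata, ldc)
    Stream().synchronize()  # surface any asynchronous failure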
Still, the error seems to be completely reproducible across different
machines (tried on a machine running CentOS 6 with a K40, one running
Ubuntu 13.10 with a K40, and one running Xubuntu 14.04 with a GTX 780).
At this point, I am at a complete loss. I don't know whether the error is
caused by PyCUDA, cuBLAS, scikits.cuda (the latter seems least probable,
since its cublasSgemm wrapper is very straightforward), or something else
entirely. I'd appreciate any help or advice.
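For what it's worth, one way to take scikits.cuda out of the equation
would be to issue the identical call straight through ctypes (an untested
sketch; it assumes libcublas.so is on the loader path and uses the
documented cublasSgemm_v2 entry point with the same m, n, k, leading
dimensions and arrays as above):

import ctypes

libcublas = ctypes.cdll.LoadLibrary('libcublas.so')
CUBLAS_OP_N = 0  # cublasOperation_t value for 'n'
alpha = ctypes.c_float(1.0)
beta = ctypes.c_float(0.0)

# Same parameters as the scikits.cuda call above; a non-zero return status
# (0 == CUBLAS_STATUS_SUCCESS) or the same dead context afterwards would
# point at cuBLAS itself rather than the Python wrappers.
status = libcublas.cublasSgemm_v2(
    ctypes.c_void_p(handle), CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
    ctypes.byref(alpha),
    ctypes.c_void_p(int(b[s].gpudata)), lda,
    ctypes.c_void_p(int(a.gpudata)), ldb,
    ctypes.byref(beta),
    ctypes.c_void_p(int(c.gpudata)), ldc)
print(status)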
Cheers
Thomas