Thanks again for helping us understand the philosophy behind your package. You are right that templated signatures might be the best approach; however, we ran into another issue.
One of our users here did a #include <pyublas/numpy.h> in a piece of code that he compiled into a C++ binary executable on unix. He then ran the binary and witnessed a core dump. The stack trace seemed to reveal that runtime calls were made to various python functionalities, which of course could not succeed since he was running a pure C++ program which has no knowledge of any python interpreter. Note that the above include is the only reference to pyublas, i.e. there was no attempt (yet) to create any numpy_vector array or anything of the sort in this program.
Again, I apologize for my sketchiness in describing the issue; I may get back to you with a more explicit description.
However, can you let me know if your package is meant to be used exclusively from python? As I said in my last post, we write C++ code that is called from python code, or that may provide services to other clients, not necessarily python ones. If it is the case that pyublas is inextricably linked to a python interpreter, would that mean that whenever we want to benefit from pyublas handle semantics from python, we would have to write some kind of interface layer, callable from python, which would call into pyublas-agnostic C++ code? Am I seeing this right? Also, if we are satisfied with value semantics, can we nevertheless expose our C++ functionality to python without any reference to pyublas on the C++ side? (I can try that out myself, but you probably know that anyway.)
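For what it's worth, the interface layer asked about above can be sketched roughly as follows. This is only a sketch, not a tested build: it assumes pyublas and Boost.Python are installed, and the function and module names (sum_elements, example_module) are made up for illustration.

```cpp
// Thin binding layer, compiled as a Python extension module.
#include <algorithm>
#include <cstddef>
#include <pyublas/numpy.h>
#include <boost/python.hpp>
#include <boost/numeric/ublas/vector.hpp>

namespace ublas = boost::numeric::ublas;

// Pure C++ "core" routine: knows only ublas, nothing about Python or
// pyublas. In a real project this would live in a separate library with
// no Python dependency, usable by non-python clients.
double sum_elements(ublas::vector<double> const &v)
{
    double s = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        s += v[i];
    return s;
}

// Wrapper: converts at the boundary, then calls the pyublas-agnostic core.
double sum_elements_wrapper(pyublas::numpy_vector<double> v)
{
    ublas::vector<double> plain(v.size());
    std::copy(v.begin(), v.end(), plain.begin());
    return sum_elements(plain);
}

BOOST_PYTHON_MODULE(example_module)
{
    boost::python::def("sum_elements", sum_elements_wrapper);
}
```

With a split like this, only the binding layer links against Python; other C++ clients would call sum_elements directly without ever seeing pyublas.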
thanks for your reply. v. helpful. You can compile the package with legacy
boost 1.34.1 and
an older version of gcc, although we had to comment out a conversion
operation for the sparse
container - we are not sure why, but are proceeding incrementally.
One thing that is confusing me is the deal with numpy_vector vs the native ublas vector. It seems to be possible, as your examples show, to write and expose C++ code that works with either numpy_vector or ublas vector signatures. How does that work? It seems clear that for numpy_vector signatures you have implemented handle semantics. On the other hand, what happens for functions with plain ublas vector signatures? When calling from python, are numpy arrays converted on the fly into numpy_vector arrays, then sliced when passed into the function, and then copied by value before going into C++?
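If I understand the examples correctly, the two styles can even sit side by side in one module, with a copy happening only for the plain ublas signature. A sketch of what I mean (again assuming pyublas and Boost.Python; the names double_in_place, norm_squared and semantics_demo are mine, and I may be wrong about the exact conversion behaviour):

```cpp
#include <cstddef>
#include <pyublas/numpy.h>
#include <boost/python.hpp>
#include <boost/numeric/ublas/vector.hpp>

namespace ublas = boost::numeric::ublas;

// Handle semantics: numpy_vector shares the caller's numpy buffer,
// so writes here should be visible to the caller's array.
void double_in_place(pyublas::numpy_vector<double> v)
{
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] *= 2;
}

// Value semantics: taking a plain ublas vector presumably makes the
// registered converter copy the numpy data on the way in, leaving the
// caller's array untouched.
double norm_squared(ublas::vector<double> v)
{
    double s = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        s += v[i] * v[i];
    return s;
}

BOOST_PYTHON_MODULE(semantics_demo)
{
    boost::python::def("double_in_place", double_in_place);
    boost::python::def("norm_squared", norm_squared);
}
```

Is that roughly the intended usage pattern?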
May I ask how you envisage pyublas working in a large-scale system? Imagine you need to write some complicated and deep C++ code. Would you envisage writing that in terms of numpy_vectors and other pyublas arrays? Or could you just handle the python interface (at input and output) with pyublas containers and then rely on native ublas containers within the C++ code? If the former, then what is the dependency of pyublas on ublas? For instance, could we ever be in a situation in which a future boost release would just break pyublas? These are important questions when wanting to use this stuff in a multi-user environment.
Also, I apologize if my questions are ill-formulated. I haven't understood everything from the doc, and so far have spent limited time in the source code.
Thanks & best wishes,
I am constrained to work with boost 1.34.1 and gcc for some time to come, and I am quite interested in setting up pyublas in this setting. Given that it is recommended that boost 1.35 or higher and gcc 4.x be used, I wonder if anybody knows whether that is a hard requirement. Presumably having an earlier version of gcc is not too limiting, but what about boost 1.34.1? Is that serious? The pyublas package being small, I would be happy to dig into the internals in order to figure out the compilation, but I would appreciate knowing whether there is a fundamental limitation in using boost 1.34.1.
Many thanks for any input,