I am using Boost's uBLAS in a numerical code and have a 'heavy' solver in place:

http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?LU_Matrix_Inversion

The code works correctly; however, it is painfully slow. After some research, I found UMFPACK, which is a sparse direct solver (among other things). My code generates large sparse matrices that I need to "invert" very frequently (more precisely, solve with; the value of the inverse matrix itself is irrelevant), so UMFPACK and Boost's sparse matrix classes seem like a happy marriage.

UMFPACK expects the sparse matrix in compressed-sparse-column (CSC) form, specified by three arrays: column pointers (cumulative entry counts), row indices, and the entries themselves. (See example.)

My question boils down to: can I get these three arrays efficiently from Boost's sparse matrix classes?

There is a binding for this:

http://mathema.tician.de/software/boost-numeric-bindings

The project appears to have been stagnant for about two years, but it does the job well. An example use:

```
#include <iostream>
#include <boost/numeric/bindings/traits/ublas_vector.hpp>
#include <boost/numeric/bindings/traits/ublas_sparse.hpp>
#include <boost/numeric/bindings/umfpack/umfpack.hpp>
#include <boost/numeric/ublas/io.hpp>

namespace ublas = boost::numeric::ublas;
namespace umf = boost::numeric::bindings::umfpack;

int main() {
    // column_major compressed_matrix matches UMFPACK's CSC storage
    ublas::compressed_matrix<double, ublas::column_major, 0,
        ublas::unbounded_array<int>, ublas::unbounded_array<double> > A(5, 5, 12);
    ublas::vector<double> B(5), X(5);

    A(0,0) = 2.;  A(0,1) = 3.;
    A(1,0) = 3.;  A(1,2) = 4.;  A(1,4) = 6.;
    A(2,1) = -1.; A(2,2) = -3.; A(2,3) = 2.;
    A(3,2) = 1.;
    A(4,1) = 4.;  A(4,2) = 2.;  A(4,4) = 1.;

    B(0) = 8.; B(1) = 45.; B(2) = -3.; B(3) = 3.; B(4) = 19.;

    umf::symbolic_type<double> Symbolic;
    umf::numeric_type<double> Numeric;

    umf::symbolic(A, Symbolic);          // symbolic analysis
    umf::numeric(A, Symbolic, Numeric);  // LU factorization
    umf::solve(A, X, B, Numeric);        // solve A*X = B

    std::cout << X << std::endl;         // output: [5](1,2,3,4,5)
}
```

**NOTE**:

Though this works, I am considering moving to NETLIB.
