I'm currently working with sparse matrices, and I have to compare the computation time of sparse matrix-matrix multiplication with full (dense) matrix-matrix multiplication. The issue is that the sparse computation is far slower than the dense one.

I'm compressing my matrices with Compressed Row Storage (CRS), and multiplying two matrices is very time-consuming (a quadruple for loop), so I'm wondering if there is a compression format better suited to matrix-matrix operations (CRS is very handy for matrix-vector computation).
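For context, here is a minimal sketch (plain Python, hypothetical variable names) of what my CRS storage looks like, three flat arrays plus a helper to read back one row:

```python
# CRS/CSR stores a matrix as three flat arrays:
#   values  - the nonzero entries, listed row by row
#   col_idx - the column index of each nonzero
#   row_ptr - row_ptr[i]:row_ptr[i+1] is the slice of values/col_idx for row i
#
# Example matrix:
#   [[5, 0, 0],
#    [0, 8, 3],
#    [0, 0, 6]]
values  = [5, 8, 3, 6]
col_idx = [0, 1, 2, 2]
row_ptr = [0, 1, 3, 4]

def row(i):
    """Return row i as a dict mapping column index -> value."""
    start, end = row_ptr[i], row_ptr[i + 1]
    return dict(zip(col_idx[start:end], values[start:end]))

print(row(1))  # {1: 8, 2: 3}
```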

Thanks in advance!

It's usually referred to as "Compressed Sparse Row" (CSR), not CRS. The transpose, Compressed Sparse Column (CSC), is also commonly used, including by the CSparse package, which ends up being the backend of quite a lot of systems, including MATLAB and SciPy (I think).

There is also a less common Doubly Compressed Sparse Column (DCSC) format used by the Combinatorial BLAS. It compresses the column index array again, and is useful when the matrix is hypersparse, i.e. most of its columns are empty, something that happens with 2D matrix decompositions.

That said, yes, there is more overhead. However, your operations are now dominated by the number of nonzeros, not by the matrix dimensions. So your FLOP rate may be lower, but you still get your answer sooner.
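To illustrate the "dominated by nonzeros" point: the usual way to multiply two row-compressed matrices is Gustavson's row-wise algorithm, which only ever touches nonzero entries instead of looping over all dimensions. A rough pure-Python sketch (using dict-of-rows instead of real CSR arrays, for brevity; all names here are made up):

```python
def spgemm(A, B, n_rows):
    """Sparse C = A @ B, Gustavson-style.
    A and B map row index -> {col: val} (only nonzero rows present).
    Work is proportional to the number of nonzero multiplications,
    not to the cube of the matrix dimension."""
    C = {}
    for i in range(n_rows):
        acc = {}                              # accumulator for row i of C
        for k, a_ik in A.get(i, {}).items():  # nonzeros of row i of A
            for j, b_kj in B.get(k, {}).items():  # nonzeros of row k of B
                acc[j] = acc.get(j, 0) + a_ik * b_kj
        if acc:
            C[i] = acc
    return C

# Example: A = [[1, 0], [0, 2]], B = [[0, 3], [4, 0]]
A = {0: {0: 1}, 1: {1: 2}}
B = {0: {1: 3}, 1: {0: 4}}
print(spgemm(A, B, 2))  # {0: {1: 3}, 1: {0: 8}}
```

This is only two nested loops over nonzeros per output row; the quadruple loop in the question is doing far more work than necessary.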

You might look at the paper "Efficient Sparse Matrix-Matrix Products Using Colorings" (http://www.mcs.anl.gov/papers/P5007-0813_1.pdf) for a discussion of how to achieve high performance with sparse matrix-matrix products.
