Is it possible to speed up large sparse matrix calculations by e.g. placing parentheses optimally?

What I'm asking is: Can I speed up the following code by forcing Matlab to do the operations in a specified order (for instance "from right to left" or something similar)?

I have a sparse, square, symmetric matrix H that has previously been factorized, and a sparse vector M whose length equals the dimension of H. What I want to do is the following:

**EDIT:** Some additional information: H is typically 4000x4000. The calculations of z and c are done around 4000 times, whereas the computations of dVa and dVaComp are done 10 times for each of those 4000 loops, thus 40000 in total (dVa and dVaComp are solved iteratively, with P_mis being updated).

Here `M*c*M'` will become a sparse matrix with 4 non-zero elements. In Matlab:

```
[L U P] = lu(H); % H is sparse (thus also L, U and P)
% for i = 1:4000 % Just to illustrate
M = sparse([bf bt],1,[1 -1],n,1); % Sparse vector with two non-zero elements at bf and bt
z = -M'*(U \ (L \ (P * M))); % M^t*H^-1*M = a scalar
c = (1/dyp + z)^-1; % dyp is a scalar
% while (iterations < 10 && ~converged)
dVa = - (U \ (L \ (P * P_mis)));
dVaComp = (U \ (L \ (P * M * c * M' * dVa)));
% Update P_mis etc.
% end while
% end for
```
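A Python/SciPy sketch of the same structure (indices and the scalar are made up) shows why the grouping matters here: evaluated from the right, `M*c*M'*dVa` only ever involves vector and scalar products, and never forms an n-by-n intermediate.

```python
import numpy as np
import scipy.sparse as sp

n = 4000
rng = np.random.default_rng(1)

# Sparse vector with two non-zero elements (bf, bt are made-up indices)
bf, bt = 5, 17
M = sp.csc_matrix(([1.0, -1.0], ([bf, bt], [0, 0])), shape=(n, 1))
c = 0.7                              # stand-in scalar
dVa = rng.standard_normal((n, 1))    # stand-in dense vector

# Left-to-right: first forms an n-by-n rank-1 sparse matrix, then multiplies
v_left = ((c * M) @ M.T) @ dVa

# Right-to-left: scalar and vector operations only, no n-by-n intermediate
v_right = M @ ((M.T @ dVa) * c)

# Both groupings give the same result
assert np.allclose(v_left, v_right)
```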

And for the record: Even though I use the inverse of H many times, it is not faster to pre-compute it.

Thanks =)

You might want to try using the extended syntax for `lu` when factoring the (sparse) matrix `H`:

```
[L,U,P,Q] = lu(H);
```

The extra permutation matrix `Q` re-orders columns to increase the sparsity of the factors `L,U` (while the permutation matrix `P` only re-orders rows for partial pivoting).

Specific results depend on the sparsity pattern of `H`, but in many cases a good column permutation *significantly* reduces the number of non-zeros in the factorisation, reducing memory use and increasing speed.

You can read more about the `lu` syntax here.
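As a rough illustration of what a column permutation buys, here is a Python/SciPy analogue (a random stand-in matrix, not your `H`) comparing SuperLU's default COLAMD column ordering against no column reordering:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

rng = np.random.default_rng(0)
n = 1000

# Random sparse symmetric matrix (a stand-in; a real H will behave differently)
H = sp.random(n, n, density=0.005, random_state=rng)
H = (H + H.T + sp.eye(n)).tocsc()

lu_colamd = splu(H)                          # default: COLAMD column ordering
lu_natural = splu(H, permc_spec="NATURAL")   # no column reordering

# Fill-in of the factors; COLAMD typically produces far fewer non-zeros
print("nnz(L)+nnz(U), COLAMD :", lu_colamd.L.nnz + lu_colamd.U.nnz)
print("nnz(L)+nnz(U), natural:", lu_natural.L.nnz + lu_natural.U.nnz)

# Both orderings still solve the system correctly
b = rng.standard_normal(n)
assert np.allclose(H @ lu_colamd.solve(b), b)
assert np.allclose(H @ lu_natural.solve(b), b)
```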

There are a few things not entirely clear to me:

- The command `M = sparse([t f],[1 -1],1,n,1);` can't be right; you're saying that on rows `t,f` and columns `1,-1` there should be a `1`; column `-1` obviously can't be right.
- The result `dVaComp` is a full matrix due to multiplication by `P_mis`, while you say it should be sparse.

Leaving these issues aside for now, there are a few small optimizations I see:

- You use `inv(H)*M` twice, so you could pre-compute that.
- The negation of `dVa` can be moved out of the loop.
- If you don't need `dVa` explicitly, leave out the assignment to a variable as well.
- Inversion of a scalar means dividing 1 by that scalar (computation of `c`).

Implementing changes, and trying to compare fairly (I used only 40 iterations to keep total time small):

```
%% initialize
clc
N = 4000;
% H is sparse, square, symmetric
H = tril(rand(N));
H(H<0.5) = 0;                      % roughly half is empty
H = sparse(H + H.');
% M is a sparse vector with two non-zero elements
M = sparse([1 N],[1 1],1,N,1);
% dyp is some scalar
dyp = rand;
% P_mis is a full vector
P_mis = rand(N,1);

%% original method
[L, U, P] = lu(H);
tic
for ii = 1:40
    z = -M'*(U \ (L \ (P*M)));
    c = (1/dyp + z)^-1;
    for jj = 1:10
        dVa = -(U \ (L \ (P*P_mis)));
        dVaComp = (U \ (L \ (P*M * c * M' * dVa)));
    end
end
toc

%% new method I
[L,U,P,Q] = lu(H);                 % P*H*Q = L*U, so H\b = Q*(U\(L\(P*b)))
tic
for ii = 1:40
    invH_M = Q*(U\(L\(P*M)));      % pre-compute H\M (it is used twice)
    z = -M.'*invH_M;
    c = -1/(1/dyp + z);            % scalar inversion via division; negation folded in
    for jj = 1:10
        dVaComp = c * (invH_M*M.') * ( Q*(U\(L\(P*P_mis))) );
    end
end
toc
```

This gives the following results:

```
Elapsed time is 60.384734 seconds. % your original method
Elapsed time is 33.074448 seconds. % new method
```
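For what it's worth, the restructuring can also be sanity-checked numerically. A small Python/SciPy sketch (random stand-in data; all sizes and names are made up) confirming that the restructured computation matches the original formulation:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

rng = np.random.default_rng(0)
n = 200

# Random sparse, symmetric, diagonally dominant H (stand-in for the real matrix)
H = sp.random(n, n, density=0.05, random_state=rng)
H = (H + H.T + sp.eye(n) * n).tocsc()
lu = splu(H)  # sparse LU; row/column permutations are applied internally by solve()

M = np.zeros((n, 1))
M[3, 0], M[7, 0] = 1.0, -1.0       # two non-zero elements, like the MATLAB M
dyp = 0.3
P_mis = rng.standard_normal((n, 1))

# Original formulation
z = -(M.T @ lu.solve(M)).item()
c = 1.0 / (1.0 / dyp + z)
dVa = -lu.solve(P_mis)
dVaComp_orig = lu.solve(M * (c * (M.T @ dVa).item()))

# Restructured: pre-compute H\M, fold the negation into c, group right-to-left
invH_M = lu.solve(M)
c2 = -1.0 / (1.0 / dyp - (M.T @ invH_M).item())
dVaComp_new = c2 * invH_M * (M.T @ lu.solve(P_mis)).item()

# Both formulations agree
assert np.allclose(dVaComp_orig, dVaComp_new)
```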
