When I print the values of the second row of a sparse matrix, I notice that the first index is 0 instead of 1. See my example below. Why is that?

```
>>> from scipy.sparse import *
>>> a=lil_matrix((100,100))
>>> a[0,0]=5
>>> a[0,1]=6
>>> a[0,20]=9
>>> print a[0,:]
(0, 0) 5.0
(0, 1) 6.0
(0, 20) 9.0
>>> a[1,5]=55
>>> a[1,50]=99
>>> print a[1,:]
(0, 5) 55.0
(0, 50) 99.0
```

Because `a[1,:]` is itself a sparse matrix with a single row (as opposed to the original `a`), and you are printing its first (and only) row.
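To see this concretely, here is a minimal sketch: the slice is a new 1×100 sparse matrix, so its one row is addressed as row 0 regardless of which row of `a` it came from.

```python
from scipy.sparse import lil_matrix

a = lil_matrix((100, 100))
a[1, 5] = 55
a[1, 50] = 99

row = a[1, :]        # slicing returns a new sparse matrix, not a view into a
print(row.shape)     # (1, 100) -- a one-row matrix, so its row index is 0
print(row[0, 5])     # 55.0 -- addressed via row 0 of the slice
```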

This is probably an artifact of how slicing works under the hood for the list-of-lists sparse matrix. If you print the entire matrix, the indices are correct, but if you slice out a row, you get indices relative to the slice. Check it out:

```
>>> print a
(0, 0) 5.0
(0, 1) 6.0
(0, 20) 9.0
(1, 5) 55.0
(1, 50) 99.0
>>> print a[1,:]
(0, 5) 55.0
(0, 50) 99.0
```

I haven't looked under the hood to figure out why this happens, but if you really need to know, the NumPy/SciPy mailing lists are very responsive.
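If you need the entries of a row together with their absolute indices, you can avoid slicing entirely. Here is a minimal sketch: `lil_matrix` exposes per-row column lists via its `rows` and `data` attributes, and converting to COO format gives you explicit `(row, col)` coordinates you can filter on.

```python
from scipy.sparse import lil_matrix

a = lil_matrix((100, 100))
a[0, 0] = 5
a[1, 5] = 55
a[1, 50] = 99

# A lil_matrix stores, for each row, a list of column indices and a
# parallel list of values, so row 1 can be inspected directly:
print(a.rows[1])   # [5, 50]
print(a.data[1])   # [55.0, 99.0]

# Alternatively, convert to COO format, which keeps absolute
# (row, col) coordinates for every stored entry:
coo = a.tocoo()
for r, c, v in zip(coo.row, coo.col, coo.data):
    if r == 1:
        print((int(r), int(c)), v)
```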
