I am using CUSP for sparse matrix multiplication. From the resultant matrix I need the max value without copying the matrix from device memory to host memory. I plan to wrap the resultant matrix's values in a Thrust device pointer and then use **thrust::max_element** to get the max element. The matrices are in COO format. If C is the resultant sparse matrix, then:

C.row_indices[]: contains the row indices

C.column_indices[]: contains the column indices

C.values[]: contains the actual values

So basically I need the highest value from the C.values array.

Using

```
thrust::device_ptr<int> dev_ptr = C.values;
```

gives the error:

```
error: no instance of constructor "thrust::device_ptr<T>::device_ptr [with T=int]" matches the argument list
argument types are: (cusp::array1d<float, cusp::host_memory>)
```

How can I wrap my resultant matrix in order to use it with the Thrust library?

If my device matrix definition is like this:

```
cusp::coo_matrix<int, double, cusp::device_memory> A_d = A;
```

Then try this:

```
thrust::device_ptr<double> dev_ptr = &A_d.values[0];
thrust::device_ptr<double> max_ptr = thrust::max_element(dev_ptr, dev_ptr + A_d.values.size());
```
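Putting it together, here is a minimal sketch (assuming the result matrix C lives in `cusp::device_memory`, e.g. as the output of `cusp::multiply`). Note that since a device-memory `cusp::array1d` is backed by a `thrust::device_vector`, its `begin()`/`end()` iterators can also be passed straight to Thrust algorithms, making the explicit `device_ptr` wrapper optional:

```cuda
#include <cusp/coo_matrix.h>
#include <cusp/multiply.h>
#include <thrust/extrema.h>
#include <thrust/device_ptr.h>

// A_d and B_d are device-resident COO matrices; C holds their product.
cusp::coo_matrix<int, double, cusp::device_memory> C;
cusp::multiply(A_d, B_d, C);

// Option 1: wrap the values array in a device_ptr.
thrust::device_ptr<double> dev_ptr = &C.values[0];
double max_val = *thrust::max_element(dev_ptr, dev_ptr + C.values.size());

// Option 2: use the array's own Thrust-compatible iterators directly.
double max_val2 = *thrust::max_element(C.values.begin(), C.values.end());
```

Dereferencing the iterator returned by `thrust::max_element` copies only that single element from device to host, so the matrix itself never leaves device memory.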
