This page gives a quick summary of the main operations available for sparse matrices in the class SparseMatrix. First, it is recommended to read the introductory tutorial at Sparse matrix manipulations. The important point to keep in mind when working with sparse matrices is how they are stored: either row-major or column-major, the default being column-major. Most arithmetic operations on sparse matrices will assert that the operands have the same storage order.
Category | Operations | Notes |
---|---|---|
Constructor | `SparseMatrix<double> sm1(1000,1000);` <br> `SparseMatrix<std::complex<double>,RowMajor> sm2;` | Default is ColMajor. |
Resize/Reserve | `sm1.resize(m,n); // Change sm1 to an m x n matrix` <br> `sm1.reserve(nnz); // Allocate room for nnz nonzero elements` | When calling reserve(), nnz does not need to be the exact number of nonzero elements in the final matrix; however, an accurate estimate avoids multiple reallocations during the insertion phase. |
Assignment | `SparseMatrix<double,ColMajor> sm1;` <br> `SparseMatrix<double,RowMajor> sm2(sm1), sm3; // Initialize sm2 with sm1` <br> `sm3 = sm1; // Assignment and evaluation also convert the storage order` | The copy constructor can be used to convert from one storage order to another. |
Element-wise Insertion | `sm1.insert(i,j) = v_ij; // Insert a new element` <br> `sm1.coeffRef(i,j) = v_ij; // Update the value v_ij` <br> `sm1.coeffRef(i,j) += v_ij;` <br> `sm1.coeffRef(i,j) -= v_ij;` | insert() assumes that the element does not already exist; otherwise, use coeffRef(). |
Batch insertion | `std::vector<Eigen::Triplet<double>> tripletList;` <br> `tripletList.reserve(estimation_of_entries);` <br> `// Fill tripletList with nonzero elements...` <br> `sm1.setFromTriplets(tripletList.begin(), tripletList.end());` | A complete example is available at Triplet Insertion; see also the sketch below this table. |
Constant or Random Insertion | `sm1.setZero();` | Removes all non-zero coefficients. |
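As a complement to the batch-insertion entry above, here is a minimal, self-contained sketch; the matrix size and the inserted values are arbitrary, chosen only for illustration:

```cpp
#include <Eigen/Sparse>
#include <iostream>
#include <vector>

int main()
{
  // Build a 4x4 sparse matrix from a list of (row, col, value) triplets.
  std::vector<Eigen::Triplet<double>> tripletList;
  tripletList.reserve(5);                 // rough estimate of the number of entries
  tripletList.emplace_back(0, 0, 3.0);
  tripletList.emplace_back(1, 1, 2.0);
  tripletList.emplace_back(2, 0, -1.0);
  tripletList.emplace_back(2, 2, 5.0);
  tripletList.emplace_back(3, 3, 1.0);

  Eigen::SparseMatrix<double> sm1(4, 4);  // column-major by default
  sm1.setFromTriplets(tripletList.begin(), tripletList.end());

  // Individual entries can still be updated afterwards with coeffRef().
  sm1.coeffRef(1, 1) += 0.5;

  std::cout << "nonzeros: " << sm1.nonZeros() << "\n";
  return 0;
}
```

By default, setFromTriplets() sums duplicate entries, so the triplet list does not need to be sorted or deduplicated beforehand.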
Beyond the basic rows() and cols() functions, several other member functions are available to retrieve information about the matrix:

- `sm1.rows();` // Number of rows
- `sm1.cols();` // Number of columns
- `sm1.nonZeros();` // Number of non-zero values
- `sm1.outerSize();` // Number of columns (resp. rows) for a column-major (resp. row-major) matrix
- `sm1.innerSize();` // Number of rows (resp. columns) for a column-major (resp. row-major) matrix
- `sm1.norm();` // Euclidean norm of the matrix
- `sm1.squaredNorm();` // Squared norm of the matrix
- `sm1.blueNorm();` // Norm computed with Blue's algorithm, robust to underflow/overflow
- `sm1.isVector();` // Check whether sm1 is a sparse vector or a sparse matrix
- `sm1.isCompressed();` // Check whether sm1 is in compressed form
- ...
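The short sketch below exercises a few of these queries on a small hand-built matrix; the dimensions and values are arbitrary, chosen only for illustration:

```cpp
#include <Eigen/Sparse>
#include <iostream>

int main()
{
  Eigen::SparseMatrix<double> sm1(3, 5);   // column-major by default
  sm1.insert(0, 0) = 1.0;
  sm1.insert(2, 4) = 2.0;
  sm1.makeCompressed();

  std::cout << "rows: "       << sm1.rows()         << "\n"   // 3
            << "cols: "       << sm1.cols()         << "\n"   // 5
            << "nonZeros: "   << sm1.nonZeros()     << "\n"   // 2
            << "outerSize: "  << sm1.outerSize()    << "\n"   // 5 (columns, since column-major)
            << "innerSize: "  << sm1.innerSize()    << "\n"   // 3 (rows, since column-major)
            << "norm: "       << sm1.norm()         << "\n"   // sqrt(1^2 + 2^2)
            << "compressed: " << sm1.isCompressed() << "\n";  // 1
  return 0;
}
```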
It is easy to perform arithmetic operations on sparse matrices provided that the dimensions match and that the matrices have the same storage order. Note that the result can always be evaluated into a matrix with a different storage order. In the following, sm denotes a sparse matrix, dm a dense matrix, and dv a dense vector.
Operations | Code | Notes |
---|---|---|
Addition, subtraction | `sm3 = sm1 + sm2;` <br> `sm3 = sm1 - sm2;` <br> `sm2 += sm1;` <br> `sm2 -= sm1;` | sm1 and sm2 should have the same storage order. |
Scalar product | `sm3 = sm1 * s1;` <br> `sm3 *= s1;` <br> `sm3 = s1 * sm1 + s2 * sm2;` <br> `sm3 /= s1;` | Many combinations are possible if the dimensions and the storage orders agree. |
Sparse product | `sm3 = sm1 * sm2;` <br> `dm2 = sm1 * dm1;` <br> `dv2 = sm1 * dv1;` | See the sketch below this table for a combined example. |
Transposition, adjoint | `sm2 = sm1.transpose();` <br> `sm2 = sm1.adjoint();` | Note that the transposition changes the storage order. There is no support for transposeInPlace(). |
Permutation | `perm.indices(); // Reference to the vector of indices` <br> `sm1.twistedBy(perm); // Permute rows and columns` <br> `sm2 = sm1 * perm; // Permute the columns` <br> `sm2 = perm * sm1; // Permute the rows` | |
Component-wise ops | `sm1.cwiseProduct(sm2);` <br> `sm1.cwiseQuotient(sm2);` <br> `sm1.cwiseMin(sm2);` <br> `sm1.cwiseMax(sm2);` <br> `sm1.cwiseAbs();` <br> `sm1.cwiseSqrt();` | sm1 and sm2 should have the same storage order. |
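The following sketch combines several of the operations from the table above on small hand-built matrices; the sizes, values, and the identity permutation are arbitrary choices made only for illustration:

```cpp
#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <iostream>

int main()
{
  const int n = 4;
  Eigen::SparseMatrix<double> sm1(n, n), sm2(n, n);   // both column-major (the default)
  sm1.insert(0, 0) = 1.0;
  sm1.insert(1, 2) = 3.0;
  sm2.insert(0, 0) = 2.0;
  sm2.insert(3, 1) = -1.0;

  // Operands with the same storage order: addition and sparse product are allowed.
  Eigen::SparseMatrix<double> sum  = sm1 + sm2;
  Eigen::SparseMatrix<double> prod = sm1 * sm2;

  // The result of an expression can be evaluated into a matrix with a different storage order.
  Eigen::SparseMatrix<double, Eigen::RowMajor> sumRowMajor = sm1 + sm2;

  // A sparse-dense product yields a dense result.
  Eigen::VectorXd dv1 = Eigen::VectorXd::Ones(n);
  Eigen::VectorXd dv2 = sm1 * dv1;

  // Transposition: the transpose of a column-major matrix is naturally row-major,
  // but it can be evaluated into either storage order.
  Eigen::SparseMatrix<double, Eigen::RowMajor> smT = sm1.transpose();

  // Permute the rows of sm1 with a permutation matrix (here the identity, for simplicity).
  Eigen::PermutationMatrix<Eigen::Dynamic> perm(n);
  perm.setIdentity();
  Eigen::SparseMatrix<double> rowPermuted = perm * sm1;

  std::cout << "sum nonzeros: " << sum.nonZeros()
            << ", dv2(0) = " << dv2(0) << "\n";
  return 0;
}
```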
Category | Code | Notes |
---|---|---|
Sub-matrices | `sm1.block(startRow, startCol, rows, cols);` <br> `sm1.topLeftCorner(rows, cols);` <br> `sm1.topRightCorner(rows, cols);` <br> `sm1.bottomLeftCorner(rows, cols);` <br> `sm1.bottomRightCorner(rows, cols);` | Contrary to dense matrices, all these methods are read-only here. See Block operations and the Range row below for read-write sub-matrices. |
Range | `sm1.innerVector(outer); // RW` <br> `sm1.innerVectors(start, size); // RW` <br> `sm1.leftCols(size); // RW` <br> `sm2.rightCols(size); // RO because sm2 is row-major` <br> `sm1.middleRows(start, numRows); // RO because sm1 is column-major` <br> `sm1.middleCols(start, numCols); // RW` <br> `sm1.col(j); // RW` | An inner vector is either a row (for a row-major matrix) or a column (for a column-major matrix). As stated earlier, for a read-write sub-matrix (RW), the evaluation can be done into a matrix with a different storage order. |
Triangular and selfadjoint views | `sm2 = sm1.triangularView<Lower>();` <br> `sm2 = sm1.selfadjointView<Lower>();` | Several combinations of triangular views and block views are possible. |
Triangular solve | `dv2 = sm1.triangularView<Upper>().solve(dv1);` <br> `dv2 = sm1.topLeftCorner(size, size).triangularView<Lower>().solve(dv1);` | For a general sparse solve, use any suitable module described at Solving Sparse Linear Systems. |
Low-level API | `sm1.valuePtr(); // Pointer to the values` <br> `sm1.innerIndexPtr(); // Pointer to the inner indices` <br> `sm1.outerIndexPtr(); // Pointer to the beginning of each inner vector` | If the matrix is not in compressed form, makeCompressed() should be called first. Note that these functions are mostly provided for interoperability with external libraries. A better way to access the values of the matrix is to use the InnerIterator class, as described in the Sparse matrix manipulations tutorial. |
Mapping external buffers | `int outerIndexPtr[cols+1];` <br> `int innerIndices[nnz];` <br> `double values[nnz];` <br> `Map<SparseMatrix<double> > sm1(rows, cols, nnz, outerIndexPtr, innerIndices, values); // read-write` <br> `Map<const SparseMatrix<double> > sm2(...); // read-only` | As for dense matrices, the class Map<SparseMatrixType> can be used to view external buffers as an Eigen SparseMatrix object; see the sketch below this table. |
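To make the mapping row above concrete, here is a sketch that views three raw compressed-column (CSC) arrays as a read-write sparse matrix and then walks its entries with InnerIterator; the 3x3 matrix and its buffers are arbitrary illustrative data:

```cpp
#include <Eigen/Sparse>
#include <iostream>

int main()
{
  // A 3x3 matrix stored externally in compressed column (CSC) format:
  //   [ 1 0 2 ]
  //   [ 0 3 0 ]
  //   [ 0 0 4 ]
  const int rows = 3, cols = 3, nnz = 4;
  int    outerIndexPtr[] = {0, 1, 2, 4};   // start of each column in the arrays below
  int    innerIndices[]  = {0, 1, 0, 2};   // row index of each stored value
  double values[]        = {1.0, 3.0, 2.0, 4.0};

  // View the raw buffers as a read-write Eigen sparse matrix (no copy is made).
  Eigen::Map<Eigen::SparseMatrix<double>> sm1(rows, cols, nnz,
                                              outerIndexPtr, innerIndices, values);

  // The mapped matrix can be used like any other sparse matrix, e.g. iterated over.
  for (int k = 0; k < sm1.outerSize(); ++k)
    for (Eigen::Map<Eigen::SparseMatrix<double>>::InnerIterator it(sm1, k); it; ++it)
      std::cout << "(" << it.row() << "," << it.col() << ") = " << it.value() << "\n";

  return 0;
}
```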
© Eigen.
Licensed under the MPL2 License.
https://eigen.tuxfamily.org/dox/group__SparseQuickRefPage.html