MatCreateAIJ#
Creates a sparse parallel matrix in MATAIJ format (the default parallel PETSc format). For good matrix assembly performance the user should preallocate the matrix storage by setting the parameters d_nz (or d_nnz) and o_nz (or o_nnz).
Synopsis#
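The calling sequence below is reconstructed from the parameter lists on this page; the authoritative declaration lives in petscmat.h.
#include <petscmat.h>
PetscErrorCode MatCreateAIJ(MPI_Comm comm, PetscInt m, PetscInt n, PetscInt M, PetscInt N, PetscInt d_nz, const PetscInt d_nnz[], PetscInt o_nz, const PetscInt o_nnz[], Mat *A)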
Collective
Input Parameters#
comm - MPI communicator
m - number of local rows (or PETSC_DECIDE to have calculated if M is given). This value should be the same as the local size used in creating the y vector for the matrix-vector product y = Ax.
n - number of local columns (or PETSC_DECIDE to have calculated if N is given). This value should be the same as the local size used in creating the x vector for the matrix-vector product y = Ax. For square matrices n is almost always m.
M - number of global rows (or PETSC_DETERMINE to have calculated if m is given)
N - number of global columns (or PETSC_DETERMINE to have calculated if n is given)
d_nz - number of nonzeros per row in the DIAGONAL portion of the local submatrix (same value is used for all local rows)
d_nnz - array containing the number of nonzeros in the various rows of the DIAGONAL portion of the local submatrix (possibly different for each row), or NULL if d_nz is used to specify the nonzero structure. The size of this array is equal to the number of local rows, i.e., m.
o_nz - number of nonzeros per row in the OFF-DIAGONAL portion of the local submatrix (same value is used for all local rows)
o_nnz - array containing the number of nonzeros in the various rows of the OFF-DIAGONAL portion of the local submatrix (possibly different for each row), or NULL if o_nz is used to specify the nonzero structure. The size of this array is equal to the number of local rows, i.e., m.
Output Parameter#
A - the matrix
Options Database Keys#
-mat_no_inode - Do not use inodes
-mat_inode_limit - Sets inode limit (max limit=5)
-matmult_vecscatter_view - View the vecscatter (i.e., communication pattern) used in MatMult() of sparse parallel matrices. See viewer types in the manual page of MatView(). Of them, ascii_matlab, draw, or binary cause the VecScatter to be viewed as a matrix. Entry (i,j) is the size of the message (in bytes) rank i sends to rank j in one MatMult() call.
Notes#
It is recommended that one use MatCreateFromOptions() or the MatCreate(), MatSetType() and/or MatSetFromOptions(), MatXXXXSetPreallocation() paradigm instead of this routine directly, as sketched below. [MatXXXXSetPreallocation() is, for example, MatSeqAIJSetPreallocation().]
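A minimal sketch of that paradigm (error checking via PetscCall(); both preallocation calls may be made, and the one that does not match the final type is ignored):
Mat A;
PetscCall(MatCreate(comm, &A));
PetscCall(MatSetSizes(A, m, n, M, N));
PetscCall(MatSetType(A, MATAIJ));
PetscCall(MatSetFromOptions(A));
PetscCall(MatSeqAIJSetPreallocation(A, d_nz, d_nnz));              /* used if A ends up sequential */
PetscCall(MatMPIAIJSetPreallocation(A, d_nz, d_nnz, o_nz, o_nnz)); /* used if A ends up parallel */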
If the *_nnz parameter is given then the *_nz parameter is ignored
The m, n, M, N parameters specify the size of the matrix, and its partitioning across processors, while the d_nz, d_nnz, o_nz, o_nnz parameters specify the approximate storage requirements for this matrix.
If PETSC_DECIDE or PETSC_DETERMINE is used for a particular argument on one processor, then it must be used on all processors that share the object for that argument.
If m and n are not PETSC_DECIDE, then the values determine the PetscLayout of the matrix and the ranges returned by MatGetOwnershipRange(), MatGetOwnershipRanges(), MatGetOwnershipRangeColumn(), and MatGetOwnershipRangesColumn().
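For instance, a minimal sketch querying the row range owned by the calling rank (assuming A was created by this routine and PetscCall() error checking):
PetscInt rstart, rend;
PetscCall(MatGetOwnershipRange(A, &rstart, &rend)); /* this rank owns global rows rstart..rend-1 */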
The user MUST specify either the local or global matrix dimensions (possibly both).
The parallel matrix is partitioned across processors such that the first m0 rows belong to process 0, the next m1 rows belong to process 1, the next m2 rows belong to process 2, etc., where m0, m1, m2, … are the input parameter m on each MPI process. That is, each MPI process stores the values corresponding to an [m x N] submatrix.
The columns are logically partitioned with the first n0 columns belonging to the 0th partition, the next n1 columns belonging to the next partition, etc., where n0, n1, n2, … are the input parameter n on each MPI process.
The DIAGONAL portion of the local submatrix on any given processor is the submatrix corresponding to the rows m and columns n owned by that processor, i.e., the diagonal block on process 0 is [m0 x n0], the diagonal block on process 1 is [m1 x n1], etc. The remaining [m x (N-n)] portion of the local submatrix constitutes the OFF-DIAGONAL portion. The example below illustrates this concept.
For a square global matrix we define each processor’s diagonal portion to be its local rows and the corresponding columns (a square submatrix); each processor’s off-diagonal portion encompasses the remainder of the local matrix (a rectangular submatrix).
If o_nnz and d_nnz are specified, then o_nz and d_nz are ignored.
When calling this routine with a single process communicator, a matrix of type MATSEQAIJ is returned. If a matrix of type MATMPIAIJ is desired for this type of communicator, use the construction mechanism:
MatCreate(..., &A);
MatSetType(A, MATMPIAIJ);
MatSetSizes(A, m, n, M, N);
MatMPIAIJSetPreallocation(A, ...);
By default, this format uses inodes (identical nodes) when possible. We search for consecutive rows with the same nonzero structure, thereby reusing matrix information to achieve increased efficiency.
Example Usage#
Consider the following 8x8 matrix with 34 nonzero values, which is assembled across 3 processors. Let's assume that proc0 owns 3 rows, proc1 owns 3 rows, and proc2 owns 2 rows. This division can be shown as follows:
1 2 0 | 0 3 0 | 0 4
Proc0 0 5 6 | 7 0 0 | 8 0
9 0 10 | 11 0 0 | 12 0
-------------------------------------
13 0 14 | 15 16 17 | 0 0
Proc1 0 18 0 | 19 20 21 | 0 0
0 0 0 | 22 23 0 | 24 0
-------------------------------------
Proc2 25 26 27 | 0 0 28 | 29 0
30 0 0 | 31 32 33 | 0 34
This can be represented as a collection of submatrices as
A B C
D E F
G H I
Where the submatrices A,B,C are owned by proc0, D,E,F are owned by proc1, G,H,I are owned by proc2.
The ‘m’ parameters for proc0,proc1,proc2 are 3,3,2 respectively. The ‘n’ parameters for proc0,proc1,proc2 are 3,3,2 respectively. The ‘M’,’N’ parameters are 8,8, and have the same values on all procs.
The DIAGONAL submatrices corresponding to proc0,proc1,proc2 are
submatrices [A], [E], [I] respectively. The OFF-DIAGONAL submatrices
corresponding to proc0,proc1,proc2 are [BC], [DF], [GH] respectively.
Internally, each processor stores the DIAGONAL part and the OFF-DIAGONAL part as MATSEQAIJ matrices. For example, proc1 will store [E] as one MATSEQAIJ matrix, and [DF] as another MATSEQAIJ matrix.
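These two sequential blocks can be retrieved directly; a minimal sketch using MatMPIAIJGetSeqAIJ() (assuming A is a MATMPIAIJ matrix):
Mat Ad, Ao;              /* local DIAGONAL and OFF-DIAGONAL blocks, both MATSEQAIJ */
const PetscInt *colmap;  /* maps local columns of Ao to global columns of A */
PetscCall(MatMPIAIJGetSeqAIJ(A, &Ad, &Ao, &colmap));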
When the d_nz and o_nz parameters are specified, d_nz storage elements are allocated for every row of the local DIAGONAL submatrix, and o_nz storage locations are allocated for every row of the OFF-DIAGONAL submatrix.
One way to choose d_nz and o_nz is to use the maximum number of nonzeros per local row in each of the local DIAGONAL and OFF-DIAGONAL submatrices. In this case, the values of d_nz and o_nz are
proc0 d_nz = 2, o_nz = 2
proc1 d_nz = 3, o_nz = 2
proc2 d_nz = 1, o_nz = 4
We are allocating m*(d_nz+o_nz) storage locations on every proc. This translates to 3*(2+2)=12 for proc0, 3*(3+2)=15 for proc1, and 2*(1+4)=10 for proc2, i.e., we are using 12+15+10=37 storage locations to store 34 values.
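As a sketch, proc1 in this example could use the scalar preallocation parameters like this (each rank passes its own local sizes and counts; PetscCall() error checking assumed):
Mat A;
/* proc1: 3 local rows and columns of the 8x8 matrix; at most 3 nonzeros per row
   in the DIAGONAL block and at most 2 per row in the OFF-DIAGONAL block */
PetscCall(MatCreateAIJ(PETSC_COMM_WORLD, 3, 3, 8, 8, 3, NULL, 2, NULL, &A));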
When the d_nnz and o_nnz parameters are specified, the storage is specified for every row of both the DIAGONAL and OFF-DIAGONAL submatrices. In the above case the values of d_nnz and o_nnz are
proc0 d_nnz = [2,2,2] and o_nnz = [2,2,2]
proc1 d_nnz = [3,3,2] and o_nnz = [2,1,1]
proc2 d_nnz = [1,1] and o_nnz = [4,4]
Here the space allocated is the sum of all the above values, i.e., 34, and hence the preallocation is perfect.
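The corresponding sketch for proc1 with per-row preallocation; the scalar d_nz and o_nz arguments are ignored once the arrays are supplied:
Mat      A;
PetscInt d_nnz[] = {3, 3, 2}; /* per-row nonzeros of the DIAGONAL block [E] */
PetscInt o_nnz[] = {2, 1, 1}; /* per-row nonzeros of the OFF-DIAGONAL block [DF] */
PetscCall(MatCreateAIJ(PETSC_COMM_WORLD, 3, 3, 8, 8, 0, d_nnz, 0, o_nnz, &A));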
See Also#
Matrices, Mat, Sparse Matrix Creation, MatCreate(), MatCreateSeqAIJ(), MatSetValues(), MatMPIAIJSetPreallocation(), MatMPIAIJSetPreallocationCSR(), MATMPIAIJ, MatCreateMPIAIJWithArrays(), MatGetOwnershipRange(), MatGetOwnershipRanges(), MatGetOwnershipRangeColumn(), MatGetOwnershipRangesColumn(), PetscLayout
Level#
intermediate
Location#
Examples#
src/ksp/ksp/tutorials/ex81.c
src/ksp/ksp/tutorials/ex79.c
src/mat/tutorials/ex9.c
src/ts/tutorials/ex44.c
src/tao/bound/tutorials/plate2f.F90
src/ksp/ksp/tutorials/ex81a.c
src/mat/tutorials/ex4f.F90
src/mat/tutorials/ex4.c