MatGetMultiProcBlock#
Create multiple ‘parallel submatrices’ from a given Mat. Each submatrix can span multiple MPI processes.
Synopsis#
#include "petscmat.h"
PetscErrorCode MatGetMultiProcBlock(Mat mat, MPI_Comm subComm, MatReuse scall, Mat *subMat)
Collective
Input Parameters#
mat - the matrix
subComm - the sub communicator obtained as if by MPI_Comm_split(PetscObjectComm((PetscObject)mat))
scall - either MAT_INITIAL_MATRIX or MAT_REUSE_MATRIX
Output Parameter#
subMat - parallel sub-matrices, each spanning a given subComm
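Example Usage#
A minimal sketch, assuming A is an existing assembled parallel matrix; the helper name and the two-way split of the original communicator are illustrative choices, not part of the PETSc API:
#include <petscmat.h>

/* Split the communicator of A into two halves and extract the parallel
   submatrix owned by this half (illustrative sketch only) */
static PetscErrorCode UseMultiProcBlock(Mat A)
{
  MPI_Comm    comm, subComm;
  PetscMPIInt rank, size, color;
  Mat         subA;

  PetscFunctionBeginUser;
  comm = PetscObjectComm((PetscObject)A);
  PetscCallMPI(MPI_Comm_rank(comm, &rank));
  PetscCallMPI(MPI_Comm_size(comm, &size));
  color = (rank < size / 2) ? 0 : 1; /* two groups of original processes */
  PetscCallMPI(MPI_Comm_split(comm, color, rank, &subComm));

  /* each group obtains its own parallel submatrix living on subComm */
  PetscCall(MatGetMultiProcBlock(A, subComm, MAT_INITIAL_MATRIX, &subA));

  /* ... work with subA on subComm ... */

  PetscCall(MatDestroy(&subA));
  PetscCallMPI(MPI_Comm_free(&subComm));
  PetscFunctionReturn(PETSC_SUCCESS);
}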
Notes#
The submatrix partition across processes is dictated by subComm, a communicator obtained by MPI_Comm_split() or via PetscSubcommCreate(). The subComm is not restricted to be grouped with consecutive original MPI processes.

Due to the MPI_Comm_split() usage, the parallel layout of the submatrices maps directly to the layout of the original matrix with respect to the local row and column partitioning. So the original ‘DiagonalMat’ naturally maps into the ‘DiagonalMat’ of the subMat and hence is used directly from the subMat. However, the ‘offDiagMat’ loses some columns, and these are reconstructed with MatSetValues().

This is used by PCBJACOBI when a single block spans multiple MPI processes.
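A hedged sketch of the reuse pattern, assuming an existing assembled matrix A and a desired number of subcommunicators nsub (both names are illustrative); the subcommunicator is built with PetscSubcommCreate() and the submatrix is refilled with MAT_REUSE_MATRIX after the values of A change:
PetscSubcomm psub;
MPI_Comm     subComm;
Mat          subA;

PetscCall(PetscSubcommCreate(PetscObjectComm((PetscObject)A), &psub));
PetscCall(PetscSubcommSetNumber(psub, nsub));                  /* nsub subcommunicators */
PetscCall(PetscSubcommSetType(psub, PETSC_SUBCOMM_CONTIGUOUS));
subComm = PetscSubcommChild(psub);

/* first call allocates the submatrix on subComm */
PetscCall(MatGetMultiProcBlock(A, subComm, MAT_INITIAL_MATRIX, &subA));

/* ... after changing the numerical values of A (same nonzero pattern) ... */
PetscCall(MatGetMultiProcBlock(A, subComm, MAT_REUSE_MATRIX, &subA));

PetscCall(MatDestroy(&subA));
PetscCall(PetscSubcommDestroy(&psub));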
See Also#
Matrices, Mat, MatCreateRedundantMatrix(), MatCreateSubMatrices(), PCBJACOBI
Level#
advanced
Location#
Implementations#
MatGetMultiProcBlock_MPIAIJ() in src/mat/impls/aij/mpi/mpb_aij.c
MatGetMultiProcBlock_SeqAIJ() in src/mat/impls/aij/seq/aij.c
MatGetMultiProcBlock_MPIBAIJ() in src/mat/impls/baij/mpi/mpb_baij.c