PetscSFSetGraphWithPattern#
Sets the graph of a PetscSF with a specific pattern
Synopsis#
#include "petscsf.h"
PetscErrorCode PetscSFSetGraphWithPattern(PetscSF sf, PetscLayout map, PetscSFPattern pattern)
Collective
Input Parameters#
sf - The PetscSF
map - Layout of roots over all processes (not used when pattern is PETSCSF_PATTERN_ALLTOALL)
pattern - One of PETSCSF_PATTERN_ALLGATHER, PETSCSF_PATTERN_GATHER, PETSCSF_PATTERN_ALLTOALL
Notes#
PetscSFPattern is easiest to explain in terms of vectors. Suppose we have an MPI vector root whose PetscLayout is map, and let n and N be the local and global sizes of root, respectively.
With PETSCSF_PATTERN_ALLGATHER, the routine creates a graph such that calling PetscSFBcastBegin() and PetscSFBcastEnd() on it copies root into a sequential vector leaves on every MPI process, i.e., each process receives all N entries of root.
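For illustration only (this sketch is not part of the PETSc examples), the following complete program builds an allgather-pattern PetscSF over a layout with two roots per process and broadcasts the root values so that every process ends up with all of them. It assumes a PETSc version in which PetscSFBcastBegin()/PetscSFBcastEnd() take a final MPI_Op argument (MPI_REPLACE here); the variable names are illustrative.
```c
#include <petscsf.h>

int main(int argc, char **argv)
{
  PetscSF      sf;
  PetscLayout  map;
  PetscInt     n = 2, N; /* two roots per process; N = global number of roots */
  PetscScalar *rootdata, *leafdata;
  PetscMPIInt  rank;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));

  /* Layout of the roots: n local roots on every process */
  PetscCall(PetscLayoutCreate(PETSC_COMM_WORLD, &map));
  PetscCall(PetscLayoutSetLocalSize(map, n));
  PetscCall(PetscLayoutSetUp(map));
  PetscCall(PetscLayoutGetSize(map, &N));

  PetscCall(PetscSFCreate(PETSC_COMM_WORLD, &sf));
  PetscCall(PetscSFSetGraphWithPattern(sf, map, PETSCSF_PATTERN_ALLGATHER));

  /* Each process owns n roots and receives all N of them as leaves */
  PetscCall(PetscMalloc2(n, &rootdata, N, &leafdata));
  for (PetscInt i = 0; i < n; i++) rootdata[i] = (PetscScalar)(rank * n + i);

  PetscCall(PetscSFBcastBegin(sf, MPIU_SCALAR, rootdata, leafdata, MPI_REPLACE));
  PetscCall(PetscSFBcastEnd(sf, MPIU_SCALAR, rootdata, leafdata, MPI_REPLACE));
  /* leafdata now holds the concatenation of all roots on every process */

  PetscCall(PetscFree2(rootdata, leafdata));
  PetscCall(PetscSFDestroy(&sf));
  PetscCall(PetscLayoutDestroy(&map));
  PetscCall(PetscFinalize());
  return 0;
}
```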
With PETSCSF_PATTERN_GATHER, the routine creates a graph such that calling PetscSFBcastBegin() and PetscSFBcastEnd() on it copies root into a sequential vector leaves on MPI rank 0 only; the other ranks have no leaves.
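Reusing map, rootdata, and leafdata from the sketch above, a gather onto rank 0 differs only in the pattern argument; this fragment is again only an illustrative sketch.
```c
/* Same setup as above, but gathering onto MPI rank 0 only: rank 0 has N
   leaves and the other ranks have none, so after the broadcast only rank 0's
   leafdata contains the gathered roots. */
PetscCall(PetscSFCreate(PETSC_COMM_WORLD, &sf));
PetscCall(PetscSFSetGraphWithPattern(sf, map, PETSCSF_PATTERN_GATHER));
PetscCall(PetscSFBcastBegin(sf, MPIU_SCALAR, rootdata, leafdata, MPI_REPLACE));
PetscCall(PetscSFBcastEnd(sf, MPIU_SCALAR, rootdata, leafdata, MPI_REPLACE));
PetscCall(PetscSFDestroy(&sf));
```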
With PETSCSF_PATTERN_ALLTOALL, map is not used. Suppose NP is the size of sf's communicator. The routine creates a graph in which every MPI process has NP leaves and NP roots, and leaf j on MPI rank i is connected to root i on rank j, for 0 <= i, j < NP. This is essentially an MPI_Alltoall() with sendcount/recvcount equal to 1. That does not mean one cannot send multiple items: create a derived MPI datatype for the multiple data items, e.g. with MPI_Type_contiguous(), and pass it as the unit argument of the PetscSF communication routines. In this case, roots and leaves are symmetric.
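A rough sketch of the alltoall pattern with multiple items per process pair follows. It assumes sf, map, rootdata, and leafdata as in the earlier sketches, except that rootdata and leafdata are now buffers of NP*bs PetscScalars; bs and unit are illustrative names, not part of this manual page.
```c
/* Illustrative fragment: an alltoall of bs contiguous PetscScalars per
   process pair. Every rank has NP roots and NP leaves of type 'unit', so
   rootdata and leafdata each hold NP*bs PetscScalars. The layout 'map' is
   ignored for this pattern. */
MPI_Datatype unit;
PetscMPIInt  NP, bs = 3; /* bs is an illustrative block size */

PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &NP));
PetscCall(PetscSFCreate(PETSC_COMM_WORLD, &sf));
PetscCall(PetscSFSetGraphWithPattern(sf, map, PETSCSF_PATTERN_ALLTOALL));
PetscCallMPI(MPI_Type_contiguous(bs, MPIU_SCALAR, &unit));
PetscCallMPI(MPI_Type_commit(&unit));
PetscCall(PetscSFBcastBegin(sf, unit, rootdata, leafdata, MPI_REPLACE));
PetscCall(PetscSFBcastEnd(sf, unit, rootdata, leafdata, MPI_REPLACE));
PetscCallMPI(MPI_Type_free(&unit));
PetscCall(PetscSFDestroy(&sf));
```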
See Also#
PetscSF - an alternative to low-level MPI calls for data communication, PetscSF, PetscSFCreate(), PetscSFView(), PetscSFGetGraph()
Level#
intermediate