PetscSFSetGraphWithPattern

Sets the graph of a PetscSF with a specific pattern

Synopsis

#include "petscsf.h"
PetscErrorCode PetscSFSetGraphWithPattern(PetscSF sf, PetscLayout map, PetscSFPattern pattern)

Collective

Input Parameters

sf - the PetscSF
map - layout of the roots across all MPI processes (not used when pattern is PETSCSF_PATTERN_ALLTOALL)
pattern - one of PETSCSF_PATTERN_ALLGATHER, PETSCSF_PATTERN_GATHER, PETSCSF_PATTERN_ALLTOALL

Notes

It is easiest to explain PetscSFPattern in terms of vectors. Suppose root is an MPI vector whose PetscLayout is map, and let n and N be the local and global sizes of root, respectively.

With PETSCSF_PATTERN_ALLGATHER, the routine creates a graph such that calling PetscSFBcastBegin() and PetscSFBcastEnd() on it copies root to a sequential vector leaves on every MPI process.
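A minimal sketch of the allgather pattern, assuming a recent PETSc (PetscCall() error checking and an MPI_Op argument to PetscSFBcastBegin()); the local size n = 2 and the variable names are illustrative, not part of the API:

#include <petscsf.h>

int main(int argc, char **argv)
{
  PetscSF      sf;
  PetscLayout  map;
  PetscInt     i, n = 2, N, rstart;
  PetscScalar *rootdata, *leafdata;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* Layout of the roots: n entries on each process */
  PetscCall(PetscLayoutCreate(PETSC_COMM_WORLD, &map));
  PetscCall(PetscLayoutSetLocalSize(map, n));
  PetscCall(PetscLayoutSetUp(map));
  PetscCall(PetscLayoutGetSize(map, &N));
  PetscCall(PetscLayoutGetRange(map, &rstart, NULL));

  PetscCall(PetscSFCreate(PETSC_COMM_WORLD, &sf));
  PetscCall(PetscSFSetGraphWithPattern(sf, map, PETSCSF_PATTERN_ALLGATHER));

  /* n roots per process, N leaves per process */
  PetscCall(PetscMalloc2(n, &rootdata, N, &leafdata));
  for (i = 0; i < n; i++) rootdata[i] = (PetscScalar)(rstart + i);

  /* Broadcast roots to leaves: afterwards every process holds all N values */
  PetscCall(PetscSFBcastBegin(sf, MPIU_SCALAR, rootdata, leafdata, MPI_REPLACE));
  PetscCall(PetscSFBcastEnd(sf, MPIU_SCALAR, rootdata, leafdata, MPI_REPLACE));

  PetscCall(PetscFree2(rootdata, leafdata));
  PetscCall(PetscSFDestroy(&sf));
  PetscCall(PetscLayoutDestroy(&map));
  PetscCall(PetscFinalize());
  return 0;
}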

With PETSCSF_PATTERN_GATHER, the routine creates a graph such that calling PetscSFBcastBegin() and PetscSFBcastEnd() on it copies root to a sequential vector leaves on MPI rank 0.
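The gather pattern differs only in where the leaves live. A fragment of the same sketch, applied to a freshly created sf (rank is a PetscMPIInt obtained from MPI_Comm_rank(); only rank 0 allocates the N leaf entries):

PetscCall(PetscSFSetGraphWithPattern(sf, map, PETSCSF_PATTERN_GATHER));
/* Only rank 0 has leaves, so only it needs room for all N entries */
PetscCall(PetscMalloc1(rank == 0 ? N : 0, &leafdata));
PetscCall(PetscSFBcastBegin(sf, MPIU_SCALAR, rootdata, leafdata, MPI_REPLACE));
PetscCall(PetscSFBcastEnd(sf, MPIU_SCALAR, rootdata, leafdata, MPI_REPLACE));
/* leafdata on rank 0 now holds all N root values in layout order */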

With PETSCSF_PATTERN_ALLTOALL, map is not used. Suppose NP is the size of sf's communicator. The routine creates a graph in which every MPI process has NP leaves and NP roots; on rank i, leaf j is connected to root i of rank j, for 0 <= i, j < NP. This is a kind of MPI_Alltoall() with sendcount and recvcount equal to 1. That does not mean one cannot send multiple items per pair: create an MPI datatype for the multiple data items with MPI_Type_contiguous() and pass it as the unit argument to the PetscSF routines. In this pattern, roots and leaves are symmetric.
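A sketch of the all-to-all pattern, including the MPI_Type_contiguous() datatype for sending several items per root-leaf pair; bs = 3 is arbitrary, and passing NULL for map is an assumption based on map being unused for this pattern:

PetscSF      sf;
PetscMPIInt  size;
PetscInt     bs = 3;          /* items per root-leaf pair (arbitrary) */
MPI_Datatype unit;
PetscScalar *rootdata, *leafdata;

PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &size));
PetscCall(PetscSFCreate(PETSC_COMM_WORLD, &sf));
PetscCall(PetscSFSetGraphWithPattern(sf, NULL, PETSCSF_PATTERN_ALLTOALL)); /* map unused */

/* Pack bs scalars into one unit so each root-leaf edge carries bs items */
PetscCallMPI(MPI_Type_contiguous((PetscMPIInt)bs, MPIU_SCALAR, &unit));
PetscCallMPI(MPI_Type_commit(&unit));

PetscCall(PetscMalloc2((PetscInt)size * bs, &rootdata, (PetscInt)size * bs, &leafdata));
/* fill rootdata: on rank j, block i (entries [i*bs, (i+1)*bs)) is destined for rank i */
PetscCall(PetscSFBcastBegin(sf, unit, rootdata, leafdata, MPI_REPLACE));
PetscCall(PetscSFBcastEnd(sf, unit, rootdata, leafdata, MPI_REPLACE));
/* on rank i, leaf block j now holds rank j's root block i, as in MPI_Alltoall() */

PetscCallMPI(MPI_Type_free(&unit));
PetscCall(PetscFree2(rootdata, leafdata));
PetscCall(PetscSFDestroy(&sf));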

See Also

PetscSF - an alternative to low-level MPI calls for data communication, PetscSF, PetscSFCreate(), PetscSFView(), PetscSFGetGraph()

Level

intermediate

Location

src/vec/is/sf/interface/sf.c

