# PetscSFSetGraphWithPattern

Sets the graph of a `PetscSF` with a specific pattern

## Synopsis

```
#include "petscsf.h"
PetscErrorCode PetscSFSetGraphWithPattern(PetscSF sf, PetscLayout map, PetscSFPattern pattern)
```

Collective

## Input Parameters

- **sf** - The `PetscSF`
- **map** - Layout of roots over all processes (insignificant when pattern is `PETSCSF_PATTERN_ALLTOALL`)
- **pattern** - One of `PETSCSF_PATTERN_ALLGATHER`, `PETSCSF_PATTERN_GATHER`, `PETSCSF_PATTERN_ALLTOALL`

## Notes

It is easier to explain `PetscSFPattern` using vectors. Suppose we have an MPI vector `x` and its `PetscLayout` is `map`. `n` and `N` are the local and global sizes of `x` respectively.

With `PETSCSF_PATTERN_ALLGATHER`, the routine creates a graph such that doing `PetscSFBcastBegin()`/`PetscSFBcastEnd()` on it copies `x` to sequential vectors `y` on all MPI processes.
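A minimal sketch of this allgather use, assuming a recent PETSc (3.18+, where `PetscCall()` is available and `PetscSFBcastBegin()` takes an `MPI_Op` argument); here the leaves are stored in a plain array rather than a `Vec` for brevity:

```c
#include <petscsf.h>
#include <petscvec.h>

int main(int argc, char **argv)
{
  PetscSF            sf;
  Vec                x;
  PetscLayout        map;
  PetscInt           N;
  const PetscScalar *xarr;
  PetscScalar       *leaves;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(VecCreateMPI(PETSC_COMM_WORLD, 2, PETSC_DECIDE, &x));
  PetscCall(VecSet(x, 1.0));
  PetscCall(VecGetSize(x, &N));
  PetscCall(VecGetLayout(x, &map)); /* layout of the roots; borrowed, not destroyed */

  PetscCall(PetscSFCreate(PETSC_COMM_WORLD, &sf));
  PetscCall(PetscSFSetGraphWithPattern(sf, map, PETSCSF_PATTERN_ALLGATHER));

  /* every process receives all N entries of x into its local buffer */
  PetscCall(PetscMalloc1(N, &leaves));
  PetscCall(VecGetArrayRead(x, &xarr));
  PetscCall(PetscSFBcastBegin(sf, MPIU_SCALAR, xarr, leaves, MPI_REPLACE));
  PetscCall(PetscSFBcastEnd(sf, MPIU_SCALAR, xarr, leaves, MPI_REPLACE));
  PetscCall(VecRestoreArrayRead(x, &xarr));

  PetscCall(PetscFree(leaves));
  PetscCall(PetscSFDestroy(&sf));
  PetscCall(VecDestroy(&x));
  PetscCall(PetscFinalize());
  return 0;
}
```

On PETSc versions older than 3.15, drop the final `MPI_Op` argument to `PetscSFBcastBegin()`/`PetscSFBcastEnd()`.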

With `PETSCSF_PATTERN_GATHER`, the routine creates a graph such that doing `PetscSFBcastBegin()`/`PetscSFBcastEnd()` on it copies `x` to a sequential vector `y` on rank 0.

In the above cases, entries of `x` are roots and entries of `y` are leaves.

With `PETSCSF_PATTERN_ALLTOALL`, `map` is insignificant. Suppose NP is the size of `sf`'s communicator. The routine creates a graph in which every rank has NP leaves and NP roots. On rank i, its leaf j is connected to root i of rank j, where 0 <= i, j < NP. It is a kind of `MPI_Alltoall()` with sendcount/recvcount being 1. Note that this does not mean one cannot send multiple items: one just needs to create a new MPI datatype for the multiple data items with `MPI_Type_contiguous()` and use that as the unit data type of the communication.

In this case, roots and leaves are symmetric.
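A sketch of that multi-item alltoall, again assuming PETSc 3.18+; passing `NULL` for the layout is an assumption here, justified only because `map` is documented as insignificant for this pattern:

```c
#include <petscsf.h>

int main(int argc, char **argv)
{
  PetscSF      sf;
  PetscMPIInt  np, rank;
  PetscInt     bs = 3, i; /* bs items per root/leaf pair (illustrative) */
  PetscScalar *roots, *leaves;
  MPI_Datatype unit;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &np));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));

  PetscCall(PetscSFCreate(PETSC_COMM_WORLD, &sf));
  /* map is ignored for this pattern; NULL assumed acceptable */
  PetscCall(PetscSFSetGraphWithPattern(sf, NULL, PETSCSF_PATTERN_ALLTOALL));

  /* bundle bs scalars into one unit so each of the NP exchanges moves bs items */
  PetscCallMPI(MPI_Type_contiguous((PetscMPIInt)bs, MPIU_SCALAR, &unit));
  PetscCallMPI(MPI_Type_commit(&unit));

  PetscCall(PetscMalloc2(np * bs, &roots, np * bs, &leaves));
  for (i = 0; i < np * bs; i++) roots[i] = rank; /* what this rank sends */

  PetscCall(PetscSFBcastBegin(sf, unit, roots, leaves, MPI_REPLACE));
  PetscCall(PetscSFBcastEnd(sf, unit, roots, leaves, MPI_REPLACE));
  /* now leaves[j*bs .. j*bs+bs-1] hold the bs items received from rank j */

  PetscCallMPI(MPI_Type_free(&unit));
  PetscCall(PetscFree2(roots, leaves));
  PetscCall(PetscSFDestroy(&sf));
  PetscCall(PetscFinalize());
  return 0;
}
```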

## See Also

`PetscSF`, `PetscSFCreate()`, `PetscSFView()`, `PetscSFGetGraph()`

## Level

intermediate

## Location
