PCMPIServerBegin#
starts a server that runs on the MPI processes with rank != 0, waiting to process requests for parallel KSP solves and to manage parallel KSP objects.
Synopsis#
#include "petscksp.h"
PetscErrorCode PCMPIServerBegin(void)
Logically Collective on all MPI processes except rank 0
Options Database Keys#
-mpi_linear_solver_server - causes the PETSc program to start in MPI linear solver server mode where only the first MPI rank runs user code
-mpi_linear_solver_server_view - displays information about all the linear systems solved by the MPI linear solver server at the conclusion of the program
-mpi_linear_solver_server_use_shared_memory - use shared memory when communicating matrices and vectors to server processes (default where supported)
Note#
This is normally started automatically in PetscInitialize() when the option -mpi_linear_solver_server is provided.
See PCMPI for information on using the solver with a KSP object.
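As an illustration, the following is a minimal sketch (not taken from the PETSc sources) of a user program whose linear solves can be handed off to the MPI linear solver server. The program is written against PETSC_COMM_WORLD as if it were sequential; the tridiagonal matrix is an arbitrary example. How a particular KSP is directed to the server depends on the PETSc version and the run-time options documented on the PCMPI page.

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat         A;
  Vec         x, b;
  KSP         ksp;
  PetscInt    i, n = 100, col[3], ncols;
  PetscScalar v[3];

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* With -mpi_linear_solver_server only rank 0 executes the code below;
     PetscInitialize() keeps the remaining ranks inside PCMPIServerBegin() */
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatSetUp(A));
  for (i = 0; i < n; i++) { /* assemble a 1D Laplacian as an example system */
    ncols = 0;
    if (i > 0) { col[ncols] = i - 1; v[ncols] = -1.0; ncols++; }
    col[ncols] = i; v[ncols] = 2.0; ncols++;
    if (i < n - 1) { col[ncols] = i + 1; v[ncols] = -1.0; ncols++; }
    PetscCall(MatSetValues(A, 1, &i, ncols, col, v, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  PetscCall(MatCreateVecs(A, &x, &b));
  PetscCall(VecSet(b, 1.0));

  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetFromOptions(ksp)); /* run-time options may direct this solve to the server */
  PetscCall(KSPSolve(ksp, b, x));

  PetscCall(KSPDestroy(&ksp));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}

A typical launch might look like mpiexec -n 4 ./app -mpi_linear_solver_server -mpi_linear_solver_server_view, where only the first rank runs main() past PetscInitialize() and the other three ranks serve the solves; selecting the server for a specific KSP (for example with -ksp_type preonly -pc_type mpi) is described on the PCMPI page.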
Developer Notes#
When called on MPI rank 0 this sets PETSC_COMM_WORLD to PETSC_COMM_SELF, so that a main program written with PETSC_COMM_WORLD runs correctly on that single rank while all the other ranks (which would normally share PETSC_COMM_WORLD) run the solver server.
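The following is a highly simplified conceptual sketch of that rank split, not the actual PETSc implementation; the request tags and helper name are invented for illustration.

#include <mpi.h>

/* Hypothetical request tags, for illustration only */
typedef enum { REQUEST_SOLVE, REQUEST_SHUTDOWN } ServerRequest;

static void solver_server_sketch(MPI_Comm world)
{
  int rank;
  MPI_Comm_rank(world, &rank);
  if (rank == 0) return; /* rank 0 returns to the user's main program, which now
                            sees PETSC_COMM_WORLD redirected to PETSC_COMM_SELF */
  for (;;) {             /* every other rank waits here for work from rank 0 */
    int request;
    MPI_Bcast(&request, 1, MPI_INT, 0, world);
    if (request == REQUEST_SHUTDOWN) break; /* cf. PCMPIServerEnd() */
    /* otherwise: receive the matrix and right-hand side from rank 0,
       participate in the parallel KSPSolve(), and return the solution */
  }
}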
Can this be integrated into the PetscDevice abstraction that is currently being developed?
Conceivably PCREDISTRIBUTE could be organized in a similar manner to simplify its usage.
This could be implemented directly at the KSP level instead of using the PCMPI wrapper object.
The code could be extended to allow an MPI + OpenMP application to use the linear solver server concept across all shared-memory nodes with a single MPI process per node for the user application but multiple MPI processes per node for the linear solver.
The concept could also be extended for users’ callbacks for SNES, TS, and Tao, where SNESSolve(), for example, runs on all MPI processes but the user callback only runs on one MPI process per node.
PETSc could also be extended with an MPI-less API that provides access to PETSc’s solvers without any reference to MPI, essentially removing the MPI_Comm argument from PETSc calls.
See Also#
Using PETSc’s MPI parallel linear solvers from a non-MPI program, PCMPIServerEnd(), PCMPI, KSPCheckPCMPI()
Level#
developer
Location#