PetscShmgetAllocateArray#
allocates shared memory accessible by all MPI processes in the MPI linear solver server
Synopsis#
PetscErrorCode PetscShmgetAllocateArray(size_t sz, size_t asz, void **addr)
Not Collective, only called on the first MPI process
Input Parameters#
sz - the number of elements in the array
asz - the size of an entry in the array, for example sizeof(PetscScalar)
Output Parameter#
addr - the address of the array
Notes#
Uses PetscMalloc() if PETSC_HAVE_SHMGET is not defined or the MPI linear solver server is not running.
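As an illustration, a minimal sketch of allocating and later releasing an array through this routine. The variable names are arbitrary, and the signature assumed here for PetscShmgetDeallocateArray() (taking the address of the pointer, mirroring this routine) should be checked against its own manual page.

#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscScalar *work = NULL;
  size_t       n    = 1000;

  PetscFunctionBeginUser;
  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* Request n entries of sizeof(PetscScalar) bytes each; falls back to PetscMalloc()
     when PETSC_HAVE_SHMGET is undefined or the MPI linear solver server is not running.
     In the server setting only the first MPI process makes this call. */
  PetscCall(PetscShmgetAllocateArray(n, sizeof(PetscScalar), (void **)&work));
  /* ... fill and use work[] ... */
  PetscCall(PetscShmgetDeallocateArray((void **)&work)); /* assumed counterpart signature */
  PetscCall(PetscFinalize());
  return 0;
}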
Sometimes when a program crashes, shared memory IDs may remain, making it impossible to rerun the program. Use $PETSC_DIR/lib/petsc/bin/petscfreesharedmemory to free that memory.
Use the Unix command ipcs -m to see what memory IDs are currently allocated and ipcrm -m ID to remove a memory ID.
Use the Unix command ipcrm --all or for i in $(ipcs -m | tail -$(expr $(ipcs -m | wc -l) - 3) | tr -s ' ' | cut -d" " -f3); do ipcrm -M $i; done to delete all the currently allocated memory IDs.
Under Apple macOS, the following file must be copied to /Library/LaunchDaemons/sharedmemory.plist and the machine rebooted before using shared memory:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>shmemsetup</string>
  <key>UserName</key>
  <string>root</string>
  <key>GroupName</key>
  <string>wheel</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/sbin/sysctl</string>
    <string>-w</string>
    <string>kern.sysv.shmmax=4194304000</string>
    <string>kern.sysv.shmmni=2064</string>
    <string>kern.sysv.shmseg=2064</string>
    <string>kern.sysv.shmall=131072000</string>
  </array>
  <key>KeepAlive</key>
  <false/>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
Fortran Note#
The calling sequence is PetscShmgetAllocateArray[Scalar,Int](PetscInt start, PetscInt len, Petsc[Scalar,Int], pointer :: d1(:), ierr)
Developer Note#
More specifically, this uses PetscMalloc() if !PCMPIServerUseShmget || !PCMPIServerActive || PCMPIServerInSolve, where PCMPIServerInSolve indicates that the solve is nested inside an MPI linear solver server solve and hence should not allocate the vector and matrix memory in shared memory.
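A hedged sketch of that dispatch, with ordinary local booleans standing in for the PETSc-internal flags named above purely to make the branch explicit; it is not the actual implementation.

#include <petscsys.h>

/* Stand-ins for the internal flags PCMPIServerUseShmget, PCMPIServerActive,
   and PCMPIServerInSolve; the values here are illustrative only */
static PetscBool use_shmget = PETSC_TRUE, server_active = PETSC_FALSE, in_solve = PETSC_FALSE;

static PetscErrorCode AllocateArraySketch(size_t sz, size_t asz, void **addr)
{
  PetscFunctionBegin;
  if (!use_shmget || !server_active || in_solve) {
    /* ordinary heap allocation; the memory is private to this process */
    PetscCall(PetscMalloc(sz * asz, addr));
  } else {
    /* shared-memory (shmget()/shmat()) path visible to all server processes; omitted here */
  }
  PetscFunctionReturn(PETSC_SUCCESS);
}

int main(int argc, char **argv)
{
  PetscScalar *a = NULL;

  PetscFunctionBeginUser;
  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(AllocateArraySketch(10, sizeof(PetscScalar), (void **)&a));
  PetscCall(PetscFree(a));
  PetscCall(PetscFinalize());
  return 0;
}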
See Also#
Using PETSc’s MPI parallel linear solvers from a non-MPI program, PCMPIServerBegin(), PCMPI, KSPCheckPCMPI(), PetscShmgetDeallocateArray()
Level#
developer
Location#