Configuring PETSc FAQ
Please obtain PETSc via the repository or download the latest patched tarball. See download documentation for more information.
See quick-start tutorial for a step-by-step walk-through of the installation process.
There are many example
configure scripts at
config/examples/*.py. These cover a
wide variety of systems, and we use some of these scripts locally for testing. One can
modify these files and run them in lieu of writing one yourself. For example:
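For instance, one of the CI scripts referenced later in this document can be run directly (a sketch; pick a script matching your system):

> ./config/examples/arch-ci-linux-cuda-double.py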
If there is a system for which we do not yet have such a
configure script and/or
the script in the examples directory is outdated, we welcome your feedback by submitting
your recommendations to firstname.lastname@example.org. See bug report documentation for more information.
If you do not have a Fortran compiler or MPICH installed locally (and want to use PETSc from C only):
> ./configure --with-cc=gcc --with-cxx=0 --with-fc=0 --download-f2cblaslapack --download-mpich
Same as above, but install in a user-specified (prefix) location:
> ./configure --prefix=/home/user/soft/petsc-install --with-cc=gcc --with-cxx=0 --with-fc=0 --download-f2cblaslapack --download-mpich
If BLAS/LAPACK and MPI sources (in “-devel” packages in most Linux distributions) are already installed in default system/compiler locations, and mpif90 and mpiexec are available via $PATH, configure does not require any additional options.
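In that case a bare invocation suffices:

> ./configure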
If BLAS/LAPACK and MPI are already installed in a known user location, use one of:
> ./configure --with-blaslapack-dir=/usr/local/blaslapack --with-mpi-dir=/usr/local/mpich
> ./configure --with-blaslapack-dir=/usr/local/blaslapack --with-cc=/usr/local/mpich/bin/mpicc --with-mpi-f90=/usr/local/mpich/bin/mpif90 --with-mpiexec=/usr/local/mpich/bin/mpiexec
Do not specify --with-fc etc. for the above when using --with-mpi-dir, so that mpif90 can be picked up from the MPI directory!
Build a complex version of PETSc (using the C++ compiler):
> ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --with-clanguage=cxx --download-fblaslapack --download-mpich --with-scalar-type=complex
Install two variants of PETSc, one with GNU and the other with Intel compilers. Specify a different $PETSC_ARCH for each build. See the multiple PETSc install documentation for further recommendations:

> ./configure PETSC_ARCH=linux-gnu --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich
> make PETSC_ARCH=linux-gnu all test
> ./configure PETSC_ARCH=linux-gnu-intel --with-cc=icc --with-cxx=icpc --with-fc=ifort --download-mpich --with-blaslapack-dir=/usr/local/mkl
> make PETSC_ARCH=linux-gnu-intel all test
If no compilers are specified, configure will automatically look for available MPI or regular compilers in the user’s $PATH.
Specify compilers using the options --with-cc, --with-cxx, and --with-fc for C, C++, and Fortran compilers respectively:
> ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran
It’s best to use MPI compilers, as this will avoid the situation where MPI is compiled with one set of compilers (like gfortran) and the user specifies incompatible compilers to PETSc (perhaps ifort). This can be done by either specifying the MPI compiler wrappers directly:

> ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90

or the following (but without --with-cc etc., so that the wrappers are picked up from the MPI directory):

> ./configure --with-mpi-dir=/opt/mpich2-1.1
If a Fortran compiler is not available or not needed, disable its use with:
> ./configure --with-fc=0
If a C++ compiler is not available or not needed, disable its use with:
> ./configure --with-cxx=0
configure defaults to building PETSc in debug mode. One can switch to optimized mode with the configure option --with-debugging=0. (We suggest using a different $PETSC_ARCH for debug and optimized builds, for example arch-debug and arch-opt; this way you can switch between debugging your code and running for performance by simply changing the value of $PETSC_ARCH.) See the multiple install documentation for further details.
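A minimal sketch of this two-build workflow, using the arch names suggested above:

> ./configure PETSC_ARCH=arch-debug
> make
> ./configure PETSC_ARCH=arch-opt --with-debugging=0
> make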
Additionally one can specify more suitable optimization flags with the options COPTFLAGS, CXXOPTFLAGS, and FOPTFLAGS. For example, when using GNU compilers with corresponding optimization flags:
> ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --with-debugging=0 COPTFLAGS='-O3 -march=native -mtune=native' CXXOPTFLAGS='-O3 -march=native -mtune=native' FOPTFLAGS='-O3 -march=native -mtune=native' --download-mpich
configure cannot detect compiler libraries for certain sets of compilers. In this case one can specify additional system/compiler libraries using the LIBS option:
> ./configure --LIBS='-ldl /usr/lib/libm.a'
For any external packages used with PETSc, we highly recommend you have PETSc download and install the packages, rather than you installing them separately first. This ensures that:
The packages are installed with the same compilers and compiler options as PETSc so that they can work together.
A compatible version of the package is installed. A generic install of this package might not be compatible with PETSc (perhaps due to version differences - or perhaps due to the requirement of additional patches for it to work with PETSc).
Some packages have bug fixes, portability patches, and upgrades for dependent packages that have not yet been included in an upstream release, and hence may not play nice with PETSc.
configure has the ability to download and install these external packages. Alternatively if these packages are already installed, then
configure can detect and use them.
If you are behind a firewall and cannot use a proxy for the downloads, or have a very slow network, use the additional option --with-packages-download-dir=/path/to/dir. This will cause configure to print the URLs of all the packages you must download. You may then download the packages to some directory (do not uncompress or untar the files) and then point configure to these copies of the packages instead of having it download them directly from the internet.
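A sketch of the resulting two-pass workflow (the download directory name here is only illustrative):

> ./configure --download-mpich --with-packages-download-dir=/home/user/tarballs

The first run prints the URL of the MPICH tarball and stops; after fetching the tarball into /home/user/tarballs, re-run the same command and configure will use the local copy.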
The following modes can be used to download/install external packages with configure:

--download-PACKAGENAME: Download the specified package and install it, enabling PETSc to use this package. This is the recommended method to couple any external packages with PETSc:
> ./configure --download-fblaslapack --download-mpich
--download-PACKAGENAME=/PATH/TO/package.tar.gz: If configure cannot automatically download the package (due to network/firewall issues), one can download the package by alternative means (perhaps wget, curl, or scp via some other machine). Once the tarfile is downloaded, the path to this file can be specified to configure with this option. configure will proceed to install this package and then configure PETSc with it:
> ./configure --download-mpich=/home/petsc/mpich2-1.0.4p1.tar.gz
--with-PACKAGENAME-dir=/path/to/dir: If the external package is already installed, specify its location to configure (it will attempt to detect and include relevant library files from this location). Normally this corresponds to the top-level installation directory for the package:
> ./configure --with-mpi-dir=/home/petsc/software/mpich2-1.0.4p1
--with-PACKAGENAME-include=INCLUDEPATH --with-PACKAGENAME-lib=LIBRARYLIST: Usually a package is defined completely by its include file location and library list. If the package is already installed, one can use these two options to specify the package to configure. For example:
> ./configure --with-superlu-include=/home/petsc/software/superlu/include --with-superlu-lib=/home/petsc/software/superlu/lib/libsuperlu.a
> ./configure --with-parmetis-include=/sandbox/balay/parmetis/include --with-parmetis-lib="-L/sandbox/balay/parmetis/lib -lparmetis -lmetis"
> ./configure --with-parmetis-include=/sandbox/balay/parmetis/include --with-parmetis-lib=[/sandbox/balay/parmetis/lib/libparmetis.a,libmetis.a]
Generally one would use only one of the above installation modes for any given package, and not mix them (i.e. combining --with-mpi-dir and --with-mpi-include etc. should be avoided).
Some packages might not support certain options like
--with-PACKAGENAME-dir. Architectures like Microsoft Windows might have issues with these options. In these cases,
--with-PACKAGENAME-lib options should be preferred.
If you want to download a compatible external package manually, the URL for the package is listed in the configure source for that package. For example, check config/BuildSystem/config/packages/SuperLU.py for the URL to download SuperLU.
--with-packages-build-dir=PATH: By default, external packages will be unpacked and the build process is run in
$PETSC_DIR/$PETSC_ARCH/externalpackages. However one can choose a different location where these packages are unpacked and the build process is run.
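For example (the scratch path here is hypothetical):

> ./configure --download-mpich --with-packages-build-dir=/scratch/user/petsc-externalpackages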
BLAS/LAPACK

These packages provide some basic numeric kernels used by PETSc. configure will automatically look for BLAS/LAPACK in certain standard locations; on most systems you should not need to provide any information about BLAS/LAPACK on the configure command line. One can use the following options to let configure download/install BLAS/LAPACK automatically.
When a Fortran compiler is present:
> ./configure --download-fblaslapack
Or when configuring without a Fortran compiler, i.e. --with-fc=0:
> ./configure --download-f2cblaslapack
Alternatively one can use other options like one of the following:
> ./configure --with-blaslapack-lib=libsunperf.a
> ./configure --with-blas-lib=libblas.a --with-lapack-lib=liblapack.a
> ./configure --with-blaslapack-dir=/soft/com/packages/intel/13/079/mkl
Intel provides BLAS/LAPACK via the MKL library. One can specify it with --with-blaslapack-dir=/soft/com/packages/intel/13/079/mkl. If the above option does not work, one can determine the correct library list for your compilers using the Intel MKL Link Line Advisor and specify it with the --with-blaslapack-lib option.
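A sketch of specifying such a library list explicitly; the MKL path and the exact list depend on your compiler and MKL version, so treat the names below as illustrative:

> ./configure --with-blaslapack-lib="-L/opt/intel/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm"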
Sadly, IBM’s ESSL does not have all the routines of BLAS/LAPACK that some packages, such as SuperLU, expect; in particular slamch, dlamch, and xerbla. In this case, instead of using ESSL we suggest --download-fblaslapack. If you really want to use ESSL, see https://www.pdc.kth.se/hpc-services.
MPI

The Message Passing Interface (MPI) provides the parallel functionality for PETSc. configure will automatically look for MPI compilers (mpicc, mpif90 etc.) and use them if found in your $PATH. One can use the following options to let configure download/install MPI automatically.
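For example, to download and build MPICH:

> ./configure --download-mpich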
Using MPI Compilers
It’s best to install PETSc with MPI compiler wrappers (often called mpicc, mpicxx, mpif90); this way, the SAME compilers used to build MPI are used to build PETSc. See the section on compilers above for more details.
Vendor provided MPI might already be installed. IBM, SGI, Cray etc provide their own:
> ./configure --with-cc=vendor_mpicc --with-fc=vendor_mpif90
If using MPICH which is already installed (perhaps using myrinet/gm), then use the following (without specifying --with-cc=gcc etc., so that the compiler wrappers from this MPICH install are picked up):
> ./configure --with-mpi-dir=/absolute/path/to/mpich/install
Installing Without MPI
You can build (sequential) PETSc without MPI. This is useful for quickly installing PETSc:
> ./configure --with-mpi=0
However, if there is any MPI code in the user's application, then it is best to install a full MPI implementation, even if the usage is currently limited to uniprocessor mode.
In-place Installation

By default, PETSc does an in-place installation, meaning the libraries are kept in the same directories used to compile PETSc. This is particularly useful for those application developers who follow the PETSc git repository main or release branches, since rebuilds for updates are very quick and painless. The libraries are located in $PETSC_DIR/$PETSC_ARCH/lib, and the include files in $PETSC_DIR/include and $PETSC_DIR/$PETSC_ARCH/include.
Out-of-place Installation With --prefix

To install the libraries and include files in another location, use the --prefix option:
> ./configure --prefix=/home/userid/my-petsc-install --some-other-options
The libraries and include files will be located in /home/userid/my-petsc-install/lib and /home/userid/my-petsc-install/include.
Installation in Root Location, Not Recommended (Uncommon)
One should never run
configure or make on any package using root access. Do so at
your own risk.
If one wants to install PETSc in a common system location such as /opt/petsc that requires root access, we suggest creating a directory for PETSc with user privileges,
and then do the PETSc install as a regular/non-root user:
> sudo mkdir /opt/petsc
> sudo chown user:group /opt/petsc
> cd /home/userid/petsc
> ./configure --prefix=/opt/petsc/my-root-petsc-install --some-other-options
> make
> make install
Installs For Package Managers: Using DESTDIR (Very uncommon)
> ./configure --prefix=/opt/petsc/my-root-petsc-install
> make
> make install DESTDIR=/tmp/petsc-pkg

Package up the contents of /tmp/petsc-pkg. The package should then be installed at /opt/petsc/my-root-petsc-install.
Multiple Installs Using --prefix And DESTDIR (Very uncommon)

Specify a different --prefix location at configure time for each build with different options. For example:

> ./configure --prefix=/opt/petsc/petsc-3.15.0-mpich --with-mpi-dir=/opt/mpich
> make
> make install [DESTDIR=/tmp/petsc-pkg]
> ./configure --prefix=/opt/petsc/petsc-3.15.0-openmpi --with-mpi-dir=/opt/openmpi
> make
> make install [DESTDIR=/tmp/petsc-pkg]
The PETSc libraries and generated include files are placed in the sub-directory named by $PETSC_ARCH, which is either provided by the user with, for example:

> export PETSC_ARCH=arch-debug
> ./configure
> make
> export PETSC_ARCH=arch-opt
> ./configure --some-optimization-options
> make

or

> ./configure PETSC_ARCH=arch-debug
> make
> ./configure --some-optimization-options PETSC_ARCH=arch-opt
> make
If not provided, configure will generate a unique value automatically (for in-place configurations without --prefix only). For example:

> ./configure
> make
> ./configure --with-debugging=0
> make

produces the directories arch-darwin-c-debug and arch-darwin-c-opt (on an Apple macOS machine).
On systems where you need to use a job scheduler or batch submission to run jobs, use the configure option --with-batch. On such systems the make check step will not work, since it cannot launch jobs directly.
You must first ensure you have loaded appropriate modules for the compilers etc. that you wish to use. Often the compilers are provided automatically for you, so you do not need to provide --with-cc=XXX etc. Consult the documentation and local support for such systems for information on these topics.
On such systems you generally should not use --download-fblaslapack, since the systems provide BLAS/LAPACK automatically (sometimes appropriate modules must be loaded first).
Some --download-PACKAGENAME options do not work on these systems, for example HDF5. Thus you must use modules to load those packages and the corresponding --with-PACKAGENAME options to configure PETSc with the package.
Since building external packages on these systems is often troublesome and slow we recommend only installing PETSc with those configuration packages that you need for your work, not extras.
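A minimal sketch for such a system, assuming Cray-style compiler wrapper modules (cc, CC, ftn) are already loaded:

> ./configure --with-batch --with-cc=cc --with-cxx=CC --with-fc=ftn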
For example, to install with the TAU instrumentation package:

> export TAU_MAKEFILE=/home/balay/soft/linux64/tau-2.20.3/x86_64/lib/Makefile.tau-mpi-pdt
> ./configure CC=/home/balay/soft/linux64/tau-2.20.3/x86_64/bin/tau_cc.sh --with-fc=0 PETSC_ARCH=arch-tau
PETSc is able to take advantage of GPUs and certain accelerator libraries; however, some require additional configure options, described below.
CUDA

On Linux - make sure you have a compatible NVIDIA driver installed.
On Windows - Use either Cygwin or WSL, the latter of which is entirely untested right now. If you have experience with WSL and/or have successfully built PETSc on Windows for use with CUDA, we welcome your input at email@example.com. See the bug-reporting documentation for more details.
In most cases you need only pass the configure option --with-cuda; check config/examples/arch-ci-linux-cuda-double.py for example usage.
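A minimal invocation (other options, such as --download-mpich, can be combined as usual):

> ./configure --with-cuda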
The CUDA build of PETSc currently works on Mac OS X, Linux, and Microsoft Windows with Cygwin.
Examples that use CUDA have the suffix .cu.
Kokkos

In most cases you need only pass the configure option --download-kokkos and one of --with-cuda, --with-openmp, or --with-pthread (or nothing to use sequential Kokkos). See the CUDA and OpenMP installation documentation for further reference on their configuration.
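A sketch using the OpenMP backend; depending on the PETSc version, --download-kokkos-kernels may also be required:

> ./configure --download-kokkos --with-openmp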
Examples that use Kokkos have the suffix .kokkos.cxx.
OpenCL/ViennaCL

Requires the OpenCL shared library, which is shipped in the vendor graphics driver, and the OpenCL headers; if needed you can download the headers from the Khronos Group directly. Package managers on Linux provide these headers through a package named ‘opencl-headers’ or similar. On Apple systems the OpenCL drivers and headers are always available and do not need to be downloaded.
Always make sure you have the latest GPU driver installed. There are several known issues with older driver versions.
In most cases you need only pass the configure option --download-viennacl; check config/examples/arch-ci-linux-viennacl.py for example usage.
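A minimal invocation, assuming your PETSc version supports downloading ViennaCL:

> ./configure --download-viennacl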
NERSC - CORI machine
Project ID: m3353
PI: Richard Mills
Notes on usage:
ALCF - Argonne National Laboratory - theta machine - Intel KNL based system
Notes on usage:
Log into theta.alcf.anl.gov (Use crypto card or MobilePass app for the password)
There are three compiler suites; load one via its modules:
module load PrgEnv-intel intel
module load PrgEnv-gnu gcc/7.1.0/
module load PrgEnv-cray
List currently loaded modules: module list
List all available modules: module avail
BLAS/LAPACK will automatically be found so you do not need to provide it
It is best not to use built-in modules for external packages (except BLAS/LAPACK) because they are often buggy. Most external packages can be built using the --download-packagename option with the Intel or GNU environment, but not Cray.
You can use config/examples/arch-cray-xc40-knl-opt.py as a template for running configure, but it is outdated.
When using the Intel module you may need to use --download-sowing-cxxpp="icpc -E", since the GNU compilers may not work as they access Intel files.
To get an interactive node use
qsub -A CSC250STMS07 -n 1 -t 60 -q debug-flat-quad -I
To run on interactive node using two MPI ranks use
aprun -n 2 ./program options
ALCF - Argonne National Laboratory - thetagpu machine - AMD CPUs with NVIDIA GPUs
Notes on usage:
Log into theta.alcf.anl.gov. The GPU front-end and compute nodes do not support git via ssh, so it is best to run git clone/fetch etc. (in your PETSc clone) on theta.alcf.anl.gov.
ssh thetagpusn1 (this is the GPU front end)
Set the proxy variables:
export http_proxy=http://proxy.tmi.alcf.anl.gov:3128
export https_proxy=http://proxy.tmi.alcf.anl.gov:3128
module load nvhpc (Do not module load any MPI)
module load libtool-2.4.6-gcc-7.5.0-jdxbjft
./configure --with-mpi-dir=$CUDA_DIR/../comm_libs/mpi/ --with-cuda-dir=$CUDA_DIR/11.0 --download-f2cblaslapack=1
To install CMake, add --download-cmake --download-cmake-configure-arguments="-- -DCMAKE_USE_OPENSSL=OFF"
To install Kokkos, do export CUDA_ROOT=$CUDA_DIR/11.0
Log into interactive compute nodes with qsub -I -t TimeInMinutes -n 1 -A AProjectName (for example, gpu_hack). (-q single-gpu will give you access to one GPU, and is often much quicker; otherwise you get access to all eight GPUs on a node.)
Run executables with $CUDA_DIR/../comm_libs/mpi/bin/mpirun
OLCF - Oak Ridge National Laboratory - Summit machine - NVIDIA GPUs and IBM Power PC processors
Project ID: CSC314
PI: Barry Smith
Notes on usage:
Log into summit.olcf.ornl.gov
> module load cmake hdf5 cuda
> module load pgi
> module load essl netlib-lapack xl
> module load gcc
You can use config/examples/arch-olcf-opt.py as a template for running configure.
You configure PETSc and build examples in your home directory, but launch them from your “work” directory.
Use the bsub command to submit jobs to the queue. See the “Batch Scripts” section of the running jobs documentation.
Tools for profiling:
-log_view, which adds GPU communication and computation to the summary table
nvvp from the CUDA toolkit
For iOS see
$PETSC_DIR/systems/Apple/iOS/bin/makeall. A thorough discussion of the
installation procedure is given here.
For Android, you must have your standalone bin folder in the path, so that the compilers are visible.
Check config/examples/arch-arm64-opt.py (for iOS) and config/examples/arch-armv7-opt.py (for Android) for example usage.