Converts a group of MPI ranks to a runnable command. More...
Converts a group of MPI ranks to a runnable command.
This package and its submodules implement execution of external MPI programs. It is not intended to be used directly; instead, one should use produtil.run. From the outside, this package appears to be a module that implements a common interface to various local MPI implementations. This is done by automatically detecting which MPI implementation is in use, and then importing the entire contents of the corresponding sub-module of produtil.mpi_impl. See the submodules for details on each implementation:
The following subroutines are imported from one of those modules. They are added at the mpi_impl package level so that mpi_impl looks identical to the underlying implementation module:
mpirunner(arg,allranks=False,**kwargs) - Implementation of produtil.run.mpirun(). Given an object that is a subclass of produtil.mpiprog.MPIRanksBase, construct and return a produtil.prog.Runner that will execute that MPI command. The allranks=True option requests that the program use all available MPI ranks. An exception should be raised if the program also requests a specific number of ranks (other than 1).
There are two different types of MPI programs that mpirunner must handle. The first is MPI execution of non-MPI programs, which the caller requests via produtil.run.mpiserial. Some MPI implementations support running non-MPI programs directly, while others do not. The external C program "mpiserial" works around that lack of support: it is a simple MPI wrapper program that directs each rank to execute a shell command. The second variety mpirunner must handle is, of course, actual MPI programs. The two are differentiated via (serial,parallel)=arg.check_serial(). If serial is true, the program is serial; if parallel is true, the program is parallel. If both are true, MPIMixed should be raised.
The mpirunner must also handle the allranks=True versus allranks=False cases. If allranks=True, the caller is requesting that the provided MPI program be run on all available ranks. If the MPI program also provides its own rank specification (detected via arg.nranks()!=1), then MPI_COMM_WORLD is overspecified and the mpirunner must raise MPIAllRanksError.
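The two validation rules above can be sketched as follows. This is a minimal illustration with hypothetical stand-in classes; the real produtil.mpiprog.MPIRanksBase and produtil.prog.Runner are far richer, and a real mpirunner would return a Runner rather than a string.

```python
class MPIMixed(Exception):
    """Raised when serial and parallel programs are mixed in one command."""

class MPIAllRanksError(Exception):
    """Raised when allranks=True conflicts with an explicit rank count."""

class FakeMPIProgram:
    """Hypothetical stand-in for an MPIRanksBase subclass."""
    def __init__(self, serial, parallel, nranks):
        self._serial, self._parallel, self._nranks = serial, parallel, nranks
    def check_serial(self):
        # Mirrors (serial,parallel)=arg.check_serial() from the text.
        return (self._serial, self._parallel)
    def nranks(self):
        return self._nranks

def mpirunner(arg, allranks=False):
    """Sketch of the validation logic every mpirunner must perform."""
    serial, parallel = arg.check_serial()
    if serial and parallel:
        raise MPIMixed('cannot mix serial and parallel MPI programs')
    if allranks and arg.nranks() != 1:
        raise MPIAllRanksError('allranks=True conflicts with explicit ranks')
    # A real implementation would now build and return a produtil.prog.Runner
    # for the local MPI implementation (mpiexec, srun, etc.).
    return 'runner for %d rank(s)' % arg.nranks()
```

Note that the mixed-program check happens before the allranks check, so a malformed command raises MPIMixed regardless of the allranks setting.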
These are the detection routines imported from each submodule, except for no_mpi. The name of the routine is "detect()" in its module, and is renamed during import to the package-level namespace:
To implement a new MPI implementation, one must create a new submodule of mpi_impl. It is best to examine the existing modules and mimic them when doing this. Most architectures are similar to either mpirun_lsf (which uses command files) or mpiexec (which passes arguments to mpiexec on the command line). In addition, the external program "mpiserial" provides a means by which to execute a list of serial programs via an MPI invocation, for MPI implementations that do not natively support that (such as the problematic SGI MPT). Furthermore, some MPI implementations may have bugs or limitations that one must work around by setting environment variables (such as SGI MPT with its numerous hard-coded limits). The mpirunner and openmp functions should work around those problems.
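A new submodule might look like the skeleton below. This is a toy sketch, not a real implementation: the function names detect(), openmp(), and mpirunner() follow the conventions of the existing submodules, but the bodies here only manipulate environment variables and strings, and the MY_FAKE_MPI variable is entirely hypothetical.

```python
import os

def detect():
    """Return True if this MPI implementation is available.  A real module
    might look for an executable on $PATH or a variable set by the batch
    system; this toy version checks a hypothetical environment variable."""
    return 'MY_FAKE_MPI' in os.environ

def openmp(arg, threads=None):
    """Sketch: request OpenMP threading by setting OMP_NUM_THREADS.  A real
    implementation may also need implementation-specific workarounds, such
    as extra environment variables for SGI MPT."""
    if threads is not None:
        os.environ['OMP_NUM_THREADS'] = str(threads)
    return arg

def mpirunner(arg, allranks=False, **kwargs):
    """Sketch: turn an MPI program description into a command string; the
    real function returns a produtil.prog.Runner built from arg."""
    return 'mpiexec -n %d %s' % (kwargs.get('nranks', 1), arg)
```

The key design point is that every submodule exports the same set of names, so that importing one submodule's contents makes the whole package behave like that implementation.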
Note that there are two utilities designed to simplify the implementation of a new MPI module:
Once you have a new MPI implementation module, you must edit produtil/mpi_impl/__init__.py to detect your MPI implementation and correctly import the module. The produtil/mpi_impl/__init__.py must import that module's detect() function and detect whether the MPI implementation should be used. If it should be, then __init__.py must import the relevant symbols from your module into the package-level namespace. There are instructions in the code in __init__.py on how to modify it to achieve these steps.
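The detection logic in __init__.py amounts to a first-match-wins loop over the submodules' detect() functions. The sketch below uses a hypothetical registry of (name, detect) pairs and trivial stub detectors; the actual file imports each submodule's detect() directly and then imports the winning module's symbols into the package namespace.

```python
def detect_impi():
    return False          # pretend Intel MPI is absent on this machine

def detect_srun():
    return True           # pretend SLURM srun is present

def detect_no_mpi():
    return True           # fallback stub: always usable

# Order matters: the first implementation whose detect() returns True wins,
# with no_mpi as the last-resort stub when no MPI is available.
_IMPLEMENTATIONS = [
    ('impi', detect_impi),
    ('srun', detect_srun),
    ('no_mpi', detect_no_mpi),
]

def choose_implementation():
    """Return the name of the first MPI implementation that detects itself.
    The real __init__.py would then import that module's mpirunner, openmp,
    and related symbols into the package-level namespace."""
    for name, detect in _IMPLEMENTATIONS:
        if detect():
            return name
    return 'no_mpi'
```

Because no_mpi's stub always accepts, the loop is guaranteed to select some implementation, which is why mpi_impl can be imported even on machines without MPI.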
Namespaces

impi — Adds Intel MPI support to produtil.run.
inside_aprun — Adds support for running serial programs when one is inside an aprun execution.
lsf_cray_intel — Adds support for LSF+aprun with Intel OpenMP to produtil.run.
mpi_impl_base — Utilities like CMDFGen to simplify adding new MPI implementations to the produtil.run suite of modules.
mpiexec — Adds MPICH or MVAPICH2 support to produtil.run.
mpiexec_mpt — Adds SGI MPT support to produtil.run.
mpirun_lsf — Adds LSF+IBMPE support to produtil.run.
no_mpi — Stub functions to allow produtil.mpi_impl to run when MPI is unavailable.
srun — Adds SLURM srun support to produtil.run.
Variables

list produtil.mpi_impl.__all__ — List of symbols to export by "from produtil.mpi_impl import *".
Definition at line 188 of file __init__.py.