# HWRF trunk@4391 --- produtil/mpi_impl/__init__.py

##@namespace produtil.mpi_impl
# Converts a group of MPI ranks to a runnable command.
#
# @section produtil_mpi_impl_overview Produtil MPI Implementation
#
# This package and its submodules implement execution of external MPI
# programs. This package is not intended to be used directly; instead,
# one should use produtil.run. This package appears to the outside to
# be a module that implements a common interface to various local MPI
# implementations. This is done by automatically detecting which MPI
# implementation is in use, and then importing the entire contents of
# the corresponding sub-module of produtil.mpi_impl. See the submodules
# for details on each implementation:
#
# * produtil.mpi_impl.mpiexec --- MPICH or public MVAPICH2
# * produtil.mpi_impl.impi --- Intel MPI
# * produtil.mpi_impl.mpiexec_mpt --- SGI MPT
# * produtil.mpi_impl.mpirun_lsf --- LSF wrapped around IBMPE
# * produtil.mpi_impl.srun --- SLURM's srun
# * produtil.mpi_impl.lsf_cray_intel --- LSF on Cray with Intel MPI and aprun
# * produtil.mpi_impl.inside_aprun --- serial execution inside an aprun invocation
# * produtil.mpi_impl.no_mpi --- for a purely serial environment
#
# @section produtil_mpi_impl_subroutines Subroutines Imported from Implementation Modules
#
# The following subroutines are imported from one of those modules.
# They are added at the mpi_impl package level to make the mpi_impl
# package look identical to the underlying implementation module:
#
# * openmp(arg,threads) - given a Runner, set it up to use OpenMP.
#   If threads is provided, it is the number of threads to use.
#   Otherwise, no thread count is specified and it is assumed
#   that the underlying OpenMP implementation will use the
#   correct number.
#
# * can_run_mpi() - does this computer support running MPI programs?
#
# * bigexe_prepend(arg,**kwargs) - modifies an executable to run on a
#   compute node instead of the batch node. This is intended for
#   future support of the Cray architecture, where the batch script
#   runs on a batch node and must call "aprun" to execute a program
#   on a remote compute node. This is the function one would use to
#   prepend "aprun" and its various arguments. This functionality is
#   not presently tested.
#
# * mpirunner(arg,allranks=False,**kwargs) - implementation of
#   produtil.run.mpirun(). Given an object that is a subclass of
#   produtil.mpiprog.MPIRanksBase, construct and return a
#   produtil.prog.Runner that will execute that MPI command. The
#   allranks=True option requests that the program use all available
#   MPI ranks. An exception should be raised if the program also
#   requests a specific number of ranks (other than 1). A brief usage
#   sketch, via produtil.run, follows this list.
#
#   There are two different types of MPI programs that mpirunner
#   must handle. One is MPI execution of non-MPI programs, which
#   the caller requests via produtil.run.mpiserial. Some MPI
#   implementations support running non-MPI programs directly, while
#   others do not. The external C program "mpiserial" provides an
#   MPI wrapper program to work around that lack of support: it is
#   a simple MPI program that directs each rank to execute a shell
#   command. The other variety of program mpirunner must handle is,
#   of course, actual MPI programs. The two are differentiated via:
#       (serial,parallel)=arg.check_serial()
#   If serial is true, the program is serial; if parallel is true,
#   the program is parallel. If both are true, MPIMixed should be
#   raised.
#
#   The mpirunner must also handle the allranks=True vs. False cases.
#   If allranks=True, the caller is requesting that the provided
#   MPI program be run on all available ranks. If the MPI program
#   also provides a rank specification (detected via arg.nranks()!=1)
#   then MPI_COMM_WORLD is overspecified and the mpirunner must
#   raise MPIAllRanksError.
#
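# As a brief usage sketch (the executable names here are hypothetical;
# the entry points shown are the documented produtil.run wrappers
# around these package-level functions):
#
#     from produtil.run import exe, mpi, mpiserial, mpirun, openmp, checkrun
#     # Twelve ranks of an MPI program, with four OpenMP threads per rank:
#     checkrun(openmp(mpirun(mpi('./hello_mpi')*12),threads=4))
#     # Four copies of a non-MPI program, run under MPI via mpiserial:
#     checkrun(mpirun(mpiserial(exe('./postproc'))*4))
#     # One MPI program on every available rank:
#     checkrun(mpirun(mpi('./forecast'),allranks=True))
#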
# These are the detection routines imported from each submodule, except
# for no_mpi. Each routine is named "detect()" in its own module, and is
# renamed as follows during import into the package-level namespace:
#
# * impi_detect() --- returns True if Intel MPI should be used
#
# * mpiexec_detect() --- returns True if MPICH or MVAPICH2 should be used
#
# * mpiexec_mpt_detect() --- returns True if SGI MPT should be used
#
# * mpirun_lsf_detect() --- returns True if LSF IBMPE should be used
#
# * srun_detect() --- returns True if SLURM's srun should be used
#
# * lsf_cray_intel_detect() --- returns True if LSF on Cray with Intel
#   MPI should be used
#
# * inside_aprun_detect() --- returns True if this script is already
#   running inside an aprun invocation
#
# @section produtil_mpi_impl_adding Adding New MPI Implementations
#
# To add support for a new MPI implementation, one must create a new
# submodule of mpi_impl. It is best to examine the existing modules and
# mimic them when doing this. Most architectures are similar to either
# the mpirun_lsf module (which uses command files) or the mpiexec module
# (which provides arguments to mpiexec on the command line). In
# addition, the external program "mpiserial" provides a means by which
# to execute a list of serial programs via an MPI invocation, for MPI
# implementations that do not natively support that (such as the
# problematic SGI MPT). Furthermore, some MPI implementations may have
# bugs or limitations that one must work around by setting environment
# variables (such as SGI MPT with its numerous hard-coded limits). The
# mpirunner and openmp functions should work around those problems. A
# skeleton sketch of such a module appears after the list of utilities
# below.
#
# Note that there are two utilities designed to simplify the
# implementation of a new MPI module:
#
# * produtil.mpiprog.MPIRanksBase.to_arglist() --- walks the tree of
#   objects, automatically generating an MPI invocation command
#   (mpiexec, mpirun, etc.) with arguments, based on a provided set
#   of rules. This is how the existing modules make their MPI
#   commands. It is quite simple to use, and handles the hard work
#   of walking the object tree for you.
#
# * produtil.mpi_impl.mpi_impl_base.CMDFGen --- provides a way of easily
#   writing a command file based on produtil.mpiprog.MPISerial
#   objects. This is for MPI implementations, such as IBMPE, that
#   require a file listing the commands to run on each MPI rank. It
#   is also needed when using mpiserial to execute non-MPI programs
#   under MPI.
#
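# As a concrete illustration, here is a skeleton sketch of a
# hypothetical new submodule (say, produtil/mpi_impl/mymach.py). Only
# the function names and signatures come from the interface described
# above; the bodies are illustrative placeholders, and the to_arglist()
# keyword arguments mirror the pattern used by the existing mpiexec
# module rather than a definitive interface:
#
#     import os
#     import produtil.prog, produtil.mpiprog
#     from .mpi_impl_base import MPIMixed
#
#     def detect():
#         """!Is this hypothetical machine's MPI stack available?"""
#         return os.environ.get('MYMACH_MPI','')!=''
#
#     def can_run_mpi():
#         return detect()
#
#     def openmp(arg,threads=None):
#         if threads is not None:
#             arg=arg.env(OMP_NUM_THREADS=str(threads))
#         return arg
#
#     def bigexe_prepend(arg,**kwargs):
#         return []    # nothing needs to be prepended on this machine
#
#     def mpirunner(arg,allranks=False,**kwargs):
#         assert(isinstance(arg,produtil.mpiprog.MPIRanksBase))
#         (serial,parallel)=arg.check_serial()
#         if serial and parallel:
#             raise MPIMixed('Cannot mix serial and parallel MPI ranks.')
#         # Handling of allranks=True and of serial (mpiserial) programs
#         # is omitted for brevity; see the existing modules.
#         return produtil.prog.Runner(
#             [a for a in arg.to_arglist(
#                 pre=['mympilaunch'],     # hypothetical MPI launcher
#                 before=['-n','%(n)d'],   # rank count for each block
#                 between=[':'])])
#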
# Once you have a new MPI implementation module, you must edit
# produtil/mpi_impl/__init__.py to detect your MPI implementation and
# correctly import the module. The __init__.py must import that
# module's detect() function and use it to decide whether the MPI
# implementation should be used. If it should be, then __init__.py
# must import the relevant symbols from your module into the
# package-level namespace. There are instructions in the code in
# __init__.py on how to modify it to achieve these steps; a brief
# sketch of the pattern follows.
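# Concretely, wiring in a hypothetical "mymach" module follows the same
# pattern as the existing code below: a guarded import of its detect()
# function, and a new branch in the if-elif chain:
#
#     try:
#         from .mymach import detect as mymach_detect
#     except ImportError: pass
#     ...
#     elif 'mymach_detect' in dir() and mymach_detect():
#         from .mymach import *
#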
from . import no_mpi

########################################################################
# Import the MPI detection functions from all known modules. We
# ignore ImportError in case the module is missing (which it will be
# for NCO, who only gets the WCOSS-specific modules).

try:
    from .impi import detect as impi_detect
except ImportError: pass

try:
    from .mpiexec import detect as mpiexec_detect
except ImportError: pass

try:
    from .srun import detect as srun_detect
except ImportError: pass

try:
    from .inside_aprun import detect as inside_aprun_detect
except ImportError: pass

try:
    from .mpiexec_mpt import detect as mpiexec_mpt_detect
except ImportError: pass

try:
    from .mpirun_lsf import detect as mpirun_lsf_detect
except ImportError: pass

try:
    from .lsf_cray_intel import detect as lsf_cray_intel_detect
except ImportError: pass
########################################################################
# Decide what MPI implementation to use

if 'mpirun_lsf_detect' in dir() and mpirun_lsf_detect():
    from .mpirun_lsf import *
elif 'srun_detect' in dir() and srun_detect():
    from .srun import *
elif 'inside_aprun_detect' in dir() and inside_aprun_detect():
    from .inside_aprun import *
elif 'lsf_cray_intel_detect' in dir() and lsf_cray_intel_detect():
    from .lsf_cray_intel import *
elif 'impi_detect' in dir() and impi_detect():
    from .impi import *
elif 'mpiexec_mpt_detect' in dir() and mpiexec_mpt_detect():
    from .mpiexec_mpt import *
elif 'mpiexec_detect' in dir() and mpiexec_detect():
    from .mpiexec import *
else:
    from .no_mpi import *
##@var __all__
# List of symbols to export by "from produtil.mpi_impl import *"
__all__=['mpirunner','can_run_mpi','bigexe_prepend',
         'guess_maxmpi','guess_nthreads']