---
myst:
  substitutions:
    bloc:
      python: 43,44
      cpp: 35,36
    wrapper:
      python: "mpirun -np N python relauncherCodeFlowrateMPI.py"
      cpp: "mpirun -np N root -l -b -q relauncherCodeFlowrateMPI.C"
---

(use_cases_macro_relauncher_mpi)=
# Macro "**relauncherCodeFlowrateMPI.{{extension}}**"

## Objective

The goal of this macro is to show how to handle a code run on several processes with a different memory paradigm: while the `TThreadedRun` instance relies on shared memory (which can lead to thread-safety problems, as discussed in [](#relauncher_trun_tthreadedrun)), the MPI implementation is based on separated memory, the communication being made through messages. To do this, the usual sequential runner is removed and another runner is called to do the job. The `flowrate` code is provided with {{uranie}} and has also been used and discussed throughout these macros.

(use_cases_macro_relauncher_mpi_macro)=
## Macro

{{ "```{literalinclude} " + parent_dir + "/roottest/uranie/doc/relauncher/use_cases/" + language + "/relauncherCodeFlowrateMPI." + extension + "\n" + ":language: " + language + "\n" + "```" }}

Here, the first difference when comparing this macro to the previous one (see [](#use_cases_macro_relauncher_threaded_macro)) is the runner creation:

{{ "```{literalinclude} " + parent_dir + "/roottest/uranie/doc/relauncher/use_cases/" + language + "/relauncherCodeFlowrateMPI." + extension + "\n" + ":language: " + language + "\n" + ":lines: " + bloc[language] + "\n" + "```" }}

The `TThreadedRun` object becomes a `TMpiRun` object, whose construction only requires a pointer to the assessor.

````{only} cpp
Apart from that, the code is very similar, the only difference being the way this macro is called. It should not be run with the usual command:

```bash
root -l relauncherCodeFlowrateMPI.C
```
````

{{ "````{" "only" "} py" + "\n" + "Another line differs as it is specific to the language: because of the way " + root + " and Python deal with object destruction (through the garbage-collector approach for the latter), there is a problem in the way one of the key methods of the MPI treatment, `MPI_Finalize`, is called. To prevent this from happening in Python, the following line should be added as soon as the runner object is created:\n" + "\n" + "```{literalinclude} " + parent_dir + "/roottest/uranie/doc/relauncher/use_cases/python/relauncherCodeFlowrateMPI.py" + "\n" + ":language: python\n" + ":lines: 61\n" + ":dedent:\n" + "```\n" + "\n" + "This allows " + root + " to destroy the injected object, calling the finalize method so that every slave is properly released. Apart from that, the code is very similar, the only difference being the way this macro is called. It should not be run with the usual command:\n" + "\n" + "```bash" + "\n" + "python relauncherCodeFlowrateMPI.py\n" + "```\n" + "````" }}

Instead, the command line should start with the `mpirun` command, as follows:

{{ "```{" "code" "} " + "bash" + "\n" + wrapper[language] + "\n" + "```" }}

where the *N* part should be replaced by the number of requested processes. Once run, this macro also leads to the following plot.

```{only} py
Beware never to use the `-i` argument with the python command line, as the macro would never end.
```

## Graph

{{ "```{" "figure" "} " + parent_dir + "/roottest/build/uranie/doc/relauncher/use_cases/" + language + "/mpi/relauncherCodeFlowrateMPI.png\n" ":align: center\n" ":name: usecases_relauncherCodeFlowrateMPI\n" + figure_scale + "\n" "\n" "Representation of the output as a function of the first input with a colZ option\n" "```" }}