---
myst:
  substitutions:
    sentence:
      python: ""
      cpp: |
        In the particular case of standalone compilation, please refer to the provided example and the discussion on how to handle this in [](#use_cases_macro_relauncher_mpi_standalone_cpp).
    sentence2:
      python: |
        In the specific case of Python, where {{root}} and Python (through the garbage-collector mechanism) both claim the right to destroy objects, a specific line (`ROOT.SetOwnership(run, True)`) has to be added, as discussed in [](#use_cases_macro_relauncher_mpi_macro).
      cpp: ""
    suffix:
      python: "py"
      cpp: "cpp"
---

# `TMpiRun`

In this case, many processes are started on different nodes. MPI uses the distributed-memory paradigm: each process has its own address space. All processes run the same macro and define their own objects. If you create a big object in the evaluation/master code section, all processes allocate it (this is why, generally, the main dataserver object is created in the `onMaster` part, to avoid creating as many dataservers as there are slaves).

- the constructor calls `MPI_Init` for the initial process synchronisation. This step is automatic, as long as one is running through the on-the-fly C++ compiler thanks to the `root` command, or in Python. {{sentence[language]}}
- `startSlave` either returns immediately for the master process (id=0) or starts the evaluation loop for the other ones.
- `onMaster` is true or false, depending on whether we are on the master process or not.
- `stopSlave` puts fake items up for evaluation and then returns. The evaluation processes get them, stop their loop, exit from `startSlave`, and usually skip the master-block instructions. Unlike with threads, the master process does not wait for the evaluation processes.
- the destructor calls `MPI_Finalize` for the final process synchronisation. {{sentence2[language]}}

The `TMpiRun` constructor has one argument, a pointer to a `TEval` object.
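The lifecycle above can be summarised in a schematic C++ macro skeleton. This is only a sketch of the call order, not a complete runnable macro: the construction of the `TEval` object and the master-side study are omitted, and only the calls listed above are shown (refer to the included example below for a full macro).

```cpp
// Schematic skeleton: illustrates the order of the TMpiRun lifecycle calls
// described above. `eval` is assumed to point to a user-defined TEval; its
// construction is omitted, so this is not a complete example.
void mpiSkeleton(TEval *eval)
{
    TMpiRun run(eval);       // constructor: calls MPI_Init

    run.startSlave();        // slaves (id != 0) enter their evaluation loop here
                             // and leave it only when stopSlave's fake items arrive
    if (run.onMaster()) {    // true only on the master process (id=0)
        // master-only section: create the dataserver, run the study,
        // save the results to a file...
        run.stopSlave();     // posts fake items so the slaves exit their loop
    }
}                            // destructor: calls MPI_Finalize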
{{ "```{" "include" "} " + "tmpirun_" + suffix[language] + ".md\n" + "```" }}

In general, one runs an MPI job on a cluster with a batch scheduler: the previous command is put in a shell script along with the batch-scheduler parameters. The {{root}} macro does not use a viewer, but saves its results in a file. They will be analysed in a later interactive session using all the {{root}} facilities.

````{only} cpp
If one wants to run in a compiled way, this cannot be done just by adding a "+" to the command line: if all processes try to compile using the same output file, conflicts occur. One way around this is to run a first {{root}} session, without mpirun, to compile your macro. Then, if you run a second MPI root session with the single "+", the processes will use the pre-compiled macro. You can compile your macro with the command:

```cpp
gROOT->LoadMacro("Rosenbrock.C++");
```

`LoadMacro` compiles the macro but does not execute it. Another way to run the code in a compiled fashion is the standalone compilation, which consists of treating {{uranie}} as a set of libraries, as already discussed in [](#overview_root_compilation).
````

```{warning}
The `TMpiRun` implementation also requires at least 2 cores (one being the master and the other the core on which the assessors are run). If only one core is provided, the loop will run indefinitely.
```

```{toctree}
tmpirun/tbimpirun_tsubmpirun
```