13.7.7. Macro “relauncherCodeFlowrateMPI.C”

13.7.7.1. Objective

The goal of this macro is to show how to run a code on several processes with a different memory paradigm: whereas a TThreadedRun instance relies on shared memory (which can lead to thread-safety problems, as discussed in TThreadedRun), the MPI implementation is based on memory separation, the communication between processes being done through messages. To achieve this, the usual sequential runner is replaced by another runner that does the job. The flowrate code is provided with Uranie and has also been used and discussed throughout these macros.

13.7.7.2. Macro

using namespace URANIE::DataServer;
using namespace URANIE::Relauncher;
using namespace URANIE::MpiRelauncher;

void relauncherCodeFlowrateMPI()
{

    TAttribute *rw = new TAttribute("rw");
    TAttribute *r = new TAttribute("r");
    TAttribute *tu = new TAttribute("tu");
    TAttribute *tl = new TAttribute("tl");
    TAttribute *hu = new TAttribute("hu");
    TAttribute *hl = new TAttribute("hl");
    TAttribute *l = new TAttribute("l");
    TAttribute *kw = new TAttribute("kw");

    // Create the output attributes
    TAttribute *yhat = new TAttribute("yhat");
    TAttribute *d = new TAttribute("d");
    
    // Set the reference input file and the key for each input attribute
    TFlatScript fin("flowrate_input_with_values_rows.in");
    fin.setInputs(8, rw, r, tu, tl, hu, hl, l, kw);

    // The output file of the code
    TFlatResult fout("_output_flowrate_withRow_.dat");
    fout.setOutputs(2, yhat, d);

    // Instantiate the code
    TCodeEval mycode("flowrate -s -r");
    //mycode.setOldTmpDir();
    mycode.addInputFile(&fin);
    mycode.addOutputFile(&fout);

    // Create the MPI runner
    TMpiRun run(&mycode);
    run.startSlave();
    if (run.onMaster())
    {
        // Define the DataServer
        TDataServer tds("tdsflowrate", "Design of Experiments for Flowrate");
        mycode.addAllInputs(&tds);
        tds.fileDataRead("flowrateUniformDesign.dat", kFALSE, kTRUE);

        TLauncher2 lanceur(&tds, &run);

        // resolution
        lanceur.solverLoop();

        tds.exportData("_output_testFlowrateMPI_.dat");

        run.stopSlave();
    }

    delete rw;
    delete r;
    delete tl;
    delete tu;
    delete hl;
    delete hu;
    delete l;
    delete kw;
    delete yhat;
    delete d;
}

The first difference with respect to the previous macro (see Macro) is the creation of the runner:

    // Create the MPI runner
    TMpiRun run(&mycode);

The TThreadedRun object becomes a TMpiRun object, whose construction only requires a pointer to the assessor.
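To illustrate the message-passing idea behind this runner, independently of Uranie's API, here is a minimal MPI master/worker sketch. It is a simplified scheme, not Uranie's actual implementation: the `evaluate` function is a hypothetical stand-in for the real code, the tags are arbitrary, and the master here waits for each result before sending the next point (a real dispatcher would overlap communications).

```cpp
#include <mpi.h>
#include <cstdio>

// Hypothetical stand-in for the real code evaluation (flowrate here)
static double evaluate(double x) { return 2.0 * x; }

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int TAG_WORK = 1, TAG_STOP = 2;

    if (rank == 0 && size > 1) {
        // Master: owns the design of experiments and dispatches points as messages
        double inputs[4] = {1., 2., 3., 4.};
        double results[4];
        for (int i = 0; i < 4; ++i) {
            int worker = 1 + i % (size - 1);
            MPI_Send(&inputs[i], 1, MPI_DOUBLE, worker, TAG_WORK, MPI_COMM_WORLD);
            MPI_Recv(&results[i], 1, MPI_DOUBLE, worker, TAG_WORK,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
        // Tell every worker to stop, then report the collected results
        for (int w = 1; w < size; ++w)
            MPI_Send(nullptr, 0, MPI_DOUBLE, w, TAG_STOP, MPI_COMM_WORLD);
        for (int i = 0; i < 4; ++i)
            std::printf("y(%g) = %g\n", inputs[i], results[i]);
    } else if (rank > 0) {
        // Worker: shares no memory with the master; every input point and
        // every result travels as an explicit message
        while (true) {
            double x = 0.;
            MPI_Status st;
            MPI_Recv(&x, 1, MPI_DOUBLE, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP) break;
            double y = evaluate(x);
            MPI_Send(&y, 1, MPI_DOUBLE, 0, TAG_WORK, MPI_COMM_WORLD);
        }
    }
    MPI_Finalize();
    return 0;
}
```

The same structure is visible in the macro above: the worker loop corresponds to startSlave, the rank test to onMaster, and the final stop messages to stopSlave.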

Apart from that, the code is very similar, the only remaining difference being the way this macro is called. It should not be run with the usual command:

root -l relauncherCodeFlowrateMPI.C

Instead, the command line should start with the mpirun command as such:

mpirun -np N root -l -b -q relauncherCodeFlowrateMPI.C

where N should be replaced by the number of requested processes, one of which acts as the master while the others run the computations. Once run, this macro leads to the following plot.
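For instance, assuming that four processes are available and that both mpirun and root are found in the PATH, a concrete call could be:

```shell
mpirun -np 4 root -l -b -q relauncherCodeFlowrateMPI.C
```

The -b and -q options run ROOT in batch mode and quit it once the macro is done, which is the natural setting when several ROOT processes are spawned at once.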

13.7.7.3. Graph

../../_images/relauncherCodeFlowrateMPI.png

Figure 13.50 Representation of the output as a function of the first input with a colZ option